Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. The name, pronounced “koo-ber-net-eez,” is Greek for “helmsman of a ship,” and is often abbreviated “k8s.” Kubernetes helps you build, deliver, and scale containerized apps faster.
In this blog, we’ll explore what Kubernetes is, along with its advantages and disadvantages.
Let’s start with some definitions.
Kubernetes provides an open-source API that controls how and where your containers run.
Get acquainted with Kubernetes definitions
Kubernetes orchestrates clusters of virtual machines and schedules containers to run on those machines based on the available compute resources and the resource requirements of each container. Containers are grouped into pods, the basic operational unit of Kubernetes, and those pods scale to your desired state.
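To make the scheduling idea concrete, here is a minimal, hypothetical Pod spec (names, image, and values are illustrative, not from this blog) whose resource requests the scheduler uses to pick a node with enough free capacity:

```yaml
# Hypothetical Pod spec: the scheduler places this Pod on a node that
# has at least 250m CPU and 128Mi of memory available.
apiVersion: v1
kind: Pod
metadata:
  name: web            # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.25  # illustrative image
    resources:
      requests:        # what the scheduler reserves for this container
        cpu: "250m"
        memory: "128Mi"
      limits:          # hard caps enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

Requests drive placement decisions; limits cap what the running container may consume.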
Let’s unpack that.
A container provides resource isolation so that a process can run separately from the host system. This means you can package up an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package.
A modern container is more than just an isolation mechanism: it also includes a container image — the files that make up the application that runs inside the container.
In this blog, we’ll explore Docker.
Docker is a tool designed to make it easier to create, deploy, and run applications by using containers.
Docker allows applications to use the same Linux or Windows kernel as the system they’re running on, and requires applications to be shipped only with things not already running on the host computer.
Docker Engine is an open-source containerization technology for building and containerizing your applications. Docker Engine acts as a client-server application with:
- A server with a long-running daemon process
- APIs that specify interfaces programs can use to talk to and instruct the Docker daemon
- A command-line interface (CLI) client
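As a concrete sketch, a container image is typically described by a Dockerfile, which Docker Engine builds into an image. The base image, file names, and command below are illustrative assumptions, not part of this blog:

```dockerfile
# Hypothetical Dockerfile: packages an app together with its dependencies
FROM python:3.12-slim                # base image providing the runtime (illustrative)
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt  # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]             # the process the container runs (illustrative)
```

Everything the application needs ends up inside the image, so it ships as one self-contained package.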
A container image represents binary data that encapsulates an application and all its software dependencies. Container images are executable software bundles that can run standalone and that make well-defined assumptions about their runtime environment.
You typically create a container image of your application and push it to a registry before referring to it in a Pod.
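As a sketch of that last step, a Pod spec references the pushed image by its registry path. The registry hostname and tag below are illustrative:

```yaml
# Hypothetical Pod referencing an image previously pushed to a registry.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    # illustrative registry path: <registry>/<repository>:<tag>
    image: myregistry.example.com/my-app:1.0
```

When the Pod is scheduled, the node pulls the image from that registry before starting the container.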
Kubernetes gives you the platform to schedule and run containers on clusters of physical or virtual machines. Kubernetes architecture divides a cluster into components that work together to maintain the cluster’s defined state.
A cluster is a set of worker machines, called nodes, that run containerized applications. Every cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.
The control plane is the container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.
This layer is composed of many different components, such as (but not restricted to) the API server, etcd, the scheduler, and the controller manager.
These components can be run as traditional operating system services (daemons) or as containers.
A Pod is the smallest and simplest Kubernetes object. It represents a set of running containers on your cluster.
A Pod is typically set up to run a single primary container. It can also run optional sidecar containers that add supplementary features like logging. Pods are commonly managed by a Deployment.
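A minimal, hypothetical two-container Pod can illustrate the sidecar pattern; both image names are illustrative:

```yaml
# Hypothetical Pod running a primary container plus a logging sidecar.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app              # primary application container
    image: my-app:1.0      # illustrative image
  - name: log-forwarder    # sidecar adding supplementary logging
    image: fluentd:v1.16   # illustrative image
```

Both containers share the Pod’s network and can share volumes, which is what makes the sidecar useful for tasks like log collection.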
You describe the set of pods as an API object (think of it as a JSON file). Each set of pods can be replicated by Kubernetes. Each replica is represented by a Pod, and the Pods are distributed among the nodes of a cluster.
Once you define your deployment, Kubernetes watches the state of your cluster, then makes or requests changes as needed. It uses a controller to move the current cluster state closer to the desired state that you specified in your deployment.
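A minimal, hypothetical Deployment shows this desired-state idea; the name, labels, and image are illustrative:

```yaml
# Hypothetical Deployment: Kubernetes keeps three replicas of the Pod
# template running, recreating Pods whenever the cluster drifts from
# this desired state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3               # desired number of Pods
  selector:
    matchLabels:
      app: my-app
  template:                 # Pod template replicated across nodes
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0   # illustrative image
```

If a node fails and a replica disappears, the Deployment’s controller notices the gap between current and desired state and starts a replacement Pod.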
Azure Kubernetes Service (AKS)
Azure Kubernetes Service (AKS) simplifies deploying a managed Kubernetes cluster in Azure. As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes controllers/masters are managed by Azure; you only manage and maintain the agent nodes.
You can explore more definitions in the Kubernetes standardized glossary.
Advantages and Disadvantages of Kubernetes
Daniel Thiry of the DevSpace blog has expressed enthusiasm for Kubernetes. Here are some key advantages.
- Using Kubernetes and its huge ecosystem can improve your productivity
- Kubernetes and a cloud-native tech stack attracts talent
- Kubernetes is a future proof solution
- Kubernetes helps to make your applications run more stably
- Kubernetes can be cheaper than its alternatives
And some disadvantages:
- Kubernetes can be overkill for simple applications
- Kubernetes is very complex and can reduce productivity
- The transition to Kubernetes can be cumbersome
- Kubernetes can be more expensive than its alternatives
Summary of Kubernetes features
Kubernetes has the following features:
- Manages clusters of containers
- Provides tools for deploying applications
- Scales applications as and when needed
- Manages changes to existing containerized applications
- Helps optimize the use of the underlying hardware beneath your containers
- Enables application components to restart and move across the system as needed
To learn more about Kubernetes, see:
- Borg, Omega, and Kubernetes
- Kubernetes documentation
- What are Linux containers?
- Docker Engine overview
- What is Kubernetes? on the Azure documentation
- Kubernetes in 10 minutes: A Complete Guide to Look For
- Azure Kubernetes Service (AKS)
- Pros and Cons of Kubernetes