In this post, learn how microservices, containers, and Kubernetes are all related. One is an architecture, one is a deployment mechanism, and one orchestrates how those deployments will function in production.
A microservice is a program that runs on a server or a virtual computer and responds to some request. Microservices give you a way to build applications that are resilient, highly scalable, independently deployable, and able to evolve quickly.
Microservices have a more narrow scope and focus on doing smaller tasks well.
A container is just a process spawned from an executable file, running on a Linux machine, which has some restrictions applied to it.
Kubernetes (aka K8s) helps you increase your infrastructure utilization through the efficient sharing of computing resources across multiple processes. Kubernetes dynamically allocates computing resources to meet demand. K8s also has side benefits that make the transition to microservices much easier.
Let’s see how that works.
You can think of microservices as small, independent, and loosely coupled. A single small team of developers writes and maintains a microservice.
Each service is a separate codebase and can be deployed independently. Update your service without rebuilding and redeploying the entire application.
Services are responsible for saving their own data. Services communicate over well-defined APIs.
Each team building a microservice can choose their own technology stack, library, or frameworks — even versions do not need to match what other microservices or other development teams are doing.
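As a minimal sketch of these ideas, here is a tiny, narrowly scoped service in Python. All names here (the `/health` and `/orders` routes, the port) are illustrative, not from any real system; the routing logic is kept in a pure function so it can be exercised without the HTTP plumbing:

```python
# Minimal sketch of a microservice: one small, independently deployable process
# that owns its own data and answers requests over a well-defined API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_request(path):
    """Pure routing logic, separated from HTTP plumbing so it is easy to test."""
    if path == "/health":
        return 200, {"status": "ok"}
    if path == "/orders":
        # This service is responsible for its own order data; no shared storage.
        return 200, {"orders": []}
    return 404, {"error": "not found"}

class OrderServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = handle_request(self.path)
        payload = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run the service standalone on its own port:
#   HTTPServer(("", 8080), OrderServiceHandler).serve_forever()
```

Because the service is its own codebase with its own API, the team behind it can redeploy it without touching any other service.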
Benefits and challenges of microservices
Microservices offer several key benefits:
- Small, focused teams
- Small code base
- Mix of technologies
- Fault isolation
- Data isolation
However, microservices also come with some challenges:
- Development and testing
- Lack of governance
- Network congestion and latency
- Data integrity
To learn more about the benefits and challenges, see Building microservices on Azure.
The following illustration (from Microsoft) shows how microservices can integrate into your services deployment.
A container is still just a process spawned from an executable file. But by separating that process from the rest of the computer and its operating system, you get several key benefits.
- A container is not allowed to “see” all of the filesystem; it can only access a designated part of it.
- A container is not allowed to use all of the CPU or RAM.
- A container is restricted in how it can use the network.
The term container also refers to how the executable is packaged and stored. There are several ways to package executables; one of the most popular is Docker.
Use Docker to take your executable, its dependencies, and any other files you want, and package them all together into a single image. Docker also includes the instructions and configuration needed to run the packaged executable as a container.
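A Dockerfile like the following illustrates the idea. This is a hypothetical sketch: the file names (`requirements.txt`, `service.py`) are placeholders, not part of any real project:

```dockerfile
# Illustrative only: packages a hypothetical Python service and its dependencies.
FROM python:3.12-slim

WORKDIR /app

# Copy the dependency list first so Docker can cache this layer between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code itself
COPY service.py .

# The command Docker runs when a container is started from this image
CMD ["python", "service.py"]
```

Each instruction produces a layer, and the stack of layers is the image that gets stored and shipped.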
Containers and Images
An instance of an image is called a container. An image is a set of layers; when you start the image, you get a running container of it. You can have many running containers of the same image.
- A container is a Linux process with enforced restrictions
- A container image is a Linux executable packaged with its dependencies and configuration
In other virtual machine environments, images would be called something like “snapshots.”
eShopOnContainers reference application
Microsoft, in partnership with leading community experts, has produced a full-featured cloud-native microservices reference application, eShopOnContainers. This application is built to showcase using .NET Core and Docker, and optionally Azure, Kubernetes, and Visual Studio, to build an online storefront.
The following illustration shows the reference application development architecture of microservices.
Each of the different microservices is designed differently, based on their individual requirements. This means their technology stack may differ, including the way the data is persisted.
For more information on the business requirements and the non-functional requirements, see Introducing eShopOnContainers reference app.
Microservices on Kubernetes
Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications. In the preceding example, the Docker Host could be Kubernetes.
Kubernetes provides the following services for maintaining your containerized application:
- Immutable infrastructure. Containers and Kubernetes have made it possible to run pre-built and configured components as part of every release. Users do not make any manual configuration changes. With every deployment, a new container is deployed.
- Self-healing systems. Kubernetes continuously monitors the health of the cluster and keeps the actual state in sync with the desired state, restarting or replacing failed components automatically.
- Declarative configuration. You provide a manifest file (typically in YAML) that describes how you want the cluster to look. This forms the basis of the cluster desired state.
- Autoscaling. Kubernetes can monitor your workload and scale it up or down based on CPU utilization or memory consumption. This automatic scaling is great for applications that see spikes in load and usage for a period of time. You can scale:
- Vertically. Increasing the amount of CPU the pods can use
- Horizontally. Increasing the number of pods
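The declarative configuration and desired state described above can be sketched as a minimal Deployment manifest. The names (`my-service`, the image reference) are placeholders for illustration:

```yaml
# Declarative desired state: run 3 replicas of one container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3                 # Kubernetes keeps exactly 3 pods running
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

You apply the manifest, and Kubernetes does whatever is needed (starting or stopping pods) to make the actual state match it.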
By dynamically allocating computing resources to meet demand, Kubernetes keeps your organization from paying for computing resources your users are not using.
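The horizontal autoscaling described above can be sketched with a HorizontalPodAutoscaler manifest. Again, the names and the target Deployment are illustrative assumptions:

```yaml
# Horizontal autoscaling: add or remove pods to hold average CPU near 80%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```

When a load spike pushes CPU above the target, Kubernetes adds pods up to the maximum; when the spike passes, it scales back down.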
Kubernetes as a part of your microservices architecture
Kubernetes is one part of the system that you will use to deploy your microservices. You will also need:
- API Gateway. An API gateway sits between external clients and the microservices. It acts as a reverse proxy, routing requests from clients to microservices.
- Data storage. In a microservices architecture, services should not share data storage. Each service should own its own private data in a separate logical storage, to avoid hidden dependencies among services.
- Service object. The Kubernetes Service object provides a set of capabilities that match the microservices requirements for service discoverability:
- IP Address
- Load balancing
- Service discovery
- Ingress controller. An ingress controller can implement the API gateway pattern, which:
- Routes client requests to the right backend services.
- Aggregates multiple requests into a single request, reducing chattiness.
- Offloads functionality from backend services, such as SSL termination, authentication, IP restrictions, or client rate limiting (throttling).
- TLS/SSL encryption
- Namespaces. Use namespaces to organize services within the cluster.
- Autoscaling. Kubernetes supports scale-out at two levels:
- Scale the number of pods allocated to a deployment.
- Scale the nodes in the cluster, to increase the total compute resources available to the cluster.
- Health probes. Kubernetes defines two types of health probe that a pod can expose:
- Readiness probe: Tells Kubernetes whether the pod is ready to accept requests.
- Liveness probe: Tells Kubernetes whether a pod should be removed and a new instance started.
- Role based access control (RBAC)
- Secrets management and application credentials
- Container and Orchestrator security
- Deployment (CI/CD) considerations
- Load balancing
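Several of the pieces above — service discovery and load balancing, the ingress-as-gateway pattern, and health probes — can be sketched in manifests. All names, hosts, and paths here are hypothetical placeholders:

```yaml
# A Service gives matching pods a stable virtual IP, a DNS name, and load balancing.
apiVersion: v1
kind: Service
metadata:
  name: order-service         # discoverable via cluster DNS under this name
spec:
  selector:
    app: order-service        # traffic is load-balanced across pods with this label
  ports:
    - port: 80                # stable port clients use
      targetPort: 8080        # port the container actually listens on
---
# An Ingress implementing the API gateway pattern: routing plus TLS termination.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-gateway
spec:
  tls:
    - hosts: [shop.example.com]
      secretName: shop-tls    # TLS is terminated here, offloaded from the services
  rules:
    - host: shop.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: order-service
                port:
                  number: 80
---
# Fragment of a Deployment pod template showing the two health probes.
# (Not a complete manifest on its own.)
containers:
  - name: order-service
    image: registry.example.com/order-service:1.0
    readinessProbe:           # pod receives traffic only while this succeeds
      httpGet: { path: /ready, port: 8080 }
      initialDelaySeconds: 5
    livenessProbe:            # pod is restarted if this keeps failing
      httpGet: { path: /healthz, port: 8080 }
      periodSeconds: 10
```

The Service keeps clients decoupled from individual pod IPs, while the Ingress keeps external clients decoupled from the Services themselves.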
The following illustration, provided in the Microsoft documentation, shows a reference architecture combining these services to deploy microservices on Kubernetes.
For more information, see Microservices architecture on Azure Kubernetes Service (AKS).
In this post, you learned how microservices can be deployed in containers that are then orchestrated using Kubernetes. You learned how Kubernetes works with other services in an overall architecture. And you learned the high-level pros and cons of a microservices architecture.
- Microservices, Containers and Kubernetes in 10 minutes
- Why you should choose the microservices architecture
- Building microservices on Azure
- Microservices architecture on Azure Kubernetes Service (AKS)
- Introducing eShopOnContainers reference app
- Microservices on Kubernetes
To learn more about microservices best practices and how to architect microservice-based applications, read the companion book, .NET Microservices: Architecture for Containerized .NET Applications.