You need orchestration when transitioning from deploying containers individually on a single host to deploying complex multi-container apps on many machines.
The following describes some of the most popular:
- Docker Swarm
- Kubernetes
- Mesos DC/OS
The purpose of this post is to define the terms and surface the main features of each. The goal is not to compare them, but to provide working definitions of these container technologies.
In general, these container solutions run and support Linux containers.
Docker’s Swarm Mode
Beginning with Docker 1.12, the core Docker Engine can provide multi-host, multi-container orchestration, called swarm mode. API objects such as Service and Node let you use the Docker API to deploy and manage applications across a group of Docker Engines called a swarm.
(Image from Docker)
According to the Docker site, Swarm Mode features:
- Cluster management integrated with Docker Engine: Use the Docker Engine CLI to create a swarm of Docker Engines. This is where you deploy application services. You don’t need additional orchestration software to create or manage a swarm.
- Decentralized design: Instead of handling differentiation between node roles at deployment time, the Docker Engine handles any specialization at runtime. You can deploy both kinds of nodes, managers and workers, using the Docker Engine. This means you can build an entire swarm from a single disk image.
- Declarative service model: Docker Engine uses a declarative approach to let you define the desired state of the various services in your application stack. For example, you might describe an application comprising a web front end service, message queueing services, and a database backend.
- Scaling: For each service, you can declare the number of tasks you want to run. When you scale in or out, the swarm manager automatically adapts by adding or removing tasks to maintain the desired state.
- Desired state reconciliation: The swarm manager node constantly monitors the cluster state and reconciles any differences between the actual state and your expressed desired state. For example, if you set up a service to run 10 replicas of a container, and a worker machine hosting two of those replicas crashes, the manager will create two new replicas to replace the replicas that crashed. The swarm manager assigns the new replicas to workers that are running and available.
- Multi-host networking: You can specify an overlay network for your services. The swarm manager automatically assigns addresses to the containers on the overlay network when it initializes or updates the application.
- Service discovery: Swarm manager nodes assign each service in the swarm a unique DNS name and load balances running containers. You can query every container running in the swarm through a DNS server embedded in the swarm.
- Load balancing: You can expose the ports for services to an external load balancer. Internally, the swarm lets you specify how to distribute service containers between nodes.
- Secure by default: Each node in the swarm enforces TLS mutual authentication and encryption to secure communications between itself and all other nodes. You have the option to use self-signed root certificates or certificates from a custom root CA.
- Rolling updates: At rollout time you can apply service updates to nodes incrementally. The swarm manager lets you control the delay between service deployments to different sets of nodes. If anything goes wrong, you can roll back to a previous version of the service.
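Several of the features above come together in a stack file. As a minimal sketch (service name, image tag, and replica counts are all illustrative), the Compose-format file below declares a replicated service on an overlay network with a rolling-update policy; once deployed, the swarm manager reconciles the cluster toward this desired state:

```yaml
# stack.yml -- deploy with: docker stack deploy -c stack.yml demo
version: "3.3"
services:
  web:
    image: nginx:alpine          # illustrative image
    ports:
      - "80:80"                  # published through the swarm's routing mesh
    networks:
      - app-net
    deploy:
      replicas: 10               # desired state: 10 tasks; the manager reconciles
      update_config:
        parallelism: 2           # update two tasks at a time
        delay: 10s               # wait between batches (rolling update)
networks:
  app-net:
    driver: overlay              # multi-host overlay network
```

Scaling is then a one-line change (for example, `docker service scale demo_web=20`), and the manager adds or removes tasks to match.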
Docker Swarm on Azure
Docker Swarm is supported in Microsoft Azure and can be deployed using Azure Resource Manager deployment templates that already have the highly available baseline configuration worked out for you.
Kubernetes
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes satisfies a number of common needs of applications running in production, such as:
- Co-locating helper processes, facilitating composite applications and preserving the one-application-per-container model
- Mounting storage systems
- Distributing secrets
- Checking application health
- Replicating application instances
- Using Horizontal Pod Autoscaling
- Naming and discovering
- Balancing loads
- Rolling updates
- Monitoring resources
- Accessing and ingesting logs
- Debugging applications
- Providing authentication and authorization
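Many of the needs above map directly onto Kubernetes API objects. As a hedged sketch (the name, image, and probe path are illustrative), a single Deployment manifest declares replica count, a health check, and a rolling-update strategy:

```yaml
# deployment.yaml -- apply with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # illustrative name
spec:
  replicas: 3                    # replicating application instances
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate          # rolling updates
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:alpine      # illustrative image
        ports:
        - containerPort: 80
        livenessProbe:           # checking application health
          httpGet:
            path: /
            port: 80
```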
Kubernetes follows a master/worker architecture.
(Image from Cognitree)
The master is the main controlling unit of the Kubernetes cluster. As an administrator, it is what you manage.
The worker node is the server that performs the work. This is where your containers are deployed.
For a deeper dive, see Ankit’s post Overview of Kubernetes Architecture.
Kubernetes on Azure
Azure Container Service (AKS) makes it simple to create, configure, and manage a cluster of virtual machines that are preconfigured to run containerized applications.
AKS exposes the standard Kubernetes API endpoints.
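As a minimal sketch of standing up an AKS cluster with the Azure CLI (the resource group and cluster names are illustrative, and flags can vary by CLI version):

```shell
# Create a resource group and a 3-node cluster (names are illustrative)
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 3 --generate-ssh-keys
# Fetch credentials, then talk to the standard Kubernetes API with kubectl
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```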
Mesos and Mesosphere DC/OS
If your setup spans more than 200 servers, you can really benefit from something like open-source Apache Mesos or the commercially supported Mesosphere DC/OS.
Mesos operates on a different level than Kubernetes/Marathon/Chronos. It can run Hadoop, Jenkins, Spark, Aurora, and other frameworks on a dynamically shared pool of nodes.
Mesos 1.0.0 introduced experimental support for Windows.
(Image from Mesosphere)
Mesosphere Enterprise DC/OS is an enterprise grade datacenter-scale operating system, providing a single platform for running containers, big data, and distributed apps in production. Mesosphere DC/OS runs Docker containers and traditional apps alongside data services on any infrastructure.
DC/OS is a distributed operating system based on the Apache Mesos distributed systems kernel. It enables the management of multiple machines as if they were a single computer. It automates resource management, schedules process placement, facilitates inter-process communication, and simplifies the installation and management of distributed services. Its included web interface and available command-line interface (CLI) facilitate remote management and monitoring of the cluster and its services.
Apache Mesos is the open-source distributed systems kernel at the heart of the Mesosphere DC/OS. It abstracts the entire datacenter into a single pool of computing resources, simplifying running distributed systems at scale. Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively.
Users can either launch a Docker image as a Task or as an Executor.
Mesos consists of a master daemon that manages agent daemons running on each cluster node, and Mesos frameworks that run tasks on these agents.
(Image from Apache)
Containers in Mesos
The Mesos containerizer uses native OS features directly to provide isolation between containers, while the Docker containerizer delegates container management to the Docker engine. The Mesos containerizer now supports launching containers that specify container images, such as Docker and AppC images.
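The choice of containerizer is made per agent via startup flags. As a sketch (the master address is illustrative; confirm flag values against your Mesos version's documentation), an agent can offer both containerizers and let the native Mesos containerizer pull and run Docker images without the Docker daemon:

```shell
# Agent with both containerizers available; frameworks pick one per task.
# --image_providers and --isolation enable the Mesos (unified)
# containerizer to run Docker images directly.
mesos-agent --master=10.0.0.1:5050 \
  --containerizers=mesos,docker \
  --image_providers=docker \
  --isolation=filesystem/linux,docker/runtime
```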
For container orchestration on top of Mesos and DC/OS, you can choose Marathon or Kubernetes.
Marathon
Marathon is a production-grade container orchestration platform for Mesosphere’s Datacenter Operating System (DC/OS) and Apache Mesos. According to its documentation, it provides:
- High Availability. Marathon runs as an active/passive cluster with leader election for 100% uptime.
- Multiple container runtimes. Marathon has first-class support for both Mesos containers (using cgroups) and Docker.
- Stateful apps. Marathon can bind persistent storage volumes to your application. You can run databases like MySQL and Postgres, and have storage accounted for by Mesos.
- Beautiful and powerful UI.
- Constraints. e.g. Only one instance of an application per rack, node, etc.
- Service Discovery & Load Balancing. Several methods available.
- Health Checks. Evaluate your application’s health using HTTP or TCP checks.
- Event Subscription. Supply an HTTP endpoint to receive notifications – for example to integrate with an external load balancer.
- Metrics. Query them at /metrics in JSON format or push them to systems like graphite, statsd and Datadog.
- Complete REST API for easy integration and scriptability.
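A hedged sketch of a Marathon app definition ties several of these features together: an instance count, a one-instance-per-node constraint, and an HTTP health check. The field names follow the Marathon app JSON schema, but all values (app id, image, resources) are illustrative; it would be submitted via the REST API (`POST /v2/apps`) or `dcos marathon app add`:

```json
{
  "id": "/web",
  "instances": 3,
  "cpus": 0.5,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx:alpine" }
  },
  "constraints": [["hostname", "UNIQUE"]],
  "healthChecks": [
    { "protocol": "HTTP", "path": "/", "gracePeriodSeconds": 30 }
  ]
}
```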
Kubernetes on DC/OS
Mesosphere DC/OS runs Kubernetes-as-a-Service alongside traditional apps and data services on any infrastructure.
You can run Kubernetes alongside other data services and traditional apps on the same cluster. You get load balancing, overlay and ingress integration, and other features.
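As a sketch, installing Kubernetes on DC/OS is a package-manager operation (package options and names can vary by DC/OS version):

```shell
# Install the Kubernetes package from the DC/OS package catalog,
# then verify the service is running alongside other workloads.
dcos package install kubernetes
dcos marathon app list
```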
Marathon & Mesosphere
- Mesos Architecture
- Marathon container orchestration platform
- Mesosphere product guide
- Documentation: Production-Grade Container Orchestration
- Tutorial: Kubernetes Basics
- Tutorial: Deploy an Azure Container Service (AKS) cluster
- Overview of Kubernetes Architecture
- Marathon vs Kubernetes vs Docker Swarm on DC/OS with Docker containers (2015)