Docker Container Concepts, Architecture, Overview

Containers are key to the modern datacenter.

Docker is an open platform for developing, shipping, and running applications in containers.
This post describes the conceptual parts that you will use in setting up Docker. Here are the primary parts:

  • Docker image. A read-only template for creating a Docker container. You can write your own Dockerfile to define the steps used to build an image.
  • Docker container. A runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI.
  • Docker Engine. You talk to containers through Docker Engine, which provides the Docker client; the client talks to the Docker daemon. The daemon listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes.
  • Docker registries. A registry stores Docker images, and you pull images from a registry. There are public and private registries; Azure Container Registry, for example, provides a private registry for your containers.
  • Orchestration. The task of automating and managing a large number of containers and how they interact.
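A hypothetical first session with these parts might look like the following sketch (it assumes Docker is installed and the daemon is running; the image and container names are illustrative):

```shell
# Pull an image from the default registry (Docker Hub)
docker pull nginx

# Create and start a container from that image
docker run --detach --name web nginx

# List running containers
docker ps

# Stop and delete the container
docker stop web
docker rm web
```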


In the previous post on Value Proposition for Containers, I described how containers compare to virtual machines. Containers are much lighter because they share the host operating system. Containers provide:

  • App density.
  • Scalability.
  • Portability. Containers work the same whether your production environment is a local data center, a cloud provider, or a hybrid of the two.

Docker provides tooling and a platform to manage the lifecycle of containers:

  • Develop your application and its supporting components using containers.
  • Use the container as the unit to distribute and test your application.

Docker Architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
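Because the client and daemon speak a REST API, the docker command is not the only way in. On Linux, for example, you can query the daemon directly over its UNIX socket (a sketch, assuming the default socket path and that your user has permission to read it):

```shell
# Ask the daemon for its version over the default UNIX socket
curl --unix-socket /var/run/docker.sock http://localhost/version

# The docker CLI issues an equivalent API call under the hood
docker version
```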


(Image from Docker)

Docker Engine

Docker Engine is a client-server application with these major components:

  • Docker daemon. A server which is a type of long-running program called a daemon process (the dockerd command).
    The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes. A daemon can also communicate with other daemons to manage Docker services.
  • REST API. An API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
  • Docker client. A command line interface (CLI) client (the docker command), which is the primary way that many Docker users interact with Docker. When you use commands such as docker run, the client sends these commands to dockerd, which carries them out. The docker command uses the Docker API, and the Docker client can communicate with more than one daemon.
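One consequence of this split is that the same client can point at different daemons. A sketch using the DOCKER_HOST environment variable (the remote address is illustrative):

```shell
# Talk to the local daemon (the default)
docker info

# Point the same client at a remote daemon; address is illustrative
export DOCKER_HOST=tcp://192.0.2.10:2376
docker info

# Unset to return to the local daemon
unset DOCKER_HOST
```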

So it is just what you would imagine: a long-running server process (the Docker daemon) with a REST API that you can call through scripts. In the next post, I’ll explain how to set up your scripting environment.


(Image from Docker)

You create and manage these Docker objects:

  • Images
  • Containers
  • Networks
  • Volumes
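Each object type has its own management subcommand in the CLI, so you can list what exists on a host:

```shell
docker image ls
docker container ls --all
docker network ls
docker volume ls
```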

Docker Registries

A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
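Pushing to a private registry typically means tagging the image with the registry's hostname first. A sketch (the registry hostname and repository names are illustrative):

```shell
# Pull a public image, then retag it for a private registry
docker pull ubuntu
docker tag ubuntu registry.example.com/myteam/ubuntu:16.04

# Push to, and later pull from, the private registry
docker push registry.example.com/myteam/ubuntu:16.04
docker pull registry.example.com/myteam/ubuntu:16.04
```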

If you use Docker Datacenter (DDC), it includes Docker Trusted Registry (DTR). DTR securely stores and scans your Docker images.

Cloud providers support container registries, including Azure and AWS.

Azure Container Registry

You manage a Docker private registry as a first-class Azure resource using Azure Container Registry. With this service, you store and manage container images across all types of Azure deployments. You can keep container images near deployments to reduce latency and costs. Maintain Windows and Linux container images in a single Docker registry.
Azure Container Registry uses the familiar, open-source Docker command line interface (CLI) tools. It integrates access management with Azure Active Directory. You can configure storage account caching to meet your throughput and API-call requirements.
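With the Azure CLI, creating and using a registry might look like this sketch (the resource group, registry name, and SKU are illustrative, and flags can vary across CLI versions):

```shell
# Create a registry in an existing resource group
az acr create --resource-group myResourceGroup --name myregistry --sku Basic

# Authenticate the local Docker client against the registry
az acr login --name myregistry

# Tag and push a local image (image name is illustrative)
docker tag myapp myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1
```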

Amazon EC2 Container Registry

Amazon EC2 Container Registry (ECR) is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR is integrated with Amazon EC2 Container Service (ECS), simplifying your development-to-production workflow. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and scalable architecture, allowing you to reliably deploy containers for your applications. Integration with AWS Identity and Access Management (IAM) provides resource-level control of each repository.
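A sketch with the AWS CLI (the repository name, region, and account ID are illustrative, and the login flow differs between AWS CLI versions):

```shell
# Create a repository in the registry
aws ecr create-repository --repository-name myapp --region us-east-1

# Authenticate the Docker client (AWS CLI v2 style)
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push a local image
docker tag myapp 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:v1
```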

Docker Objects

When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects.


An image is a read-only template with instructions for creating a Docker container.
Often you will base an image on another image, with some additional customization.
For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application, as well as the configuration details needed to make your application run.


You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
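The Ubuntu-plus-Apache example above might be expressed as a Dockerfile like this sketch (package and path names are illustrative); each instruction produces a layer:

```dockerfile
# Start from the public ubuntu base image
FROM ubuntu:16.04

# Install the Apache web server (one layer)
RUN apt-get update && apt-get install -y apache2

# Copy in your application (another layer)
COPY ./site/ /var/www/html/

# Configuration needed to make the application run
EXPOSE 80
CMD ["apachectl", "-D", "FOREGROUND"]
```

Running `docker build -t mysite .` builds the image; if you then edit only the copied site files and rebuild, the earlier layers are served from cache and only the changed layers are rebuilt.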


Services allow you to scale containers across multiple Docker daemons, which all work together as a swarm with multiple managers and workers.


A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
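Connecting a container to a network, attaching storage, and capturing its state are all plain CLI operations. A sketch (object and image names are illustrative):

```shell
# Create a user-defined network and a named volume
docker network create appnet
docker volume create appdata

# Run a container attached to both
docker run --detach --name db --network appnet \
  --volume appdata:/var/lib/data ubuntu sleep infinity

# Capture the container's current state as a new image
docker commit db mydb-snapshot
```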

Underlying Technology

Docker is written in Go and takes advantage of several features of the Linux kernel to deliver its functionality. (Incidentally, Go is one of the technologies I’ll recommend for you to install when you get started with Docker.)
The underlying technology takes advantage of several Linux features:

  • Namespaces provide the isolated workspace called the container.
  • Control groups (cgroups) limit an application to a specific set of resources. This lets Docker Engine share hardware resources among containers and enforce limits, such as the amount of memory available to a specific container.
  • Union file systems operate by creating layers, making them very lightweight and fast. They are used to:
    • Avoid duplicating a complete set of files each time you run an image as a new container.
    • Isolate changes to a container’s filesystem in its own layer, allowing the same container to be restarted from known content (since the layer with the changes is discarded when the container is removed).
  • Docker Engine combines the namespaces, control groups, and UnionFS into a wrapper called a container format. The default container format is libcontainer.
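You can see cgroups at work through the resource flags on docker run (the limits shown are illustrative):

```shell
# Limit this container to 256 MB of memory and half a CPU
docker run --detach --memory 256m --cpus 0.5 nginx
```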

Docker Editions

Docker is available in two editions: Docker Community Edition (CE) and Docker Enterprise Edition (EE). The Docker products and tools manual page describes the supported platforms.

Explore the Docker product that is right for you.

Docker Community Edition

The free Docker products continue to be available as the Docker Community Edition (Docker CE).

Docker CE is ideal for developers and small teams who want to get started with Docker and experiment with container-based apps. Available for many popular infrastructure platforms, including desktops, cloud providers, and open-source operating systems, Docker CE provides an installer for a simple and quick setup so you can start developing immediately. Docker CE is integrated with and optimized for the infrastructure, so you can maintain a native app experience while getting started with Docker. Build your first container, share it with team members, and automate the dev pipeline, all with Docker Community Edition.

Docker Enterprise Edition

Docker Enterprise Edition (Docker EE) is designed for enterprise development and IT teams who build, ship, and run business-critical applications in production and at scale. Docker EE is integrated, certified, and supported to provide enterprises with the most secure container platform in the industry. For more info about Docker EE, including purchasing options, see Docker Enterprise Edition.