This post details my experience working with Docker Datacenter (DDC)/Mirantis Docker Enterprise, an integrated container management and security solution that is now part of the Docker Enterprise Edition (EE) offering. Docker EE is a certified, commercially supported solution. Refer to
https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/dee-intro.html for more information on Docker EE. I've worked on both a production implementation and a Proof of Concept (PoC) of DDC. This blog post mainly covers my experience building the PoC.
The first step in building a DDC is to define and design its architecture. Docker provides a reference architecture (
https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Docker_EE_Best_Practices_and_Design_Considerations), which is a good starting point. For this PoC, we will build a DDC based on a subset of that reference architecture: instead of three
Universal Control Plane (UCP) nodes, we'll have just one; instead of three
Docker Trusted Registry (DTR) nodes, we'll have one; and instead of four
UCP worker nodes for applications, we'll have just two. As an application load balancer, we'll use a Dockerized
HA-Proxy running on a separate node (not managed by UCP). For this PoC, we'll use CentOS 7.x virtual machines created with Oracle VirtualBox, and I'll highlight a few VirtualBox tips and tricks along the way. We'll also create a client Docker node to communicate with the DDC components remotely.
Since we have decided to work with a subset of the Docker reference architecture, we can go ahead and start building the infrastructure. For this PoC, the entire infrastructure consists of a Windows 10 laptop with 16 GB of memory running Oracle VM VirtualBox.
1. Create Virtual Machines (VMs)
Let's first create a VM with CentOS 7.x Linux. Download the CentOS 7-1611 VirtualBox image from
osboxes.org and create the first node with 2 GB of memory.
Once the VM is ready, make six clones of it. For each clone, follow the steps below:
1.1 Network setting:
We'll do a few things to emulate a static
IP for each virtual machine; otherwise UCP and DTR will run into issues if the IP changes after installation. See Docker's recommendation regarding static IPs and hostnames
here. Enable two network adapters and configure them as follows:
- Adapter 1: NAT - to allow the VM (guest) to communicate with the outside world through host computer's network connection.
- Adapter 2: Host-only Adapter - to allow connection between host and guest. It also helps us to set static IP.
Refer to
https://gist.github.com/pjdietz/5768124 for details on how to set this up. One more thing to remember: if you are using CentOS 7 and want to set a permanent static IP, you need to use an interface configuration file (refer to
https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-networkscripts-interfaces.html) instead of /etc/network/interfaces. In my case, I did the following:
1.1.1) Identified the interface used for the Host-only Adapter (enp0s8 in my case; you can list interfaces with 'ip addr').
1.1.2) Created the file "/etc/sysconfig/network-scripts/ifcfg-enp0s8" on the guest, where
enp0s8 is the network interface name, with content like this:
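A minimal sketch of the interface configuration file; the interface name and IP are from my setup, adjust yours accordingly:

DEVICE=enp0s8
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.101
NETMASK=255.255.255.0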
Note: make sure to assign a unique IP to each clone, e.g. 192.168.56.101, 192.168.56.102, 192.168.56.103, etc.
1.2 Hostname setup
To set the hostname on CentOS 7, use hostnamectl:
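For example (the node name here is hypothetical; pick a unique hostname per clone):

sudo hostnamectl set-hostname centosddcucp01
hostnamectl status   # verify the new hostname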
1.3 [optional] Update /etc/hosts file
For easy access to each node, add the mapping entries in /etc/hosts file of each VM. The following entries are per my configuration.
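A sketch of the mappings; only the DTR hostname (centosddcdtr01) appears elsewhere in this post, the other names are hypothetical placeholders for the remaining nodes:

192.168.56.101  centosddcucp01
192.168.56.102  centosddcdtr01
192.168.56.103  centosddcwrk01
192.168.56.104  centosddcwrk02
192.168.56.105  centosddcclient01
192.168.56.106  centosddchaproxy01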
2. Install & Configure Commercially Supported (CS) Docker Engine
2.1) Installation:
Official installation document:
https://docs.docker.com/engine/installation/linux/centos/#install-using-the-repository
You can install either from the repository or from a package. Here, we will install using the repository, which makes installation and upgrade tasks easier and is the recommended approach. To install Docker Enterprise Edition (Docker EE) from the repository, you need to know the Docker EE repository URL associated with your licensed or trial subscription. To get this information:
- Go to https://store.docker.com/?overlay=subscriptions.
- Choose Get Details / Setup Instructions within the Docker Enterprise Edition for CentOS section.
- Copy the URL from the field labeled "Copy and paste this URL to download your Edition".
- Set up Docker's repository using that URL and install from there, as sketched below.
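A sketch based on the official CentOS instructions of the time; substitute the subscription URL you copied above for the placeholder:

export DOCKERURL="<your-docker-ee-subscription-url>"
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo "$DOCKERURL/centos/docker-ee.repo"
sudo yum install -y docker-ee
sudo systemctl enable docker
sudo systemctl start docker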
3. Install & Configure Universal Control Plane (UCP)
UCP is the cluster management solution for Docker Enterprise. In a nutshell, it is itself a containerized application that runs on the (CS) Docker Engine and lets users deploy, configure, and monitor the other containerized applications managed by DDC, through its API, the Docker CLI, and a web GUI.
3.1 Prepare for Installation
- Start the virtual machine for UCP node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Prepare the UCP node (open the firewall ports that UCP requires; a sketch follows):
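A sketch assuming firewalld is active; this covers the main UCP/swarm ports, but the authoritative list is in the UCP system-requirements documentation:

sudo firewall-cmd --permanent --add-port=443/tcp    # UCP web UI/API
sudo firewall-cmd --permanent --add-port=2376/tcp   # Docker Engine TLS
sudo firewall-cmd --permanent --add-port=2377/tcp   # swarm manager
sudo firewall-cmd --permanent --add-port=4789/udp   # overlay networking (VXLAN)
sudo firewall-cmd --permanent --add-port=7946/tcp   # gossip
sudo firewall-cmd --permanent --add-port=7946/udp   # gossip
sudo firewall-cmd --reload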
3.2 Install Docker Universal Control Plane (UCP)
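The installer itself runs as a container. A sketch of the interactive UCP 2.x install, using the UCP node's host-only IP from this PoC; the installer prompts for the admin credentials and license:

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install --host-address 192.168.56.101 --interactive

Once the installer finishes, the UCP Web UI is available at https://192.168.56.101.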
That concludes the installation and configuration of UCP. Next is to install the Docker Trusted Registry (DTR).
4. Install & Configure Docker Trusted Registry (DTR)
4.1 Installation steps
- Start the virtual machine for DTR node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Add this (DTR) node to DDC UCP:
- Access the UCP Web UI.
- Click on "+ Add node" link.
- It shows you the command to run from the node. Copy the command; it looks something like:
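A sketch; the actual join token is generated by your swarm and is elided here:

docker swarm join --token SWMTKN-1-<your-join-token> 192.168.56.101:2377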
- Run the command from DTR node:
- Generate DTR Installation command string from UCP Web UI:
- Access UCP Web UI.
- Under Install DTR (in newer versions of UCP, navigate to Admin Settings --> Docker Trusted Registry), click Install Now, make the appropriate selections, and it gives you a command to copy. The command looks something like:
docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
--ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
--ucp-url https://192.168.56.101
Note: --ucp-node is the hostname of the UCP-managed node where you want to deploy DTR.
- Start the installation.
Note: DTR installation details can be found at https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/#step-3-install-dtr
Troubleshooting note: if your DTR has issues and you need to remove and re-install it, follow the uninstall guide at
https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/uninstall/
and then follow the installation steps above again.
5. Setup DDC client node
In order to access all DDC nodes (UCP, DTR, worker) and perform operations remotely, you need a Docker client configured to communicate with DDC securely.
5.1 Installation steps
- Start the virtual machine for the client node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Download the UCP client certificate bundle from UCP and extract it on the client host machine:
- Access the UCP Web UI and navigate to User Management.
- Click on the user.
- Click on "Create a Client Bundle".
- Configure client so that it can securely connect to UCP:
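A sketch of loading the bundle; UCP names the zip after your user (osboxes in this PoC):

mkdir ucp-bundle && cd ucp-bundle
unzip ../ucp-bundle-osboxes.zip
eval "$(<env.sh)"    # points the Docker CLI at UCP over TLS
docker version       # the server version should now report UCP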
Configure client so that it can securely connect to DTR and push/pull images.
Note: If DTR is using the auto-generated self-signed certificate, your client Docker Engine
needs to be configured to trust the certificate presented by DTR; otherwise, you get an "x509: certificate signed by unknown authority" error.
Refer to: https://docs.docker.com/datacenter/dtr/2.1/guides/repos-and-images/#configure-your-host for details.
For CentOS, you can install the DTR certificate in the client trust store as follows:
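A sketch, using the DTR address from this PoC:

sudo curl -k https://192.168.56.102/ca -o /etc/pki/ca-trust/source/anchors/192.168.56.102.crt
sudo update-ca-trust
sudo systemctl restart docker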
5.2 Configure Notary client
By configuring the Notary client, you'll be able to sign Docker images with the private keys in your UCP client bundle, so they are trusted by UCP and easily traced back to your user account. Read the details here:
https://docs.docker.com/datacenter/dtr/2.2/guides/user/access-dtr/configure-your-notary-client/
Note: By default, the CLI does not sign an image while pushing to DTR. In order to sign an image while pushing, set the environment variable DOCKER_CONTENT_TRUST=1.
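For example (the repository path is the hypothetical one used later in this post):

export DOCKER_CONTENT_TRUST=1
docker push 192.168.56.102/dev/lets-chat:latest   # the image is signed as part of the push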
5.3 Install Docker Compose:
Note: Docker for Mac and Docker for Windows may already include docker-compose. To find out whether Docker Compose is already installed, just run the docker-compose --version command.
Install:
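A sketch, pinned to a release that was current at the time of this PoC; check the Docker Compose releases page for the version you want:

sudo curl -L "https://github.com/docker/compose/releases/download/1.14.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version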
6. Setup Worker node(s):
Worker nodes are the real workhorses in a DDC setup; this is where the production applications run. Below are the installation steps.
6.1 Installation steps
- Start the virtual machine for the worker node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Add this (worker) node to DDC UCP:
- Access the UCP Web UI.
- Click on "+ Add node" link.
- It shows you the command to run from the node. Copy it; it is the same kind of 'docker swarm join ...' command sketched in step 4.1.
- Run the command from the worker node.
- Repeat steps 1 to 4 for each additional worker node.
- Once all nodes have joined the swarm, run the following command from the client (while connected to UCP) to confirm and list all the nodes:
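docker node ls   # lists every node in the swarm along with its role, availability, and status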
- Optional: put a node like DTR or UCP in "drain" mode if you don't want application containers deployed on it. Here, we put the DTR node in "drain" mode:
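A sketch, using the DTR node name from this PoC:

docker node update --availability drain centosddcdtr01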
Here is how the node listing of our PoC appears in the UCP Web UI:
7. Create Additional User, Access Label and Network:
7.1 Create additional users, teams, and permission labels as necessary from the UCP Web UI.
7.2 Create network.
As per Docker documentation, containers connected to the default bridge network can communicate with each other by IP address. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead. Traditionally, you could link two containers together using the legacy 'docker run --link ...' option, but here, we are going to define a network and attach our containers, so that they can communicate as required. Details can be found here
https://docs.docker.com/engine/userguide/networking/#user-defined-networks
Note: the docker links feature has been deprecated in favor of user-defined networks, as explained
here.
Service discovery in Docker is network-scoped, meaning the embedded DNS functionality in Docker
can be used only by containers or tasks on the same network to resolve each other's addresses. So our plan here is to deploy a set of services that can communicate with each other using DNS, as sketched below.
Note: If the destination and source container or service are not on the same network, Docker Engine forwards the DNS query to the default DNS server.
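A sketch of creating the overlay network used later in this post; the HRM label is the one discussed in step 7.3:

docker network create -d overlay --label com.docker.ucp.mesh.http=true my_hrm_network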
Note: if you get the following error while creating the network:
Error response from daemon: rpc error: code = 3 desc = name must be valid as a DNS name component
check your network name and make sure it does not contain a dot ('.'). The error message itself is a little confusing. Refer to issue
31772 for details.
7.3 Enable "HTTP Routing Mesh" if necessary
- Log in to the UCP Web UI.
- Navigate to Admin Settings > Routing Mesh.
- Check Enable HTTP Routing Mesh.
- Configure the ports for HRM to listen on; the defaults are 9080 (HTTP) and 8443 (HTTPS). The HTTPS default is 8443 rather than 443 so that it doesn't interfere with the UCP management port (443).
Note: If it is a NEW network with the label '--label com.docker.ucp.mesh.http=true', you need to disable and then re-enable "HTTP Routing Mesh" so that HRM picks it up. This can be done through the UCP UI:
- Disable: Admin Settings --> Routing Mesh --> uncheck "Enable HTTP routing mesh". Click the Update button.
- Enable: Admin Settings --> Routing Mesh --> check "Enable HTTP routing mesh". Click the Update button.
8. Docker Application Deployment
8.1 Preparation:
For this PoC, we're going to build a custom image of the Lets-Chat app and deploy it using Docker Compose.
Note: All the steps listed in step 8.x are executed on or from the client node.
8.1.1 Create Dockerfile for lets-chat:
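A minimal sketch that extends the community sdelements/lets-chat image; the base image and the LCB_HTTP_PORT override are assumptions about how the custom image makes the app listen on 8080:

FROM sdelements/lets-chat:latest
# assumed override so the app listens on 8080, matching the HRM internal_port used later
ENV LCB_HTTP_PORT 8080
EXPOSE 8080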
8.1.2 Create the lets-chat image using the Dockerfile. Run the 'docker build ...' command from the same directory where the Dockerfile is located:
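docker build -t lets-chat:latest .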
8.1.3 Pull the Mongo DB image:
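A sketch; mongo:3.2 was a current tag at the time, pin whichever version you have validated:

docker pull mongo:3.2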
8.1.4 Rename/tag the images as per the DTR namespace:
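A sketch, assuming a hypothetical 'dev' namespace on our DTR (192.168.56.102):

docker tag lets-chat:latest 192.168.56.102/dev/lets-chat:latest
docker tag mongo:3.2 192.168.56.102/dev/mongo:3.2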
8.1.5 Push the images to DTR:
Note: before pushing the images, you need to create a "repository" for them (if one doesn't exist already). Create the corresponding repo from the DTR Web UI:
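Then push (sketch, same hypothetical namespace as above):

docker push 192.168.56.102/dev/lets-chat:latest
docker push 192.168.56.102/dev/mongo:3.2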
8.1.6 Pull the images on all DDC worker nodes where the images will be instantiated into the corresponding containers.
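On each worker node (sketch):

docker pull 192.168.56.102/dev/lets-chat:latest
docker pull 192.168.56.102/dev/mongo:3.2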
8.1.7 Create the Docker Compose file:
Here is what our docker-compose.yml looks like:
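A sketch of the compose file, consistent with the notes that follow; the image paths, the network's external status, and the LCB_DATABASE_URI variable are assumptions:

version: "3"
services:
  mongo:
    image: 192.168.56.102/dev/mongo:3.2
    networks:
      - my_hrm_network
    deploy:
      placement:
        constraints: [node.role == manager]
      labels:
        com.docker.ucp.access.label: dev
  lets-chat:
    image: 192.168.56.102/dev/lets-chat:latest
    environment:
      # lets-chat reads its mongo connection string from this variable (assumed)
      LCB_DATABASE_URI: "mongodb://mongo/letschat"
    networks:
      - my_hrm_network
    deploy:
      replicas: 4
      labels:
        com.docker.ucp.access.label: dev
        com.docker.ucp.mesh.http.8080: "external_route=http://mydockertest.com:8080,internal_port=8080"
networks:
  my_hrm_network:
    external: true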
A few things to notice in the docker-compose.yml above:
- Placement constraint [node.role == manager] for mongo. We are instructing Docker to instantiate the mongo container only on a node with the manager role; the role can be "worker" or "manager".
- Label com.docker.ucp.access.label=dev defines an access constraint by label. See https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/ for details.
- Label com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080 configures the lets-chat application for HRM, to be accessed using host mydockertest.com on port 8080, which will be our HA-Proxy's host and port. Docker uses DNS for service discovery as services are created, and it has different built-in routing meshes for high availability. The HTTP Routing Mesh (HRM), part of UCP 2.0, is an application-layer routing mesh that routes HTTP traffic based on the DNS hostname.
- Also note that we are not (explicitly) exposing a port for mongo: since mongo and lets-chat are on the same network, 'my_hrm_network', they can communicate even though they will be instantiated on different hosts (nodes). The lets-chat application listens on port 8080, but we are not publishing it explicitly either, because we are designing with scaling in mind and relying on Docker HRM. If you publish a port explicitly to the host (e.g. -p 8080:8080), you will run into trouble as soon as you instantiate more than one replica on the same host, because only one process can listen on a given port of the same IP. More detail about HRM and service discovery: https://docs.docker.com/engine/swarm/ingress/#configure-an-external-load-balancer and https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/. A good read about service discovery, load balancing, Swarm, ingress, and HRM: https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Service_Discovery_and_Load_Balancing_with_Docker_Universal_Control_Plane_(UCP)
8.2 Deployment:
8.2.1 Validate docker-compose.yml:
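For example:

docker-compose -f docker-compose.yml config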
Note: the WARNING you see is expected: the regular 'docker-compose up -d ...' command does not support the 'deploy' key.
In our case, we are going to use 'docker stack deploy ...' instead, so we can safely ignore the warning.
8.2.2 Deploy
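A sketch, using a hypothetical stack name 'letschat':

docker stack deploy -c docker-compose.yml letschat
docker stack ps letschat   # check where the replicas landed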
As you can see, one instance of mongo and four instances of lets-chat have been created.
9. Setup HA-Proxy node:
Here we will use a simple HA-Proxy configuration, just to show the working idea. Refer to the HA-Proxy
documentation and the
Docker documentation for HA-Proxy for details.
Note: for this PoC, we are deploying HA-Proxy outside of the swarm cluster.
9.1 Setup steps
- Start the virtual machine for HA-Proxy node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Prepare the configuration file for HA-Proxy. A sketch follows:
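A sketch of haproxy.cfg; the worker node IPs (192.168.56.103/104) are hypothetical, and port 9080 assumes the default HRM HTTP port:

global
    log 127.0.0.1 local0
    maxconn 2048

defaults
    mode http
    timeout connect 5s
    timeout client 50s
    timeout server 50s

frontend http_front
    bind *:8080
    stats enable
    stats uri /haproxy?stats
    default_backend http_back

backend http_back
    # HRM listens on port 9080 on every swarm node
    server centosddcwrk01 192.168.56.103:9080 check
    server centosddcwrk02 192.168.56.104:9080 check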
A few notes about the haproxy.cfg above:
- Backend connections: we have four replicas (two per node), but only two back-end servers are listed in the configuration file. That is the beauty of Docker Swarm HRM: as long as traffic reaches any HRM node, whether or not a replica is actually running there, swarm automatically directs it to one of the replicas running on an available node. Swarm also takes care of load balancing among all replicas.
- The check option at the end of the server directives specifies that health checks should be performed on those back-end servers.
- The frontend section defines the bind (IP and port) configuration for the proxy and a reference to the corresponding backend configuration. In this case, it is listening on all available IPs on port 8080.
- 'stats uri' defines the status URI.
Now that our HA-Proxy configuration file is ready, let's build the custom HA-Proxy image and instantiate it.
9.2 Create Dockerfile for HA-Proxy
Note: In this case, haproxy.cfg and the Dockerfile are located in the same directory from which we run the 'docker build ...' command shown in step 9.3.
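A sketch following the official haproxy image documentation on Docker Hub:

FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg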
9.3 Create custom image
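For example, with a hypothetical image name:

docker build -t my-haproxy .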
9.4 Verify the configuration and Instantiate ha-proxy container:
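A sketch; the first command runs haproxy's built-in syntax check (-c) against the baked-in config, then we run the container publishing port 8080:

docker run -it --rm --name haproxy-syntax-check my-haproxy haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
docker run -d --name ddc-haproxy -p 8080:8080 my-haproxy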
10. Access and Verify Application
10.1) Accessing application:
Once ha-proxy is running, access the application. Make sure the firewall is not blocking the port that ha-proxy is listening on.
http://<ha-proxy-host>:<ha-proxy-port>/<application-uri>
Important: In order to access the application, you need to make sure that the '<ha-proxy-host>' in the above URL matches the 'host' part of the external_route configuration of HRM.
In our case it is 'mydockertest.com', so make sure 'mydockertest.com' resolves to the IP address of the ha-proxy node. This is how HRM, together with Swarm, discovers services and routes requests in the ingress cluster, and it is what lets us scale containers dynamically.
10.2) Application verification:
10.2.1) Get the stats from ha-proxy. Along with other things, the stats page shows the request count and which Swarm node is serving the request:
http://<ha-proxy-host>:<ha-proxy-port>/haproxy?stats
10.2.2) First access lets-chat through the Web UI (http://mydockertest.com:8080). Create your account and log in using your credentials. Once you have created the account and are able to log in, verify that lets-chat is making a successful connection to the mongo DB as follows:
Inspect the lets-chat instance:
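A sketch of a couple of ways to check, via the UCP client bundle; the stack name 'letschat' is the hypothetical one used above:

docker service ps letschat_lets-chat   # see on which nodes the replicas are running
docker logs <lets-chat-container-id>   # on that node, look for a successful mongo connection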
Once you have Docker Datacenter up and running, upgrade it to Docker EE 2.0 and UCP 3.x to get a choice of Swarm or Kubernetes orchestration. See my post
Upgrade to Docker EE 2.0 and UCP 3.x for Choice of Swarm or Kubernetes Orchestration.
Looks like you're really into Docker; check out my other related blog posts below: