This post details my experience working with Docker Datacenter (DDC)/Mirantis Docker Enterprise - an integrated container management and security solution and now part of Docker Enterprise Edition (EE) offering. Docker EE is a certified solution which is commercially supported. Refer to https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/dee-intro.html for more information on Docker EE. I've worked with both production implementation as well as Proof Of Concept (PoC) solution of DDC. This blog post mainly contains my experience while doing PoC.
Obviously, the first step in building the DDC is to define and design its architecture. Docker provides a reference architecture (https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Docker_EE_Best_Practices_and_Design_Considerations), and it is a good starting point. For this PoC, we will create a DDC based on a subset of that reference architecture: instead of three Universal Control Plane (UCP) nodes, we'll have just one; instead of three Docker Trusted Registry (DTR) nodes, we'll have one; and instead of four UCP worker nodes for applications, we'll have just two. As the application load balancer, we'll use a Dockerized HA-Proxy running on a separate node (not managed by UCP). For this PoC, we'll use CentOS 7.x virtual machines created with Oracle VirtualBox, and I'll also highlight a few tips and tricks associated with VirtualBox. We'll also create a client Docker node to communicate with the DDC components remotely.
Since we have decided to work with a subset of the Docker reference architecture, we can go ahead and start building the infrastructure. For this PoC, the entire infrastructure consists of a Windows 10 laptop with 16 GB of memory running Oracle VM VirtualBox.
1. Create Virtual Machines (VMs)
Let's first create a VM with CentOS 7.x Linux. Download the CentOS 7-1611 VirtualBox image from osboxes.org and create the first node with 2 GB of memory. Once the VM is ready, make 6 clones of it. For each clone, follow the steps below:
1.1 Network setting:
We'll do a few things to emulate a static IP for each virtual machine; otherwise UCP and DTR will run into issues if the IP changes after installation. See the recommendation from Docker regarding static IPs and hostnames here. Enable two network adapters and configure them as below:
- Adapter 1: NAT - to allow the VM (guest) to communicate with the outside world through the host computer's network connection.
- Adapter 2: Host-only Adapter - to allow connections between host and guest. It also lets us set a static IP.
1.1.1) Identify the interface used for the Host-only Adapter (see the output of the command below):
$>ip a | grep inet
1.1.2) Create the file "/etc/sysconfig/network-scripts/ifcfg-enp0s8" on the guest (where enp0s8 is the network interface name) with content like this:
#/etc/sysconfig/network-scripts/ifcfg-enp0s8
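# Note: the original file contents were lost in this post's formatting. The lines below are a
# minimal sketch (an assumption, not the exact original) for a static IP on the host-only adapter:
DEVICE=enp0s8
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.56.101
NETMASK=255.255.255.0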
Note: make sure to assign a unique IP to each clone, e.g. 192.168.56.101, 192.168.56.102, 192.168.56.103, etc.
1.2 Hostname setup
To set the hostname on CentOS:
#Update the hostname:
$>sudo hostnamectl set-hostname centosddcucp
1.3) [optional] Update /etc/hosts file
For easy access to each node, add the mapping entries in /etc/hosts file of each VM. The following entries are per my configuration.
#/etc/hosts
192.168.56.101 centosddcucp mydockertest.com
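The remaining mappings were cut off in this post's formatting. Based on the IP addresses used later for DTR and the two workers, the rest of the file would look roughly like this (the exact hostnames and IPs for your setup may differ):
192.168.56.102 centosddcdtr01
192.168.56.103 centosddcwrk01
192.168.56.104 centosddcwrk02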
2. Install & Configure Commercially Supported (CS) Docker Engine
2.1) Installation:
Official installation document: https://docs.docker.com/engine/installation/linux/centos/#install-using-the-repository
You can install either from the repository or from a package; here, we will install using the repository. To install Docker Enterprise Edition (Docker EE) from the repository, you need to know the Docker EE repository URL associated with your licensed or trial subscription. To get this information:
- Go to https://store.docker.com/?overlay=subscriptions.
- Choose Get Details / Setup Instructions within the Docker Enterprise Edition for CentOS section.
- Copy the URL from the field labeled Copy and paste this URL to download your Edition.
- Set up Docker's repositories and install from there, for ease of installation and upgrade tasks. This is the recommended approach.
# 2.1.1 Remove any existing Docker repositories (like docker-ce.repo, docker-ee.repo) from /etc/yum.repos.d/.
# Note: Replace DOCKER-EE-URL in the steps below with the URL copied from your subscription.
# Note: DOCKER-EE-URL looks something like:
# https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
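# (Sketch) The repo-setup commands were truncated in this post. The following is an assumption
# based on Docker's documented CentOS procedure, not the original commands; replace
# DOCKER-EE-URL with the URL copied from your subscription.
$>sudo sh -c 'echo "DOCKER-EE-URL" > /etc/yum/vars/dockerurl'
$>sudo yum install -y yum-utils
$>sudo yum-config-manager --add-repo "DOCKER-EE-URL/docker-ee.repo"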
$>sudo yum install docker-ee
3. Install & Configure Universal Control Plane (UCP)
UCP is the cluster management solution for Docker Enterprise. In a nutshell, it is itself a containerized application that runs on the (CS) Docker Engine and facilitates user interaction (deploy, configure, monitor, etc.) through the API, Docker CLI, and GUI with the other containerized applications managed by DDC.
3.1 Prepare for Installation
- Start the virtual machine for UCP node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Prepare UCP node:
- Check overall UCP System requirement. Refer to https://docs.docker.com/datacenter/ucp/2.1/guides/admin/install/system-requirements/#hardware-and-software-requirements
- Open firewall ports to prepare for UCP install. Refer to https://docs.docker.com/datacenter/ucp/2.1/guides/admin/install/system-requirements/#network-requirements for all ports that need to be opened. On CentOS, you can use firewall-cmd command to open the firewall port(s) as shown below:
# Get the firewall zones
$>sudo firewall-cmd --get-zones
# Open the following ports for the public zone
# tcp_ports="443 80 2376 2377 4789 7946 12376 12379 12380 12381 12382 12383 12384 12385 12386 12387"
# udp_ports="4789 7946"
# sudo firewall-cmd --permanent --zone=public --add-port=${_port}/<tcp|udp>
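The loop that actually opens the ports did not survive the formatting of this post. A minimal sketch (an assumption, not the original commands) that opens the listed TCP and UDP ports and reloads the firewall could look like this:
# Open the required TCP and UDP ports for the public zone
tcp_ports="443 80 2376 2377 4789 7946 12376 12379 12380 12381 12382 12383 12384 12385 12386 12387"
udp_ports="4789 7946"
for _port in $tcp_ports; do sudo firewall-cmd --permanent --zone=public --add-port=${_port}/tcp; done
for _port in $udp_ports; do sudo firewall-cmd --permanent --zone=public --add-port=${_port}/udp; done
# Reload so the new rules take effect
sudo firewall-cmd --reload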
3.2 Install Docker Universal Control Plane (UCP)
# 3.2.1 Pull UCP image
...
# Note: in order to apply the license, launch the UCP Web UI (https://<ucp-host>) and it gives you an option
# to either upload an existing license ("Upload License") or "Get free trial or purchase license".
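The actual pull and install commands are missing from the snippet above. As a rough sketch (the image tag is an assumption; the host address is the UCP node's host-only IP from this PoC), a UCP 2.1-style installation looks like this:
# Pull the UCP image (tag is an example/assumption)
$>docker pull docker/ucp:2.1.4
# Run the interactive installer from the UCP node
$>docker run --rm -it --name ucp \
   -v /var/run/docker.sock:/var/run/docker.sock \
   docker/ucp:2.1.4 install --host-address 192.168.56.101 --interactive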
That concludes the installation and configuration of UCP. Next is to install the Docker Trusted Registry (DTR).
4. Install & Configure Docker Trusted Registry (DTR)
- Make sure system requirements are met. Refer to https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/system-requirements/#software-requirements
- Make sure port 80/tcp and 443/tcp are open (firewall).
4.1 Installation steps
- Start the virtual machine for DTR node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Add this (DTR) node to DDC UCP:
- Access the UCP Web UI.
- Click on "+ Add node" link.
- It shows you the command to run from the node. Copy the command; it looks something like:
docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377
Note: Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx
Note: the last few characters of the SWARM-TOKEN are masked with xxxxx.
- Run the command from DTR node:
$>docker swarm join --token \
  SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
  192.168.56.101:2377
This node joined a swarm as a worker.
- Generate DTR Installation command string from UCP Web UI:
- Access UCP Web UI.
- Under Install DTR (in newer versions of UCP, navigate to Admin Settings --> Docker Trusted Registry), click on Install Now, make the appropriate selections, and it gives you a command to copy. The command looks something like:
docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
--ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
--ucp-url https://192.168.56.101
Note: --ucp-node is the hostname of the UCP-managed node on which you want to deploy DTR (here, centosddcdtr01).
Here is a screenshot that shows the DTR installation command string:
- Start the installation.
Note: DTR installation details can be found at https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/#step-3-install-dtr
# 1. Pull the latest version of DTR
$>docker pull docker/dtr
# 2. Run installation command:
$>docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
--ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
--ucp-url https://192.168.56.101
INFO[0000] Beginning Docker Trusted Registry installation
ucp-password:
INFO[0009] Validating UCP cert
INFO[0009] Connecting to UCP
INFO[0009] UCP cert validation successful
INFO[0010] The UCP cluster contains the following nodes: centosddcucp, centosddcdtr01
INFO[0017] verifying [80 443] ports on centosddcdtr01
INFO[0000] Validating UCP cert
INFO[0000] Connecting to UCP
INFO[0000] UCP cert validation successful
INFO[0000] Checking if the node is okay to install on
INFO[0000] Connecting to network: dtr-ol
INFO[0000] Waiting for phase2 container to be known to the Docker daemon
INFO[0001] Starting UCP connectivity test
INFO[0001] UCP connectivity test passed
INFO[0001] Setting up replica volumes...
INFO[0001] Creating initial CA certificates
INFO[0001] Bootstrapping rethink...
...
...
INFO[0115] Installation is complete
INFO[0115] Replica ID is set to: fc27c2f482e5
INFO[0115] You can use flag '--existing-replica-id fc27c2f482e5' when joining other replicas to your Docker Trusted Registry Cluster
# 3. Make sure DTR is running:
In your browser, open the Docker Universal Control Plane web UI and go to the Applications screen. DTR should be listed as an application.
# 4. Access DTR Web UI:
https://<dtr-host> - in this PoC, https://192.168.56.102 (the --dtr-external-url used above)
If you ever need to remove DTR from the cluster:
# 1. Uninstall DTR:
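# The actual command was lost here. As a sketch (an assumption based on the DTR 2.x bootstrapper),
# removal is done with the 'destroy' subcommand, which prompts for the UCP URL, credentials and replica ID:
$>docker run -it --rm docker/dtr destroy --ucp-insecure-tls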
5. Setup DDC client node
In order to access all DDC nodes (UCP, DTR, Worker) and perform operations remotely, you need a Docker client configured to communicate with DDC securely.
5.1 Installation steps
- Start the virtual machine for the client node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Download the UCP client certificate bundle from UCP and extract it on the client host machine.
- Access the UCP Web UI and navigate to User Management
- Click on the user
- Click on "Create a Client Bundle" as shown below in the screenshot:
- Configure client so that it can securely connect to UCP:
# 1. Extract client bundle on Client node:
$> unzip ucp-bundle-osboxes.zip
Archive: ucp-bundle-osboxes.zip
extracting: ca.pem
extracting: cert.pem
extracting: key.pem
extracting: cert.pub
extracting: env.ps1
extracting: env.cmd
extracting: env.sh
# 2. load the DDC environment
$>eval $(<env.sh)
# 3. Make sure docker is connected to UCP:
$>docker ps
# Note: based on your access level, you should see the Docker processes running on UCP, DTR and Worker(s) node(s).
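For reference, env.sh from the client bundle essentially exports the TLS settings the Docker CLI needs to talk to UCP. Roughly (an approximation, not the exact generated file), it looks like this:
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$(pwd)
export DOCKER_HOST=tcp://192.168.56.101:443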
Note: If DTR is using the auto-generated self-signed certificate, your client Docker Engine
needs to be configured to trust the certificate presented by DTR; otherwise, you get a "x509: certificate signed by unknown authority" error.
Refer to: https://docs.docker.com/datacenter/dtr/2.1/guides/repos-and-images/#configure-your-host for detail.
For CentOS, you can install the DTR certificate in the client trust store as follows:
# 1. Pull the DTR certificate. Here 192.168.56.102 is my DTR node.
$>sudo curl -k https://192.168.56.102/ca -o /etc/pki/ca-trust/source/anchors/centosddcdtr01.crt
# 2. Update the CA trust store
$>sudo update-ca-trust
# 3. Restart the Docker Engine
$>sudo systemctl restart docker
5.2 Configure Notary client
By configuring the Notary client, you'll be able to sign Docker image(s) with the private keys in your UCP client bundle, which are trusted by UCP and easily traced back to your user account. Read the details here: https://docs.docker.com/datacenter/dtr/2.2/guides/user/access-dtr/configure-your-notary-client/
# 5.2.1 Download notary
# 5.2.3 Move to /usr/bin
# 5.2.4 Import UCP private key into notary key database:
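The actual commands were lost in the formatting. A minimal sketch follows; the release version and the key-import syntax are assumptions and may vary by notary version:
# 5.2.1 Download notary (example release; pick the version appropriate for your setup)
$>curl -L https://github.com/docker/notary/releases/download/v0.4.3/notary-Linux-amd64 -o notary
# Make it executable
$>chmod +x notary
# 5.2.3 Move to /usr/bin
$>sudo mv notary /usr/bin/notary
# 5.2.4 Import the UCP private key (key.pem from the client bundle) into the notary key database
$>notary key import key.pem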
Note: By default, the CLI does not sign an image while pushing to DTR. In order to sign an image while pushing, set the environment variable DOCKER_CONTENT_TRUST=1.
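For example, to sign while pushing (using the image we tag later in this post):
$>export DOCKER_CONTENT_TRUST=1
$>docker push 192.168.56.102/osboxes/lets-chat:1.0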
5.3 Install Docker Compose:
Docker Compose is a very handy tool that can be used to define and manage multi-container Docker application(s). For details refer to https://docs.docker.com/compose/overview/
Note: Docker for Mac and Windows may already include docker-compose. To find out whether Docker Compose is already installed, just run the docker-compose --version command.
$> docker-compose --version
Install:
$>sudo curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
$>sudo chmod +x /usr/local/bin/docker-compose
6. Setup Worker node(s):
The worker nodes are the real workhorses in a DDC setup; this is where production applications run. Below are the installation steps.
6.1 Installation steps
- Start the virtual machine for the worker node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Add this (worker) node to DDC UCP:
- Access the UCP Web UI.
- Click on "+ Add node" link.
- It shows you the command to run from the node. Copy the command; it looks something like:
docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377
Note: Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx
Note: the last few characters of the SWARM-TOKEN are masked with xxxxx.
- Run the command from worker node:
$>docker swarm join --token \
  SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
  192.168.56.101:2377
This node joined a swarm as a worker.
- Repeat steps 1 to 4 for each additional worker node
- Once all nodes have joined the swarm, run the following command from the client node (while connected to UCP) to confirm and list all the nodes:
$> docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ivczlaqfyjmtvs0xqk6aivy8p centosddcwrk02 Ready Active
usl2otwy3u6hj3vls9r77i45r centosddcwrk01 Ready Active
ywkywo08e6dagbe45aprmbhlc * centosddcucp Ready Active Leader
z2cobi7ag2qqsevfjsvye3d19 centosddcdtr01 Ready Active
- Optional: put a node like DTR or UCP in "drain" mode if you don't want application containers to be deployed there. Here, we put the DTR node in "drain" mode.
#command format: docker node update --availability drain <NODE-ID>
$> docker node update --availability drain z2cobi7ag2qqsevfjsvye3d19
z2cobi7ag2qqsevfjsvye3d19
$> docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
ivczlaqfyjmtvs0xqk6aivy8p centosddcwrk02 Ready Active
usl2otwy3u6hj3vls9r77i45r centosddcwrk01 Ready Active
ywkywo08e6dagbe45aprmbhlc * centosddcucp Ready Active Leader
z2cobi7ag2qqsevfjsvye3d19 centosddcdtr01 Ready Drain
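To bring a drained node back later, set its availability back to active (same command, different availability value):
$> docker node update --availability active z2cobi7ag2qqsevfjsvye3d19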
7. Create Additional User, Access Label and Network:
7.1 Create additional user, team and permission label as necessary from UCP Web UI.
Follow Docker documentation (https://docs.docker.com/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-users/) to create user, team and permission levels as required.
7.2 Create network.
As per Docker documentation, containers connected to the default bridge network can communicate with each other by IP address. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead. Traditionally, you could link two containers together using the legacy 'docker run --link ...' option, but here, we are going to define a network and attach our containers, so that they can communicate as required. Details can be found here https://docs.docker.com/engine/userguide/networking/#user-defined-networks
Note: the docker links feature has been deprecated in favor of user-defined networks.
Service discovery in Docker is network-scoped, meaning the embedded DNS functionality in Docker can be used only by containers or tasks on the same network to resolve each other's addresses. So our plan here is to deploy a set of services that can communicate with each other using DNS.
Note: If the destination and source container or service are not on the same network, Docker Engine forwards the DNS query to the default DNS server.
# Create network
# From client node, first connect to UCP:
eval $(<env.sh)
$> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
naf8hvyx22n6lsvb4bq43z968

# Verify the network
$> docker network ls
NETWORK ID      NAME                             DRIVER    SCOPE
eea917ac864c    centosddcdtr01/bridge            bridge    local
065f72b45f37    centosddcdtr01/docker_gwbridge   bridge    local
229f9949f85f    centosddcucp/bridge              bridge    local
d8a09aed43ae    centosddcucp/docker_gwbridge     bridge    local
a18d616d5fed    centosddcucp/host                host      local
fc423b7dc25f    centosddcucp/none                null      local
o40j6xknr6ax    dtr-ol                           overlay   swarm
vwtzfva8q8r3    ingress                          overlay   swarm
naf8hvyx22n6    my_hrm_network                   overlay   swarm
tbmwjleceolg    ucp-hrm                          overlay   swarm
Note: if you get the following error
Error response from daemon: Error response from daemon: Error response from daemon: rpc error: code = 3 desc = name must be valid as a DNS name component
while creating the network, check your network name. Make sure it does not contain any dot '.'. The error message itself is a little bit confusing. Refer to issue 31772 for details.
7.3 Enable "HTTP Routing Mesh" if necessary
- Login to the UCP Web UI.
- Navigate to Admin Settings > Routing Mesh.
- Check Enable HTTP Routing Mesh.
- Configure the ports for HRM to listen on, with the defaults being 9080 and 9443. The HTTPS port defaults to 8443 so that it doesn't interfere with the default UCP management port (443).
- Disable: Admin Settings --> Routing Mesh --> Uncheck "Enable HTTP routing mesh". Click on Update button.
- Enable: Admin Settings --> Routing Mesh --> check "Enable HTTP routing mesh". Click on Update button.
8. Docker Application Deployment
8.1 Preparation:
For this PoC, we're going to build a custom image of the Lets-Chat app and deploy it using Docker Compose. Here is how our Dockerfile looks.
Note: all the steps listed in step 8.x are executed on or from the client node.
8.1.1 Create Dockerfile for lets-chat:
FROM sdelements/lets-chat:latest
CMD (sleep 60; npm start)
8.1.2 Create lets-chat image using Dockerfile. Run 'docker build ...' command from the same directory where the Dockerfile is located.
$> docker build -t lets-chat:1.0 .
Sending build context to Docker daemon 4.608 kB
Step 1/2 : FROM sdelements/lets-chat:latest
latest: Pulling from sdelements/lets-chat
6a5a5368e0c2: Pull complete
7b9457ec39de: Pull complete
...
...
876c39157780: Pull complete
Digest: sha256:5b923d428176250653530fdac8a9f925043f30c511b77701662d7f8fab74961c
Status: Downloaded newer image for sdelements/lets-chat:latest
 ---> 296501fb5b70
Step 2/2 : CMD (sleep 60; npm start)
 ---> Running in 194eb91d5f59
 ---> 14e03b359b1d
Removing intermediate container 194eb91d5f59
Successfully built 14e03b359b1d
8.1.3 Pull the Mongo DB image:
$> docker pull mongo
Using default tag: latest
latest: Pulling from library/mongo
f5cc0ee7a6f6: Pull complete
d99b18c5f0ce: Pull complete
...
...
72dc91cfe502: Pull complete
d610498cfcc7: Pull complete
Digest: sha256:f1ae736ea5f115822cf6fcef6458839d87bdaea06f40b97934ad913ed348f67d
Status: Downloaded newer image for mongo:latest
8.1.4 Rename/Tag the images as per the DTR namespace:
# Tag lets-chat image
$> docker tag lets-chat:1.0 192.168.56.102/osboxes/lets-chat:1.0
# Tag mongo image
$> docker tag mongo:latest 192.168.56.102/osboxes/mongo:latest
# List the images
$> docker images
REPOSITORY                          TAG      IMAGE ID       CREATED         SIZE
192.168.56.102/osboxes/lets-chat    1.0      14e03b359b1d   5 minutes ago   255 MB
lets-chat                           1.0      14e03b359b1d   5 minutes ago   255 MB
mongo                               latest   71c101e16e61   6 days ago      358 MB
192.168.56.102/osboxes/mongo        latest   71c101e16e61   6 days ago      358 MB
....
8.1.5 Push the images to DTR:
Note: before pushing the images, you need to create a "repository" for each image (if one doesn't already exist). Create the corresponding repo from the DTR Web UI:
# Login to DTR
$ docker login 192.168.56.102 -u osboxes -p
Login Succeeded
# Push the mongo image to DTR
$ docker push 192.168.56.102/osboxes/mongo
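The post only shows the mongo push; the lets-chat image tagged earlier is pushed the same way:
$ docker push 192.168.56.102/osboxes/lets-chat:1.0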
8.1.6 Pull the images to all DDC worker nodes where the images will be instantiated into the corresponding containers.
# Connect to UCP.
# Note: make sure you run the eval command below from the directory where
# the client bundle was extracted
$>eval $(<env.sh)

#Pull lets-chat
$> docker pull 192.168.56.102/osboxes/lets-chat:1.0
centosddcwrk01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
centosddcucp: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
centosddcwrk02: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
centosddcdtr01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded

# Pull mongo
$> docker pull 192.168.56.102/osboxes/mongo
Using default tag: latest
centosddcwrk01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
centosddcucp: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
centosddcwrk02: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
centosddcdtr01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
8.1.7 Create Docker Compose file:
Here is how our docker-compose.yml looks like:
version: "3"
services:
  mongo:
    image: 192.168.56.102/osboxes/mongo:latest
    networks:
      - my_hrm_network
    deploy:
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
      labels:
        - "com.docker.ucp.access.label=dev"
  lets-chat:
    image: 192.168.56.102/osboxes/lets-chat:1.0
    networks:
      - my_hrm_network
    ports:
      - "8080"
    deploy:
      placement:
        constraints: [node.role == worker]
      mode: replicated
      replicas: 4
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
      labels:
        - "com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080"
        - "com.docker.ucp.access.label=dev"
networks:
  my_hrm_network:
    external:
      name: my_hrm_network
A few things to notice in the docker-compose.yml above:
- placement constraints: [node.role == manager] for mongo. We are instructing Docker to instantiate the mongo container only on a node that has the manager role. The role can be "worker" or "manager".
- Label com.docker.ucp.access.label=dev defines an access constraint by label. See https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/ for details.
- Label com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080: here the lets-chat application is configured for HRM and is to be accessed using host mydockertest.com on port 8080, which will be our HA-Proxy's host and port. Docker uses DNS for service discovery as services are created, and Docker has different built-in routing meshes for high availability. The HTTP Routing Mesh (HRM) is an application-layer routing mesh that routes HTTP traffic based on the DNS hostname and is part of UCP 2.0 and later.
- Also note that we are not (explicitly) exposing a port for mongo; since mongo and lets-chat are on the same network 'my_hrm_network', they will be able to communicate even though they may be instantiated on different hosts (nodes). The lets-chat application listens on port 8080, but we are not publishing it explicitly to the host because we are implementing containers with scaling in mind and relying on Docker HRM. If you publish the port explicitly to the host (e.g. -p 8080:8080), it becomes a problem when you have to instantiate more than one replica on the same host, because only one process can listen on a given port on the same IP. More details about HRM and service discovery: https://docs.docker.com/engine/swarm/ingress/#configure-an-external-load-balancer and https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/. A good read about service discovery, load balancing, Swarm, Ingress and HRM: https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Service_Discovery_and_Load_Balancing_with_Docker_Universal_Control_Plane_(UCP)
8.2 Deployment:
8.2.1 Validate docker-compose.yml:
# Validate docker-compose.yml; run the following command from the same directory
# where docker-compose.yml is located.
$>docker-compose -f docker-compose.yml config
WARNING: Some services (lets-chat) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
In our case, we are going to use 'docker stack deploy ...' instead. So, we can safely ignore the warning.
8.2.2 Deploy
# Execute docker stack deploy command using the compose-file
$> docker stack deploy --compose-file docker-compose.yml dev_lets-chat
Creating service dev_lets-chat_lc-mongo
Creating service dev_lets-chat_lets-chat

# Verify service(s) are created:
$> docker stack ls
NAME           SERVICES
dev_lets-chat  2

# See the service details:
$> docker stack services dev_lets-chat
ID            NAME                      MODE        REPLICAS  IMAGE
kib7peniroci  dev_lets-chat_mongo       replicated  1/1       192.168.56.102/osboxes/mongo:latest
t7s5xpgdxncs  dev_lets-chat_lets-chat   replicated  4/4       192.168.56.102/osboxes/lets-chat:1.0
As you can see, one instance of mongo and 4 instances of lets-chat have been created.
If you want to learn more about stack deployment, refer to https://docs.docker.com/engine/swarm/stack-deploy/#deploy-the-stack-to-the-swarm
9. Setup HA-Proxy node:
Here we will use a simple HA-Proxy configuration just to show the working idea. Refer to the HA-Proxy documentation and the Docker documentation for HA-Proxy for details.
Note: for this PoC, we are deploying ha-proxy outside of the swarm cluster.
9.1 Setup steps
- Start the virtual machine for HA-Proxy node (created in step #1. Create Virtual Machines (VMs))
- Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
- Make sure Docker is running.
- Prepare the configuration file for HA-Proxy.
# /etc/haproxy/haproxy.cfg, version 1.7
global
    maxconn 4096

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http
    bind *:8080
    option http-server-close
    stats uri /haproxy?stats
    default_backend bckendsrvs

backend bckendsrvs
    balance roundrobin
    server worker1 192.168.56.103:8080 check
    server worker2 192.168.56.104:8080 check
A few notes about the haproxy.cfg above:
- Backend connections. We have 4 replicas (2 replicas per node), but as you can see only two back-end connections are defined in the configuration file. That is the beauty of using Docker Swarm with HRM: as long as traffic reaches any of the HRM nodes, whether or not an actual replica is running there, swarm automatically directs the traffic to one of the replicas running on one of the available nodes. Docker swarm also takes care of load balancing among all replicas.
- The check option at the end of the server directives specifies that health checks should be performed on those back-end servers.
- The frontend section defines the bind (IP and port) configuration for the proxy and a reference to the corresponding backend configuration. In this case, it is listening on all available IPs on port 8080.
- 'stats uri' defines the status URI.
9.2 Create Dockerfile for HA-Proxy
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Note: In this case, haproxy.cfg and the Dockerfile are located in the same directory from which we are executing the 'docker build ...' command shown below.
9.3) Create custom image
# Create Image
$> docker build -t my_haproxy:1.7 .
9.4 Verify the configuration and Instantiate ha-proxy container:
# Verify the configuration file:
$> docker run -it --rm --name haproxy-syntax-check my_haproxy:1.7 haproxy -c \
   -f /usr/local/etc/haproxy/haproxy.cfg
haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -c -f /usr/local/etc/haproxy/haproxy.cfg -Ds
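The command that actually instantiates the ha-proxy container did not survive the formatting. A minimal sketch, assuming the my_haproxy:1.7 image built above and publishing port 8080 (the port the frontend binds to):
# Run ha-proxy, publishing port 8080 on the host
$> docker run -d --name my-haproxy -p 8080:8080 my_haproxy:1.7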
10. Access and Verify Application
10.1) Accessing application:
Once the ha-proxy is running, access the application. Make sure the firewall is not blocking the port that ha-proxy is listening on.
http://<ha-proxy-host>:<ha-proxy-port>/<application-uri>
Important: In order to access the application, you need to make sure that the hostname configured as the HRM external route (the 'external_route' value in the com.docker.ucp.mesh.http label) resolves to the IP address of the ha-proxy node. In our case it is 'mydockertest.com', so make sure 'mydockertest.com' resolves to the IP address of the ha-proxy. This is how HRM, along with Swarm, discovers the services and routes the requests in the Ingress cluster, and how we are able to scale containers dynamically.
10.2) Application verification:
10.2.1) Get the stats from haproxy. Among other things, the stats page shows the request count and which Swarm node is serving the request:
http://<ha-proxy-host>:8080/haproxy?stats (in our case http://mydockertest.com:8080/haproxy?stats, per the 'stats uri' in haproxy.cfg)
10.2.2) First, access lets-chat through the Web UI (http://mydockertest.com:8080). Create your account and log in using your credentials. Once you have created the account and are able to log in, you can verify that lets-chat is making a successful connection to the mongo db as follows:
# Inspect the lets-chat instance:
$> docker inspect 3d046c183b6d | grep mongo
Once you have Docker Datacenter up and running, upgrade it to Docker EE 2.0 and UCP 3.x to have the choice of Swarm or Kubernetes orchestration. See my post Upgrade to Docker EE 2.0 and UCP 3.x for Choice of Swarm or Kubernetes Orchestration.
Looks like you're really into Docker! See my other related blog posts below:
- Using Docker Secrets with IBM WebSphere Liberty Profile Application Server
- Make your container deployment portable
- Experience sharing - Docker Datacenter (this post)
- Setting up CLI environment for IBM Bluemix
- Quick start with IBM Datapower Gateway Docker Edition