
Experience Sharing - Docker Datacenter/Mirantis Docker Enterprise

This post details my experience working with Docker Datacenter (DDC)/Mirantis Docker Enterprise - an integrated container management and security solution that is now part of the Docker Enterprise Edition (EE) offering. Docker EE is a certified, commercially supported solution. Refer to https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/dee-intro.html for more information on Docker EE. I've worked on both a production implementation and a Proof of Concept (PoC) of DDC. This blog post mainly captures my experience from the PoC.
The first step in building the DDC is to define and design its architecture. Docker provides a reference architecture (https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Docker_EE_Best_Practices_and_Design_Considerations), which is a good starting point. For this PoC, we will build a DDC based on a subset of the reference architecture: instead of three Universal Control Plane (UCP) nodes, we'll have just one; instead of three Docker Trusted Registry (DTR) nodes, we'll have one; and instead of four UCP worker nodes for applications, we'll have just two. As an application load balancer, we'll use Dockerized HA-Proxy running on a separate node (not managed by UCP). For this PoC, we'll use CentOS 7.x virtual machines created using Oracle VirtualBox. I'm also going to highlight a few tips and tricks associated with VirtualBox. We'll also create a client Docker node to communicate with the DDC components remotely.

Since we have decided to work with a subset of the Docker reference architecture, we can go ahead and start building the infrastructure. For this PoC, the entire infrastructure consists of a Windows 10 laptop with 16 GB of memory running Oracle VM VirtualBox.

1. Create Virtual Machines (VMs)

Let's first create a VM with CentOS 7.x Linux. Download the CentOS 7-1611 VirtualBox image from osboxes.org and create the first node with 2 GB of memory.

Once the VM is ready, make 6 clones of it. For each clone follow the steps below:

1.1 Network setting:

We'll do a few things to emulate a static IP for each virtual machine; otherwise UCP and DTR will run into issues if the IP changes after installation. See the recommendation from Docker regarding static IPs and hostnames. Enable two network adapters and configure them as below:

  • Adapter 1: NAT - to allow the VM (guest) to communicate with the outside world through host computer's network connection.
  • Adapter 2: Host-only Adapter - to allow connection between host and guest. It also helps us to set static IP.
Refer to https://gist.github.com/pjdietz/5768124 for details on how to set this up. One more thing to remember: if you are using CentOS 7 and want to set a permanent static IP, you need to use an interface configuration file (refer to https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-networkscripts-interfaces.html) instead of /etc/network/interfaces. In my case, I did the following:

1.1.1) Identified the interface used for the Host-only Adapter (enp0s8 in the output below):

$>ip a | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
...
    inet 192.168.56.101/24 brd 192.168.56.255 scope global enp0s8
...
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
...

1.1.2) Created the file "/etc/sysconfig/network-scripts/ifcfg-enp0s8" on the guest, where enp0s8 is the network interface name, with content like this:

#/etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.56.0
NETMASK=255.255.255.0
IPADDR=192.168.56.101
BROADCAST=192.168.56.255
USERCTL=no

Note: make sure to assign a unique IP to each clone, e.g. 192.168.56.101, 192.168.56.102, 192.168.56.103, etc.
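
After editing the interface file on a clone, restart the network service and confirm the address took effect. A minimal check, assuming the interface is named enp0s8 as above (the output line is illustrative):

# Apply the new interface configuration and verify the address
$>sudo systemctl restart network
$>ip addr show enp0s8 | grep "inet "
    inet 192.168.56.102/24 brd 192.168.56.255 scope global enp0s8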

1.2 Hostname setup

In order to set the hostname on CentOS, run the following:
#Update the hostname:
#Set hostname for DDC UCP node 

$>sudo hostnamectl set-hostname centosddcucp
# restart the systemd-hostnamed daemon
$>sudo systemctl restart systemd-hostnamed

1.3) [optional] Update /etc/hosts file

For easy access to each node, add the mapping entries to the /etc/hosts file of each VM. The following entries reflect my configuration.

#/etc/hosts 
192.168.56.101 centosddcucp
192.168.56.102 centosddcdtr01
192.168.56.103 centosddcwrk01
192.168.56.104 centosddcwrk02
192.168.56.105 centosddcclnt
192.168.56.106 centosddchaproxy mydockertest.com
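
Once the host entries are in place on every VM, a quick loop from any node confirms that all hostnames resolve and respond. A minimal sketch, assuming the hostnames listed above:

# Ping each node once to verify name resolution and connectivity
$>for h in centosddcucp centosddcdtr01 centosddcwrk01 centosddcwrk02 centosddcclnt centosddchaproxy; do ping -c 1 -W 2 $h > /dev/null && echo "$h reachable" || echo "$h NOT reachable"; done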



2. Install & Configure Commercially Supported (CS) Docker Engine

2.1) Installation:

Official installation document: https://docs.docker.com/engine/installation/linux/centos/#install-using-the-repository
You can install either using the repository or install from a package. Here, we will install using the repository.  To install Docker Enterprise Edition (Docker EE) using the repository, you need to know the Docker EE repository URL associated with your licensed or trial subscription. To get this information:

  • Go to https://store.docker.com/?overlay=subscriptions.
  • Choose Get Details / Setup Instructions within the Docker Enterprise Edition for CentOS section.
  • Copy the URL from the field labeled Copy and paste this URL to download your Edition.
  • Set up Docker's repositories and install from there, for ease of installation and upgrade tasks. This is the recommended approach.

# 2.1.1 Remove any existing Docker repositories (like docker-ce.repo, docker-ee.repo) from /etc/yum.repos.d/.

# 2.1.2 Store your Docker EE repository URL in a yum variable in /etc/yum/vars/. 

# Note: Replace <DOCKER-EE-URL> below with the URL you noted from your subscription.

$>sudo sh -c 'echo "<DOCKER-EE-URL>" > /etc/yum/vars/dockerurl'

# Note: DOCKER-EE-URL looks something like:
# https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Note: I've replaced the actual text with 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' for confidentiality reason.


# 2.1.3 Install required packages:
$>sudo yum install -y yum-utils device-mapper-persistent-data lvm2
...
Updated:
  lvm2.x86_64 7:2.02.166-1.el7_3.4

Dependency Updated:
  device-mapper.x86_64 7:1.02.135-1.el7_3.4              device-mapper-event.x86_64 7:1.02.135-1.el7_3.4         device-mapper-event-libs.x86_64 7:1.02.135-1.el7_3.4
  device-mapper-libs.x86_64 7:1.02.135-1.el7_3.4         lvm2-libs.x86_64 7:2.02.166-1.el7_3.4

Complete!

# 2.1.4 Add stable repository:


$>sudo yum-config-manager --add-repo "<DOCKER-EE-URL>/docker-ee.repo"
Loaded plugins: fastestmirror, langpacks
adding repo from: https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/docker-ee.repo
grabbing file https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/docker-ee.repo to /etc/yum.repos.d/docker-ee.repo
repo saved to /etc/yum.repos.d/docker-ee.repo

# 2.1.5 Update Yum package index:


$>sudo yum makecache fast
...
Loading mirror speeds from cached hostfile
 * base: mirror.its.sfu.ca
 * extras: muug.ca
 * updates: muug.ca
Metadata Cache Created

# 2.1.6 Install Docker CS Engine 

$>sudo yum install docker-ee
...
Installed:
  docker-ee.x86_64 0:17.03.2.ee.4-1.el7.centos

Dependency Installed:
  docker-ee-selinux.noarch 0:17.03.2.ee.4-1.el7.centos

# 2.1.7 Add the following content (if the file doesn't exist) or edit it (if required) in /etc/docker/daemon.json
   
   {
      "storage-driver": "devicemapper"
   }
 
# 2.1.8 Check the docker service status, enable (if required) and start

$>sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com


$>sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.


$>sudo systemctl start docker


$>sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

# 2.1.9 Manage Docker as a non-root user
# 2.1.9.1 Add the docker group (if it doesn't exist)
$>sudo groupadd docker


# 2.1.9.2 Add your user to the 'docker' group
$>sudo usermod -aG docker $USER


# Log out and log back in, then run:


$>docker ps
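
Before moving on to UCP, it is worth confirming the engine version and that the storage driver from daemon.json was picked up. A quick check (the output shown reflects my nodes and is illustrative):

# Verify engine version and storage driver
$>docker version --format '{{.Server.Version}}'
17.03.2-ee-4
$>docker info | grep -i "storage driver"
Storage Driver: devicemapper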



3. Install & Configure Universal Control Plane (UCP)

UCP is the cluster management solution for Docker Enterprise. In a nutshell, it is itself a containerized application that runs on the (CS) Docker Engine and facilitates user interaction (deploy, configure, monitor, etc.) through the API, Docker CLI and GUI with the other containerized applications managed by DDC.

3.1 Prepare for Installation

  1. Start the virtual machine for UCP node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Prepare UCP node:

# Get the firewall zones
$>sudo firewall-cmd --get-zones
work drop internal external trusted home dmz public block


# Open the following ports in the public zone
tcp_ports="443 80 2376 2377 4789 7946 12376 12379 12380 12381 12382 12383 12384 12385 12386 12387"
udp_ports="4789 7946"
for _port in $tcp_ports; do sudo firewall-cmd --permanent --zone=public --add-port=${_port}/tcp; done
for _port in $udp_ports; do sudo firewall-cmd --permanent --zone=public --add-port=${_port}/udp; done
# For example, adding a single port looks like this:
$> sudo firewall-cmd --permanent --zone=public --add-port=2376/tcp;
# Once all ports are added, reload the firewall.
$> sudo firewall-cmd --reload;
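
After the reload, you can verify that the ports actually ended up in the public zone. A minimal check (the order of ports in the output may differ):

# List the ports open in the public zone
$>sudo firewall-cmd --zone=public --list-ports
80/tcp 443/tcp 2376/tcp 2377/tcp 4789/tcp 7946/tcp 12376/tcp 12379/tcp 12380/tcp 12381/tcp 12382/tcp 12383/tcp 12384/tcp 12385/tcp 12386/tcp 12387/tcp 4789/udp 7946/udp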


3.2 Install Docker Universal Control Plane (UCP)


# 3.2.1 Pull UCP image

$>docker pull docker/ucp:latest
latest: Pulling from docker/ucp
709515475419: Pull complete
6beede3f81f7: Pull complete
37a4fec5e659: Pull complete
Digest: sha256:b8c4a162b5ec6224b31be9ec52c772a8ba3f78995f691237365cfa728341e942
Status: Downloaded newer image for docker/ucp:latest

# 3.2.2 Install UCP
Note: 192.168.56.101 is my UCP node:


$>sudo docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install --host-address 192.168.56.101 --interactive
INFO[0000] Verifying your system is compatible with UCP 2.1.4 (10e6c44)
INFO[0000] Your engine version 17.03.2-ee-4, build 1e6d71e (3.10.0-514.el7.x86_64) is compatible
WARN[0000] Your system uses devicemapper.  We can not accurately detect available storage space.  Please make sure you have at least 3.00 GB available in /var/lib/docker
Admin Username: osboxes
Admin Password:
Confirm Admin Password:
INFO[0033] All required images are present

...

INFO[0001] Initializing a new swarm at 192.168.56.101
INFO[0018] Establishing mutual Cluster Root CA with Swarm
...
INFO[0021] Deploying UCP Service
INFO[0085] Installation completed on centosddcucp (node ywkywo08e6dagbe45aprmbhlc)
INFO[0085] UCP Instance ID: IJUU:N6K6:KVJK:W3BO:LXVL:FBB4:RKF5:XNHM:HTQI:TZVL:XFIO:Z253
INFO[0085] UCP Server SSL: SHA-256 Fingerprint=D2:68:F3:...........:BD
INFO[0085] Login to UCP at https://192.168.56.101:443
INFO[0085] Username: osboxes
INFO[0085] Password: (your admin password)

# 3.2.3 Verify UCP containers are running:


$>docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED              STATUS                        PORTS                                                                             NAMES
f615d8c1d5ba        docker/ucp-controller:2.1.4   "/bin/controller s..."   55 seconds ago       Up 55 seconds (healthy)       0.0.0.0:443->8080/tcp                                                             ucp-controller
255119a3444e        docker/ucp-swarm:2.1.4        "/bin/swarm manage..."   About a minute ago   Up 56 seconds                 0.0.0.0:2376->2375/tcp                                                            ucp-swarm-manager
08a6e3789fed        docker/ucp-auth:2.1.4         "/usr/local/bin/en..."   About a minute ago   Up 57 seconds (healthy)       0.0.0.0:12386->4443/tcp                                                           ucp-auth-worker
caa4f9b4543a        docker/ucp-metrics:2.1.4      "/bin/entrypoint.s..."   About a minute ago   Up 58 seconds                 0.0.0.0:12387->12387/tcp                                                          ucp-metrics
58fa77c78bb2        docker/ucp-auth:2.1.4         "/usr/local/bin/en..."   About a minute ago   Up 58 seconds                 0.0.0.0:12385->4443/tcp                                                           ucp-auth-api
867f6aec884c        docker/ucp-auth-store:2.1.4   "rethinkdb --bind ..."   About a minute ago   Up About a minute             0.0.0.0:12383-12384->12383-12384/tcp                                              ucp-auth-store
b3e536f9309b        docker/ucp-etcd:2.1.4         "/bin/etcd --data-..."   About a minute ago   Up About a minute (healthy)   2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:12379->2379/tcp   ucp-kv
927cc6a7b5c8        docker/ucp-cfssl:2.1.4        "/bin/ucp-ca serve..."   About a minute ago   Up About a minute             0.0.0.0:12381->12381/tcp                                                          ucp-cluster-root-ca
ef00fe9f7200        docker/ucp-cfssl:2.1.4        "/bin/ucp-ca serve..."   About a minute ago   Up About a minute             0.0.0.0:12382->12382/tcp                                                          ucp-client-root-ca
b56a56aeeddd        docker/ucp-agent:2.1.4        "/bin/ucp-agent pr..."   About a minute ago   Up About a minute             0.0.0.0:12376->2376/tcp                                                           ucp-proxy
403f88c79f46        docker/ucp-agent:2.1.4        "/bin/ucp-agent agent"   About a minute ago   Up About a minute             2376/tcp                                                                          ucp-agent.ywkywo08e6dagbe45aprmbhlc.18vdyg9uqfuepzslnu0uclxzj

# 3.2.4 Apply license

# Note: in order to apply the license, launch the UCP Web UI (https://<ucp-host>)
# and it gives you two options - either upload an existing license ("Upload License") or
# "Get free trial or purchase license".

$> docker node ls
ID                           HOSTNAME      STATUS  AVAILABILITY  MANAGER STATUS
ywkywo08e6dagbe45aprmbhlc *  centosddcucp  Ready   Active        Leader


That concludes the installation and configuration of UCP. Next is to install the Docker Trusted Registry (DTR).

4. Install & Configure Docker Trusted Registry (DTR)

4.1 Installation steps

  1. Start the virtual machine for DTR node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Add this (DTR) node to DDC UCP:
    • Access the UCP Web UI.
    • Click on "+ Add node" link.
    • It shows you the command to run from the node. Copy the command; it looks something like:
      docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377

      Note: Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx

      Note: last 4 digits of SWARM-TOKEN are replaced with xxxxx.


    • Run the command from DTR node:
      $>docker swarm join --token \
      SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
      192.168.56.101:2377

      This node joined a swarm as a worker.
  5. Generate DTR Installation command string from UCP Web UI:
    • Access UCP Web UI.
    • Under Install DTR (in the newer version of UCP, you have to navigate to Admin Settings --> Docker Trusted Registry), click on Install Now, make the appropriate selections and it gives you a command to copy. The command looks something like:
      docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
      --ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
      --ucp-url https://192.168.56.101

      Note: --ucp-node is the hostname of the UCP-managed node where you want to deploy DTR.

  6. Start the installation.
    Note: DTR installation details can be found at https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/#step-3-install-dtr

    # 1. Pull the latest version of DTR
    $>docker pull docker/dtr
    # 2. Run installation command: 


    $>docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
    --ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
    --ucp-url https://192.168.56.101

    INFO[0000] Beginning Docker Trusted Registry installation
    ucp-password:
    INFO[0009] Validating UCP cert
    INFO[0009] Connecting to UCP
    INFO[0009] UCP cert validation successful
    INFO[0010] The UCP cluster contains the following nodes: centosddcucp, centosddcdtr01
    INFO[0017] verifying [80 443] ports on centosddcdtr01
    INFO[0000] Validating UCP cert
    INFO[0000] Connecting to UCP
    INFO[0000] UCP cert validation successful
    INFO[0000] Checking if the node is okay to install on
    INFO[0000] Connecting to network: dtr-ol
    INFO[0000] Waiting for phase2 container to be known to the Docker daemon
    INFO[0001] Starting UCP connectivity test
    INFO[0001] UCP connectivity test passed
    INFO[0001] Setting up replica volumes...
    INFO[0001] Creating initial CA certificates
    INFO[0001] Bootstrapping rethink...
    ...
    ...
    INFO[0115] Installation is complete
    INFO[0115] Replica ID is set to: fc27c2f482e5
    INFO[0115] You can use flag '--existing-replica-id fc27c2f482e5' when joining other replicas to your Docker Trusted Registry Cluster

    # 3. Make sure DTR is running:
       In your browser, navigate to the Docker Universal Control Plane web UI, and navigate to the Applications screen. DTR should be listed as an application.

    # 4. Access DTR Web UI:
    https://<dtr-external-url> (in this PoC, https://192.168.56.102)
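
    You can also confirm from the DTR node itself that the DTR replica containers are up. A minimal check; the container names below are illustrative, but DTR names its containers with a dtr- prefix followed by the replica ID:

    # On the DTR node: list the DTR replica containers
    $>docker ps --format '{{.Names}}' | grep dtr
    dtr-nginx-fc27c2f482e5
    dtr-api-fc27c2f482e5
    dtr-registry-fc27c2f482e5
    ...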

Troubleshooting note: if you find your DTR is having some issue and you need to remove and re-install it, follow this link: https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/uninstall/

# 1. Uninstall DTR:
$>sudo docker run -it --rm docker/dtr destroy --ucp-insecure-tls


INFO[0000] Beginning Docker Trusted Registry replica destroy
ucp-url (The UCP URL including domain and port): https://192.168.56.101:443
ucp-username (The UCP administrator username): osboxes
ucp-password:
INFO[0049] Validating UCP cert
INFO[0049] Connecting to UCP
INFO[0049] UCP cert validation successful
INFO[0049] No replicas found in this cluster. If you are trying to clean up a broken replica, provide its replica ID manually.
Choose a replica to destroy: bd02a612d0c0
INFO[0109] Force removing replica
INFO[0110] Stopping containers
INFO[0110] Removing containers
INFO[0110] Removing volumes
INFO[0110] Replica removed.

# 2. Remove DTR node from UCP
$>docker node ls
$>docker node rm <dtr-node-id>

# Follow the installation steps (4, 5, 6) again.

5. Setup DDC client node

In order to access all DDC nodes (UCP, DTR, worker) and perform operations remotely, you need a Docker client configured to communicate with DDC securely.

5.1 Installation steps

  1. Start the virtual machine for the client node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Download the UCP client certificate bundle from UCP and extract it on the client host machine.     
  • Access the UCP Web UI and navigate to User Management
  • Click on your user
  • Click on "Create a Client Bundle"
  5. Configure the client so that it can securely connect to UCP:
    # 1. Extract client bundle on Client node:
    $> unzip ucp-bundle-osboxes.zip
    Archive:  ucp-bundle-osboxes.zip
     extracting: ca.pem
     extracting: cert.pem
     extracting: key.pem
     extracting: cert.pub
     extracting: env.ps1
     extracting: env.cmd
     extracting: env.sh
    # 2. load the DDC environment
    $>eval $(<env.sh)
    # 3. Make sure docker is connected to UCP:
    $>docker ps
    # Note: based on your access level, you should see the Docker processes running on UCP, DTR and Worker(s) node(s).








  • Configure client so that it can securely connect to DTR and push/pull images.
    Note: If DTR is using the auto-generated self-signed cert, your client Docker Engine
    needs to be configured to trust the certificate presented by DTR; otherwise, you get an "x509: certificate signed by unknown authority" error.
    Refer to https://docs.docker.com/datacenter/dtr/2.1/guides/repos-and-images/#configure-your-host for details.
    For CentOS, you can install the DTR certificate in the client trust store as follows:
    # 1. Pull the DTR certificate. Here 192.168.56.102 is my DTR node.
    $>sudo curl -k https://192.168.56.102/ca -o /etc/pki/ca-trust/source/anchors/centosddcdtr01.crt
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  2009  100  2009    0     0   8999      0 --:--:-- --:--:-- --:--:--  9049
    # 2. Update CA Trust

    $>sudo update-ca-trust
    # 3. Restart the Docker Engine
    $> sudo systemctl restart docker
    # 4. Test the connectivity from client node to DTR node:
    $> docker login 192.168.56.102
    Username: osboxes
    Password:
    Login Succeeded
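
    With the login working, a quick end-to-end test is to push a small image from the client node. This sketch assumes an 'osboxes/hello-world' repository has already been created in DTR (repository creation is covered later, in step 8.1.5):

    # Optional: push a test image to DTR
    $> docker pull hello-world
    $> docker tag hello-world 192.168.56.102/osboxes/hello-world:test
    $> docker push 192.168.56.102/osboxes/hello-world:test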

  5.2 Configure Notary client

    By configuring the Notary client, you'll be able to sign Docker image(s) with the private keys in your UCP client bundle, which are trusted by UCP and easily traced back to your user account. Read the details here: https://docs.docker.com/datacenter/dtr/2.2/guides/user/access-dtr/configure-your-notary-client/

    # 5.2.1 Download notary
    $>curl -L https://github.com/docker/notary/releases/download/v0.4.3/notary-Linux-amd64 -o notary


      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   591    0   591    0     0   1184      0 --:--:-- --:--:-- --:--:--  1184
    100 9518k  100 9518k    0     0  3300k      0  0:00:02  0:00:02 --:--:-- 5115k

    # 5.2.2 Give execution permission
    $>chmod +x notary


    # 5.2.3 Move to /usr/bin
    $>sudo mv notary /usr/bin

    # 5.2.4. Import UCP private key into notary key database:
    $> notary key import ./key.pem
    Enter passphrase for new delegation key with ID 4e672ee (tuf_keys):
    Repeat passphrase for new delegation key with ID 4e672ee (tuf_keys):

    # 5.2.5 List keys
    $>notary key list

    ROLE          GUN    KEY ID                                                              LOCATION
    ----          ---    ------                                                              --------
    delegation           4e672ee5f4de7bf132d03554a8f592236ae6054026efc6b01873fc1b45a61dca    /home/osboxes/.docker/trust/private

    # 5.2.6 Configure the notary CLI so that it can talk to the Notary server that’s part of DTR
    # There are a few ways this can be accomplished. The easiest is to create a ~/.notary/config.json file with
    # the following content, where "url" is your DTR URL (https://192.168.56.102 in this PoC) and "root_ca" is
    # the path to the DTR CA certificate downloaded earlier:

    {
      "trust_dir" : "~/.docker/trust",
      "remote_server": {
        "url": "<dtr-url>",
        "root_ca": "<path-to-dtr-ca.pem>"
      }
    }


    # 5.2.7 [optional] Sign image while pushing to DTR:



    Note: By default, the CLI does not sign an image while pushing to DTR. In order to sign the image while
    pushing, set the environment variable DOCKER_CONTENT_TRUST=1.
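
    A minimal sketch of a signed push, assuming the lets-chat image and repository created in section 8 already exist:

    # Enable content trust for this shell session and push; on the first signed push you are
    # prompted to set root and repository signing passphrases.
    $> export DOCKER_CONTENT_TRUST=1
    $> docker push 192.168.56.102/osboxes/lets-chat:1.0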

    5.3 Install Docker Compose:

    Docker Compose is a very handy tool that can be used to define and manage multi-container Docker application(s). For details refer to https://docs.docker.com/compose/overview/
    Note: Docker for Mac and Windows may already include docker-compose. To find out whether Docker Compose is already installed, just run the docker-compose --version command.
      $> docker-compose --version
      bash: docker-compose: command not found...

    Install:

    $>sudo curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    $>sudo chmod +x /usr/local/bin/docker-compose
    $> docker-compose --version
    docker-compose version 1.14.0, build c7bdf9e

    6. Setup Worker node(s):

    Worker nodes are the real workhorses of the DDC setup; this is where production applications run. Below are the installation steps.

    6.1 Installation steps

    1. Start the virtual machine for the worker node (created in step #1. Create Virtual Machines (VMs))
    2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
    3. Make sure Docker is running.
    4. Add this (worker) node to DDC UCP:
      • Access the UCP Web UI.
      • Click on "+ Add node" link.
      • It shows you the command to run from the node. Copy the command; it looks something like:
        docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377
        Note: Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx
        Note: last 4 digits of SWARM-TOKEN are replaced with xxxxx.

        Note: an alternative way to obtain the full join command from the CLI is shown after this list of steps.
      • Run the command from the worker node:
        $>docker swarm join --token \
        SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
        192.168.56.101:2377

        This node joined a swarm as a worker.
    5. Repeat steps 1 to 4 for each additional worker node
    6. Once all nodes join the swarm, run the following command from client (while connected to UCP) to confirm and list all the nodes:
      $> docker node ls
      ID                           HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
      ivczlaqfyjmtvs0xqk6aivy8p    centosddcwrk02  Ready   Active
      usl2otwy3u6hj3vls9r77i45r    centosddcwrk01  Ready   Active
      ywkywo08e6dagbe45aprmbhlc *  centosddcucp    Ready   Active        Leader
      z2cobi7ag2qqsevfjsvye3d19    centosddcdtr01  Ready   Active   
    7. Optional: Put nodes like DTR or UCP in "drain" mode if you don't want application containers to be deployed on them. Here, we put the DTR node in "drain" mode:
      #command format: docker node update --availability drain <node-id>
      $> docker node update --availability drain z2cobi7ag2qqsevfjsvye3d19
      z2cobi7ag2qqsevfjsvye3d19
      $> docker node ls
      ID                           HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
      ivczlaqfyjmtvs0xqk6aivy8p    centosddcwrk02  Ready   Active
      usl2otwy3u6hj3vls9r77i45r    centosddcwrk01  Ready   Active
      ywkywo08e6dagbe45aprmbhlc *  centosddcucp    Ready   Active        Leader
      z2cobi7ag2qqsevfjsvye3d19    centosddcdtr01  Ready   Drain   
    Here is how the node listing of our PoC appears in the UCP Web UI:

    UCP node listing
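
    As an alternative to copying the join command from the UCP Web UI (step 4 above), you can retrieve the current worker join token directly from the manager (UCP) node. The token shown below is masked:

    # Run on the UCP (manager) node to print the full worker join command
    $> docker swarm join-token worker
    To add a worker to this swarm, run the following command:

        docker swarm join --token SWMTKN-1-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxx 192.168.56.101:2377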


    7. Create Additional User, Access Label and Network:

    7.1 Create additional user, team and permission label as necessary from UCP Web UI.

    Follow the Docker documentation (https://docs.docker.com/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-users/) to create users, teams and permission levels as required. 

    7.2 Create network.

    As per the Docker documentation, containers connected to the default bridge network can communicate with each other by IP address. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead. Traditionally, you could link two containers together using the legacy 'docker run --link ...' option, but here we are going to define a network and attach our containers to it, so that they can communicate as required. Details can be found here: https://docs.docker.com/engine/userguide/networking/#user-defined-networks
    Note: the docker links feature has been deprecated in favor of user-defined networks.
    Service discovery in Docker is network-scoped, meaning the embedded DNS functionality in Docker 
    can be used only by containers or tasks on the same network to resolve each other's addresses, so our plan here is to deploy a set of services that can communicate with each other using DNS. 
    Note: If the destination and source container or service are not on the same network, Docker Engine forwards the DNS query to the default DNS server.


    # Create network 
    # From client node, first connect to UCP:
    eval $(<env.sh)

    $> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
    naf8hvyx22n6lsvb4bq43z968

    # Verify the network   
    $> docker network ls
    NETWORK ID          NAME                             DRIVER              SCOPE
    eea917ac864c        centosddcdtr01/bridge            bridge              local
    065f72b45f37        centosddcdtr01/docker_gwbridge   bridge              local
    229f9949f85f        centosddcucp/bridge              bridge              local
    d8a09aed43ae        centosddcucp/docker_gwbridge     bridge              local
    a18d616d5fed        centosddcucp/host                host                local
    fc423b7dc25f        centosddcucp/none                null                local
    o40j6xknr6ax        dtr-ol                           overlay             swarm
    vwtzfva8q8r3        ingress                          overlay             swarm
    naf8hvyx22n6        my_hrm_network                   overlay             swarm
    tbmwjleceolg        ucp-hrm                          overlay             swarm
         

    Note: if you get the error "Error response from daemon: Error response from daemon: Error response from daemon: rpc error: code = 3 desc = name must be valid as a DNS name component" while creating the network, check your network name and make sure it does not contain any dot '.'. The error message itself is a little confusing. Refer to issue 31772 for details.
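
    To confirm the new overlay network carries the expected scope and labels, you can inspect it from the client node while connected to UCP. A minimal check (the output formatting is illustrative):

    # Inspect the user-defined overlay network
    $> docker network inspect my_hrm_network --format '{{.Name}} {{.Scope}} {{json .Labels}}'
    my_hrm_network swarm {"com.docker.ucp.access.label":"dev","com.docker.ucp.mesh.http":"true"}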

    7.3 Enable "HTTP Routing Mesh" if necessary

    • Log in to the UCP Web UI.
    • Navigate to Admin Settings > Routing Mesh.
    • Check Enable HTTP Routing Mesh.
    • Configure the ports for HRM to listen on, with the defaults being 9080 and 9443. The HTTPS port defaults to 8443 so that it doesn't interfere with the default UCP management port (443).
    Note: If it is a NEW network with label '--label com.docker.ucp.mesh.http=true', you need to disable and then re-enable "HTTP Routing Mesh". It can be done through UCP UI:
    • Disable: Admin Settings --> Routing Mesh  --> Uncheck "Enable HTTP routing mesh". Click on Update button.
    • Enable: Admin Settings --> Routing Mesh  --> check "Enable HTTP routing mesh". Click on Update button.

    8. Docker Application Deployment

    8.1 Preparation:

    For this PoC, we're going to build a custom image of the Lets-Chat app and deploy it using Docker Compose. Here is what our Dockerfile looks like:
    Note: All the steps listed in step 8.x are executed on or from the client node.

    8.1.1 Create Dockerfile for lets-chat:

    FROM sdelements/lets-chat:latest
    CMD (sleep 60; npm start)

    8.1.2 Create the lets-chat image using the Dockerfile. Run the 'docker build ...' command from the same directory where the Dockerfile is located.

    $> docker build -t lets-chat:1.0 .

    Sending build context to Docker daemon 4.608 kB
    Step 1/2 : FROM sdelements/lets-chat:latest
    latest: Pulling from sdelements/lets-chat
    6a5a5368e0c2: Pull complete
    7b9457ec39de: Pull complete
    ...
    ...
    876c39157780: Pull complete
    Digest: sha256:5b923d428176250653530fdac8a9f925043f30c511b77701662d7f8fab74961c
    Status: Downloaded newer image for sdelements/lets-chat:latest
     ---> 296501fb5b70
    Step 2/2 : CMD (sleep 60; npm start)
     ---> Running in 194eb91d5f59
     ---> 14e03b359b1d
    Removing intermediate container 194eb91d5f59
    Successfully built 14e03b359b1d

    8.1.3 Pull the Mongo DB image:


    $> docker pull mongo
    Using default tag: latest
    latest: Pulling from library/mongo
    f5cc0ee7a6f6: Pull complete
    d99b18c5f0ce: Pull complete
    ...
    ...
    72dc91cfe502: Pull complete
    d610498cfcc7: Pull complete
    Digest: sha256:f1ae736ea5f115822cf6fcef6458839d87bdaea06f40b97934ad913ed348f67d
    Status: Downloaded newer image for mongo:latest
       

    8.1.4 Rename/Tag the images as per the DTR namespace:
    # Tag lets-chat image
    $> docker tag lets-chat:1.0 192.168.56.102/osboxes/lets-chat:1.0
    # Tag mongo image
    $> docker tag mongo:latest 192.168.56.102/osboxes/mongo:latest

    # List the images
    $> docker images
    REPOSITORY                           TAG           IMAGE ID            CREATED             SIZE
    192.168.56.102/osboxes/lets-chat     1.0           14e03b359b1d        5 minutes ago       255 MB
    lets-chat                            1.0           14e03b359b1d        5 minutes ago       255 MB
    mongo                                latest        71c101e16e61        6 days ago          358 MB
    192.168.56.102/osboxes/mongo         latest        71c101e16e61        6 days ago          358 MB
    ....


    8.1.5 Push the images to DTR:
    Note: before pushing the images, you need to create a "repository" for each image (if one doesn't exist already). Create the corresponding repos from the DTR Web UI:
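
    If you prefer the command line over the DTR Web UI, DTR also exposes a REST API for repository creation. A hedged sketch; the endpoint and payload below are based on the DTR 2.x API and may differ in other versions:

    # Create the 'mongo' and 'lets-chat' repositories under the 'osboxes' namespace via the DTR API
    $> curl -k -u osboxes -X POST https://192.168.56.102/api/v0/repositories/osboxes \
       -H 'Content-Type: application/json' -d '{"name": "mongo", "visibility": "private"}'
    $> curl -k -u osboxes -X POST https://192.168.56.102/api/v0/repositories/osboxes \
       -H 'Content-Type: application/json' -d '{"name": "lets-chat", "visibility": "private"}'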






    # Login to DTR
    $ docker login 192.168.56.102 -u osboxes -p
    Login Succeeded

    # Push the mongo image to DTR
    $ docker push 192.168.56.102/osboxes/mongo
    The push refers to a repository [192.168.56.102/osboxes/mongo]
    722b5b443860: Pushed
    beaf3a1d24af: Pushed
    ...
    ...
    2589ed7ad668: Pushed
    d08535b0996b: Pushed

    latest: digest: sha256:f1ae736ea5f115822cf6fcef6458839d87bdaea06f40b97934ad913ed348f67d size: 2614

    # Push the lets-chat image to DTR
    $> docker push 192.168.56.102/osboxes/lets-chat
    The push refers to a repository [192.168.56.102/osboxes/lets-chat]
    fb8b4be9b6e6: Pushed
    d3b5bb1c4411: Pushed
    ...
    ...
    b2ac5371e0f2: Pushed
    142a601d9793: Pushed

    1.0: digest: sha256:92842b34263cfb3045cf2f431852bdc4b4dd8f01bc85eb1d0cd34d00888c9bba size: 2418


    8.1.6 Pull the images to all DDC worker nodes where the images will be instantiated into corresponding containers.

    # Connect to UCP.
    # Note: make sure you run the eval command below from the directory where 
    # the client bundle was extracted
    $>eval $(<env.sh)

    #Pull lets-chat
    $> docker pull 192.168.56.102/osboxes/lets-chat:1.0
    centosddcwrk01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcucp: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcwrk02: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcdtr01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded

    # Pull mongo
    $> docker pull 192.168.56.102/osboxes/mongo
    Using default tag: latest
    centosddcwrk01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcucp: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcwrk02: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcdtr01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded

    8.1.7 Create Docker Compose file:

    Here is what our docker-compose.yml looks like:


    version: "3"
    services:
       mongo:
          image: 192.168.56.102/osboxes/mongo:latest
          networks:
             - my_hrm_network
          deploy:
             placement:
                constraints: [node.role == manager]
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 60s
             labels:
                - "com.docker.ucp.access.label=dev"
       lets-chat:
          image: 192.168.56.102/osboxes/lets-chat:1.0
          networks:
             - my_hrm_network
          ports:
             - "8080"
          deploy:
             placement:
                constraints: [node.role == worker]
             mode: replicated
             replicas: 4
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 60s
             labels:
                - "com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080"
                - "com.docker.ucp.access.label=dev"
    networks:
       my_hrm_network:
          external:
             name: my_hrm_network   

    A few things to notice in the docker-compose.yml above:

    1. placement constraints: [node.role == manager] for mongo. We are instructing Docker to instantiate the mongo container only on a node that has the manager role. The role can be “worker” or “manager”.
    2. Label: com.docker.ucp.access.label=dev; this defines an access constraint by label. See https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/ for details.
    3. Label: com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080; here the lets-chat application is configured for HRM and will be accessed using the host mydockertest.com on port 8080, which will be our HA-Proxy's host and port. Docker uses DNS for service discovery as services are created, and it provides different built-in routing meshes for high availability. The HTTP Routing Mesh (HRM), an application-layer routing mesh that routes HTTP traffic based on the DNS hostname, is part of UCP 2.0 and later.
    4. Also, note that we are not exposing a port for mongo explicitly; since mongo and lets-chat are on the same network 'my_hrm_network', they will be able to communicate even though they may be instantiated on different hosts (nodes). The lets-chat application listens on port 8080, but we are not publishing it to a fixed host port because we are implementing the containers with scaling in mind and relying on Docker HRM. If you publish the port explicitly to the host (e.g. -p 8080:8080), it becomes an issue when you have to instantiate more than one replica on the same host, because only one process can listen on a given port of the same IP. More detail about HRM and service discovery: https://docs.docker.com/engine/swarm/ingress/#configure-an-external-load-balancer and https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/. A good read about service discovery, load balancing, Swarm, Ingress and HRM: https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Service_Discovery_and_Load_Balancing_with_Docker_Universal_Control_Plane_(UCP)

     8.2 Deployment:

    8.2.1 Validate docker-compose.yml:

    # Validate docker-compose.yml, run the following command from the same directory 
    # where docker-compose.yml is located.
    $>docker-compose -f docker-compose.yml config

    WARNING: Some services (lets-chat) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
         

    Note: the above WARNING is expected, because if you deploy your service/container using the regular 'docker-compose up -d ...' command, the 'deploy' key is not supported.
    In our case, we are going to use 'docker stack deploy ...' instead, so we can safely ignore the warning.

    8.2.2 Deploy

    # Execute docker stack deploy command using the compose-file 
    $> docker stack deploy --compose-file docker-compose.yml dev_lets-chat
    Creating service dev_lets-chat_lc-mongo
    Creating service dev_lets-chat_lets-chat

    # Verify service(s) are created:
    $> docker stack ls
    NAME           SERVICES
    dev_lets-chat  2

    # See the service details:
    $> docker stack services dev_lets-chat
    ID            NAME                     MODE        REPLICAS  IMAGE
    kib7peniroci  dev_lets-chat_mongo      replicated  1/1       192.168.56.102/osboxes/mongo:latest
    t7s5xpgdxncs  dev_lets-chat_lets-chat  replicated  4/4       192.168.56.102/osboxes/lets-chat:1.0

       

    As you can see, one instance of mongo and 4 instances of lets-chat have been created.
    If you want to learn more about stack deployment, refer to https://docs.docker.com/engine/swarm/stack-deploy/#deploy-the-stack-to-the-swarm
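
    If you later need more (or fewer) lets-chat replicas, the running service can be scaled without touching the compose file. A minimal sketch, run from the client node while connected to UCP:

    # Scale the lets-chat service to 6 replicas and verify
    $> docker service update --replicas 6 dev_lets-chat_lets-chat
    dev_lets-chat_lets-chat
    $> docker service ls --filter name=dev_lets-chat_lets-chat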


    9. Setup HA-Proxy node:

    Here we will have a simple configuration of HA-Proxy just to show the working idea. Refer to HA-Proxy documentation and Docker documentation for HA-Proxy for details.
    Note: for this PoC, we are deploying ha-proxy outside of swarm cluster.

    9.1 Setup steps

    1. Start the virtual machine for HA-Proxy node (created in step #1. Create Virtual Machines (VMs))
    2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
    3. Make sure Docker is running.
    4. Prepare the configuration file for HA-Proxy.



    # /etc/haproxy/haproxy.cfg, version 1.7
    global
       maxconn 4096

    defaults
       mode   http
       timeout connect 5000ms
       timeout client 50000ms
       timeout server 50000ms

    frontend http
       bind *:8080
       option http-server-close
       stats uri /haproxy?stats
       default_backend bckendsrvs

    backend bckendsrvs
       balance roundrobin
       server worker1 192.168.56.103:8080 check
       server worker2 192.168.56.104:8080 check

    A few notes about the haproxy.cfg above:
    1. Backend connections. We have 4 replicas (2 replicas per node), but as you can see, only two back-end connections are listed in the configuration file. That is the beauty of using Docker Swarm with HRM: as long as the traffic reaches any of the HRM nodes, whether or not a replica is actually running there, Swarm automatically directs the traffic to one of the replicas running on one of the available nodes. Docker Swarm also takes care of load balancing among all replicas.
    2. The check option at the end of the server directives specifies that health checks should be performed on those back-end servers.
    3. The frontend section defines the bind (IP and port) configuration for the proxy and a reference to the corresponding backend configuration. In this case, it is listening on all available IPs on port 8080.
    4. 'stats uri' defines the status URI.
    Now that our ha-proxy configuration file is ready, let's build the custom ha-proxy image and instantiate it.

    9.2 Create Dockerfile for HA-Proxy

    FROM haproxy:1.7
    COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

    Note: In this case, haproxy.cfg and Dockerfile are located in the same directory from where we are executing 'docker build ...' command as shown below.

    9.3) Create custom image

    # Create Image

    $> docker build -t my_haproxy:1.7 .
    Sending build context to Docker daemon 3.072 kB
    Step 1/2 : FROM haproxy:1.7
    1.7: Pulling from library/haproxy
    ef0380f84d05: Pull complete
    405e00049647: Pull complete
    c97485231395: Pull complete
    389e4de140a0: Pull complete
    9abb32070ad9: Pull complete
    Digest: sha256:c335ec625d9a9b71fa5269b815597392a9d2418fa1cedb4ae0af17be8029a5b4
    Status: Downloaded newer image for haproxy:1.7
     ---> d66f0c435360
    Step 2/2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
     ---> 182b33ee6345
    Removing intermediate container 4416fbab54be
    Successfully built 182b33ee6345

    # List image
    $> docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
    my_haproxy          1.7                 182b33ee6345        About a minute ago   135 MB
    haproxy             1.7                 d66f0c435360        6 days ago           135 MB


    9.4 Verify the configuration and Instantiate ha-proxy container:

    # Verify the configuration file: 
    $> docker run -it --rm --name haproxy-syntax-check my_haproxy:1.7 haproxy -c \
       -f /usr/local/etc/haproxy/haproxy.cfg

     haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -c -f /usr/local/etc/haproxy/haproxy.cfg -Ds
    Configuration file is valid


    # Instantiate ha-proxy instance:
    $> docker run -d --name ddchaproxy -p 8080:8080 my_haproxy:1.7


    5bc06e2680e72475f2585c453f6ada0a5ef349e5222f9e75b2c0f98eb1a0462f
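
    A couple of quick checks on the ha-proxy node before testing from a browser. The firewall command assumes firewalld is active on this node, as on the other CentOS VMs:

    # Confirm the proxy container is up and the port mapping is in place
    $> docker ps --filter name=ddchaproxy
    # Open the listener port on the host firewall
    $> sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp && sudo firewall-cmd --reload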


    10. Access and Verify Application 

    10.1) Accessing application:
    Once ha-proxy is running, access the application. Make sure the firewall is not blocking the port that ha-proxy is listening on.

    http://<ha-proxy-host>:<ha-proxy-port>/<application-uri>

    Important: In order to access the application, you need to make sure that the '<ha-proxy-host>' in the above URL matches the 'host' part of the external-route configuration of HRM.
    In our case it is 'mydockertest.com', so make sure 'mydockertest.com' resolves to the IP address of the ha-proxy node. This is how HRM, along with Swarm, discovers the services and routes the requests in the Ingress cluster, which lets us scale containers dynamically.
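
    If you don't control DNS for mydockertest.com, the simplest approach for a PoC is a hosts-file entry on the machine you browse from, pointing at the ha-proxy node (IP as per section 1.3):

    # Map the HRM external-route hostname to the ha-proxy node
    $> echo "192.168.56.106 mydockertest.com" | sudo tee -a /etc/hosts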

    10.2) Application verification:

    10.2.1) Get the stats from haproxy. Along with other things, the stats page shows the request count and which Swarm node is serving the requests:
    http://<ha-proxy-host>:<ha-proxy-port>/haproxy?stats

    10.2.2) First access lets-chat through the Web UI (http://mydockertest.com:8080) and create your account. Log in using your credentials. Once you have created the account and are able to log in, you can verify that lets-chat is making a successful connection to the mongo db as follows:

    # Inspect the lets-chat instance:
    $> docker inspect 3d046c183b6d | grep mongo
       "LCB_DATABASE_URI=mongodb://mongo/letschat",

    #Access the mongodb instance and run mongo shell to verify the data.
    $> docker exec -it jbz7h5hdvb20 bash

    # Launch the mongo shell
    root@jbz7h5hdvb20:/# mongo
    MongoDB shell version v3.4.5
    connecting to: mongodb://127.0.0.1:27017
    MongoDB server version: 3.4.5
    Welcome to the MongoDB shell.

    # Run command 'show dbs' and make sure letschat database is in the list.
    > show dbs
    admin     0.000GB
    letschat  0.000GB
    local     0.000GB

    # Connect to letschat database.
    > use letschat
    switched to db letschat

    # List the collections and make sure the users collection exists.
    > show collections
    messages
    rooms
    sessions
    usermessages
    users

    # Make sure Users table has account data that was created before.
    > db.users.find()
    { "_id" : ObjectId("595bdce9559bb1000eae7b9e"), "displayName" : "Purna", "lastName" : "Poudel", "firstName" : "Purna", "password" : "$2a$10$JlZrr3Gu3aklxx4qeUK6uuDF3jQDZ/CuA17.Clm6VKk6/NN35QOT6", "email" : "purna.poudel@gmail.com", "username" : "ppoudel", "provider" : "local", "messages" : [ ], "rooms" : [ ], "joined" : ISODate("2017-07-04T18:22:33.868Z"), "__v" : 0 }

    Once you have Docker Datacenter up and running, upgrade it to Docker EE 2.0 and UCP 3.x to have choice of Swarm or Kubernetes orchestration. See my post Upgrade to Docker EE 2.0 and UCP 3.x for Choice of Swarm or Kubernetes Orchestration.

