Experience Sharing - Docker Datacenter/Mirantis Docker Enterprise

This post details my experience working with Docker Datacenter (DDC)/Mirantis Docker Enterprise - an integrated container management and security solution that is now part of the Docker Enterprise Edition (EE) offering. Docker EE is a certified, commercially supported solution. Refer to https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/dee-intro.html for more information on Docker EE. I've worked with both a production implementation and a Proof of Concept (PoC) solution of DDC. This blog post mainly captures my experience from the PoC.
The first step in building the DDC is to define/design its architecture. Docker provides a reference architecture (https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Docker_EE_Best_Practices_and_Design_Considerations), and it is a good starting point. For this PoC, we will build a DDC based on a subset of that reference architecture: instead of three Universal Control Plane (UCP) nodes we'll have just one, instead of three Docker Trusted Registry (DTR) nodes we'll have one, and instead of four UCP worker nodes for applications we'll have just two. As an application load balancer, we'll use a Dockerized HA-Proxy running on a separate node (not managed by UCP). For this PoC, we'll use CentOS 7.x virtual machines created using Oracle VM VirtualBox, and I'm also going to highlight a few tips and tricks associated with VirtualBox. We'll also create a client Docker node to communicate with the DDC components remotely.

Since we have decided to work with a subset of the Docker reference architecture, we can go ahead and start building the infrastructure. For this PoC, the entire infrastructure consists of a Windows 10 laptop with 16 GB of memory running Oracle VM VirtualBox.

1. Create Virtual Machines (VMs)

Let's first create a VM with CentOS 7.x Linux. Download the CentOS 7-1611 VirtualBox image from osboxes.org and create the first node with 2 GB of memory.

Once the VM is ready, make 6 clones of it. For each clone follow the steps below:

1.1 Network setting:

We'll do a few things to emulate a static IP for each virtual machine; otherwise UCP and DTR will run into issues if the IP changes after installation. See Docker's recommendation regarding static IPs and hostnames here. Enable two network adapters and configure them as below:

  • Adapter 1: NAT - to allow the VM (guest) to communicate with the outside world through host computer's network connection.
  • Adapter 2: Host-only Adapter - to allow connection between host and guest. It also helps us to set static IP.
Refer to https://gist.github.com/pjdietz/5768124 for details on how to set this up. One more thing to remember: if you are using CentOS 7 and want to set a permanent static IP, you need to use an interface configuration file (refer to https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-networkscripts-interfaces.html) instead of /etc/network/interfaces. In my case, I did the following:

1.1.1) Identified the interface used for the Host-only Adapter (enp0s8 in the output below):

$>ip a | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
...
    inet 192.168.56.101/24 brd 192.168.56.255 scope global enp0s8
...
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
...

1.1.2) Created the file "/etc/sysconfig/network-scripts/ifcfg-enp0s8" on the guest, where enp0s8 is the network interface name, with content like this:

#/etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.56.0
NETMASK=255.255.255.0
IPADDR=192.168.56.101
BROADCAST=192.168.56.255
USERCTL=no

Note: make sure to assign a unique IP to each clone, e.g. 192.168.56.101, 192.168.56.102, 192.168.56.103, etc.
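
A quick way to apply and confirm the new interface configuration without a full reboot (a minimal sketch; restarting the legacy network service works on CentOS 7, and a reboot also does the job):

# Apply the new interface configuration
$>sudo systemctl restart network
# Confirm the static IP is now assigned to enp0s8
$>ip addr show enp0s8 | grep inet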

1.2 Hostname setup

To set the hostname on CentOS, run the following:
#Update the hostname:
#Set hostname for DDC UCP node 

$>sudo hostnamectl set-hostname centosddcucp
# restart the systemd-hostnamed daemon
$>sudo systemctl restart systemd-hostnamed
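
To confirm the change took effect (a quick check; hostnamectl without arguments prints the static hostname among other details):

$>hostnamectl
# The 'Static hostname' field should now show centosddcucp (or the name you set for that clone)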

1.3) [optional] Update /etc/hosts file

For easy access to each node, add mapping entries to the /etc/hosts file of each VM. The following entries are from my configuration.

#/etc/hosts 
192.168.56.101 centosddcucp
192.168.56.102 centosddcdtr01
192.168.56.103 centosddcwrk01
192.168.56.104 centosddcwrk02
192.168.56.105 centosddcclnt
192.168.56.106 centosddchaproxy mydockertest.com



2. Install & Configure Commercially Supported (CS) Docker Engine

2.1) Installation:

Official installation document: https://docs.docker.com/engine/installation/linux/centos/#install-using-the-repository
You can install either using the repository or install from a package. Here, we will install using the repository.  To install Docker Enterprise Edition (Docker EE) using the repository, you need to know the Docker EE repository URL associated with your licensed or trial subscription. To get this information:

  • Go to https://store.docker.com/?overlay=subscriptions.
  • Choose Get Details / Setup Instructions within the Docker Enterprise Edition for CentOS section.
  • Copy the URL from the field labeled Copy and paste this URL to download your Edition.
  • Set up Docker's repository and install from there; this is the recommended approach, for ease of installation and upgrade tasks.

# 2.1.1 Remove any existing Docker repositories (like docker-ce.repo, docker-ee.repo) from /etc/yum.repos.d/.

# 2.1.2 Store your Docker EE repository URL in a yum variable in /etc/yum/vars/. 

# Note: Replace with the URL you noted from your subscription.

$>sudo sh -c 'echo "<DOCKER-EE-URL>" > /etc/yum/vars/dockerurl'

# Note: DOCKER-EE-URL looks something like:
# https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Note: I've replaced the actual text with 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' for confidentiality reason.
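
# As a quick sanity check (optional), confirm the yum variable holds your subscription URL before proceeding:
$>cat /etc/yum/vars/dockerurl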


# 2.1.3 Install required packages:
$>sudo yum install -y yum-utils device-mapper-persistent-data lvm2
...
Updated:
  lvm2.x86_64 7:2.02.166-1.el7_3.4

Dependency Updated:
  device-mapper.x86_64 7:1.02.135-1.el7_3.4              device-mapper-event.x86_64 7:1.02.135-1.el7_3.4         device-mapper-event-libs.x86_64 7:1.02.135-1.el7_3.4
  device-mapper-libs.x86_64 7:1.02.135-1.el7_3.4         lvm2-libs.x86_64 7:2.02.166-1.el7_3.4

Complete!

# 2.1.4 Add stable repository:


$>sudo yum-config-manager --add-repo <DOCKER-EE-URL>/docker-ee.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/docker-ee.repo
grabbing file https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/docker-ee.repo to /etc/yum.repos.d/docker-ee.repo
repo saved to /etc/yum.repos.d/docker-ee.repo

# 2.1.5 Update Yum package index:


$>sudo yum makecache fast
...
Loading mirror speeds from cached hostfile
 * base: mirror.its.sfu.ca
 * extras: muug.ca
 * updates: muug.ca
Metadata Cache Created

# 2.1.6 Install Docker CS Engine 

$>sudo yum install docker-ee
...
Installed:
  docker-ee.x86_64 0:17.03.2.ee.4-1.el7.centos

Dependency Installed:
  docker-ee-selinux.noarch 0:17.03.2.ee.4-1.el7.centos

# 2.1.7 Add the following content (if it doesn't exist) or edit as required in /etc/docker/daemon.json

   {
      "storage-driver": "devicemapper"
   }
 
# 2.1.8 Check the docker service status, enable (if required) and start

$>sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com


$>sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.


$>sudo systemctl start docker


$>sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
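
# Optionally, confirm that the storage driver from daemon.json was picked up ('docker info' reports the active storage driver):
$>docker info | grep -i 'storage driver'
# Expect: Storage Driver: devicemapper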

# 2.1.9 Manage Docker as a non-root user
# 2.1.9.1 Add the docker group (if it doesn't exist)
$>sudo groupadd docker


# 2.1.10) add user to the 'docker' group
$>sudo usermod -aG docker $USER


# logout and login again, and run:


$>docker ps



3. Install & Configure Universal Control Plane (UCP)

UCP is the cluster management solution for Docker Enterprise. In a nutshell, it is itself a containerized application that runs on the (CS) Docker Engine and facilitates user interaction (deploy, configure, monitor, etc.) through the API, Docker CLI, and GUI with the other containerized applications managed by DDC.

3.1 Prepare for Installation

  1. Start the virtual machine for UCP node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Prepare UCP node:

# Get the firewall zones
$>sudo firewall-cmd --get-zones
work drop internal external trusted home dmz public block


# Open the following ports for the public zone
tcp_ports="443 80 2376 2377 4789 7946 12376 12379 12380 12381 12382 12383 12384 12385 12386 12387"
udp_ports="4789 7946"
for _port in ${tcp_ports}; do sudo firewall-cmd --permanent --zone=public --add-port=${_port}/tcp; done
for _port in ${udp_ports}; do sudo firewall-cmd --permanent --zone=public --add-port=${_port}/udp; done
# For example, a single port can also be added as:
$> sudo firewall-cmd --permanent --zone=public --add-port=2376/tcp;
# Once all ports are added, reload the firewall.
$> sudo firewall-cmd --reload;
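
# To confirm the firewall changes took effect, list the open ports in the public zone (a quick check):
$>sudo firewall-cmd --zone=public --list-ports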


3.2 Install Docker Universal Control Plane (UCP)


# 3.2.1 Pull UCP image

$>docker pull docker/ucp:latest
latest: Pulling from docker/ucp
709515475419: Pull complete
6beede3f81f7: Pull complete
37a4fec5e659: Pull complete
Digest: sha256:b8c4a162b5ec6224b31be9ec52c772a8ba3f78995f691237365cfa728341e942
Status: Downloaded newer image for docker/ucp:latest

# 3.2.2 Install UCP
Note: 192.168.56.101 is my UCP node:


$>sudo docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install --host-address 192.168.56.101 --interactive
INFO[0000] Verifying your system is compatible with UCP 2.1.4 (10e6c44)
INFO[0000] Your engine version 17.03.2-ee-4, build 1e6d71e (3.10.0-514.el7.x86_64) is compatible
WARN[0000] Your system uses devicemapper.  We can not accurately detect available storage space.  Please make sure you have at least 3.00 GB available in /var/lib/docker
Admin Username: osboxes
Admin Password:
Confirm Admin Password:
INFO[0033] All required images are present

...

INFO[0001] Initializing a new swarm at 192.168.56.101
INFO[0018] Establishing mutual Cluster Root CA with Swarm
...
INFO[0021] Deploying UCP Service
INFO[0085] Installation completed on centosddcucp (node ywkywo08e6dagbe45aprmbhlc)
INFO[0085] UCP Instance ID: IJUU:N6K6:KVJK:W3BO:LXVL:FBB4:RKF5:XNHM:HTQI:TZVL:XFIO:Z253
INFO[0085] UCP Server SSL: SHA-256 Fingerprint=D2:68:F3:...........:BD
INFO[0085] Login to UCP at https://192.168.56.101:443
INFO[0085] Username: osboxes
INFO[0085] Password: (your admin password)

# 3.2.3 Verify UCP containers are running:


$>docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED              STATUS                        PORTS                                                                             NAMES
f615d8c1d5ba        docker/ucp-controller:2.1.4   "/bin/controller s..."   55 seconds ago       Up 55 seconds (healthy)       0.0.0.0:443->8080/tcp                                                             ucp-controller
255119a3444e        docker/ucp-swarm:2.1.4        "/bin/swarm manage..."   About a minute ago   Up 56 seconds                 0.0.0.0:2376->2375/tcp                                                            ucp-swarm-manager
08a6e3789fed        docker/ucp-auth:2.1.4         "/usr/local/bin/en..."   About a minute ago   Up 57 seconds (healthy)       0.0.0.0:12386->4443/tcp                                                           ucp-auth-worker
caa4f9b4543a        docker/ucp-metrics:2.1.4      "/bin/entrypoint.s..."   About a minute ago   Up 58 seconds                 0.0.0.0:12387->12387/tcp                                                          ucp-metrics
58fa77c78bb2        docker/ucp-auth:2.1.4         "/usr/local/bin/en..."   About a minute ago   Up 58 seconds                 0.0.0.0:12385->4443/tcp                                                           ucp-auth-api
867f6aec884c        docker/ucp-auth-store:2.1.4   "rethinkdb --bind ..."   About a minute ago   Up About a minute             0.0.0.0:12383-12384->12383-12384/tcp                                              ucp-auth-store
b3e536f9309b        docker/ucp-etcd:2.1.4         "/bin/etcd --data-..."   About a minute ago   Up About a minute (healthy)   2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:12379->2379/tcp   ucp-kv
927cc6a7b5c8        docker/ucp-cfssl:2.1.4        "/bin/ucp-ca serve..."   About a minute ago   Up About a minute             0.0.0.0:12381->12381/tcp                                                          ucp-cluster-root-ca
ef00fe9f7200        docker/ucp-cfssl:2.1.4        "/bin/ucp-ca serve..."   About a minute ago   Up About a minute             0.0.0.0:12382->12382/tcp                                                          ucp-client-root-ca
b56a56aeeddd        docker/ucp-agent:2.1.4        "/bin/ucp-agent pr..."   About a minute ago   Up About a minute             0.0.0.0:12376->2376/tcp                                                           ucp-proxy
403f88c79f46        docker/ucp-agent:2.1.4        "/bin/ucp-agent agent"   About a minute ago   Up About a minute             2376/tcp                                                                          ucp-agent.ywkywo08e6dagbe45aprmbhlc.18vdyg9uqfuepzslnu0uclxzj

# 3.2.4 Apply license

# Note: in order to apply the license, launch the UCP Web UI at https://<ucp-host>:<port>
# and it gives you an option - either to upload an existing license ("Upload License" option) or to
# "Get free trial or purchase license".

$> docker node ls
ID                           HOSTNAME      STATUS  AVAILABILITY  MANAGER STATUS
ywkywo08e6dagbe45aprmbhlc *  centosddcucp  Ready   Active        Leader


That concludes the installation and configuration of UCP. Next is to install the Docker Trusted Registry (DTR).

4. Install & Configure Docker Trusted Registry (DTR)

4.1 Installation steps

  1. Start the virtual machine for DTR node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Add this (DTR) node to DDC UCP:
    • Access the UCP Web UI.
    • Click on "+ Add node" link.
    • It shows you the command to run from the node. Copy the command; it looks something like:
      docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377

      Note: Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx

      Note: last 4 digits of SWARM-TOKEN are replaced with xxxxx.


    • Run the command from DTR node:
      $>docker swarm join --token \
      SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
      192.168.56.101:2377

      This node joined a swarm as a worker.
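
      Tip: instead of copying the command from the Web UI, you can also print the full worker join command (including the token) directly from the UCP/manager node; a minimal sketch:

      # Run on the UCP (manager) node; prints the complete 'docker swarm join' command for workers
      $>docker swarm join-token worker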
  5. Generate DTR Installation command string from UCP Web UI:
    • Access UCP Web UI.
    • Under Install DTR (in newer versions of UCP, you have to navigate to Admin Settings --> Docker Trusted Registry), click on Install Now, make the appropriate selections, and it gives you a command to copy. The command looks something like:
      docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
      --ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
      --ucp-url https://192.168.56.101

      Note: --ucp-node is the hostname of the UCP-managed node where you want to deploy DTR.

      Here is a screen shot that shows DTR installation command string:
  6. Start the installation.
    Note: DTR installation details can be found at https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/#step-3-install-dtr

    # 1. Pull the latest version of DTR
    $>docker pull docker/dtr
    # 2. Run installation command: 


    $>docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
    --ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
    --ucp-url https://192.168.56.101

    INFO[0000] Beginning Docker Trusted Registry installation
    ucp-password:
    INFO[0009] Validating UCP cert
    INFO[0009] Connecting to UCP
    INFO[0009] UCP cert validation successful
    INFO[0010] The UCP cluster contains the following nodes: centosddcucp, centosddcdtr01
    INFO[0017] verifying [80 443] ports on centosddcdtr01
    INFO[0000] Validating UCP cert
    INFO[0000] Connecting to UCP
    INFO[0000] UCP cert validation successful
    INFO[0000] Checking if the node is okay to install on
    INFO[0000] Connecting to network: dtr-ol
    INFO[0000] Waiting for phase2 container to be known to the Docker daemon
    INFO[0001] Starting UCP connectivity test
    INFO[0001] UCP connectivity test passed
    INFO[0001] Setting up replica volumes...
    INFO[0001] Creating initial CA certificates
    INFO[0001] Bootstrapping rethink...
    ...
    ...
    INFO[0115] Installation is complete
    INFO[0115] Replica ID is set to: fc27c2f482e5
    INFO[0115] You can use flag '--existing-replica-id fc27c2f482e5' when joining other replicas to your Docker Trusted Registry Cluster

    # 3. Make sure DTR is running:
       In your browser, open the Docker Universal Control Plane web UI and navigate to the Applications screen. DTR should be listed as an application.

    # 4. Access the DTR Web UI:
    https://<dtr-external-url> (in this PoC, https://192.168.56.102)

Troubleshooting note: if you find your DTR is having issues and you need to remove and re-install it, follow this link: https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/uninstall/

# 1. Uninstall DTR:
$>sudo docker run -it --rm docker/dtr destroy --ucp-insecure-tls


INFO[0000] Beginning Docker Trusted Registry replica destroy
ucp-url (The UCP URL including domain and port): https://192.168.56.101:443
ucp-username (The UCP administrator username): osboxes
ucp-password:
INFO[0049] Validating UCP cert
INFO[0049] Connecting to UCP
INFO[0049] UCP cert validation successful
INFO[0049] No replicas found in this cluster. If you are trying to clean up a broken replica, provide its replica ID manually.
Choose a replica to destroy: bd02a612d0c0
INFO[0109] Force removing replica
INFO[0110] Stopping containers
INFO[0110] Removing containers
INFO[0110] Removing volumes
INFO[0110] Replica removed.

# 2. Remove DTR node from UCP
$>docker node ls
$>docker node rm <node-id>

# Follow the installation steps (4, 5, 6) again.

5. Setup DDC client node

In order to access all DDC nodes (UCP, DTR, Worker) and perform operations remotely, you need a Docker client configured to communicate with DDC securely.

5.1 Installation steps

  1. Start the virtual machine for the client node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Download the UCP client certificate bundle from UCP and extract it on the client host machine.
  • Access the UCP Web UI and navigate to User Management.
  • Click on the user.
  • Click on "Create a Client Bundle" as shown below in the screenshot:
  5. Configure the client so that it can securely connect to UCP:
    # 1. Extract client bundle on Client node:
    $> unzip ucp-bundle-osboxes.zip
    Archive:  ucp-bundle-osboxes.zip
     extracting: ca.pem
     extracting: cert.pem
     extracting: key.pem
     extracting: cert.pub
     extracting: env.ps1
     extracting: env.cmd
     extracting: env.sh
    # 2. load the DDC environment
    $>eval $(<env.sh)
    # 3. Make sure docker is connected to UCP:
    $>docker ps
    # Note: based on your access level, you should see the Docker processes running on UCP, DTR and Worker(s) node(s).








  • Configure the client so that it can securely connect to DTR and push/pull images.
    Note: If DTR is using the auto-generated self-signed cert, your client Docker Engine
    needs to be configured to trust the certificate presented by DTR; otherwise, you get an "x509: certificate signed by unknown authority" error.
    Refer to https://docs.docker.com/datacenter/dtr/2.1/guides/repos-and-images/#configure-your-host for details.
    For CentOS, you can install the DTR certificate in the client trust store as follows:
    # 1. Pull the DTR certificate. Here 192.168.56.102 is my DTR node.
    $>sudo curl -k https://192.168.56.102/ca -o /etc/pki/ca-trust/source/anchors/centosddcdtr01.crt
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  2009  100  2009    0     0   8999      0 --:--:-- --:--:-- --:--:--  9049
    # 2. Update CA Trust

    $>sudo update-ca-trust
    # 3. Restart the Docker Engine
    $> sudo systemctl restart docker
    # 4. Test the connectivity from client node to DTR node:
    $> docker login 192.168.56.102
    Username: osboxes
    Password:
    Login Succeeded

    5.2 Configure Notary client

    By configuring the Notary client, you'll be able to sign Docker image(s) with the private keys in your UCP client bundle, which are trusted by UCP and easily traced back to your user account. Read the details here: https://docs.docker.com/datacenter/dtr/2.2/guides/user/access-dtr/configure-your-notary-client/

    # 5.2.1 Download notary
    $>curl -L https://github.com/docker/notary/releases/download/v0.4.3/notary-Linux-amd64 -o notary


      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   591    0   591    0     0   1184      0 --:--:-- --:--:-- --:--:--  1184
    100 9518k  100 9518k    0     0  3300k      0  0:00:02  0:00:02 --:--:-- 5115k

    # 5.2.2 Give execution permission
    $>chmod +x notary


    # 5.2.3 Move to /usr/bin
    $>sudo mv notary /usr/bin

    # 5.2.4. Import UCP private key into notary key database:
    $> notary key import ./key.pem
    Enter passphrase for new delegation key with ID 4e672ee (tuf_keys):
    Repeat passphrase for new delegation key with ID 4e672ee (tuf_keys):

    # 5.2.5 List the keys
    $>notary key list

    ROLE          GUN    KEY ID                                                              LOCATION
    ----          ---    ------                                                              --------
    delegation           4e672ee5f4de7bf132d03554a8f592236ae6054026efc6b01873fc1b45a61dca    /home/osboxes/.docker/trust/private

    # 5.2.6 Configure the notary CLI so that it can talk to the Notary server that's part of DTR
    # There are a few ways this can be accomplished. The easiest is to configure Notary by creating a ~/.notary/config.json file with the following content:

    {
      "trust_dir" : "~/.docker/trust",
      "remote_server": {
        "url": "",
        "root_ca": ""
      }
    }
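
    Once config.json is in place, a quick way to confirm the Notary client can reach DTR's Notary server is to list the signed tags of a repository (a sketch; the repository name below is the one we create later in section 8, and a repository with no signed tags will simply report that it has no trust data yet):

    # List signed tags for a repository (GUN = DTR host + namespace/repo)
    $>notary list 192.168.56.102/osboxes/lets-chat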


    # 5.2.7 [optional] Sign image while pushing to DTR:

    Note: By default, the CLI does not sign an image while pushing to DTR. In order to sign images while pushing, set the environment variable DOCKER_CONTENT_TRUST=1.
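
    For example, a minimal sketch of a signed push (the repository/tag shown is the lets-chat image we build later in section 8; any tagged image in your DTR namespace works):

    # Enable content trust for this shell session
    $>export DOCKER_CONTENT_TRUST=1
    # Subsequent pushes are signed; you'll be prompted for key passphrases the first time
    $>docker push 192.168.56.102/osboxes/lets-chat:1.0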

    5.3 Install Docker Compose:

    Docker Compose is a very handy tool that can be used to define and manage multi-container Docker application(s). For details, refer to https://docs.docker.com/compose/overview/
    Note: Docker for Mac and Docker for Windows may already include docker-compose. To find out whether Docker Compose is already installed, just run the docker-compose --version command.
      $> docker-compose --version
      bash: docker-compose: command not found...

    Install:

    $>sudo curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    $>sudo chmod +x /usr/local/bin/docker-compose
    $> docker-compose --version
    docker-compose version 1.14.0, build c7bdf9e

    6. Setup Worker node(s):

    Worker nodes are the real workhorses in a DDC setup; they are where the production applications run. Below are the installation steps.

    6.1 Installation steps

    1. Start the virtual machine for the worker node (created in step #1. Create Virtual Machines (VMs))
    2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
    3. Make sure Docker is running.
    4. Add this (worker) node to DDC UCP:
      • Access the UCP Web UI.
      • Click on "+ Add node" link.
      • It shows you the command to run from the node. Copy the command; it looks something like:
        docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377
        Note: Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx
        Note: last 4 digits of SWARM-TOKEN are replaced with xxxxx.

      • Run the command from worker node:
        $>docker swarm join --token \
        SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
        192.168.56.101:2377

        This node joined a swarm as a worker.
    5. Repeat steps 1 to 4 for each additional worker node
    6. Once all nodes join the swarm, run the following command from client (while connected to UCP) to confirm and list all the nodes:
      $> docker node ls
      ID                           HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
      ivczlaqfyjmtvs0xqk6aivy8p    centosddcwrk02  Ready   Active
      usl2otwy3u6hj3vls9r77i45r    centosddcwrk01  Ready   Active
      ywkywo08e6dagbe45aprmbhlc *  centosddcucp    Ready   Active        Leader
      z2cobi7ag2qqsevfjsvye3d19    centosddcdtr01  Ready   Active   
    7. Optional: Put a node like DTR or UCP in "drain" mode if you don't want application containers to be deployed on it. Here, we put the DTR node in "drain" mode:
      #command format: docker node update --availability drain <node-id>
      $> docker node update --availability drain z2cobi7ag2qqsevfjsvye3d19
      z2cobi7ag2qqsevfjsvye3d19
      $> docker node ls
      ID                           HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
      ivczlaqfyjmtvs0xqk6aivy8p    centosddcwrk02  Ready   Active
      usl2otwy3u6hj3vls9r77i45r    centosddcwrk01  Ready   Active
      ywkywo08e6dagbe45aprmbhlc *  centosddcucp    Ready   Active        Leader
      z2cobi7ag2qqsevfjsvye3d19    centosddcdtr01  Ready   Drain   
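      To bring a drained node back into scheduling later, set its availability back to "active" (same command format as above):
      $> docker node update --availability active z2cobi7ag2qqsevfjsvye3d19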
    Here is how node listing of our PoC appears on UCP Web UI:

    UCP node listing


    7. Create Additional User, Access Label and Network:

    7.1 Create additional user, team and permission label as necessary from UCP Web UI.

    Follow Docker documentation (https://docs.docker.com/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-users/) to create user, team and permission levels as required. 

    7.2 Create network.

    As per the Docker documentation, containers connected to the default bridge network can communicate with each other by IP address. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead. Traditionally, you could link two containers together using the legacy 'docker run --link ...' option, but here we are going to define a network and attach our containers to it, so that they can communicate as required. Details can be found at https://docs.docker.com/engine/userguide/networking/#user-defined-networks
    Note: the Docker links feature has been deprecated in favor of user-defined networks, as explained here.
    Service discovery in Docker is network-scoped, meaning the embedded DNS functionality in Docker can be used only by containers or tasks on the same network to resolve each other's addresses. So our plan here is to deploy a set of services that can communicate with each other using DNS.
    Note: If the destination and source container or service are not on the same network, Docker Engine forwards the DNS query to the default DNS server.


    # Create network 
    # From client node, first connect to UCP:
    eval $(<env.sh)

    $> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
    naf8hvyx22n6lsvb4bq43z968

    # Verify the network   
    $> docker network ls
    NETWORK ID          NAME                             DRIVER              SCOPE
    eea917ac864c        centosddcdtr01/bridge            bridge              local
    065f72b45f37        centosddcdtr01/docker_gwbridge   bridge              local
    229f9949f85f        centosddcucp/bridge              bridge              local
    d8a09aed43ae        centosddcucp/docker_gwbridge     bridge              local
    a18d616d5fed        centosddcucp/host                host                local
    fc423b7dc25f        centosddcucp/none                null                local
    o40j6xknr6ax        dtr-ol                           overlay             swarm
    vwtzfva8q8r3        ingress                          overlay             swarm
    naf8hvyx22n6        my_hrm_network                   overlay             swarm
    tbmwjleceolg        ucp-hrm                          overlay             swarm
         

    Note: if you get the error "Error response from daemon: rpc error: code = 3 desc = name must be valid as a DNS name component" while creating the network, check your network name and make sure it does not contain any dot '.'. The error message itself is a little bit confusing. Refer to issue 31772 for details.

    7.3 Enable "HTTP Routing Mesh" if necessary

    • Log in to the UCP Web UI.
    • Navigate to Admin Settings > Routing Mesh.
    • Check Enable HTTP Routing Mesh.
    • Configure the ports for HRM to listen on, with the defaults being 9080 and 9443. The HTTPS port defaults to 8443 so that it doesn't interfere with the default UCP management port (443).
    Note: If it is a NEW network with label '--label com.docker.ucp.mesh.http=true', you need to disable and then re-enable "HTTP Routing Mesh". It can be done through UCP UI:
    • Disable: Admin Settings --> Routing Mesh  --> Uncheck "Enable HTTP routing mesh". Click on Update button.
    • Enable: Admin Settings --> Routing Mesh  --> check "Enable HTTP routing mesh". Click on Update button.

    8. Docker Application Deployment

    8.1 Preparation:

    For this PoC, we're going to build a custom image of the Lets-Chat app and deploy it using Docker Compose. Here is what our Dockerfile looks like:
    Note: All the steps listed in 8.x are executed on or from the client node.

    8.1.1 Create Dockerfile for lets-chat:

    FROM sdelements/lets-chat:latest
    CMD (sleep 60; npm start)

    8.1.2 Create lets-chat image using Dockerfile. Run 'docker build ...' command from the same directory where the Dockerfile is located.

    $> docker build -t lets-chat:1.0 .

    Sending build context to Docker daemon 4.608 kB
    Step 1/2 : FROM sdelements/lets-chat:latest
    latest: Pulling from sdelements/lets-chat
    6a5a5368e0c2: Pull complete
    7b9457ec39de: Pull complete
    ...
    ...
    876c39157780: Pull complete
    Digest: sha256:5b923d428176250653530fdac8a9f925043f30c511b77701662d7f8fab74961c
    Status: Downloaded newer image for sdelements/lets-chat:latest
     ---> 296501fb5b70
    Step 2/2 : CMD (sleep 60; npm start)
     ---> Running in 194eb91d5f59
     ---> 14e03b359b1d
    Removing intermediate container 194eb91d5f59
    Successfully built 14e03b359b1d

    8.1.3 Pull the Mongo DB image:


    $> docker pull mongo
    Using default tag: latest
    latest: Pulling from library/mongo
    f5cc0ee7a6f6: Pull complete
    d99b18c5f0ce: Pull complete
    ...
    ...
    72dc91cfe502: Pull complete
    d610498cfcc7: Pull complete
    Digest: sha256:f1ae736ea5f115822cf6fcef6458839d87bdaea06f40b97934ad913ed348f67d
    Status: Downloaded newer image for mongo:latest
       

    8.1.4 Rename/tag the images as per the DTR namespace:
    # Tag lets-chat image
    $> docker tag lets-chat:1.0 192.168.56.102/osboxes/lets-chat:1.0
    # Tag mongo image
    $> docker tag mongo:latest 192.168.56.102/osboxes/mongo:latest

    # List the images
    $> docker images
    REPOSITORY                           TAG           IMAGE ID            CREATED             SIZE
    192.168.56.102/osboxes/lets-chat     1.0           14e03b359b1d        5 minutes ago       255 MB
    lets-chat                            1.0           14e03b359b1d        5 minutes ago       255 MB
    mongo                                latest        71c101e16e61        6 days ago          358 MB
    192.168.56.102/osboxes/mongo         latest        71c101e16e61        6 days ago          358 MB
    ....


    8.1.5 Push the images to DTR:
    Note: before pushing the images, you need to create a "repository" for each image (if one doesn't exist already). Create the corresponding repos from the DTR Web UI:






    # Login to DTR
    $ docker login 192.168.56.102 -u osboxes -p <password>
    Login Succeeded

    # Push the mongo image to DTR
    $ docker push 192.168.56.102/osboxes/mongo
    The push refers to a repository [192.168.56.102/osboxes/mongo]
    722b5b443860: Pushed
    beaf3a1d24af: Pushed
    ...
    ...
    2589ed7ad668: Pushed
    d08535b0996b: Pushed

    latest: digest: sha256:f1ae736ea5f115822cf6fcef6458839d87bdaea06f40b97934ad913ed348f67d size: 2614

    # Push the lets-chat image to DTR
    $> docker push 192.168.56.102/osboxes/lets-chat
    The push refers to a repository [192.168.56.102/osboxes/lets-chat]
    fb8b4be9b6e6: Pushed
    d3b5bb1c4411: Pushed
    ...
    ...
    b2ac5371e0f2: Pushed
    142a601d9793: Pushed

    1.0: digest: sha256:92842b34263cfb3045cf2f431852bdc4b4dd8f01bc85eb1d0cd34d00888c9bba size: 2418


    8.1.6 Pull the images to all DDC worker nodes where the images will be instantiated into corresponding containers.

    # Connect to UCP.
    # Note: make sure you run the eval command below from the directory where 
    # the client bundle was extracted
    $>eval $(<env.sh)

    #Pull lets-chat
    $> docker pull 192.168.56.102/osboxes/lets-chat:1.0
    centosddcwrk01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcucp: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcwrk02: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcdtr01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded

    # Pull mongo
    $> docker pull 192.168.56.102/osboxes/mongo
    Using default tag: latest
    centosddcwrk01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcucp: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcwrk02: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcdtr01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded

    8.1.7 Create the Docker Compose file:

    Here is what our docker-compose.yml looks like:


    version: "3"
    services:
       mongo:
          image: 192.168.56.102/osboxes/mongo:latest
          networks:
             - my_hrm_network
          deploy:
             placement:
                constraints: [node.role == manager]
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 60s
             labels:
                - "com.docker.ucp.access.label=dev"
       lets-chat:
          image: 192.168.56.102/osboxes/lets-chat:1.0
          networks:
             - my_hrm_network
          ports:
             - "8080"
          deploy:
             placement:
                constraints: [node.role == worker]
             mode: replicated
             replicas: 4
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 60s
             labels:
                - "com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080"
                - "com.docker.ucp.access.label=dev"
    networks:
       my_hrm_network:
          external:
             name: my_hrm_network   

    A few things to notice in the docker-compose.yml above:

    1. placement constraints: [node.role == manager] for mongo. We are instructing Docker to instantiate the mongo container only on a node that has the manager role. The role can be "worker" or "manager".
    2. Label com.docker.ucp.access.label=dev defines an access constraint by label. See https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/ for details.
    3. Label com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080: here the lets-chat application is configured for HRM and will be accessed using host mydockertest.com on port 8080, which will be our HA-Proxy's host and port. Docker uses DNS for service discovery as services are created, and Docker has different built-in routing meshes for high availability. The HTTP Routing Mesh (HRM) is an application-layer routing mesh that routes HTTP traffic based on the DNS hostname and is part of UCP 2.0.
    4. Also note that we are not explicitly exposing a port for mongo; since mongo and lets-chat are on the same network 'my_hrm_network', they will be able to communicate even though they will be instantiated on different hosts (nodes). The lets-chat application listens on port 8080, but we are not publishing it explicitly to the host because we are implementing containers with scaling in mind and relying on Docker HRM. If you publish a port explicitly to the host (e.g. -p 8080:8080), it becomes an issue when you have to instantiate more than one replica on the same host, because there will be a port conflict - only one process can listen on the same port on the same IP. More detail about HRM and service discovery: https://docs.docker.com/engine/swarm/ingress/#configure-an-external-load-balancer and https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/. A good read about service discovery, load balancing, Swarm, Ingress and HRM: https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Service_Discovery_and_Load_Balancing_with_Docker_Universal_Control_Plane_(UCP)

     8.2 Deployment:

    8.2.1 Validate docker-compose.yml:

    # Validate docker-compose.yml, run the following command from the same directory 
    # where docker-compose.yml is located.
    $>docker-compose -f docker-compose.yml config

    WARNING: Some services (lets-chat) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
         

    Note: the above WARNING is expected, because if you deploy your service/container using the regular 'docker-compose up -d ...' command, the 'deploy' key is not supported.
    In our case, we are going to use 'docker stack deploy ...' instead, so we can safely ignore the warning.

    8.2.2 Deploy

    # Execute docker stack deploy command using the compose-file 
    $> docker stack deploy --compose-file docker-compose.yml dev_lets-chat
    Creating service dev_lets-chat_lc-mongo
    Creating service dev_lets-chat_lets-chat

    # Verify service(s) are created:
    $> docker stack ls
    NAME           SERVICES
    dev_lets-chat  2

    # See the service details:
    $> docker stack services dev_lets-chat
    ID            NAME                     MODE        REPLICAS  IMAGE
    kib7peniroci  dev_lets-chat_mongo      replicated  1/1       192.168.56.102/osboxes/mongo:latest
    t7s5xpgdxncs  dev_lets-chat_lets-chat  replicated  4/4       192.168.56.102/osboxes/lets-chat:1.0

       

    As you can see one instance of mongo and 4 instances of lets-chat have been created.
    If you want to learn more about stack deployment, refer to https://docs.docker.com/engine/swarm/stack-deploy/#deploy-the-stack-to-the-swarm
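
    Since lets-chat is a replicated service, you can also scale it up or down on the fly; a small sketch using the service name from the 'docker stack services' output above:

    # Scale the lets-chat service to 6 replicas
    $> docker service scale dev_lets-chat_lets-chat=6
    # See on which nodes the tasks are scheduled
    $> docker service ps dev_lets-chat_lets-chat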


    9. Setup HA-Proxy node:

    Here we will use a simple HA-Proxy configuration just to show the working idea. Refer to the HA-Proxy documentation and the documentation for the official haproxy Docker image for details.
    Note: for this PoC, we are deploying ha-proxy outside of swarm cluster.

    9.1 Setup steps

    1. Start the virtual machine for HA-Proxy node (created in step #1. Create Virtual Machines (VMs))
    2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
    3. Make sure Docker is running.
    4. Prepare the configuration file for HA-Proxy.



    # /etc/haproxy/haproxy.cfg, version 1.7
    global
       maxconn 4096

    defaults
       mode   http
       timeout connect 5000ms
       timeout client 50000ms
       timeout server 50000ms

    frontend http
       bind *:8080
       option http-server-close
       stats uri /haproxy?stats
       default_backend bckendsrvs

    backend bckendsrvs
       balance roundrobin
       server worker1 192.168.56.103:8080 check
       server worker2 192.168.56.104:8080 check

    A few notes about the haproxy.cfg above:
    1. Backend connections. We have 4 replicas (2 replicas per worker node), but as you can see only two back-end connections are listed in the configuration file. That is the beauty of using Docker Swarm with HRM: as long as the traffic reaches any of the HRM nodes, whether or not a replica is actually running there, Swarm automatically directs the traffic to one of the replicas running on one of the available nodes. Docker Swarm also takes care of load balancing among all replicas.
    2. The check option at the end of the server directives specifies that health checks should be performed on those back-end servers.
    3. The frontend section defines the bind (IP and port) configuration for the proxy and a reference to the corresponding backend configuration. In this case, it is listening on all available IPs on port 8080.
    4. 'stats uri' defines the status URI.
    Now that our ha-proxy configuration file is ready, let's build the custom ha-proxy image and instantiate it.

    9.2 Create Dockerfile for HA-Proxy

    FROM haproxy:1.7
    COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

    Note: In this case, haproxy.cfg and Dockerfile are located in the same directory from where we are executing 'docker build ...' command as shown below.

    9.3) Create custom image

    # Create Image

    $> docker build -t my_haproxy:1.7 .
    Sending build context to Docker daemon 3.072 kB
    Step 1/2 : FROM haproxy:1.7
    1.7: Pulling from library/haproxy
    ef0380f84d05: Pull complete
    405e00049647: Pull complete
    c97485231395: Pull complete
    389e4de140a0: Pull complete
    9abb32070ad9: Pull complete
    Digest: sha256:c335ec625d9a9b71fa5269b815597392a9d2418fa1cedb4ae0af17be8029a5b4
    Status: Downloaded newer image for haproxy:1.7
     ---> d66f0c435360
    Step 2/2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
     ---> 182b33ee6345
    Removing intermediate container 4416fbab54be
    Successfully built 182b33ee6345

    # List image
    $> docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
    my_haproxy          1.7                 182b33ee6345        About a minute ago   135 MB
    haproxy             1.7                 d66f0c435360        6 days ago           135 MB


    9.4 Verify the configuration and Instantiate ha-proxy container:

    # Verify the configuration file: 
    $> docker run -it --rm --name haproxy-syntax-check my_haproxy:1.7 haproxy -c \
       -f /usr/local/etc/haproxy/haproxy.cfg

     haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -c -f /usr/local/etc/haproxy/haproxy.cfg -Ds
    Configuration file is valid


    # Instantiate ha-proxy instance:
    $> docker run -d --name ddchaproxy -p 8080:8080 my_haproxy:1.7


    5bc06e2680e72475f2585c453f6ada0a5ef349e5222f9e75b2c0f98eb1a0462f
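
    A quick smoke test from the HA-Proxy host itself (assuming curl is available); the Host header matters because HRM routes requests based on the hostname:

    # Send a test request through ha-proxy; HRM routes it to one of the lets-chat replicas
    $> curl -I -H "Host: mydockertest.com" http://localhost:8080/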


    10. Access and Verify Application 

    10.1) Accessing the application:
    Once ha-proxy is running, access the application. Make sure the firewall is not blocking the port that ha-proxy is listening on.

    http://<ha-proxy-host>:<ha-proxy-port>/<application-uri>

    Important: In order to access the application, you need to make sure that the '<ha-proxy-host>' in the above URL matches the 'host' part of the external-route configuration of HRM.
    In our case it is 'mydockertest.com', so make sure 'mydockertest.com' resolves to the IP address of the ha-proxy node. This is how HRM, together with Swarm, discovers the services and routes the requests in the Ingress cluster, and it is what allows us to scale containers dynamically.
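
    For this PoC, the simplest way to achieve that is a hosts-file entry on the machine you browse from, pointing to the HA-Proxy node IP from section 1.3 (on Windows, the file is C:\Windows\System32\drivers\etc\hosts):

    # Add to the hosts file of the machine running the browser
    192.168.56.106 mydockertest.com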

    10.2) Application verification:

    10.2.1) Get the stats from haproxy. Among other things, the stats page shows the request count and which Swarm node is serving the requests:
    http://<ha-proxy-host>:<ha-proxy-port>/haproxy?stats

    10.2.2) First access lets-chat through the Web UI (http://mydockertest.com:8080) and create your account. Log in using your credentials. Once you have created the account and are able to log in, you can verify that lets-chat is making a successful connection to the mongo db by doing the following:

    # Inspect the lets-chat instance:
    $> docker inspect 3d046c183b6d | grep mongo
       "LCB_DATABASE_URI=mongodb://mongo/letschat",

    #Access the mongodb instance and run mongo shell to verify the data.
    $> docker exec -it jbz7h5hdvb20 bash

    # Launch the mongo shell
    root@jbz7h5hdvb20:/# mongo
    MongoDB shell version v3.4.5
    connecting to: mongodb://127.0.0.1:27017
    MongoDB server version: 3.4.5
    Welcome to the MongoDB shell.

    # Run command 'show dbs' and make sure letschat database is in the list.
    > show dbs
    admin     0.000GB
    letschat  0.000GB
    local     0.000GB

    # Connect to letschat database.
    > use letschat
    switched to db letschat

    # Get the users table and run find and make sure it shows your account data.
    > show collections
    messages
    rooms
    sessions
    usermessages
    users

    # Make sure Users table has account data that was created before.
    > db.users.find()
    { "_id" : ObjectId("595bdce9559bb1000eae7b9e"), "displayName" : "Purna", "lastName" : "Poudel", "firstName" : "Purna", "password" : "$2a$10$JlZrr3Gu3aklxx4qeUK6uuDF3jQDZ/CuA17.Clm6VKk6/NN35QOT6", "email" : "purna.poudel@gmail.com", "username" : "ppoudel", "provider" : "local", "messages" : [ ], "rooms" : [ ], "joined" : ISODate("2017-07-04T18:22:33.868Z"), "__v" : 0 }

    Once you have Docker Datacenter up and running, upgrade it to Docker EE 2.0 and UCP 3.x to have choice of Swarm or Kubernetes orchestration. See my post Upgrade to Docker EE 2.0 and UCP 3.x for Choice of Swarm or Kubernetes Orchestration.


    Looks like you're really into Docker, see my other related blog posts below:


    Using Docker Secrets with IBM WebSphere Liberty Profile Application Server


    In this blog post, I'm discussing how to utilize Docker Secrets (a Docker Swarm service feature) to manage sensitive data (like password encryption keys, SSH private keys, SSL certificates etc.) for a Dockerized application powered by the IBM WebSphere Liberty Profile (WLP) application server. Docker Secrets helps to centrally manage this sensitive information at rest and in transit (encrypted and securely transmitted only to those containers that need access to it and have explicit access to it). It is out of scope for this post to go deep into Docker Secrets; however, if you need to familiarize yourself with them, refer to https://docs.docker.com/engine/swarm/secrets/.
    Note: if you like to know how to program encryption/decryption within your Java application using passwordutilities-1.0 feature of WLP, see my blog How to use WLP passwordUtilities feature for encryption/decryption
    I'm going to write this post in a tutorial style, so that anyone interested to try can follow the steps.

    Pre-requisite: In order to follow the steps outlined here, you have to have following:

    1. Good working knowledge of Docker
    2. A configured Docker Swarm environment (using Docker 1.13 or a higher version) with at least one manager and one worker node, or Docker Datacenter with Universal Control Plane (UCP) having a manager node and worker node(s). It's good to have a separate Docker client node so that you can remotely connect to the manager and execute commands.
    3. Good working knowledge of IBM WebSphere Liberty Profile (https://developer.ibm.com/wasdev/blog/2013/03/29/introducing_the_liberty_profile/).

    Here is a brief description of how we are going to utilize Docker Secrets with WLP.

    1. The password encryption key that is used to encrypt passwords for the WLP KeyStore, TrustStore and any other password(s) used by WLP applications will be externalized and stored as a Docker Secret.
    2. Private keys, such as the one stored in the KeyStore (used to enable secure communication in WLP), will be externalized and stored as Docker Secrets.

    Here are some obvious benefits:

    1. Centrally manage all sensitive data. Since Docker enforces access control, only people with the right privilege(s) will have access to sensitive data.
    2. Only those container(s) and service(s) that have explicit access on a need basis will have access to private/sensitive data.
    3. Private information remains private at rest and in transit.
    4. A new Docker image created by 'docker commit' will not contain any sensitive data, and a dump/package created by the WLP server dump or package command will not contain the encryption key, as it is externalized. See more insights about WLP password encryption here: https://www.ibm.com/support/knowledgecenter/en/SS7K4U_8.5.5/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/cwlp_pwd_encrypt.html and managing Docker Secrets here: https://docs.docker.com/engine/swarm/secrets/

    Enough talk, now, let's start the real work. Below are the major steps that we'll carry out:

    1. Create Docker secrets for the following items used by WLP:
      • KeyStore
      • Truststore
      • Password Encryption key
    2. Build Docker image based on websphere-liberty:webProfile7
    3. Create network
    4. Put together the docker-compose.yml for deployment.
    5. Deploy application as Docker service.


    Create Docker Secrets

    Here, we're going to use the Docker command line (CLI) and we'll execute Docker commands from the Docker client node remotely. You need to have the following three environment variables correctly set up in order to execute commands remotely. Refer to https://docs.docker.com/engine/reference/commandline/cli/#description for details.
    • DOCKER_TLS_VERIFY
    • DOCKER_CERT_PATH
    • DOCKER_HOST
    If you are using Docker Datacenter, you can use GUI based UCP Admin Console to create the same. Note: label com.docker.ucp.access.label="<value>" is not mandatory unless you have access constraint defined. For detail refer to Authentication and authorization
    1) Create a Docker secret with the name keystore.jks, which basically is the key database that stores the private key to be used by WLP.

    #Usage: docker secret create [OPTIONS] SECRET file|- 
    #Create a secret from a file or STDIN as content 

    $> docker secret create keystore.jks /mnt/nfs/dockershared/wlpapp/keystore.jks --label com.docker.ucp.access.label="dev"
     idc9em1u3fki8k0z77ol91sh4 

    2) The following command creates a secret called truststore.jks using the physical Java keystore file which contains the trust certificates:

    $> docker secret create truststore.jks /mnt/nfs/dockershared/wlpapp/truststore.jks --label com.docker.ucp.access.label="dev"
    w8qs1o7pwrvl96nuamv97sb9t

    3) Finally, create the Docker secret called app_enc_key.xml, which basically refers to the fragment of XML which contains the definition of the password encryption key:

    $> docker secret create app_enc_key.xml /mnt/nfs/dockershared/wlpapp/app_enc_key.xml --label com.docker.ucp.access.label="dev"
    kj3hcw4ss71hnudfgr6g32mxm

    Note: Docker secrets are available under '/run/secrets/' at runtime to any container which has explicit access to that secret.
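
    Once a service that has been granted access to a secret is running, you can confirm the secrets are mounted inside its container (a sketch; the container ID is whatever 'docker ps' shows for the WLP service task):

    # Secrets appear as in-memory files under /run/secrets inside the container
    $> docker exec -it <wlp-container-id> ls -l /run/secrets/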
    Here is what the /mnt/nfs/dockershared/wlpapp/app_enc_key.xml looks like:

    <server> 
       <variable name="wlp.password.encryption.key" value="#replaceMe#">
       </variable>
    </server>

    Note: Make sure to replace the string '#replaceMe#' with your own password encryption key.

    Let's check and make sure all our secrets are properly created and listed:

    $> docker secret ls
    ID                        NAME             CREATED        UPDATED
    idc9em1u3fki8k0z77ol91sh4 keystore.jks     3 hours ago    3 hours ago
    kj3hcw4ss71hnudfgr6g32mxm app_enc_key.xml  21 seconds ago 21 seconds ago
    w8qs1o7pwrvl96nuamv97sb9t truststore.jks   3 hours ago    3 hours ago


    Building Docker Image:

    Now, let's first encrypt our keystore and truststore passwords using the pre-defined encryption key and put together the server.xml for the WLP server. We are going to use the securityUtility tool that ships with IBM WLP to encrypt our password.
    Note: make sure your password encryption key matches to the one that is defined by 'wlp.password.encryption.key' property in app_enc_key.xml.
    Here I'm encoding my example password '#myStrongPassw0rd#' using encryption key '#replaceMe#' with encoding option 'aes'.
    Please note that encoding option 'xor' ignores the encryption key and uses default.

    $> cd /opt/ibm/wlp/bin
    $> ./securityUtility encode #myStrongPassw0rd# --encoding=aes --key=#replaceMe#
    {aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==

    Now we have our Docker secrets created and we have encrypted our password. It's time to put together our server.xml for the WLP application server and build the Docker image. Here is what my server.xml looks like.

    <server description="TestWLPApp">
       <featureManager>
          <feature>javaee-7.0</feature>
          <feature>localConnector-1.0</feature>
          <feature>ejbLite-3.2</feature>
          <feature>jaxrs-2.0</feature>
          <feature>jpa-2.1</feature>
          <feature>jsf-2.2</feature>
          <feature>json-1.0</feature>
          <feature>cdi-1.2</feature>
          <feature>ssl-1.0</feature>
       </featureManager>
       <include location="/run/secrets/app_enc_key.xml"/>
       <httpEndpoint host="*" httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
       <ssl clientAuthenticationSupported="true" id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore"/>
       <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
       <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
       <applicationMonitor updateTrigger="mbean"/>
       <dataSource id="wlpappDS" jndiName="wlpappDS">
          <jdbcDriver libraryRef="OracleDBLib"/>
          <properties.oracle password="{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==" url="jdbc:oracle:thin:@192.168.xx.xxx:1752:WLPAPPDB" user="wlpappuser"/>
       </dataSource>
       <library id="OracleDBLib">
          <fileset dir="/apps/wlpapp/shared_lib" includes="ojdbc6-11.2.0.1.0.jar"/>
       </library>
       <webApplication contextRoot="wlpappctx" id="wlpapp" location="/apps/wlpapp/war/wlptest.war" name="wlpapp"/>
    </server>

    As you can see, the location of defaultKeyStore, defaultTrustStore, and app_enc_key.xml points to the directory '/run/secrets'. This is, as mentioned before, because all private data created by Docker Secrets is made available to the assigned services under '/run/secrets' in the corresponding container.

    Now let's put together Dockerfile.

    FROM websphere-liberty:webProfile7
    COPY /mnt/nfs/dockershared/wlpapp/server.xml /opt/ibm/wlp/usr/servers/defaultServer/
    RUN installUtility install --acceptLicense defaultServer
    COPY /mnt/nfs/dockershared/wlpapp/wlptest.war /apps/wlpapp/war/
    COPY /mnt/nfs/dockershared/wlpapp/ojdbc6-11.2.0.1.0.jar /apps/wlpapp/shared_lib/
    CMD ["/opt/ibm/java/jre/bin/java","-javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar","-Djava.awt.headless=true","-jar","/opt/ibm/wlp/bin/tools/ws-server.jar","defaultServer"]

    Note: above, I'm copying my server.xml into /opt/ibm/wlp/usr/servers/defaultServer/ before running installUtility, as I'm adding a few features required by my application, including ssl-1.0.

    Finally, we're going to build the Docker image.

    $> docker build -t 192.168.56.102/osboxes/wlptest:1.0 .
    Sending build context to Docker daemon 56.9 MB
    Step 1/7 : FROM websphere-liberty:webProfile7
    ---> c035090355f5
    ...
    Step 4/7 : RUN installUtility install --acceptLicense defaultServer
    ---> Running in 2bce0d02e253
    Checking for missing features required by the server ...
    ...
    Successfully built 07fef794348e

    Note: 192.168.56.102 is my local Docker Trusted Registry (DTR).

    Once the image is successfully built, make sure it is available on all nodes of the Docker Swarm. I'm not going to show the details of how to distribute the image, but the two common options are listed below (see the sketch after this list):
    > If you are using DTR, you can first push the image to the registry (using 'docker push ...'), then connect to each Docker Swarm host and execute 'docker pull ...'.
    > The other option is to use 'docker save ...' to save the image as a tar file and then load it into each Swarm node using 'docker load ...'.
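
    For illustration, the two options typically look like this (a sketch; the registry address and image tag are the ones used in this PoC, and the pull/load commands are run on each Swarm node):

    # Option 1: push to DTR, then pull on each Swarm node
    $> docker push 192.168.56.102/osboxes/wlptest:1.0
    $> docker pull 192.168.56.102/osboxes/wlptest:1.0

    # Option 2: save the image as a tar file, copy it over, and load it on each node
    $> docker save -o wlptest_1.0.tar 192.168.56.102/osboxes/wlptest:1.0
    $> docker load -i wlptest_1.0.tar
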
    Here, I'm deploying this into a Docker Datacenter that has two UCP worker nodes, one UCP manager node, and one DTR node. I'm also going to use the HTTP Routing Mesh (HRM) and a user-defined overlay network in swarm mode.
    Note: a user-defined Docker network and HRM are NOT necessary to utilize Docker secrets.


    Create Overlay network:

    $> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
    naf8hvyx22n6lsvb4bq43z968

    Note: the label 'com.docker.ucp.mesh.http=true' is required when creating the network in order to utilize HRM.
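
    You can verify that the network exists and carries the expected labels before referencing it in the stack (a quick check):

    $> docker network ls --filter name=my_hrm_network
    $> docker network inspect my_hrm_network

    The 'Labels' section of the inspect output should include com.docker.ucp.mesh.http=true and com.docker.ucp.access.label=dev.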

    Put together the docker-compose.yml

    Here is my compose file. Yours may look different.

    version: "3.1"
    services:
       wlpappsrv: 

          image: 192.168.56.102/osboxes/wlptest:1.0
          volumes:
             - /mnt/nfs/dockershared/wlpapp/server.xml:/opt/ibm/wlp/usr/servers/defaultServer/server.xml
          networks:
             - my_hrm_network
          secrets:
             - keystore.jks
             - truststore.jks
             - app_enc_key.xml
          ports:
             - 9080
             - 9443
          deploy:
             placement:
                constraints: [node.role == worker]
             mode: replicated
             replicas: 4
             resources:
                limits:
                   memory: 2048M
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 6000s
             labels:
                - "com.docker.ucp.mesh.http.9080=external_route=http://mydockertest.com:8080,internal_port=9080"
                - "com.docker.ucp.mesh.http.9443=external_route=sni://mydockertest.com:8443,internal_port=9443"
                - "com.docker.ucp.access.label=dev"
    networks:
       my_hrm_network:
          external:
             name: my_hrm_network
    secrets:
       keystore.jks:
          external: true
       truststore.jks:
          external: true
       app_enc_key.xml:
          external: true
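
    Before running 'docker stack deploy', you can optionally sanity-check the compose file with the Compose CLI (a quick check, assuming docker-compose is installed on the client):

    $> docker-compose -f docker-compose.yml config

    If the syntax is valid, it prints the resolved configuration. It may also warn that the 'deploy' key is ignored by docker-compose itself; that's expected, since that section is only honored by 'docker stack deploy'.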

    A few notes about the docker-compose.yml:
    1. The volume definition that maps the server.xml inside the container to the one on the NFS file system is optional. This mapping gives additional flexibility to update the server.xml. You can achieve similar or even better flexibility/portability by using the Docker Swarm Configs service. See my blog post - How to use Docker Swarm Configs service with WebSphere Liberty Profile - for details.
    2. The secrets definition under the service 'wlpappsrv' refers to the secrets definition at the root level, which in turn refers to the externally defined secrets.
    3. The "com.docker.ucp.mesh.http.*" labels are optional and only required if you are using HRM.
    4. "com.docker.ucp.access.label" is also optional and only required if you have defined access constraints.
    5. Since I'm using Swarm and HRM, I don't need to explicitly map the internal container ports to host ports. If you need to map them, you can use something like below for your port definition:
      ports:
         - 9080:9080
         - 9443:9443
    6. You may encounter a situation where your containerized application is not able to access the secrets created under /run/secrets. It may be related to bug #31006. To resolve the issue, use 'mode: 0444' when defining your secrets, something like this:
      secrets:
         - source: keystore.jks
           mode: 0444
         ...   

    Deploy the service 

    Here I'm using "docker stack deploy..." to deploy the service:
    $> docker stack deploy --compose-file docker-compose.yml dev_WLPAPP

    Note: In certain cases, you may get a "secrets Additional property secrets is not allowed" error message. To resolve it, make sure your compose file version is 3.1. In my case, where it's working fine, I have Docker version 17.03.2-ee4, API version 1.27, and Docker Compose version 1.14.0.
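
    If you run into that error, it's worth confirming what your client, daemon, and Compose binaries actually are; compose file format 3.1 generally requires Docker Engine 17.03+ and Docker Compose 1.13+ (a quick check):

    $> docker version
    $> docker-compose version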

    Once the service is deployed, you can list it using the 'docker service ls' command:

    $> docker service ls
    ID           NAME                 MODE       REPLICAS IMAGE
    28xhhnbcnhfg dev_WLPAPP_wlpappsrv replicated 4/4      192.168.56.102/osboxes/wlptest:1.0

    And list the replicated containers:
    $> docker ps
    CONTAINER ID IMAGE                              COMMAND CREATED STATUS PORTS NAMES
    2052806bbae3 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.3.m7apci6i1ks218ddnv4qsdbwv
    541cf0f39b6e 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.4.wckec2jcjbmrhstftajh2zotr
    ccdd7275fd7f 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.2.oke0fz2sifs5ej0vy63250wo9
    7d5668a4d851 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.1.r9gi0qllnh8r9u8popqg5mg5b
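
    If you want to look at the WLP logs for one of these replicas, you can tail the container's console output or read messages.log inside the container. A quick sketch (the container ID comes from the 'docker ps' output above; the messages.log path assumes Liberty's default server output directory):

    $> docker logs 2052806bbae3
    $> docker exec -it 2052806bbae3 cat /opt/ibm/wlp/output/defaultServer/logs/messages.log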

    And here is what the WLP messages.log shows (taken from one of the containers' log files):
    ********************************************************************************
    product = WebSphere Application Server 17.0.0.2 (wlp-1.0.17.cl170220170523-1818)
    wlp.install.dir = /opt/ibm/wlp/
    server.output.dir = /opt/ibm/wlp/output/defaultServer/
    java.home = /opt/ibm/java/jre
    java.version = 1.8.0
    java.runtime = Java(TM) SE Runtime Environment (pxa6480sr4fp7-20170627_02 (SR4 FP7))
    os = Linux (3.10.0-514.el7.x86_64; amd64) (en_US)
    process = 1@e086b8c54a8d
    ********************************************************************************
    [7/24/17 19:44:29:275 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0001I: The server defaultServer has been launched.
    ...
    [7/24/17 19:44:30:533 UTC] 00000017 com.ibm.ws.config.xml.internal.XMLConfigParser A CWWKG0028A: Processing included configuration resource: /run/secrets/app_enc_key.xml
    [7/24/17 19:44:31:680 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 2.763 seconds
    [7/24/17 19:44:31:990 UTC] 0000001f com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
    [7/24/17 19:44:45:877 UTC] 00000017 com.ibm.ws.security.ready.internal.SecurityReadyServiceImpl I CWWKS0007I: The security service is starting...
    [7/24/17 19:44:47:262 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager I CWWKS4103I: Creating the LTPA keys. This may take a few seconds.
    [7/24/17 19:44:47:295 UTC] 00000017 ibm.ws.security.authentication.internal.jaas.JAASServiceImpl I CWWKS1123I: The collective authentication plugin with class name NullCollectiveAuthenticationPlugin has been activated.
    [7/24/17 19:44:48:339 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager A CWWKS4104A: LTPA keys created in 1.065 seconds. LTPA key file: /opt/ibm/wlp/output/defaultServer/resources/security/ltpa.keys
    [7/24/17 19:44:48:365 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyCreateTask I CWWKS4105I: LTPA configuration is ready after 1.107 seconds.
    [7/24/17 19:44:57:514 UTC] 00000017 com.ibm.ws.app.manager.internal.monitor.DropinMonitor A CWWKZ0058I: Monitoring dropins for applications.
    [7/24/17 19:44:57:651 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host * (IPv6) port 9080.
    [7/24/17 19:44:57:675 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint-ssl has been started and is now listening for requests on host * (IPv6) port 9443.
    [7/24/17 19:44:57:947 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302 has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7276.
    [7/24/17 19:44:57:951 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302-ssl has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7286.
    ...

    As you can see from the CWWKG0028A and CWWKO0219I messages above, the server is able to include the configuration from /run/secrets/app_enc_key.xml, and defaultHttpEndpoint-ssl has started and is listening on port 9443. This means it successfully loaded and opened the /run/secrets/keystore.jks and /run/secrets/truststore.jks files using the encrypted passwords together with the encryption key defined in /run/secrets/app_enc_key.xml.

    Now it's time to access the application. In my case, since I'm using HRM, I access it as: https://mydockertest.com:8443/wlpappctx
    If you are not using HRM, you may access it using:
    https://<docker-container-host>:9443/<application-context>
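
    To quickly verify the endpoint from a terminal, curl works well (I'm using -k to skip certificate verification, since the PoC keystore is unlikely to be signed by a trusted CA):

    $> curl -kv https://mydockertest.com:8443/wlpappctx

    If you are not using HRM and have published port 9443, the equivalent check is:

    $> curl -kv https://<docker-container-host>:9443/<application-context>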


    Example using Load-Balancer


    If you have a load balancer in front and want to set up SSL pass-through, you can use SNI-based routing, aka SSL routing. Below is a simple example using HAProxy. You can also refer to the HAProxy documentation here for details.

    Here is haproxy.cfg for our example PoC:
    # /etc/haproxy/haproxy.cfg, version 1.7
    global
       maxconn 4096

    defaults
       timeout connect 5000ms
       timeout client 50000ms
       timeout server 50000ms

    frontend frontend_ssl_tcp
       bind *:8443
       mode tcp
       tcp-request inspect-delay 5s
       tcp-request content accept if { req_ssl_hello_type 1 }
       default_backend bckend_ssl_default

    backend bckend_ssl_default
       mode tcp
       balance roundrobin
       server worker1 192.168.56.103:8443 check
       server worker2 192.168.56.104:8443 check           

    Here is a Dockerfile for custom image:
    FROM haproxy:1.7
    COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
    Build the image:
    Note: execute the 'docker build ...' command from the same directory where the Dockerfile is located.

    $> docker build -t my_haproxy:1.7 .
    Once you've built the image, start the HAProxy container like below:
    $> docker run -d --name ddchaproxy -p 8443:8443 my_haproxy:1.7

    Note: In this case ha-proxy is listening on port 8443.

    Access the application:

    https://mydockertest.com:8443/wlpappctx

    Note: Make sure mydockertest.com resolves to the IP address of ha-proxy.
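
    To confirm that TLS is really being passed through HAProxy to the WLP containers (rather than being terminated at the proxy), an openssl handshake test helps (a sketch; run it from any machine that resolves mydockertest.com):

    $> openssl s_client -connect mydockertest.com:8443 -servername mydockertest.com </dev/null

    The certificate printed should be the one from keystore.jks, which indicates HAProxy is forwarding the TCP/TLS stream untouched.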



    Sharing My Reading List

    I love reading. I read at least a couple of hours every day - partly thanks to my commute to work. I take YRT to work, and its comfortable seating gives me the opportunity to indulge in reading while commuting. Listed below are some of the books that I have read and liked in the past or am currently reading.

    Personal Development/Self Help/Finance/Management

    Moving Forward - Taking The Lead in Your Life
    - by Dave Pelzer
    This is really a great book. It helped me get rid of (almost) all the crap in my mind.
    The Four doors
    - by Richard Paul Evans
    One of the stories in this book (about a beggar) gave me a thunder shock and hit hard right in the centre of my greatest weakness
    - being afraid to try. Not even trying is the biggest failure in life.

    The Secret Letters Of The Monk Who Sold His Ferrari
    - by Robin Sharma
    WHAT DOESN'T KILL US
    - by Scott Carney
    This amazing book greatly inspired me to practice Wim Hof's breathing techniques and also led me to Pranayama Yoga. These days, showering in cold water and practicing the Wim Hof and Pranayama breathing techniques are part of my daily routine, and I'm feeling great and strong. Robin Sharma's reading list introduced me to this book, so many thanks to Robin.
    The Fred factor : how passion in your work and life can turn the ordinary into the extraordinary
    - by Mark Sanborn.
    Let The Elephants Run
    - by David Usher
    Peak: Secrets from the New Science of Expertise
    - by Anders Ericsson and Robert Pool
    The Freaks Shall Inherit the Earth: Entrepreneurship for Weirdos, Misfits, and World Dominators
    - by Chris Brogan
    Outliers: The Story of Success
    - by Malcolm Gladwell
    David and Goliath: Underdogs, Misfits, and the Art of Battling Giants
    - by Malcolm Gladwell
    Millionaire Teacher: The Nine Rules of Wealth You Should Have Learned in School
    - by Andrew Hallam
    This is one of the best books I have ever read on personal finance and investment.
    Spark: The Revolutionary New Science of Exercise and the Brain
    - by John J. Ratey, and Eric Hagerman
    Fail Fast or Win Big: The Start-Up Plan for Starting Now
    - by Bernhard Schroeder
    The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom
    - by John Coates


    Literature/Fiction/Non-Fiction/General


    The Heart Goes Last
    - by Margaret Atwood
    Hemingway in Love: His Own Story
    - by A.E. Hotchner
    Jonathan Livingston Seagull
    - by Richard Bach
    I got the recommendation for this book from Robin Sharma's reading list. Thanks, Robin!
    Superintelligence: Paths, Dangers, Strategies Reprint Edition
    - by Nick Bostrom
    Nexus
    - by Ramez Naam
    The Black Swan: The Impact of the Highly Improbable
    - by Nassim Nicholas Taleb


    I also read some great Nepalese books


    Karnali Blues (कर्नाली ब्लुज)
    - by Buddhi Sagar
    Palpasa Cafe (पल्पसा क्याफे)
    - by Narayan Wagle
    Seto Dharti (सेतो धरती)
    - by Amar Neupane
    Prayogshala (प्रयोगशाला, नेपाली सङ्क्रमणमा दिल्ली, दरबार र माओवादी)
    - by Sudheer Sharma
    Jiwan Kada Ki Phool (जीवन काँडा कि फूल)
    - by Jhamak Kumari Ghimire

    Antarman ko Yatra (अन्तर्मनको यात्रा)
    - by Jagadish Ghimire