
How to Configure PostgreSQL with SSL/TLS support on Kubernetes

SSL is disabled in the default PostgreSQL configuration, and I had to struggle a bit to get PostgreSQL with SSL/TLS support working on Kubernetes. After some research and trial and error, I was able to resolve the issues, and here I'm sharing what I did to make it work.


    
    

High Level Steps:
  1. Customize postgresql.conf (to add/edit the SSL/TLS configuration) and create a ConfigMap object. This way, we don't need to rebuild the Postgres image to apply a custom postgresql.conf, because a ConfigMap allows us to decouple configuration artifacts from image content.
  2. Create secret type objects for server.key, server.crt, root.crt, ca.crt, and password file.
  3. Define and use NFS type PersistentVolume (PV) and PersistentVolumeClaim (PVC)
  4. Use securityContext to resolve permission issues.
  5. Use '-c config_file=<config-volume-location>/postgresql.conf' to override the default postgresql.conf
Note: all files used in this post can be cloned/downloaded from GitHub https://github.com/pppoudel/postgresql-with-ssl-on-kubernetes.git

Let's get started

In this example, I'm using a namespace called 'shared-services' and a service account called 'shared-svc-accnt'. You can create your own namespace and service account or use the 'default' ones. In any case, the necessary steps are listed here, and the yaml files can be downloaded from GitHub.

Create namespace and service account


# Create namespace shared-services

   $> kubectl create -f shared-services-ns.yml

# Create Service Account shared-svc-accnt

   $> kubectl create -f shared-svc-accnt.yml

# Create a grant for service account shared-svc-accnt. Do this step as per your platform.


Create configMap object

I have put both postgresql.conf and pg_hba.conf under the config directory and updated postgresql.conf as follows:

ssl = on
ssl_cert_file = '/etc/postgresql-secrets-vol/server.crt'
ssl_key_file = '/etc/postgresql-secrets-vol/server.key'

Note: both locations ('/etc/postgresql-secrets-vol' for the certificates/keys and '/etc/postgresql-config-vol' for the configuration files) need to be mounted when defining 'volumeMounts', which we will discuss later in the post.

The three items listed above are the main configuration items that need proper values in order for PostgreSQL to support SSL/TLS. If you are using a CA-signed certificate, you also need to provide a value for 'ssl_ca_file' and optionally 'ssl_crl_file'. Read Secure TCP/IP Connections with SSL for more details.
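For example, if the CA certificate is mounted alongside the other secrets (as root.crt is in this post), the extra line would look like this:

ssl_ca_file = '/etc/postgresql-secrets-vol/root.crt'
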
You also need to update pg_hba.conf (HBA stands for host-based authentication) as necessary. pg_hba.conf controls the allowed connection types and manages access based on client IP address range, database name, user name, and authentication method.

# Sample entries in pg_hba.conf
# Trust local connection - no password required.
local    all             all                                     trust
# Only secured remote connection from given IP-Range accepted and password are encoded using MD5
#hostssl  all             all             < Cluster IP-Range >/< Prefix length >         md5
hostssl  all             all             10.96.0.0/16         md5


$> ls -l config/

-rw-------. 1 osboxes osboxes  4535 Sep 22 17:33 pg_hba.conf
-rw-------. 1 osboxes osboxes 22781 Sep 23 03:03 postgresql.conf

# Create configMap object
$> kubectl create configmap postgresql-config --from-file=config/ -n shared-services
configmap "postgresql-config" created

# Review created object
$> kubectl describe configMap/postgresql-config -n shared-services
Name:         postgresql-config
Namespace:    shared-services
...

Create secrets

I've created server.key and a self-signed certificate using OpenSSL. You can either do the same or use CA-signed certificates. Here, we are not going to use a client certificate. Read section 18.9.3 Creating Certificates of the PostgreSQL documentation if you need help creating certificates.
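For reference, a self-signed key and certificate pair can be generated roughly like this (the CN is an assumption; set it to the hostname your clients will use, e.g. the Kubernetes service name):

# Generate a private key and a self-signed certificate valid for one year
$> openssl req -new -x509 -days 365 -nodes -text \
     -out secrets/server.crt -keyout secrets/server.key \
     -subj "/CN=sysg-postgres-svc.shared-services.svc.cluster.local"
$> chmod 0400 secrets/server.key
# With a self-signed certificate, the server certificate itself can serve as root.crt
$> cp secrets/server.crt secrets/root.crt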

# Create MD5 hashed password to be used with postgresql
$> POSTGRES_USER=postgres
$> POSTGRES_PASSWORD=myp3qlpwD
$> echo "md5$(echo -n $POSTGRES_PASSWORD$POSTGRES_USER | md5sum | cut -d ' ' -f1)" > secrets/postgresql-pwd.txt

# Here are all files under secrets directory
$> ls -la secrets/
-rw-rw-r--. 1 osboxes osboxes  13 Sep 22 23:42 postgresql-pwd.txt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:51 root.crt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:49 server.crt
-r--------. 1 osboxes osboxes 887 Sep 22 16:43 server.key

# Create secret postgresql-secrets
$> kubectl create secret generic postgresql-secrets --from-file=secrets/ -n shared-services
secret "postgresql-secrets" created

# Verify

$> kubectl describe secrets/postgresql-secrets -n shared-services
Name:         postgresql-secrets
Namespace:    shared-services
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
server.key:          887 bytes
postgresql-pwd.txt:  13 bytes
root.crt:            891 bytes
server.crt:          891 bytes

Note: As seen above, the stored value is the string "md5" followed by the MD5 hash of "<password><userid>". The reason I added "md5" in front of the hashed string is that when Postgres sees the "md5" prefix, it recognizes that the string is already hashed, does not hash it again, and stores it as-is.
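
If you ever want to double-check the value, the same hash can be reproduced on the command line; the output, with "md5" prepended, should match what pg_shadow shows later in this post:

$> echo -n "${POSTGRES_PASSWORD}${POSTGRES_USER}" | md5sum | cut -d ' ' -f1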

Create PersistentVolume (PV) and PersistentVolumeClaim (PVC) 

Let's go ahead and create PV and PVC. We will use 'Retain' as persistentVolumeReclaimPolicy, so that data can be retained even when Postgresql pod is destroyed and recreated.
Sample PV yaml file:
## shared-nfs-pv-postgresql.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv-postgresql
  namespace: shared-services
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /var/postgresql/
    server: 192.168.56.101
  persistentVolumeReclaimPolicy: Retain

Sample PVC yaml file:
## shared-nfs-pvc-postgresql.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc-postgresql
  namespace: shared-services
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

PV and PVC creation and verification steps:
# Create persistentvolume
$> kubectl create -f yaml/shared-nfs-pv-postgresql.yml
persistentvolume "shared-nfs-pv-postgresql" created

# Create persistentvolumeclaim
$> kubectl create -f yaml/shared-nfs-pvc-postgresql.yml
persistentvolumeclaim "shared-nfs-pvc-postgresql" created

# Verify and make sure status of persistentvolumeclaim/shared-nfs-pvc-postgresql is Bound
$> kubectl get pv,pvc -n shared-services
NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                       STORAGECLASS   REASON    AGE
persistentvolume/shared-nfs-pv-postgresql   5Gi        RWX            Retain           Bound     shared-services/shared-nfs-pvc-postgresql                            32s

NAME                                              STATUS    VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/shared-nfs-pvc-postgresql   Bound     shared-nfs-pv-postgresql   5Gi        RWX                           20s

Create deployment manifest file

Here is the one I have put together. You can customize it further to fit your needs.

---
# Service definition
apiVersion: v1
kind: Service
metadata:
  name: sysg-postgres-svc
  namespace: shared-services
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: tcp-5432
  selector:
      app: sysg-postgres-app
---
# Deployment definition
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sysg-postgres-dpl
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: sysg-postgres-app
  replicas: 1
  template:
    metadata:
      labels:
        app: sysg-postgres-app
    spec:
      serviceAccountName: shared-svc-accnt
      securityContext:
        runAsUser: 70
        supplementalGroups: [999,1000]
        fsGroup: 70
      volumes:
        - name: shared-nfs-pv-postgresql
          persistentVolumeClaim:
            claimName: shared-nfs-pvc-postgresql
        - name: secret-vol
          secret:
            secretName: postgresql-secrets
            defaultMode: 0640
        - name: config-vol
          configMap:
            name: postgresql-config
      containers:
      - name: sysg-postgres-cnt
        image: postgres:10.5-alpine
        imagePullPolicy: IfNotPresent
        args:
          - -c
          - hba_file=/etc/postgresql-config-vol/pg_hba.conf
          - -c
          - config_file=/etc/postgresql-config-vol/postgresql.conf
        env:
          - name: POSTGRES_USER
            value: postgres
          - name: PGUSER
            value: postgres
          - name: POSTGRES_DB
            value: mmdb
          - name: PGDATA
            value: /var/lib/postgresql/data/pgdata
          - name: POSTGRES_PASSWORD_FILE
            value: /etc/postgresql-secrets-vol/postgresql-pwd.txt
        ports:
         - containerPort: 5432
        volumeMounts:
          - name: config-vol
            mountPath: /etc/postgresql-config-vol
          - mountPath: /var/lib/postgresql/data/pgdata
            name: shared-nfs-pv-postgresql
          - name: secret-vol
            mountPath: /etc/postgresql-secrets-vol
      nodeSelector:
        kubernetes.io/hostname: centosddcwrk01

Deploy the Postgresql

The steps below show the creation of the service and deployment, as well as how to verify that Postgres is running with SSL enabled.
# Deploy
$> kubectl apply -f yaml/postgres-deploy.yml
service "sysg-postgres-svc" created
deployment.apps "sysg-postgres-dpl" created

# Verify
$> kubectl get pods,svc -n shared-services
NAME                                     READY     STATUS    RESTARTS   AGE
pod/sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          1h

NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/sysg-postgres-svc   ClusterIP   10.96.90.30   <none>        5432/TCP   1h

# sh into the postgresql pod:
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services
/ $

# Launch psql
/ $ psql -U postgres
psql (10.5)
Type "help" for help.

# Verify SSL is enabled
postgres=# SHOW ssl;
 ssl
-----
 on
(1 row)

# Check the stored password. It should match the hashed value of "<password><user>" with "md5" prepended.
postgres=#  select usename,passwd from pg_catalog.pg_shadow;
 usename  |               passwd
----------+-------------------------------------
 postgres | md5db59316e90b1afb5334a331081618af6

# Connect remotely. You need to provide password.
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm -n shared-services -- psql "sslmode=require host=10.96.90.30 port=5432 dbname=mmdb" --username=postgres
Password for user postgres:
psql (10.5)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
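Once connected, you can also check the SSL status of the current session from psql itself; the pg_stat_ssl view is available in PostgreSQL 9.5+, so it applies to the 10.5 image used here:

mmdb=> select pid, ssl, version, cipher from pg_stat_ssl where pid = pg_backend_pid();

The ssl column should show 't', and the version/cipher should match the connection banner shown above.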

A few key points

1) Using customized configuration file:
As you have seen above, I created the ConfigMap object postgresql-config and applied it with the option '-c config_file=/etc/postgresql-config-vol/postgresql.conf'. The ConfigMap object postgresql-config is mapped to the path '/etc/postgresql-config-vol' in the volumeMounts definition.

containers:
- name: sysg-postgres-cnt
  imagePullPolicy: IfNotPresent
  args:
    - -c
    - hba_file=/etc/postgresql-config-vol/pg_hba.conf
    - -c
    - config_file=/etc/postgresql-config-vol/postgresql.conf

volumeMounts:
  - name: config-vol
    mountPath: /etc/postgresql-config-vol


2) Creating environment variable from secrets:

env:
  - name: POSTGRES_PASSWORD_FILE
    value: /etc/postgresql-secrets-vol/postgresql-pwd.txt

And the secret is mapped to path /etc/postgresql-secrets-vol
volumeMounts:
  - name: secret-vol
    mountPath: /etc/postgresql-secrets-vol

3) PGDATA environment variable: 
The default value is '/var/lib/postgresql/data'. However, Postgres recommends "... if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata ) be created to contain the data.". Refer to https://hub.docker.com/_/postgres/

Here we assign /var/lib/postgresql/data/pgdata:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata


Troubleshooting

1) Make sure server.key, server.crt, and root.crt all have appropriate permissions, that is 0400 (if owned by the postgres process owner) or 0640 (if owned by root). If proper permissions are not applied, PostgreSQL will not start, and in the log you will see the following FATAL message:

2018-09-22 18:26:22.391 UTC [1] FATAL:  private key file "/etc/postgresql-secrets-vol/server.key" has group or world access
2018-09-22 18:26:22.391 UTC [1] DETAIL: File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
2018-09-22 18:26:22.391 UTC [1] LOG:  database system is shut down

In order to apply proper permissions at the file level, you can use 'defaultMode'. I'm using defaultMode: 0640 as shown below (fragment from postgres-deploy.yml):

- name: secret-vol
  secret:
    secretName: postgresql-secrets
    defaultMode: 0640

2) Make sure the files/directories have the right ownership, whether they belong to the secret volume, the config volume, or the persistent storage volume. Below is an error related to the PV path:

initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted

In order to resolve the above issue, you need to use the securityContext options provided by Kubernetes, like 'runAsUser', 'fsGroup', 'supplementalGroups' and/or capabilities. A securityContext can be defined at both the pod level and the container level. In my case, I've defined it at the pod level as shown below (fragment from postgres-deploy.yml):

securityContext:
  runAsUser: < specify your run as user >
  fsGroup: < specify group >
  supplementalGroups: [< comma delimited list of supplementalGroups >]

Read Configure a Security Context for a Pod or Container chapter from official Kubernetes site. I've also given some troubleshooting tips while using NFS type Persistent Volume and Claim in my previous blog How to Create, Troubleshoot and Use NFS type Persistent Storage Volume in Kubernetes

Below, I'm showing the file permissions in my configuration. Files are owned by root:postgres.

# Get running Kubernetes pod
$> kubectl get pods -n shared-services
NAME                                 READY     STATUS    RESTARTS   AGE
sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          12s

# sh to running Kubernetes pod
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services

# Explore files
/ $ cd /etc/postgresql-secrets-vol/..data/
/etc/postgresql-secrets-vol/..2018_09_22_19_24_52.289379695 $ ls -la

drwxr-sr-x    2 root     postgres       120 Sep 23 01:19 .
drwxrwsrwt    3 root     postgres       160 Sep 23 01:19 ..
-rw-r-----    1 root     postgres        13 Sep 23 01:19 postgresql-pwd.txt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 root.crt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 server.crt
-rw-r-----    1 root     postgres       887 Sep 23 01:19 server.key

3) psql: FATAL:  no pg_hba.conf entry for host "10.0.2.15", user "postgres" ... This FATAL message usually appears when you are trying to establish a connection to Postgres, but the way you are trying to authenticate is not defined in pg_hba.conf: either the source IP (where the connection originates from) is out of range, or the security option is not supported. Check your pg_hba.conf file and make sure the right entry has been added.

[Optional]  Creating custom Postgres Docker image with customized postgresql.conf

If you prefer to build a custom Docker image with a custom postgresql.conf rather than creating a ConfigMap and using the '-c config_file' option, you can do so. Here is how:

Create Dockerfile:


FROM postgres:10.5-alpine
COPY config/postgresql.conf /tmp/postgresql.conf
COPY scripts/_updateConfig.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/_updateConfig.sh && chmod 644 /tmp/postgresql.conf

My custom postgresql.conf is located under the config directory locally. It is copied to /tmp when the Docker image is built. _updateConfig.sh is located under the scripts directory locally and copied to /docker-entrypoint-initdb.d/ at build time.

Create the script file _updateConfig.sh as shown below. It assumes that the default PGDATA value '/var/lib/postgresql/data' is being used.

#!/usr/bin/env bash
cat /tmp/postgresql.conf > /var/lib/postgresql/data/postgresql.conf

Important: we cannot directly copy the custom postgresql.conf into the $PGDATA directory at build time because that directory does not exist yet.

Build the image:


Directory and files shown below are local:
$> ls -la postgresql/

drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 13:01 config
-rwxr-xr--.  1 osboxes osboxes  227 Sep 22 19:14 Dockerfile
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 18:39 scripts
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 23:42 secrets
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 12:11 yaml
$>cd postgresql

# docker build -t <image tag> .
# In my case I am using osboxes/postgres:10.5-sysg as image name and tag.
$> docker build -t osboxes/postgres:10.5-sysg .

If you use a custom Docker image built this way, you don't need to define a ConfigMap to use the custom postgresql.conf.
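In that case, the container section of the deployment manifest becomes simpler; roughly like this fragment (assuming the image has been pushed somewhere your cluster can pull it from):

containers:
- name: sysg-postgres-cnt
  image: osboxes/postgres:10.5-sysg
  imagePullPolicy: IfNotPresent
  # No '-c config_file=...' args and no config-vol volumeMount are needed here;
  # the custom postgresql.conf is applied by /docker-entrypoint-initdb.d/_updateConfig.sh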



Upgrade to Docker EE 2.0 and UCP 3.x for Choice of Swarm or Kubernetes Orchestration

Docker Enterprise Edition (EE) 2.0 has introduced an integrated Kubernetes orchestration engine alongside Swarm. Since Kubernetes is installed and configured as part of the upgrade to Docker EE 2.0 and Universal Control Plane (UCP) 3.x, it saves a lot of the time that would otherwise be needed to install and set up a Kubernetes environment.


In this blog post, I'm discussing the upgrade process (not going through each step, because the official Docker documentation is detailed enough for that), pointing you to the right documentation, and discussing a few issues that I encountered during the upgrade and how I resolved them.


Planning for Upgrade

1) Prerequisite check for hardware/software - Docker recommends at least 8 GB of physical memory available on UCP and Docker Trusted Registry (DTR) nodes and 4 GB for other worker nodes. See detailed hardware and software requirements here: https://docs.docker.com/ee/ucp/admin/install/system-requirements/

2) Firewall ports - since Kubernetes master and worker nodes will be part of the upgraded environment, additional ports required for Kubernetes need to be opened. Details on the ports used can be found here: https://docs.docker.com/ee/ucp/admin/install/system-requirements/#ports-used. I put together a few lines of shell script to open the firewall ports (it uses the firewall-cmd utility). Use/modify it as needed.

openFWPortsForDockerEE.sh

#!/bin/sh
# openFWPortsForDockerEE.sh
# Opens required ports for Docker EE 2.0/UCP 3.x
# Ref:
# https://docs.docker.com/ee/ucp/admin/install/system-requirements/#ports-used
# https://docs.docker.com/datacenter/ucp/2.1/guides/admin/install/system-requirements/#network-requirements
tcp_ports="179,443,80,2375,2376,2377,2380,4001,4443,4789,6443,6444,7001,7946,8080,10250,12376-12387"
udp_ports="4789,7946"

openFW() {
   IFS=",";
   for _port in $1; do
      echo "Opening ${_port}/$2";
      sudo firewall-cmd --permanent --zone=public --add-port=${_port}/$2;
   done
   IFS=" ";
}

openFW "${tcp_ports}" tcp;
openFW "${udp_ports}" udp;

# Recycle firewall
sudo firewall-cmd --reload;

Backup Docker EE

You need to back up Docker Swarm, UCP, and DTR. Please follow this document (https://docs.docker.com/ee/backup/) for the backup procedure.
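
As a rough sketch of what the linked backup procedure involves (exact flags vary by version, so treat this as illustrative only and follow the official document):

# UCP backup (run against a manager node, using your current UCP version); writes a tar archive to stdout
$> docker container run --rm -i --name ucp \
     -v /var/run/docker.sock:/var/run/docker.sock \
     docker/ucp:<your-current-version> backup --interactive > ucp-backup.tar

# DTR backup (placeholders for the UCP URL and the existing DTR replica ID)
$> docker run -i --rm docker/dtr backup \
     --ucp-url https://<ucp-host> --ucp-insecure-tls \
     --existing-replica-id <replica-id> > dtr-backup.tar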

Upgrade Docker Engine

A well-documented step-by-step process can be found here: https://docs.docker.com/ee/upgrade/#upgrade-docker-engine
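
On CentOS/RHEL with the Docker EE yum repository already configured, the engine upgrade itself boils down to something like this (a sketch; follow the linked page for the exact repo/version handling on your platform):

$> sudo yum makecache fast
$> sudo yum -y upgrade docker-ee
$> sudo systemctl restart docker
$> docker version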

Upgrade UCP

UCP can be upgraded from the UCP web user interface (Web UI) or the command line interface (CLI). Both options are documented here: https://docs.docker.com/ee/ucp/admin/install/upgrade/#use-the-cli-to-perform-an-upgrade.

Note: If at all possible, use the CLI instead of the Web UI. I upgraded my personal DEV environment using the CLI and did not encounter any issues; however, one of my colleagues initially tried to use the Web UI and had problems: the upgrade process ran forever and then failed.

Note: If you have less than 4 GB of memory, you'll get a warning during the upgrade. It may complete successfully (as you see below) or it may fail, so it is best practice to fulfil the minimum requirements whenever possible. Below is the output from my UCP 3.0 upgrade:

$> sudo docker container run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:3.0.0 upgrade --interactive

INFO[0000] Your engine version 17.06.2-ee-10, build 66261a0 (3.10.0-514.el7.x86_64) is compatible
FATA[0000] Your system does not have enough memory. UCP suggests a minimum of 4.00 GB, but you only have 2.92 GB. You may have unexpected errors. You may proceed by specifying the '--force-minimums' flag, but you may experience scale and performance problems as a result
[osboxes@centosddcucp scripts]$ sudo docker container run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp:3.0.0 upgrade --interactive --force-minimums
INFO[0000] Your engine version 17.06.2-ee-10, build 66261a0 (3.10.0-514.el7.x86_64) is compatible
WARN[0000] Your system does not have enough memory. UCP suggests a minimum of 4.00 GB, but you only have 2.92 GB. You may have unexpected errors.
WARN[0002] Your system uses devicemapper. We can not accurately detect available storage space. Please make sure you have at least 3.00 GB available in /var/lib/docker
INFO[0006] Upgrade the UCP 3.0.0 installation on this cluster to 3.0.0 for UCP ID: nufs9fb696bs6rm4kxaauewly
INFO[0006] Once this operation completes, all nodes in this cluster will be upgraded.
Do you want proceed with the upgrade? (y/n): y
INFO[0017] Pulling required images... (this may take a while)
INFO[0017] Pulling docker/ucp-interlock:3.0.0
INFO[0048] Pulling docker/ucp-compose:3.0.0
INFO[0130] Pulling docker/ucp-dsinfo:3.0.0
INFO[0183] Pulling docker/ucp-interlock-extension:3.0.0
WARN[0000] Your system does not have enough memory. UCP suggests a minimum of 4.00 GB, but you only have 2.92 GB. You may have unexpected errors.
WARN[0002] Your system uses devicemapper. We can not accurately detect available storage space. Please make sure you have at least 3.00 GB available in /var/lib/docker
INFO[0007] Checking for version compatibility
INFO[0007] Updating configuration for Interlock service
INFO[0038] Updating configuration for existing UCP service
INFO[0141] Waiting for cluster to finish upgrading
INFO[0146] Success! Please log in to the UCP console to verify your system.

Note: You may also find your upgrade to UCP 3.x process getting stuck while updating ucp-kv, just like we had in one of our environments. The symptom and resolution are documented here: https://success.docker.com/article/upgrade-to-ucp-3-gets-stuck-updating-ucp-kv


After the Upgrade

If you run 'docker ps' on the UCP host after the upgrade, all UCP-related processes (like docker/ucp-*) should be at version '3.x'; if you notice any of those processes still at version '2.x', the upgrade was not entirely successful. You can also run 'docker version' and make sure the output shows 'ucp/3.x'.
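
For example, with the UCP client bundle loaded, a quick check could look like this (the exact patch version will differ in your environment):

$> docker version --format '{{.Server.Version}}'
ucp/3.0.0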

If your upgrade is successful, you are going to notice a few things right away; some of them are listed below:

1) The UCP Web UI looks different now. You are going to see Kubernetes and related resources standing out as first-class citizens.

2) You may also notice that your application is not accessible anymore even though the corresponding service(s) appear to be running (specifically if you used the HTTP Routing Mesh (HRM) before the upgrade). We encountered an HRM-related issue in our DEV environment. Before the upgrade, we had a configuration something like this (fragment from our yaml file):

version: "3.1"
services:
   testsvc:
      ...
      ...
      ports:
         - "9080"
         - "9443"
      deploy:
         ...
         ...
         labels:
            - "com.docker.ucp.mesh.http.9080=external_route=http://testsvc.devdte.com:8080,internal_port=9080"
            - "com.docker.ucp.mesh.http.9443=external_route=sni://testsvc.devdte.com:8443,internal_port=9443"
...
...



As shown above, internal port 9080 is mapped to external port 8080 (http), internal port 9443 is mapped to external port 8443 (https), and 'testsvc.devdte.com' is configured as the host, with the routing mesh settings configured to match.


Before the upgrade, the above configuration allowed us to access the service as shown below:

  • http://testsvc.devdte.com:8080/xxx
    or
  • https://testsvc.devdte.com:8443/xxx

However, after the upgrade, we could access the application only on port 8443. If you encounter a similar issue, refer to Layer 7 routing upgrade for more details.


3) Another interesting issue we encountered after the upgrade was related to an HTTP header being rejected. One of our applications relied on an HTTP header whose name contained an underscore '_' (something like 'user_name'). After the upgrade, the application suddenly started responding with HTTP status code 502. After investigation, we found out that Nginx, which is part of the Layer 7 routing solution, was silently rejecting this header because of the underscore '_'. Refer to my blog How to override Kubernetes Ingress-Nginx-Controller and Docker UCP Layer 7 Routing Configuration for details.

4) Lastly, if you are planning to use Kubernetes orchestration and the 'kubectl' utility to connect to the Kubernetes master, you need to download your client certificate bundle again. env.sh/env.cmd has been updated to set the Kubernetes cluster, context, and credentials configuration so that the 'kubectl' command can securely establish a connection to the Kubernetes master. Refer to CLI based access and Install the Kubernetes CLI for more details. Once you have installed 'kubectl' and downloaded and extracted the client certificate bundle, test connectivity to the Kubernetes master as follows:

# Change directory to the folder where you extracted you client certificates bundle
# and run following command to set kubernetes context, credentials and cluster configuration

$> eval "$(<env.sh)"
Cluster "ucp_ddcucphost:6443_ppoudel" set.
User "ucp_ddcucphost:6443_ppoudel" set.
Context "ucp_ddcucphost:6443_ppoudel" created.

# Confirm the connection to UCP. You should see something like this:


$> kubectl config current-context
ucp_ddcucphost:6443_ppoudel

# Inspect Kubernetes resources

$> kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   6d


How to use Docker Swarm Configs service with WebSphere Liberty Profile


   In order to make your dockerized application portable, you can externalize the configuration that changes from one environment to another (from DEV to QA, UAT, Prod, etc.), so that the Docker container picks up its configuration from outside the Docker image. This helps you maintain a generic Docker image for your application and also get rid of most of the bind-mounted configuration files and/or environment variables used by your container. The following Docker Swarm services are extremely useful for externalizing configuration:
  • Docker Secrets (available in Docker 1.13 and higher version)
  • Docker Configs (available in Docker 17.06 and higher version)
You can use Docker Secrets to externalize configuration that is confidential in nature, and Docker Configs for general configuration that may change from one environment to another.
In this blog post, I will use a dockerized application powered by WebSphere Application Server Liberty Profile (WLP) to show how to use the Docker Configs service to externalize server.xml. You can look at my other blog - Using Docker Secrets with IBM WebSphere Liberty Profile Application Server - to learn how to use Docker Secrets.


So, here is my server.xml for my WLP application to be used in this post as an example.

<server description="TestWLPApp">
   <featureManager>
      <feature>javaee-7.0</feature>
      <feature>localConnector-1.0</feature>
      <feature>ejbLite-3.2</feature>
      <feature>jaxrs-2.0</feature>
      <feature>jpa-2.1</feature>
      <feature>jsf-2.2</feature>
      <feature>json-1.0</feature>
      <feature>cdi-1.2</feature>
      <feature>ssl-1.0</feature>
   </featureManager>
   <include location="/run/secrets/app_enc_key.xml"/>
   <httpEndpoint host="*" httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
   <ssl clientAuthenticationSupported="true" id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore"/>
   <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <applicationMonitor updateTrigger="mbean"/>
   <dataSource id="wlpappDS" jndiName="wlpappDS">
      <jdbcDriver libraryRef="OracleDBLib"/>
      <properties.oracle password="{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==" url="jdbc:oracle:thin:@192.168.xx.xxx:1752:WLPAPPDB" user="wlpappuser"/>
   </dataSource>
    <library id="OracleDBLib">
       <fileset dir="/apps/wlpapp/shared_lib" includes="ojdbc6-11.2.0.1.0.jar"/>
    </library>
    <webApplication contextRoot="wlpappctx" id="wlpapp" location="/apps/wlpapp/war/wlptest.war" name="wlpapp"/>
</server>

As you can see in above server.xml, the following items were created as Docker Secrets:


  • <include location="/run/secrets/app_enc_key.xml"/>

  • <keystore id="defaultKeyStore" location="/run/secrets/keystore.jks" ...

  • <keystore id="defaultTrustStore" location="/run/secrets/truststore.jks" ...


See the Create Docker Secrets section of Using Docker Secrets with IBM WebSphere Liberty Profile Application Server to create these confidential configuration items.
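
In short, the secrets are created from the corresponding files, roughly as follows (the file paths here are assumptions; the linked post has the full walkthrough):

docker secret create keystore.jks /mnt/nfs/dockershared/wlpapp/keystore.jks
docker secret create truststore.jks /mnt/nfs/dockershared/wlpapp/truststore.jks
docker secret create app_enc_key.xml /mnt/nfs/dockershared/wlpapp/app_enc_key.xml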

Once confidential configuration items are created using Docker Secrets, follow the steps below to create general configuration items using Docker Configs.
  1. Connect to Docker UCP using client bundle. 
  2. Create a configuration item for server.xml using the docker config create ... command.
    Important: both the client and the daemon API must be at least at version 1.30 to use this command.

    $> docker config create dev_wlp_server_config_v1.0 /mnt/nfs/dockershared/wlpapp/server.xml_v1.0

    9i5edohyzyrvopuz988caxw4r

    Note: here dev_wlp_server_config_v1.0 is the configuration item name, which gets its content from /mnt/nfs/dockershared/wlpapp/server.xml_v1.0. I've versioned the configuration item so that it is easier to update the configuration in the future.

  3. Verify that the configuration item was created:

     $> docker config ls

    ID                        NAME                       CREATED        UPDATED
    9i5edohyzyrvopuz988caxw4r dev_wlp_server_config_v1.0 18 seconds ago 18 seconds ago
    geuerj6t98d8eeu8nqvvxgtw9 com.docker.license-0       5 days ago     5 days ago
    vdzwhpe91iptvuiro654u3lue com.docker.ucp.config-1    5 days ago     5 days ago

  4. Use the configuration item. The example below shows its use in a YAML file.

    docker-compose.yml
    version: "3.3"
    services:
       wlpappsrv: 

          image: 192.168.56.102/osboxes/wlptest:1.0
          networks:
             - my_hrm_network
          secrets:
             - keystore.jks
             - truststore.jks
             - app_enc_key.xml
          ports:
             - 9080
             - 9443
          configs:
             - source: dev_wlp_server_config_v1.0
               target: /opt/ibm/wlp/usr/servers/defaultServer/server.xml
               mode: 0444

          deploy:
             placement:
                constraints: [node.role == worker]
             mode: replicated
             replicas: 4
             resources:
                limits:
                   memory: 2048M
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 6000s
             labels:
                - "com.docker.ucp.mesh.http.9080=external_route=http://mydockertest.com:8080,internal_port=9080"
                - "com.docker.ucp.mesh.http.9443=external_route=sni://mydockertest.com:8443,internal_port=9443"
    networks:
       my_hrm_network:
          external:
             name: my_hrm_network
    secrets:
       keystore.jks:
          external: true
       truststore.jks:
          external: true
       app_enc_key.xml:
          external: true
    configs:
        dev_wlp_server_config_v1.0:
         external: true

    Note: if you don't want to create the configuration item in advance (step #2 above), you can also specify the configuration file in the YAML file itself. Replace external: true in the above example with file: /mnt/nfs/dockershared/wlpapp/server.xml_v1.0

    If you want to use the docker service create ... command instead of a YAML file, here is how you can use the config:

    docker service create \
     --name wlpappsrv \
     --config  source=dev_wlp_server_config_v1.0,target=/opt/ibm/wlp/usr/servers/defaultServer/server.xml,mode=0444 \
     ... \
     192.168.56.102/osboxes/wlptest:1.0

  5. Validate compose file:
    $> docker-compose -f docker-compose.yml config
    WARNING: Some services (opal) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
    WARNING: Some services (opal) use the 'configs' key, which will be ignored. Compose does not support 'configs' configuration - use `docker stack deploy` to deploy to a swarm.

  6. Deploy the service as a stack:
    $> docker stack deploy --compose-file docker-compose.yml dev_WLPAPP
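
    You can then confirm that the stack and its service came up (the service name follows the usual <stack>_<service> pattern):

    $> docker stack services dev_WLPAPP
    $> docker service ps dev_WLPAPP_wlpappsrv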


How to refresh/update or rotate configuration


A configuration item created by the Docker Configs service is immutable; however, there is a way to rotate the configuration. Let's say you need to update some configuration value in server.xml, for example to reference a new version of the JDBC driver. See the steps below:
  1. Create another configuration item that references the updated server.xml
    $>docker config create dev_wlp_server_config_v2.0 \
      /mnt/nfs/var/dockershared/dev_PAL/server.xml_v2.0



    o4173tet99vuwuz1fma4dqd2j

  2. Update the service so that it references the newly created configuration item
    $>docker service update \
     --config-rm dev_wlp_server_config_v1.0 \
     --config-add source=dev_wlp_server_config_v2.0,target=/opt/ibm/wlp/usr/servers/defaultServer/server.xml \
     wlpappsrv

  3. [optional] Once the service is fully updated, you can remove the old configuration item:
    $> docker config rm dev_wlp_server_config_v1.0

  4. [optional] If you need to see which configuration item is attached to the service, you can run 'docker service inspect <service-name>' command.
    $>docker service inspect wlpappsrv

    ...
    "Configs": [
      {
        "File": {
          "Name": "/opt/ibm/wlp/usr/servers/defaultServer/server.xml",
          "UID": "0",
          "GID": "0",
          "Mode": 292
        },
         "ConfigID": "o4173tet99vuwuz1fma4dqd2j",
         "ConfigName": "dev_wlp_server_config_v2.0"
      }
    ]
    ...


For more information about the Docker Swarm Configs service, review the official Docker documentation.

Experience Sharing - Docker Datacenter/Mirantis Docker Enterprise

This post details my experience working with Docker Datacenter (DDC)/Mirantis Docker Enterprise - an integrated container management and security solution, now part of the Docker Enterprise Edition (EE) offering. Docker EE is a certified solution which is commercially supported. Refer to https://docs.mirantis.com/docker-enterprise/v3.1/dockeree-products/dee-intro.html for more information on Docker EE. I've worked with both a production implementation and a Proof of Concept (PoC) implementation of DDC. This blog post mainly covers my experience doing the PoC.
Obviously, the first step in building the DDC is to define/design its architecture. Docker provides a reference architecture (https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Docker_EE_Best_Practices_and_Design_Considerations), and it is a good starting point. For this PoC, we will create a DDC based on a subset of the reference architecture: instead of three Universal Control Plane (UCP) nodes, we'll have just one; instead of three Docker Trusted Registry (DTR) nodes, we'll have one; and lastly, instead of four UCP worker nodes for applications, we'll have just two. As the application load balancer, we'll use a dockerized HA-Proxy running on a separate node (not managed by UCP). For this PoC, we'll use CentOS 7.x virtual machines created with Oracle VirtualBox. I'm also going to highlight a few tricks and tips associated with VirtualBox. We'll also create a client Docker node to communicate with the DDC components remotely.

Since we have decided to work with a subset of the Docker reference architecture, we can go ahead and start building the infrastructure. For this PoC, the entire infrastructure consists of a Windows 10 laptop with 16 GB of memory running Oracle VM VirtualBox.

1. Create Virtual Machines (VMs)

Let's first create a VM with Centos 7.x Linux. Download CentOS 7-1611 VirtualBox image from osboxes.org and create the first node with 2 GB of memory.

Once the VM is ready, make 6 clones of it. For each clone follow the steps below:

1.1 Network setting:

We'll do a few things to emulate a static IP for each virtual machine; otherwise UCP and DTR will have issues if the IP changes after installation. See the recommendation from Docker regarding static IPs and hostnames here. Enable two network adapters and configure them as below:

  • Adapter 1: NAT - to allow the VM (guest) to communicate with the outside world through host computer's network connection.
  • Adapter 2: Host-only Adapter - to allow connection between host and guest. It also helps us to set static IP.
Refer to https://gist.github.com/pjdietz/5768124 for details on how to set this up. One more thing to remember: if you are using CentOS 7 and want to set a permanent static IP, you need to use an interface configuration file (refer to https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-networkscripts-interfaces.html) instead of /etc/network/interfaces. In my case, I used the following:

1.1.1) Identified the interface used for the Host-only Adapter (enp0s8 in the output below):

$>ip a | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
...
    inet 192.168.56.101/24 brd 192.168.56.255 scope global enp0s8
...
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
...

1.1.2) Created the file "/etc/sysconfig/network-scripts/ifcfg-enp0s8" on the guest (where enp0s8 is the network interface name) with content like this:

#/etc/sysconfig/network-scripts/ifcfg-enp0s8
DEVICE=enp0s8
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.56.0
NETMASK=255.255.255.0
IPADDR=192.168.56.101
BROADCAST=192.168.56.255
USERCTL=no

Note: make sure to create unique IP for each clone like 192.168.56.101, 192.168.56.102, 192.168.56.103, ... etc.

1.2 Hostname setup

In order to set hostname on CentOS, follow:
#Update the hostname:
#Set hostname for DDC UCP node 

$>sudo hostnamectl set-hostname centosddcucp
# restart the systemd-hostnamed daemon
$>sudo systemctl restart systemd-hostnamed

1.3) [optional] Update /etc/hosts file

For easy access to each node, add the mapping entries in /etc/hosts file of each VM. The following entries are per my configuration.

#/etc/hosts 
192.168.56.101 centosddcucp
192.168.56.102 centosddcdtr01
192.168.56.103 centosddcwrk01
192.168.56.104 centosddcwrk02
192.168.56.105 centosddcclnt
192.168.56.106 centosddchaproxy mydockertest.com



2. Install & Configure Commercially Supported (CS) Docker Engine

2.1) Installation:

Official installation document: https://docs.docker.com/engine/installation/linux/centos/#install-using-the-repository
You can install either from the repository or from a package. Here, we will install using the repository. To install Docker Enterprise Edition (Docker EE) using the repository, you need to know the Docker EE repository URL associated with your licensed or trial subscription. To get this information:

  • Go to https://store.docker.com/?overlay=subscriptions.
  • Choose Get Details / Setup Instructions within the Docker Enterprise Edition for CentOS section.
  • Copy the URL from the field labeled Copy and paste this URL to download your Edition.
  • set up Docker’s repositories and install from there, for ease of installation and upgrade tasks. This is the recommended approach.

# 2.1.1 Remove any existing Docker repositories (like docker-ce.repo, docker-ee.repo) from /etc/yum.repos.d/.

# 2.1.2 Store your Docker EE repository URL in a yum variable in /etc/yum/vars/. 

# Note: Replace <DOCKER-EE-URL> with the URL you noted from your subscription.

$>sudo sh -c 'echo "<DOCKER-EE-URL>" > /etc/yum/vars/dockerurl'

# Note: DOCKER-EE-URL looks something like:
# https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Note: I've replaced the actual text with 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' for confidentiality reason.


# 2.1.3 Install required packages:
$>sudo yum install -y yum-utils device-mapper-persistent-data lvm2
...
Updated:
  lvm2.x86_64 7:2.02.166-1.el7_3.4

Dependency Updated:
  device-mapper.x86_64 7:1.02.135-1.el7_3.4              device-mapper-event.x86_64 7:1.02.135-1.el7_3.4         device-mapper-event-libs.x86_64 7:1.02.135-1.el7_3.4
  device-mapper-libs.x86_64 7:1.02.135-1.el7_3.4         lvm2-libs.x86_64 7:2.02.166-1.el7_3.4

Complete!

# 2.1.4 Add stable repository:


$>sudo yum-config-manager --add-repo <DOCKER-EE-URL>/docker-ee.repo
Loaded plugins: fastestmirror, langpacks
adding repo from: https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/docker-ee.repo
grabbing file https://storebits.docker.com/ee/centos/sub-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/docker-ee.repo to /etc/yum.repos.d/docker-ee.repo
repo saved to /etc/yum.repos.d/docker-ee.repo

# 2.1.5 Update Yum package index:


$>sudo yum makecache fast
...
Loading mirror speeds from cached hostfile
 * base: mirror.its.sfu.ca
 * extras: muug.ca
 * updates: muug.ca
Metadata Cache Created

# 2.1.6 Install Docker CS Engine 

$>sudo yum install docker-ee
...
Installed:
  docker-ee.x86_64 0:17.03.2.ee.4-1.el7.centos

Dependency Installed:
  docker-ee-selinux.noarch 0:17.03.2.ee.4-1.el7.centos

# 2.1.7 Add the following content (if it doesn't exist) or edit it (if required) in /etc/docker/daemon.json
   
   {
      "storage-driver": "devicemapper"
   }
 
# 2.1.8 Check the docker service status, enable (if required) and start

$>sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: https://docs.docker.com


$>sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.


$>sudo systemctl start docker


$>sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

# 2.1.9 Manage Docker as non-root user
# 9.1 Add docker group (if doesn't exist)
$>sudo groupadd docker


# 2.1.10) add user to the 'docker' group
$>sudo usermod -aG docker $USER


# logout and login again, and run:


$>docker ps



3. Install & Configure Universal Control Plane (UCP)

UCP is a cluster management solution for Docker Enterprise. In a nutshell, it is itself a containerized application that runs on the (CS) Docker Engine and facilitates user interaction (deploy, configure, monitor, etc.) through the API, Docker CLI, or GUI with the other containerized applications managed by the DDC.

3.1 Prepare for Installation

  1. Start the virtual machine for UCP node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Prepare UCP node:

# Get the firewall zones
$>sudo firewall-cmd --get-zones
work drop internal external trusted home dmz public block


# Open the following ports for public zone
tcp_ports="443 80 2376 2377 4789 7946 12376 12379 12380 12381 12382 12383 12384 12385 12386 12387"
# udp_ports="4789 7946"
# For each port in the lists above, run: sudo firewall-cmd --permanent --zone=public --add-port=<port>/<protocol>;
# For example:
$> sudo firewall-cmd --permanent --zone=public --add-port=2376/tcp;
# Once all ports are added, restart the firewall.
$> sudo firewall-cmd --reload;


3.2 Install Docker Universal Control Plane (UCP)


# 3.2.1 Pull UCP image

$>docker pull docker/ucp:latest
latest: Pulling from docker/ucp
709515475419: Pull complete
6beede3f81f7: Pull complete
37a4fec5e659: Pull complete
Digest: sha256:b8c4a162b5ec6224b31be9ec52c772a8ba3f78995f691237365cfa728341e942
Status: Downloaded newer image for docker/ucp:latest

# 3.2.2 Install UCP
Note: 192.168.56.101 is my UCP node:


$>sudo docker run --rm -it --name ucp -v /var/run/docker.sock:/var/run/docker.sock docker/ucp install --host-address 192.168.56.101 --interactive
INFO[0000] Verifying your system is compatible with UCP 2.1.4 (10e6c44)
INFO[0000] Your engine version 17.03.2-ee-4, build 1e6d71e (3.10.0-514.el7.x86_64) is compatible
WARN[0000] Your system uses devicemapper.  We can not accurately detect available storage space.  Please make sure you have at least 3.00 GB available in /var/lib/docker
Admin Username: osboxes
Admin Password:
Confirm Admin Password:
INFO[0033] All required images are present

...

INFO[0001] Initializing a new swarm at 192.168.56.101
INFO[0018] Establishing mutual Cluster Root CA with Swarm
...
INFO[0021] Deploying UCP Service
INFO[0085] Installation completed on centosddcucp (node ywkywo08e6dagbe45aprmbhlc)
INFO[0085] UCP Instance ID: IJUU:N6K6:KVJK:W3BO:LXVL:FBB4:RKF5:XNHM:HTQI:TZVL:XFIO:Z253
INFO[0085] UCP Server SSL: SHA-256 Fingerprint=D2:68:F3:...........:BD
INFO[0085] Login to UCP at https://192.168.56.101:443
INFO[0085] Username: osboxes
INFO[0085] Password: (your admin password)

# 3.2.3 Verify UCP containers are running:


$>docker ps
CONTAINER ID        IMAGE                         COMMAND                  CREATED              STATUS                        PORTS                                                                             NAMES
f615d8c1d5ba        docker/ucp-controller:2.1.4   "/bin/controller s..."   55 seconds ago       Up 55 seconds (healthy)       0.0.0.0:443->8080/tcp                                                             ucp-controller
255119a3444e        docker/ucp-swarm:2.1.4        "/bin/swarm manage..."   About a minute ago   Up 56 seconds                 0.0.0.0:2376->2375/tcp                                                            ucp-swarm-manager
08a6e3789fed        docker/ucp-auth:2.1.4         "/usr/local/bin/en..."   About a minute ago   Up 57 seconds (healthy)       0.0.0.0:12386->4443/tcp                                                           ucp-auth-worker
caa4f9b4543a        docker/ucp-metrics:2.1.4      "/bin/entrypoint.s..."   About a minute ago   Up 58 seconds                 0.0.0.0:12387->12387/tcp                                                          ucp-metrics
58fa77c78bb2        docker/ucp-auth:2.1.4         "/usr/local/bin/en..."   About a minute ago   Up 58 seconds                 0.0.0.0:12385->4443/tcp                                                           ucp-auth-api
867f6aec884c        docker/ucp-auth-store:2.1.4   "rethinkdb --bind ..."   About a minute ago   Up About a minute             0.0.0.0:12383-12384->12383-12384/tcp                                              ucp-auth-store
b3e536f9309b        docker/ucp-etcd:2.1.4         "/bin/etcd --data-..."   About a minute ago   Up About a minute (healthy)   2380/tcp, 4001/tcp, 7001/tcp, 0.0.0.0:12380->12380/tcp, 0.0.0.0:12379->2379/tcp   ucp-kv
927cc6a7b5c8        docker/ucp-cfssl:2.1.4        "/bin/ucp-ca serve..."   About a minute ago   Up About a minute             0.0.0.0:12381->12381/tcp                                                          ucp-cluster-root-ca
ef00fe9f7200        docker/ucp-cfssl:2.1.4        "/bin/ucp-ca serve..."   About a minute ago   Up About a minute             0.0.0.0:12382->12382/tcp                                                          ucp-client-root-ca
b56a56aeeddd        docker/ucp-agent:2.1.4        "/bin/ucp-agent pr..."   About a minute ago   Up About a minute             0.0.0.0:12376->2376/tcp                                                           ucp-proxy
403f88c79f46        docker/ucp-agent:2.1.4        "/bin/ucp-agent agent"   About a minute ago   Up About a minute             2376/tcp                                                                          ucp-agent.ywkywo08e6dagbe45aprmbhlc.18vdyg9uqfuepzslnu0uclxzj

# 3.2.4 Apply license

# Note: in order to apply the license, launch the UCP Web UI (https://<ucp-host>) and it gives you
# an option - either to upload an existing license ("Upload License") or to "Get free trial or
# purchase license".

$> docker node ls
ID                           HOSTNAME      STATUS  AVAILABILITY  MANAGER STATUS
ywkywo08e6dagbe45aprmbhlc *  centosddcucp  Ready   Active        Leader


That concludes the installation and configuration of UCP. Next is to install the Docker Trusted Registry (DTR).

4. Install & Configure Docker Trusted Registry (DTR)

4.1 Installation steps

  1. Start the virtual machine for DTR node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Add this (DTR) node to DDC UCP:
    • Access the UCP Web UI.
    • Click on "+ Add node" link.
    • It shows you command to run from the node. Copy the command, it looks something like:
      docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377

      Note: Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx

      Note: last 4 digits of SWARM-TOKEN are replaced with xxxxx.


    • Run the command from DTR node:
      $>docker swarm join --token \
      SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
      192.168.56.101:2377

      This node joined a swarm as a worker.
  5. Generate DTR Installation command string from UCP Web UI:
    • Access UCP Web UI.
    • Under Install DTR (in newer versions of UCP, you have to navigate to Admin Settings --> Docker Trusted Registry), click on Install Now, make the appropriate selections, and it gives you a command to copy. The command looks something like:
      docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
      --ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
      --ucp-url https://192.168.56.101

      Note: --ucp-node is the hostname of the UCP-managed node where you want to deploy DTR (the DTR node itself)

      Here is a screen shot that shows DTR installation command string:
  6. Start the installation.
    Note: DTR installation details can be found at https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/#step-3-install-dtr

    # 1. Pull the latest version of DTR
    $>docker pull docker/dtr
    # 2. Run installation command: 


    $>docker run -it --rm docker/dtr install --dtr-external-url https://192.168.56.102 \
    --ucp-node centosddcdtr01 --ucp-insecure-tls --ucp-username osboxes \
    --ucp-url https://192.168.56.101

    INFO[0000] Beginning Docker Trusted Registry installation
    ucp-password:
    INFO[0009] Validating UCP cert
    INFO[0009] Connecting to UCP
    INFO[0009] UCP cert validation successful
    INFO[0010] The UCP cluster contains the following nodes: centosddcucp, centosddcdtr01
    INFO[0017] verifying [80 443] ports on centosddcdtr01
    INFO[0000] Validating UCP cert
    INFO[0000] Connecting to UCP
    INFO[0000] UCP cert validation successful
    INFO[0000] Checking if the node is okay to install on
    INFO[0000] Connecting to network: dtr-ol
    INFO[0000] Waiting for phase2 container to be known to the Docker daemon
    INFO[0001] Starting UCP connectivity test
    INFO[0001] UCP connectivity test passed
    INFO[0001] Setting up replica volumes...
    INFO[0001] Creating initial CA certificates
    INFO[0001] Bootstrapping rethink...
    ...
    ...
    INFO[0115] Installation is complete
    INFO[0115] Replica ID is set to: fc27c2f482e5
    INFO[0115] You can use flag '--existing-replica-id fc27c2f482e5' when joining other replicas to your Docker Trusted Registry Cluster

    # 3. Make sure DTR is running:
       In your browser, navigate to the Docker Universal Control Plane web UI, and navigate to the Applications screen. DTR should be listed as an application.

    # 4. Access DTR Web UI:
    https://<dtr-external-url> (https://192.168.56.102 in this setup)

Troubleshooting note: if you find your DTR is having issues and you need to remove and re-install it, follow this link: https://docs.docker.com/datacenter/dtr/2.2/guides/admin/install/uninstall/

# 1. Uninstall DTR:
$>sudo docker run -it --rm docker/dtr destroy --ucp-insecure-tls


INFO[0000] Beginning Docker Trusted Registry replica destroy
ucp-url (The UCP URL including domain and port): https://192.168.56.101:443
ucp-username (The UCP administrator username): osboxes
ucp-password:
INFO[0049] Validating UCP cert
INFO[0049] Connecting to UCP
INFO[0049] UCP cert validation successful
INFO[0049] No replicas found in this cluster. If you are trying to clean up a broken replica, provide its replica ID manually.
Choose a replica to destroy: bd02a612d0c0
INFO[0109] Force removing replica
INFO[0110] Stopping containers
INFO[0110] Removing containers
INFO[0110] Removing volumes
INFO[0110] Replica removed.

# 2. Remove DTR node from UCP
$>docker node ls
$>docker node rm <node-id>

# Follow the installation steps (4, 5, 6) again.

5. Setup DDC client node

In order to access all DDC nodes (UCP, DTR, worker) and perform operations remotely, you need to have a Docker client configured to communicate with the DDC securely.

5.1 Installation steps

  1. Start the virtual machine for the client node (created in step #1. Create Virtual Machines (VMs))
  2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
  3. Make sure Docker is running.
  4. Download UCP client certificate bundle from UCP and extract it on client host machine.     
  • Access UCP Web UI and navigate to User Management
  • Click on User
  • Click on « Create a Client Bundle »
  5. Configure the client so that it can securely connect to UCP:
    # 1. Extract client bundle on Client node:
    $> unzip ucp-bundle-osboxes.zip
    Archive:  ucp-bundle-osboxes.zip
     extracting: ca.pem
     extracting: cert.pem
     extracting: key.pem
     extracting: cert.pub
     extracting: env.ps1
     extracting: env.cmd
     extracting: env.sh
    # 2. load the DDC environment
    $>eval $(<env.sh)
    # 3. Make sure docker is connected to UCP:
    $>docker ps
    # Note: based on your access level, you should see the Docker processes running on UCP, DTR and Worker(s) node(s).








  • Configure client so that it can securely connect to DTR and push/pull images.
    Note: If DTR is using the auto-generated self-signed cert, your client Docker Engine
    needs to be configured to trust the certificate presented by DTR; otherwise, you get an "x509: certificate signed by unknown authority" error.
    Refer to: https://docs.docker.com/datacenter/dtr/2.1/guides/repos-and-images/#configure-your-host for detail.
    For CentOS, you can install the DTR certificate in the client trust store as follows:
    # 1. Pull the DTR certificate. Here 192.168.56.102 is my DTR node.
    $>sudo curl -k https://192.168.56.102/ca -o /etc/pki/ca-trust/source/anchors/centosddcdtr01.crt
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100  2009  100  2009    0     0   8999      0 --:--:-- --:--:-- --:--:--  9049
    # 2. Update CA Trust

    $>sudo update-ca-trust
    # 3. Start the Docker Engine
    $> sudo systemctl restart docker
    # 4. Test the connectivity from client node to DTR node:
    $> docker login 192.168.56.102
    Username: osboxes
    Password:
    Login Succeeded

  5.2 Configure Notary client

    By configuring the Notary client, you'll be able to sign Docker image(s) with the private keys in your UCP client bundle, which are trusted by UCP and easily traced back to your user account. Read the details here: https://docs.docker.com/datacenter/dtr/2.2/guides/user/access-dtr/configure-your-notary-client/

    # 5.2.1 Download notary
    $>curl -L https://github.com/docker/notary/releases/download/v0.4.3/notary-Linux-amd64 -o notary


      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   591    0   591    0     0   1184      0 --:--:-- --:--:-- --:--:--  1184
    100 9518k  100 9518k    0     0  3300k      0  0:00:02  0:00:02 --:--:-- 5115k

    # 5.2.2 Give execution permission
    $>chmod +x notary


    # 5.2.3 Move to /usr/bin
    $>sudo mv notary /usr/bin

    # 5.2.4. Import UCP private key into notary key database:
    $> notary key import ./key.pem
    Enter passphrase for new delegation key with ID 4e672ee (tuf_keys):
    Repeat passphrase for new delegation key with ID 4e672ee (tuf_keys):

    # 5.2.5 List the keys
    $>notary key list

    ROLE          GUN    KEY ID                                                              LOCATION
    ----          ---    ------                                                              --------
    delegation           4e672ee5f4de7bf132d03554a8f592236ae6054026efc6b01873fc1b45a61dca    /home/osboxes/.docker/trust/private

    # 5.2.6 Configure the notary CLI so that it can talk to the Notary server that's part of DTR
    # There are a few ways this can be accomplished. The easiest one is to configure Notary by creating a ~/.notary/config.json file with the following content (replace <dtr-url> with your DTR URL, e.g. https://192.168.56.102, and <path-to-dtr-ca.pem> with the path to the DTR CA certificate downloaded earlier):

    {
      "trust_dir" : "~/.docker/trust",
      "remote_server": {
        "url": "<dtr-url>",
        "root_ca": "<path-to-dtr-ca.pem>"
      }
    }
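
    Once the config file is in place, a quick way to confirm the notary CLI can reach DTR's Notary server is to list the trust data for one of your repositories (the repository below is just an example; if nothing has been signed yet, notary simply reports that there is no trust data, which still confirms connectivity):

    # Connectivity check against DTR's Notary server (example repository)
    $> notary list 192.168.56.102/osboxes/lets-chat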


    # 5.2.7 [optional] Sign images while pushing to DTR:

    Note: By default, the CLI does not sign an image while pushing to DTR. In order to sign an image while pushing, set the environment variable DOCKER_CONTENT_TRUST=1.
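    For example (a minimal sketch; the repository tag shown is the one we tag later in section 8 and is only illustrative here):

    # Enable Docker Content Trust for the current shell session, then push a (signed) image
    $> export DOCKER_CONTENT_TRUST=1
    $> docker push 192.168.56.102/osboxes/lets-chat:1.0
    # Unset it if you want subsequent pushes to be unsigned
    $> unset DOCKER_CONTENT_TRUST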

    5.3 Install Docker Compose:

    Docker Compose is a very handy tool that can be used to define and manage multi-container Docker application(s). For details refer to https://docs.docker.com/compose/overview/
    Note: Docker for Mac and Windows may already include docker-compose. To find out whether Docker Compose is already installed, just run the docker-compose --version command.
      $> docker-compose --version
      bash: docker-compose: command not found...

    Install:

    $>sudo curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
    $>sudo chmod +x /usr/local/bin/docker-compose
    $> docker-compose --version
    docker-compose version 1.14.0, build c7bdf9e

    6. Setup Worker node(s):

    The worker node is the real workhorse in a DDC setup; it is where the production applications run. Below are the installation steps.

    6.1 Installation steps

    1. Start the virtual machine for the worker node (created in step #1. Create Virtual Machines (VMs))
    2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
    3. Make sure Docker is running.
    4. Add this (worker) node to DDC UCP:
      • Access the UCP Web UI.
      • Click on "+ Add node" link.
      • It shows you the command to run from the node. Copy the command; it looks something like:
        docker swarm join --token <SWARM-TOKEN> 192.168.56.101:2377
        Note: the Swarm token looks something like this: SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yxxxxx
        Note: the last few characters of the SWARM-TOKEN are replaced with xxxxx above. If you ever need to retrieve the join command again, see the command sketch after this list.
      • Run the command from the worker node:
        $>docker swarm join --token \
          SWMTKN-1-28cwz2szulitkrdult2qskn2ehlljyvs6big4oh31hw8l7ez98-f2ohafrl025tlat99f4yfqq4l \
          192.168.56.101:2377

        This node joined a swarm as a worker.
    5. Repeat steps 1 to 4 for each additional worker node
    6. Once all nodes have joined the swarm, run the following command from the client (while connected to UCP) to confirm and list all the nodes:
      $> docker node ls
      ID                           HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
      ivczlaqfyjmtvs0xqk6aivy8p    centosddcwrk02  Ready   Active
      usl2otwy3u6hj3vls9r77i45r    centosddcwrk01  Ready   Active
      ywkywo08e6dagbe45aprmbhlc *  centosddcucp    Ready   Active        Leader
      z2cobi7ag2qqsevfjsvye3d19    centosddcdtr01  Ready   Active   
    7. Optional: Put a node such as DTR or UCP in "drain" mode if you don't want application containers to be deployed on it. Here, we put the DTR node in "drain" mode (see the command sketch after this list for how to revert it):
      #command format: docker node update --availability drain <node-id>
      $> docker node update --availability drain z2cobi7ag2qqsevfjsvye3d19
      z2cobi7ag2qqsevfjsvye3d19
      $> docker node ls
      ID                           HOSTNAME        STATUS  AVAILABILITY  MANAGER STATUS
      ivczlaqfyjmtvs0xqk6aivy8p    centosddcwrk02  Ready   Active
      usl2otwy3u6hj3vls9r77i45r    centosddcwrk01  Ready   Active
      ywkywo08e6dagbe45aprmbhlc *  centosddcucp    Ready   Active        Leader
      z2cobi7ag2qqsevfjsvye3d19    centosddcdtr01  Ready   Drain   
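
    A couple of handy swarm commands related to the steps above (a small sketch; the node ID used is the DTR node from our listing):

      # Retrieve the full 'docker swarm join' command (including the token) again from any manager node
      $> docker swarm join-token worker

      # Revert a drained node back to accepting workloads
      $> docker node update --availability active z2cobi7ag2qqsevfjsvye3d19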
    Here is how the node listing of our PoC appears in the UCP Web UI:

    UCP node listing


    7. Create Additional User, Access Label and Network:

    7.1 Create additional user, team and permission label as necessary from UCP Web UI.

    Follow Docker documentation (https://docs.docker.com/datacenter/ucp/2.1/guides/admin/manage-users/create-and-manage-users/) to create user, team and permission levels as required. 

    7.2 Create network.

    As per Docker documentation, containers connected to the default bridge network can communicate with each other by IP address. If you want containers to be able to resolve IP addresses by container name, you should use user-defined networks instead. Traditionally, you could link two containers together using the legacy 'docker run --link ...' option, but here, we are going to define a network and attach our containers, so that they can communicate as required. Details can be found here https://docs.docker.com/engine/userguide/networking/#user-defined-networks
    Note: the docker links feature has been deprecated in favor of user-defined networks, as explained here.
    Service discovery in Docker is network-scoped, meaning the embedded DNS functionality in Docker
    can be used only by containers or tasks on the same network to resolve each other's addresses, so our plan here is to deploy a set of services that can communicate with each other using DNS.
    Note: If the destination and source container or service are not on the same network, Docker Engine forwards the DNS query to the default DNS server.


    # Create network 
    # From client node, first connect to UCP:
    eval $(<env.sh)

    $> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
    naf8hvyx22n6lsvb4bq43z968

    # Verify the network   
    $> docker network ls
    NETWORK ID          NAME                             DRIVER              SCOPE
    eea917ac864c        centosddcdtr01/bridge            bridge              local
    065f72b45f37        centosddcdtr01/docker_gwbridge   bridge              local
    229f9949f85f        centosddcucp/bridge              bridge              local
    d8a09aed43ae        centosddcucp/docker_gwbridge     bridge              local
    a18d616d5fed        centosddcucp/host                host                local
    fc423b7dc25f        centosddcucp/none                null                local
    o40j6xknr6ax        dtr-ol                           overlay             swarm
    vwtzfva8q8r3        ingress                          overlay             swarm
    naf8hvyx22n6        my_hrm_network                   overlay             swarm
    tbmwjleceolg        ucp-hrm                          overlay             swarm
         

    Note: if you get the error "Error response from daemon: Error response from daemon: Error response from daemon: rpc error: code = 3 desc = name must be valid as a DNS name component" while creating the network, check your network name and make sure it does not contain any dot '.'. The error message itself is a little confusing. Refer to issue 31772 for details.
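
    To see the network-scoped service discovery described above in action, once services are attached to my_hrm_network you can resolve one service from another by name (a rough sketch; the container ID is a placeholder and the 'mongo' service name comes from the stack we deploy in section 8):

    # From inside any container attached to my_hrm_network, the embedded DNS resolves peer services by name
    $> docker exec -it <lets-chat-container-id> getent hosts mongo
    # or, if the image includes ping:
    $> docker exec -it <lets-chat-container-id> ping -c 1 mongo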

    7.3 Enable "HTTP Routing Mesh" if necessary

    • Login to the UCP web Web UI.
    • Navigate to Admin Settings > Routing Mesh.
    • Check Enable HTTP Routing Mesh.
    • Configure the ports for HRM to listen on, with the defaults being 9080 and 9443. The HTTPS port defaults to 8443 so that it doesn't interfere with the default UCP management port (443).
    Note: If it is a NEW network with label '--label com.docker.ucp.mesh.http=true', you need to disable and then re-enable "HTTP Routing Mesh". It can be done through UCP UI:
    • Disable: Admin Settings --> Routing Mesh  --> Uncheck "Enable HTTP routing mesh". Click on Update button.
    • Enable: Admin Settings --> Routing Mesh  --> check "Enable HTTP routing mesh". Click on Update button.

    8. Docker Application Deployment

    8.1 Preparation:

    For this PoC, we're going to build a custom image of the Lets-Chat app and deploy it using Docker Compose. Here is what our Dockerfile looks like (the 'sleep 60' in the CMD simply gives the mongo service time to come up before lets-chat tries to connect):
    Note: All the steps listed in step 8.x are executed on or from the client node.

    8.1.1 Create Dockerfile for lets-chat:

    FROM sdelements/lets-chat:latest
    CMD (sleep 60; npm start)

    8.1.2 Create the lets-chat image using the Dockerfile. Run the 'docker build ...' command from the same directory where the Dockerfile is located.

    $> docker build -t lets-chat:1.0 .

    Sending build context to Docker daemon 4.608 kB
    Step 1/2 : FROM sdelements/lets-chat:latest
    latest: Pulling from sdelements/lets-chat
    6a5a5368e0c2: Pull complete
    7b9457ec39de: Pull complete
    ...
    ...
    876c39157780: Pull complete
    Digest: sha256:5b923d428176250653530fdac8a9f925043f30c511b77701662d7f8fab74961c
    Status: Downloaded newer image for sdelements/lets-chat:latest
     ---> 296501fb5b70
    Step 2/2 : CMD (sleep 60; npm start)
     ---> Running in 194eb91d5f59
     ---> 14e03b359b1d
    Removing intermediate container 194eb91d5f59
    Successfully built 14e03b359b1d

    8.1.3 Pull the Mongo DB image:


    $> docker pull mongo
    Using default tag: latest
    latest: Pulling from library/mongo
    f5cc0ee7a6f6: Pull complete
    d99b18c5f0ce: Pull complete
    ...
    ...
    72dc91cfe502: Pull complete
    d610498cfcc7: Pull complete
    Digest: sha256:f1ae736ea5f115822cf6fcef6458839d87bdaea06f40b97934ad913ed348f67d
    Status: Downloaded newer image for mongo:latest
       

    8.1.4 Rename/tag the images as per the DTR namespace:
    # Tag lets-chat image
    $> docker tag lets-chat:1.0 192.168.56.102/osboxes/lets-chat:1.0
    # Tag mongo image
    $> docker tag mongo:latest 192.168.56.102/osboxes/mongo:latest

    # List the images
    $> docker images
    REPOSITORY                           TAG           IMAGE ID            CREATED             SIZE
    192.168.56.102/osboxes/lets-chat     1.0           14e03b359b1d        5 minutes ago       255 MB
    lets-chat                            1.0           14e03b359b1d        5 minutes ago       255 MB
    mongo                                latest        71c101e16e61        6 days ago          358 MB
    192.168.56.102/osboxes/mongo         latest        71c101e16e61        6 days ago          358 MB
    ....


    8.1.5 Push the images to DTR:
    Note: before pushing an image, you need to create a repository for it (if one doesn't exist already). Create the corresponding repos from the DTR Web UI:

    # Login to DTR
    $ docker login 192.168.56.102 -u osboxes -p
    Login Succeeded

    # Push the mongo image to DTR
    $ docker push 192.168.56.102/osboxes/mongo
    The push refers to a repository [192.168.56.102/osboxes/mongo]
    722b5b443860: Pushed
    beaf3a1d24af: Pushed
    ...
    ...
    2589ed7ad668: Pushed
    d08535b0996b: Pushed

    latest: digest: sha256:f1ae736ea5f115822cf6fcef6458839d87bdaea06f40b97934ad913ed348f67d size: 2614

    # Push the lets-chat image to DTR
    $> docker push 192.168.56.102/osboxes/lets-chat
    The push refers to a repository [192.168.56.102/osboxes/lets-chat]
    fb8b4be9b6e6: Pushed
    d3b5bb1c4411: Pushed
    ...
    ...
    b2ac5371e0f2: Pushed
    142a601d9793: Pushed

    1.0: digest: sha256:92842b34263cfb3045cf2f431852bdc4b4dd8f01bc85eb1d0cd34d00888c9bba size: 2418


    8.1.6 Pull the images to all DDC nodes where the images will be instantiated into the corresponding containers.

    # Connect to UCP.
    # Note: make sure you run the eval command below from the directory where 
    # the client bundle was extracted
    $>eval $(<env.sh)

    #Pull lets-chat
    $> docker pull 192.168.56.102/osboxes/lets-chat:1.0
    centosddcwrk01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcucp: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcwrk02: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded
    centosddcdtr01: Pulling 192.168.56.102/osboxes/lets-chat:1.0... : downloaded

    # Pull mongo
    $> docker pull 192.168.56.102/osboxes/mongo
    Using default tag: latest
    centosddcwrk01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcucp: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcwrk02: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded
    centosddcdtr01: Pulling 192.168.56.102/osboxes/mongo:latest... : downloaded

    8.1.7 Create the Docker Compose file:

    Here is what our docker-compose.yml looks like:


    version: "3"
    services:
       mongo:
          image: 192.168.56.102/osboxes/mongo:latest
          networks:
             - my_hrm_network
          deploy:
             placement:
                constraints: [node.role == manager]
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 60s
             labels:
                - "com.docker.ucp.access.label=dev"
       lets-chat:
          image: 192.168.56.102/osboxes/lets-chat:1.0
          networks:
             - my_hrm_network
          ports:
             - "8080"
          deploy:
             placement:
                constraints: [node.role == worker]
             mode: replicated
             replicas: 4
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 60s
             labels:
                - "com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080"
                - "com.docker.ucp.access.label=dev"
    networks:
       my_hrm_network:
          external:
             name: my_hrm_network   

    A few things to notice in the docker-compose.yml above:

    1. placement constraints: [node.role == manager] for mongo. We are instructing Docker to instantiate the mongo container only on a node that has the manager role. The role can be “worker” or “manager”.
    2. Label: com.docker.ucp.access.label=dev; defines an access constraint by label. See https://blog.docker.com/2016/03/role-based-access-control-docker-ucp-tutorial/ for details.
    3. Label: com.docker.ucp.mesh.http.8080=external_route=http://mydockertest.com:8080,internal_port=8080; here the lets-chat application is configured for HRM and is to be accessed using host mydockertest.com on port 8080, which will be our HA-Proxy's host and port. Docker uses DNS for service discovery as services are created, and Docker has different built-in routing meshes for high availability. The HTTP Routing Mesh (HRM) is an application-layer routing mesh that routes HTTP traffic based on the DNS hostname and is part of UCP 2.0.
    4. Also, note that we are not explicitly exposing a port for mongo; since mongo and lets-chat are on the same network 'my_hrm_network', they will be able to communicate even though they will be instantiated on different hosts (nodes). The lets-chat application listens on port 8080, but we are not publishing it explicitly to the host because we are implementing containers with scaling in mind and relying on the Docker HRM. If you publish a port explicitly to the host (e.g. -p 8080:8080), it becomes a problem when you have to instantiate more than one replica on the same host: there will be a port conflict, since only one process can listen on a given port on the same IP. More details about HRM and service discovery: https://docs.docker.com/engine/swarm/ingress/#configure-an-external-load-balancer and https://docs.docker.com/datacenter/ucp/2.1/guides/admin/configure/use-domain-names-to-access-services/. A good read about service discovery, load balancing, Swarm, Ingress and HRM: https://success.docker.com/Architecture/Docker_Reference_Architecture%3A_Service_Discovery_and_Load_Balancing_with_Docker_Universal_Control_Plane_(UCP)
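
    Since HRM routes on the HTTP Host header, once the stack is deployed (section 8.2) you can test the external route even before any DNS or ha-proxy setup by sending a request to the HRM port on any swarm node with the right Host header (a sketch; 192.168.56.103 is one of our worker nodes and 8080 is the HRM HTTP port used in this PoC):

    # Request hits the HRM port on a swarm node; HRM routes it by the Host header to a lets-chat replica
    $> curl -H "Host: mydockertest.com" http://192.168.56.103:8080/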

     8.2 Deployment:

    8.2.1 Validate docker-compose.yml:

    # Validate docker-compose.yml, run the following command from the same directory 
    # where docker-compose.yml is located.
    $>docker-compose -f docker-compose.yml config

    WARNING: Some services (lets-chat) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
         

    Note: the above WARNING is expected, because the regular 'docker-compose up -d ...' command does not support the 'deploy' key.
    In our case, we are going to use 'docker stack deploy ...' instead, so we can safely ignore the warning.

    8.2.2 Deploy

    # Execute docker stack deploy command using the compose-file 
    $> docker stack deploy --compose-file docker-compose.yml dev_lets-chat
    Creating service dev_lets-chat_lc-mongo
    Creating service dev_lets-chat_lets-chat

    # Verify service(s) are created:
    $> docker stack ls
    NAME           SERVICES
    dev_lets-chat  2

    # See the service details:
    $> docker stack services dev_lets-chat
    ID            NAME                     MODE        REPLICAS  IMAGE
    kib7peniroci  dev_lets-chat_mongo      replicated  1/1       192.168.56.102/osboxes/mongo:latest
    t7s5xpgdxncs  dev_lets-chat_lets-chat  replicated  4/4       192.168.56.102/osboxes/lets-chat:1.0

       

    As you can see, one instance of mongo and 4 instances of lets-chat have been created.
    If you want to learn more about stack deployment, refer to https://docs.docker.com/engine/swarm/stack-deploy/#deploy-the-stack-to-the-swarm
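
    To see exactly where each task (container) landed, and to confirm that the placement constraints were honored, you can also run the following while still connected to UCP via the client bundle:

    # List the individual tasks of the stack and the node each one is running on
    $> docker stack ps dev_lets-chat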


    9. Setup HA-Proxy node:

    Here we use a simple HA-Proxy configuration just to show the working idea. Refer to the HA-Proxy documentation and the Docker documentation for HA-Proxy for details.
    Note: for this PoC, we are deploying ha-proxy outside of the swarm cluster.

    9.1 Setup steps

    1. Start the virtual machine for HA-Proxy node (created in step #1. Create Virtual Machines (VMs))
    2. Follow the CS Docker engine installation steps (step #2. Install & Configure Commercially Supported (CS) Docker Engine)
    3. Make sure Docker is running.
    4. Prepare the configuration file for HA-Proxy.



    # /etc/haproxy/haproxy.cfg, version 1.7
    global
       maxconn 4096

    defaults
       mode   http
       timeout connect 5000ms
       timeout client 50000ms
       timeout server 50000ms

    frontend http
       bind *:8080
       option http-server-close
       stats uri /haproxy?stats
       default_backend bckendsrvs

    backend bckendsrvs
       balance roundrobin
       server worker1 192.168.56.103:8080 check
       server worker2 192.168.56.104:8080 check

    A few notes about the haproxy.cfg above:
    1. Backend connections. We have 4 replicas (2 replicas per node), but as you can see only two back-end servers are listed in the configuration file. That is the beauty of using the Docker Swarm HRM: as long as the traffic reaches any of the HRM nodes, whether or not a replica is actually running there, Swarm automatically directs the traffic to one of the replicas running on an available node. Docker Swarm also takes care of load balancing among all replicas.
    2. The check option at the end of the server directives specifies that health checks should be performed on those back-end servers.
    3. The frontend section defines the bind (IP and port) configuration for the proxy and references the corresponding backend configuration. In this case, it is listening on all available IPs on port 8080.
    4. 'stats uri' defines the status URI.
    Now that our ha-proxy configuration file is ready, let's build the custom ha-proxy image and instantiate it.

    9.2 Create Dockerfile for HA-Proxy

    FROM haproxy:1.7
    COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg

    Note: In this case, haproxy.cfg and Dockerfile are located in the same directory from where we are executing 'docker build ...' command as shown below.

    9.3 Create custom image

    # Create Image

    $> docker build -t my_haproxy:1.7 .
    Sending build context to Docker daemon 3.072 kB
    Step 1/2 : FROM haproxy:1.7
    1.7: Pulling from library/haproxy
    ef0380f84d05: Pull complete
    405e00049647: Pull complete
    c97485231395: Pull complete
    389e4de140a0: Pull complete
    9abb32070ad9: Pull complete
    Digest: sha256:c335ec625d9a9b71fa5269b815597392a9d2418fa1cedb4ae0af17be8029a5b4
    Status: Downloaded newer image for haproxy:1.7
     ---> d66f0c435360
    Step 2/2 : COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
     ---> 182b33ee6345
    Removing intermediate container 4416fbab54be
    Successfully built 182b33ee6345

    # List image
    $> docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
    my_haproxy          1.7                 182b33ee6345        About a minute ago   135 MB
    haproxy             1.7                 d66f0c435360        6 days ago           135 MB


    9.4 Verify the configuration and Instantiate ha-proxy container:

    # Verify the configuration file: 
    $> docker run -it --rm --name haproxy-syntax-check my_haproxy:1.7 haproxy -c \
       -f /usr/local/etc/haproxy/haproxy.cfg

     haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -c -f /usr/local/etc/haproxy/haproxy.cfg -Ds
    Configuration file is valid


    # Instantiate ha-proxy instance:
    $> docker run -d --name ddchaproxy -p 8080:8080 my_haproxy:1.7


    5bc06e2680e72475f2585c453f6ada0a5ef349e5222f9e75b2c0f98eb1a0462f
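
    Optionally, confirm the proxy container is up and check its startup logs before moving on:

    # Verify the ha-proxy container is running and inspect its logs
    $> docker ps --filter name=ddchaproxy
    $> docker logs ddchaproxy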


    10. Access and Verify Application 

    10.1) Accessing application:
    Once the ha-proxy is running, access the application. Make sure the firewall is not blocking the port that ha-proxy is listening on.

    http://<ha-proxy-host>:<ha-proxy-port>/<application-uri>

    Important: In order to access the application, you need to make sure that the '<ha-proxy-host>' in the above URL matches the 'host' part of the external_route configuration of HRM.
    In our case it is 'mydockertest.com', so make sure 'mydockertest.com' resolves to the IP address of the ha-proxy host. This is how HRM, along with Swarm, discovers the services and routes the requests in the ingress cluster, and it is what lets us scale containers dynamically.
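
    For a quick test without touching your DNS server, you can simply map the hostname to the ha-proxy node in /etc/hosts on the machine you browse from (a sketch; the IP shown is a hypothetical address for our ha-proxy VM, so substitute your own):

    # Map mydockertest.com to the ha-proxy node for testing (IP is hypothetical)
    $> echo "192.168.56.105  mydockertest.com" | sudo tee -a /etc/hosts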

    10.2) Application verification:

    10.2.1) Get the stats from haproxy. Along with other things, the stats page shows the request count and which Swarm node is serving the requests:
    http://<ha-proxy-host>:<ha-proxy-port>/haproxy?stats

    10.2.2) First, access lets-chat through the Web UI (http://mydockertest.com:8080). Create your account and log in using your credentials. Once you have created the account and are able to log in, you can verify that lets-chat is making a successful connection to the mongo db as follows:

    # Inspect the lets-chat instance:
    $> docker inspect 3d046c183b6d | grep mongo
       "LCB_DATABASE_URI=mongodb://mongo/letschat",

    #Access the mongodb instance and run mongo shell to verify the data.
    $> docker exec -it jbz7h5hdvb20 bash

    # Launch the mongo shell
    root@jbz7h5hdvb20:/# mongo
    MongoDB shell version v3.4.5
    connecting to: mongodb://127.0.0.1:27017
    MongoDB server version: 3.4.5
    Welcome to the MongoDB shell.

    # Run command 'show dbs' and make sure letschat database is in the list.
    > show dbs
    admin     0.000GB
    letschat  0.000GB
    local     0.000GB

    # Connect to letschat database.
    > use letschat
    switched to db letschat

    # List the collections and make sure the 'users' collection is among them.
    > show collections
    messages
    rooms
    sessions
    usermessages
    users

    # Make sure Users table has account data that was created before.
    > db.users.find()
    { "_id" : ObjectId("595bdce9559bb1000eae7b9e"), "displayName" : "Purna", "lastName" : "Poudel", "firstName" : "Purna", "password" : "$2a$10$JlZrr3Gu3aklxx4qeUK6uuDF3jQDZ/CuA17.Clm6VKk6/NN35QOT6", "email" : "purna.poudel@gmail.com", "username" : "ppoudel", "provider" : "local", "messages" : [ ], "rooms" : [ ], "joined" : ISODate("2017-07-04T18:22:33.868Z"), "__v" : 0 }

    Once you have Docker Datacenter up and running, upgrade it to Docker EE 2.0 and UCP 3.x to have the choice of Swarm or Kubernetes orchestration. See my post Upgrade to Docker EE 2.0 and UCP 3.x for Choice of Swarm or Kubernetes Orchestration.


    Looks like you're really into Docker, see my other related blog posts below: