
Using Docker Secrets with IBM WebSphere Liberty Profile Application Server


In this blog post, I'm discussing how to use Docker Secrets (a Docker Swarm service feature) to manage sensitive data (password encryption keys, SSH private keys, SSL certificates, etc.) for a Dockerized application powered by the IBM WebSphere Liberty Profile (WLP) application server. Docker Secrets helps to centrally manage this sensitive information and protect it at rest and in transit (encrypted, and securely transmitted only to those containers that need it and have been granted explicit access to it). Going deep into Docker Secrets is out of scope for this post; if you need to familiarize yourself with it, refer to https://docs.docker.com/engine/swarm/secrets/.
Note: if you'd like to know how to program encryption/decryption within your Java application using the passwordutilities-1.0 feature of WLP, see my blog post How to use WLP passwordUtilities feature for encryption/decryption.
I'm writing this post in a tutorial style, so that anyone interested can follow along with the steps.

Pre-requisites: In order to follow the steps outlined here, you need the following:

  1. Good working knowledge of Docker
  2. A configured Docker Swarm environment (Docker 1.13 or a higher version) with at least one manager and one worker node, or Docker Datacenter with Universal Control Plane (UCP) having a manager node and worker node(s). It's good to have a separate Docker client node, so that you can remotely connect to the manager and execute commands.
  3. Good working knowledge of IBM WebSphere Liberty Profile (https://developer.ibm.com/wasdev/blog/2013/03/29/introducing_the_liberty_profile/).

Here is a brief description of how we are going to use Docker Secrets with WLP.

  1. The password encryption key used to encrypt passwords for the WLP KeyStore, TrustStore, and any other password(s) used by WLP applications will be externalized and stored as a Docker secret.
  2. Private keys, such as the one stored in the KeyStore (used to enable secure communication in WLP), will be externalized and stored as Docker secrets.

Here are some obvious benefits:

  1. All sensitive data is managed centrally. Since Docker enforces access control, only people with the right privilege(s) have access to it.
  2. Only those containers and services that have been granted explicit access, on a need-to-know basis, can read the private/sensitive data.
  3. Private information remains private at rest and in transit.
  4. A new Docker image created by 'docker commit' will not contain any sensitive data, and a dump/package created by the WLP server dump or package command will not contain the encryption key, as it's externalized. See more insights about WLP password encryption here: https://www.ibm.com/support/knowledgecenter/en/SS7K4U_8.5.5/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/cwlp_pwd_encrypt.html and managing Docker Secrets here: https://docs.docker.com/engine/swarm/secrets/

Enough talk; now let's start the real work. Below are the major steps that we'll carry out:

  1. Create Docker secrets for the following items used by WLP:
    • KeyStore
    • Truststore
    • Password Encryption key
  2. Build Docker image based on websphere-liberty:webProfile7
  3. Create network
  4. Put together docker-compose.yml for deployment.
  5. Deploy application as Docker service.


Create Docker Secrets

Here, we're going to use the Docker command line (CLI) and execute Docker commands remotely from the Docker client node. You need to have the following three environment variables correctly set up in order to execute commands remotely; a sketch of setting them follows the list. Refer to https://docs.docker.com/engine/reference/commandline/cli/#description for details.
  • DOCKER_TLS_VERIFY
  • DOCKER_CERT_PATH
  • DOCKER_HOST
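
For illustration, setting them in the shell might look like this (the certificate path and manager address are placeholders, not values from this environment):

$> export DOCKER_TLS_VERIFY=1
$> export DOCKER_CERT_PATH=/home/user/.docker/ucp-certs
$> export DOCKER_HOST=tcp://ucp-manager.example.com:443
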
If you are using Docker Datacenter, you can use the GUI-based UCP Admin Console to create the same secrets. Note: the label com.docker.ucp.access.label="<value>" is not mandatory unless you have access constraints defined. For details, refer to Authentication and authorization.
1) Create a Docker secret named keystore.jks, which is the key database storing the private key to be used by WLP.

#Usage: docker secret create [OPTIONS] SECRET file|- 
#Create a secret from a file or STDIN as content 

$> docker secret create keystore.jks /mnt/nfs/dockershared/wlpapp/keystore.jks --label com.docker.ucp.access.label="dev"
 idc9em1u3fki8k0z77ol91sh4 

2) The following command creates a secret called truststore.jks from the physical Java keystore file that contains the trust certificates.

$> docker secret create truststore.jks /mnt/nfs/dockershared/wlpapp/truststore.jks --label com.docker.ucp.access.label="dev"
w8qs1o7pwrvl96nuamv97sb9t

3) Finally, create the Docker secret called app_enc_key.xml, which refers to the XML fragment containing the definition of the password encryption key.

$> docker secret create app_enc_key.xml /mnt/nfs/dockershared/wlpapp/app_enc_key.xml --label com.docker.ucp.access.label="dev"
kj3hcw4ss71hnudfgr6g32mxm

Note: Docker secrets are available under '/run/secrets/' at runtime to any container that has been granted explicit access to the secret.
Here is what /mnt/nfs/dockershared/wlpapp/app_enc_key.xml looks like:

<server> 
   <variable name="wlp.password.encryption.key" value="#replaceMe#">
   </variable>
</server>

Note: Make sure to replace the string '#replaceMe#' with your own password encryption key.
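
Once the secrets are assigned to a service (we'll do that in docker-compose.yml later), you can sanity-check that they are mounted inside a running container; a minimal sketch, with a placeholder container ID:

#list the secrets mounted into a running container
$> docker exec <container-id> ls /run/secrets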

Let's check and make sure all our secrets are properly created and listed:

$> docker secret ls
ID                        NAME             CREATED        UPDATED
idc9em1u3fki8k0z77ol91sh4 keystore.jks     3 hours ago    3 hours ago
kj3hcw4ss71hnudfgr6g32mxm app_enc_key.xml  21 seconds ago 21 seconds ago
w8qs1o7pwrvl96nuamv97sb9t truststore.jks   3 hours ago    3 hours ago


Building Docker Image:

Now, let's first encrypt our keystore and truststore passwords using the pre-defined encryption key and put together the server.xml for the WLP server. We are going to use the securityUtility tool that ships with IBM WLP to encrypt our passwords.
Note: make sure your password encryption key matches the one defined by the 'wlp.password.encryption.key' property in app_enc_key.xml.
Here I'm encrypting my example password '#myStrongPassw0rd#' using the encryption key '#replaceMe#' with encoding option 'aes'.
Please note that the encoding option 'xor' ignores the encryption key and uses the default.

$> cd /opt/ibm/wlp/bin
$> ./securityUtility encode #myStrongPassw0rd# --encoding=aes --key=#replaceMe#
{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==

Now we have our Docker secrets created and our passwords encrypted. It's time to put together the server.xml for the WLP application server and build the Docker image. Here is what my server.xml looks like.

<server description="TestWLPApp">
   <featureManager>
      <feature>javaee-7.0</feature>
      <feature>localConnector-1.0</feature>
      <feature>ejbLite-3.2</feature>
      <feature>jaxrs-2.0</feature>
      <feature>jpa-2.1</feature>
      <feature>jsf-2.2</feature>
      <feature>json-1.0</feature>
      <feature>cdi-1.2</feature>
      <feature>ssl-1.0</feature>
   </featureManager>
   <include location="/run/secrets/app_enc_key.xml"/>
   <httpEndpoint host="*" httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
   <ssl clientAuthenticationSupported="true" id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore"/>
   <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <applicationMonitor updateTrigger="mbean"/>
   <dataSource id="wlpappDS" jndiName="wlpappDS">
      <jdbcDriver libraryRef="OracleDBLib"/>
      <properties.oracle password="{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==" url="jdbc:oracle:thin:@192.168.xx.xxx:1752:WLPAPPDB" user="wlpappuser"/>
   </dataSource>
   <library id="OracleDBLib">
      <fileset dir="/apps/wlpapp/shared_lib" includes="ojdbc6-11.2.0.1.0.jar"/>
   </library>
   <webApplication contextRoot="wlpappctx" id="wlpapp" location="/apps/wlpapp/war/wlptest.war" name="wlpapp"/>
</server>

As you can see, the locations of defaultKeyStore, defaultTrustStore, and app_enc_key.xml all point to the directory '/run/secrets'. As mentioned before, this is because all secrets granted to a service are made available under '/run/secrets' inside the corresponding container.

Now let's put together Dockerfile.

FROM websphere-liberty:webProfile7
# Note: COPY sources are resolved relative to the build context, so run
# 'docker build' from the directory that contains these files
# (/mnt/nfs/dockershared/wlpapp in this example).
COPY server.xml /opt/ibm/wlp/usr/servers/defaultServer/
RUN installUtility install --acceptLicense defaultServer
COPY wlptest.war /apps/wlpapp/war/
COPY ojdbc6-11.2.0.1.0.jar /apps/wlpapp/shared_lib/
CMD ["/opt/ibm/java/jre/bin/java","-javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar","-Djava.awt.headless=true","-jar","/opt/ibm/wlp/bin/tools/ws-server.jar","defaultServer"]

Note: above, I'm copying my server.xml into /opt/ibm/wlp/usr/servers/defaultServer/ before running installUtility, as I'm adding a few features required by my application, including ssl-1.0.

Finally, we're going to build the Docker image.

$> docker build -t 192.168.56.102/osboxes/wlptest:1.0 .
Sending build context to Docker daemon 56.9 MB
Step 1/7 : FROM websphere-liberty:webProfile7
---> c035090355f5
...
Step 4/7 : RUN installUtility install --acceptLicense defaultServer
---> Running in 2bce0d02e253
Checking for missing features required by the server ...
...
Successfully built 07fef794348e

Note: 192.168.56.102 is my local Docker Trusted Registry (DTR).

Once the image is successfully built, make sure it is available on all nodes of the Docker Swarm. I'm not going to show in detail how you distribute the image, but there are two common options (sketched below):
  • If you are using DTR, you can push the image to the registry (using 'docker push ...'), then connect to each Docker Swarm host and execute 'docker pull ...'.
  • Alternatively, use 'docker save ...' to save the image as a tar file and then load it into each node of the Swarm using 'docker load ...'.
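
Roughly, the two options look like this (using the registry address and tag from the build step above):

#Option 1: push to the registry, then pull on each Swarm node
$> docker push 192.168.56.102/osboxes/wlptest:1.0
$> docker pull 192.168.56.102/osboxes/wlptest:1.0

#Option 2: save the image as a tar archive and load it on each node
$> docker save -o wlptest_1.0.tar 192.168.56.102/osboxes/wlptest:1.0
$> docker load -i wlptest_1.0.tar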
Here, I'm deploying this into a Docker Datacenter which has two UCP worker nodes, one UCP manager node, and a DTR node. I'm also going to use the HTTP Routing Mesh (HRM) and a user-defined overlay network in swarm mode.
Note: a user-defined Docker network and HRM are NOT necessary to utilize Docker secrets.


Create Overlay network:

$> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
naf8hvyx22n6lsvb4bq43z968

Note: the label 'com.docker.ucp.mesh.http=true' is required when creating the network in order to utilize HRM.

Put together docker-compose.yml

Here is my compose file. Yours may look different.

version: "3.1"
services:
   wlpappsrv: 

      image: 192.168.56.102/osboxes/wlptest:1.0
      volumes:
         - /mnt/nfs/dockershared/wlpapp/server.xml:/opt/ibm/wlp/usr/servers/defaultServer/server.xml
      networks:
         - my_hrm_network
      secrets:
         - keystore.jks
         - truststore.jks
         - app_enc_key.xml
      ports:
         - 9080
         - 9443
      deploy:
         mode: replicated
         replicas: 4
         placement:
            constraints: [node.role == worker]
         resources:
            limits:
               memory: 2048M
         restart_policy:
            condition: on-failure
            max_attempts: 3
            window: 6000s
         labels:
            - "com.docker.ucp.mesh.http.9080=external_route=http://mydockertest.com:8080,internal_port=9080"
            - "com.docker.ucp.mesh.http.9443=external_route=sni://mydockertest.com:8443,internal_port=9443"
            - "com.docker.ucp.access.label=dev"
networks:
   my_hrm_network:
      external:
         name: my_hrm_network
secrets:
   keystore.jks:
      external: true
   truststore.jks:
      external: true
   app_enc_key.xml:
      external: true

A few notes about the docker-compose.yml:
  1. The volume definition that maps server.xml in the container to the one on the NFS file system is optional. This mapping gives additional flexibility to update server.xml. You can achieve similar or even better flexibility/portability by using the Docker Swarm Configs service. See my blog post - How to use Docker Swarm Configs service with WebSphere Liberty Profile for details.
  2. The secrets definition under the service 'wlpappsrv' refers to the top-level secrets definition, which in turn refers to the externally defined secrets.
  3. "com.docker.ucp.mesh.http." labels are totally optional and only required if you are using HRM. 
  4. "com.docker.ucp.access.label" is also optional and required only if you have defined access constraints.
  5. Since I'm using Swarm and HRM, I don't need to explicitly map the internal container ports to host ports. If you need to map them, you can use something like below for your port definition:
    ports:
       - 9080:9080
       - 9443:9443
  6. You may encounter a situation where your containerized application is not able to access the secrets created under /run/secrets. It may be related to bug #31006. To resolve the issue, use 'mode: 0444' while defining your secrets, something like this (a fuller sketch follows this list):
    secrets:
       - source: keystore.jks
         mode: 0444
       ...   
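
For completeness, a long-syntax version of the service-level secrets block might look like this (the explicit 'target' is optional; it defaults to the source name):

      secrets:
         - source: keystore.jks
           target: keystore.jks
           mode: 0444
         - source: truststore.jks
           target: truststore.jks
           mode: 0444
         - source: app_enc_key.xml
           target: app_enc_key.xml
           mode: 0444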

Deploy the service 

Here I'm using "docker stack deploy..." to deploy the service:
$> docker stack deploy --compose-file docker-compose.yml dev_WLPAPP

Note: In certain cases, you may get the error message "secrets Additional property secrets is not allowed". To resolve it, make sure your compose file version is 3.1. In my case, where it's working fine, I have Docker version 17.03.2-ee4, API version 1.27, and Docker Compose version 1.14.0.
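
A quick way to confirm the Docker and Docker Compose versions in play:

$> docker version --format '{{.Server.Version}} (API {{.Server.APIVersion}})'
$> docker-compose version --short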

Once the service is deployed, you can list it using the 'docker service ls' command:

$> docker service ls
ID           NAME                 MODE       REPLICAS IMAGE
28xhhnbcnhfg dev_WLPAPP_wlpappsrv replicated 4/4      192.168.56.102/osboxes/wlptest:1.0

And list the replicated containers:
$> docker ps
CONTAINER ID IMAGE                              COMMAND CREATED STATUS PORTS NAMES
2052806bbae3 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.3.m7apci6i1ks218ddnv4qsdbwv
541cf0f39b6e 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.4.wckec2jcjbmrhstftajh2zotr
ccdd7275fd7f 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.2.oke0fz2sifs5ej0vy63250wo9
7d5668a4d851 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.1.r9gi0qllnh8r9u8popqg5mg5b

And here is what the WLP messages.log shows (taken from one of the containers' log files):
********************************************************************************
product = WebSphere Application Server 17.0.0.2 (wlp-1.0.17.cl170220170523-1818)
wlp.install.dir = /opt/ibm/wlp/
server.output.dir = /opt/ibm/wlp/output/defaultServer/
java.home = /opt/ibm/java/jre
java.version = 1.8.0
java.runtime = Java(TM) SE Runtime Environment (pxa6480sr4fp7-20170627_02 (SR4 FP7))
os = Linux (3.10.0-514.el7.x86_64; amd64) (en_US)
process = 1@e086b8c54a8d
********************************************************************************
[7/24/17 19:44:29:275 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0001I: The server defaultServer has been launched.
...
[7/24/17 19:44:30:533 UTC] 00000017 com.ibm.ws.config.xml.internal.XMLConfigParser A CWWKG0028A: Processing included configuration resource: /run/secrets/app_enc_key.xml
[7/24/17 19:44:31:680 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 2.763 seconds
[7/24/17 19:44:31:990 UTC] 0000001f com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[7/24/17 19:44:45:877 UTC] 00000017 com.ibm.ws.security.ready.internal.SecurityReadyServiceImpl I CWWKS0007I: The security service is starting...
[7/24/17 19:44:47:262 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager I CWWKS4103I: Creating the LTPA keys. This may take a few seconds.
[7/24/17 19:44:47:295 UTC] 00000017 ibm.ws.security.authentication.internal.jaas.JAASServiceImpl I CWWKS1123I: The collective authentication plugin with class name NullCollectiveAuthenticationPlugin has been activated.
[7/24/17 19:44:48:339 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager A CWWKS4104A: LTPA keys created in 1.065 seconds. LTPA key file: /opt/ibm/wlp/output/defaultServer/resources/security/ltpa.keys
[7/24/17 19:44:48:365 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyCreateTask I CWWKS4105I: LTPA configuration is ready after 1.107 seconds.
[7/24/17 19:44:57:514 UTC] 00000017 com.ibm.ws.app.manager.internal.monitor.DropinMonitor A CWWKZ0058I: Monitoring dropins for applications.
[7/24/17 19:44:57:651 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host * (IPv6) port 9080.
[7/24/17 19:44:57:675 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint-ssl has been started and is now listening for requests on host * (IPv6) port 9443.
[7/24/17 19:44:57:947 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302 has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7276.
[7/24/17 19:44:57:951 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302-ssl has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7286.
...

As the log messages show, the server includes the configuration from /run/secrets/app_enc_key.xml, and defaultHttpEndpoint-ssl has been started and is listening on port 9443, meaning that the server successfully loaded and opened the /run/secrets/keystore.jks and /run/secrets/truststore.jks files using the passwords encrypted with the encryption key defined in /run/secrets/app_enc_key.xml.

Now it's time to access the application. In my case, since I'm using HRM, I access it as: https://mydockertest.com:8443/wlpappctx
If you are not using HRM, you may access it using:
https://<docker-container-host>:9443/<application-context>
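
For a quick check from the command line, something like this works (-k skips certificate verification, which is handy with self-signed certificates; the host and context are placeholders):

$> curl -k https://<docker-container-host>:9443/<application-context>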


Example using Load-Balancer


If you have a load balancer in front and want to set up pass-through SSL, you can use SNI, aka SSL routing. Below is a simple example using HAProxy. You can also refer to the HAProxy documentation here for details.

Here is haproxy.cfg for our example PoC:
# /etc/haproxy/haproxy.cfg, version 1.7
global
   maxconn 4096

defaults
   timeout connect 5000ms
   timeout client 50000ms
   timeout server 50000ms

frontend frontend_ssl_tcp
   bind *:8443
   mode tcp
   tcp-request inspect-delay 5s
   tcp-request content accept if { req_ssl_hello_type 1 }
   default_backend bckend_ssl_default

backend bckend_ssl_default
   mode tcp
   balance roundrobin
   server worker1 192.168.56.103:8443 check
   server worker2 192.168.56.104:8443 check           

Here is a Dockerfile for the custom image:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Build the image:
Note: execute the 'docker build ...' command from the same directory where the Dockerfile is located.

$> docker build -t my_haproxy:1.7 .
Once you have built the image, start the HAProxy container like below:
$> docker run -d --name ddchaproxy -p 8443:8443 my_haproxy:1.7

Note: In this case, HAProxy is listening on port 8443.

Access the application:

https://mydockertest.com:8443/wlpappctx

Note: Make sure mydockertest.com resolves to the IP address of ha-proxy.



Making Your Container Deployment Portable

   This post is a follow-up to and extension of my previous post "Setting up CommandLine Environment for IBM® Bluemix® Container Service".
   In this post, I'm further exploring ways of working with containers, whether locally deployed on native Docker or created with the IBM® Bluemix® Container Service. I'm going to show a few basic scripting ideas, so that the same docker-compose.yml and other related files can be used no matter whether you are dealing with locally deployed native Docker container(s) or IBM container(s).
A step further, here we will be working with multiple containers and employing Docker Compose. I have taken the basic steps for this exercise from the Bluemix tutorial (https://console.ng.bluemix.net/docs/containers/container_compose_intro.html#container_compose_config) and added a few steps and some logic for basic automation, making it portable so that it can be executed the same way regardless of environment.

Pre-requisites for this exercise:

  1. (native) Docker installed and running locally (perhaps on your laptop/desktop)
  2. CommandLine environment setup for IBM® Bluemix® Container Service. See previous post "Setting up CommandLine Environment for IBM® Bluemix® Container Service".
  3. Docker Compose version 1.6.0 or later installed on your laptop/desktop. See the installation instructions here.
  4. lets-chat and mongo images available in your local and Bluemix private registries.

As part of this exercise, we will put together a 'docker-compose.yml' with replaceable variable(s), an '.env' file for environment variables with default values, a property file 'depl.properties' for environment-specific properties, and a script file 'autoDeploy.sh' with basic logic that can be executed to manage both native Docker and IBM Bluemix containers. We will be creating and linking the following two containers.

  1. lets-chat (basic chat application)
  2. mongo (database to store data)

At the end, we'll also look into a few possible issues that you may encounter.

Let's start by creating docker-compose.yml. Compose simplifies the definition and execution of multi-container Docker applications. See the Docker Compose documentation for details.
Below is our simple docker-compose.yml, which defines two containers, 'lets-chat' and 'lc-mongo'. As you can see, a variable has been assigned as the value of the 'image' attribute. This keeps the file portable between native Docker and containers deployed on IBM Bluemix, since the image registry differs between the two. You can assign a variable this way to any attribute's value, and it will be replaced by the value of the corresponding environment variable.

lets-chat:
   image: ${LETS_CHAT_IMAGE}
   container_name: lets-chat
   ports:
      - "8080:8080"
   links:
      - lc-mongo:mongo
lc-mongo:
   image: ${MONGODB_IMAGE}
   container_name: lc-mongo
   expose:
      - "27017"

Now let's see where we can define the environment variables. Docker supports either defining them through the command shell as 'export VAR=VALUE' or defining them in an '.env' file. (Note: if you deploy your service using 'docker stack deploy --compose-file docker-compose.yml <service-name>' instead of 'docker-compose up ...', values in docker-compose.yml may not be replaced by the corresponding environment values defined in the .env file. See https://github.com/moby/moby/issues/29133.) An environment variable defined through 'export VAR=VALUE' takes precedence. See more detail on variable substitution and on declaring default environment variables in a file.

Below is our '.env' file:

# COMPOSE_HTTP_TIMEOUT default value is 60 seconds.
COMPOSE_HTTP_TIMEOUT=120
MONGODB_IMAGE=mongo
LETS_CHAT_IMAGE=lets-chat

Usually, it is best practice to define default variables with 'DEV/Development' environment-specific values in the '.env' file and to have a mechanism to override those values for higher environment(s); this helps boost developers' productivity. Following that principle, I've defined my local native Docker environment variables in the '.env' file and use a separate property file to define environment variables and their values for other environments (Bluemix, in my case, for this post).
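
As a quick illustration of that precedence, an exported variable overrides the '.env' default; 'docker-compose config' prints the resolved file so you can verify the substitution:

$> export LETS_CHAT_IMAGE=registry.ng.bluemix.net/sysg/lets-chat
$> docker-compose config | grep image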
Below is my property file 'depl.properties' which defines property and their Bluemix specific values:

# Define property as _VARIABLE_NAME=VALUE where prefix will identify the environment like 'bluemix', 'native' etc.
# Note: variable with default value can be placed directly into '.env' file.
bluemix_API_ENDPOINT=https://api.ng.bluemix.net
bluemix_DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
bluemix_DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a
bluemix_DOCKER_TLS_VERIFY=1
bluemix_REGISTRY=registry.ng.bluemix.net
bluemix_NAMESPACE=sysg
bluemix_ORG_NAME=porg
bluemix_SPACE_NAME=ptest
# reference property without '<prefix>_' as script sets environment variable without <prefix>_. See autoDeploy.sh
bluemix_MONGODB_IMAGE=${REGISTRY}/${NAMESPACE}/mongo
bluemix_LETS_CHAT_IMAGE=${REGISTRY}/${NAMESPACE}/lets-chat

Now we need a script with logic that sets the appropriate environment variables based on the target environment.
Below is the sample script (autoDeploy.sh):

#!/bin/sh
# Author: Purna Poudel
# Created on: 23 February, 2017

# project directory
pDir='.'
#property file
propFile=depl.properties
function usage {
   printf "Usage: $0 \n";
   printf "Options:\n";
   printf "%10s-t|--conttype:-u|--username:-p|--password\n";
}
OPTS=$(getopt -o t:u:p: -l conttype:,username:,password: -- "$@");
if [ $? != 0 ]; then
   echo "Unrecognised command line option encountered!";
   usage;
   exit 1;
fi
eval set -- "$OPTS";
while true; do
   case "$1" in
      -t|--conttype)
         conttypearg="$1";
         conttype=$2;
         shift 2;;
      -u|--username)
         usernamearg="$1";
         username=$2;
         shift 2;;
      -p|--password)
         passwordarg="$1";
         password=$2;
         shift 2;;
    *)
         shift;
         break;;
   esac
done 

if [[ $conttype == "" ]]; then
   echo "Valid non empty value for '--conttype' or '-t' is required."
   usage;
   exit 1
fi
# Read each line whose prefix matches the supplied value of $conttype.
# Exclude all commented (starting with #) lines, empty lines, and unrelated properties.
for _line in `cat "${pDir}/${propFile}" | egrep '(^'$conttype'|^all)' |grep -v -e'#' | grep -v -e'^$'`; do
   echo "Reading line: $_line from source file: ${pDir}/${propFile}";
   # Assign the property name to variable '_key'.
   # Also remove the prefix, which identifies the particular environment
   # in the depl.properties file.
   # The final 'xargs' removes leading and trailing blank spaces.
   _key=$(echo $_line | awk 'BEGIN {FS="="}{print $1}' | awk 'BEGIN {FS=OFS="_"}{print substr($0, index($0,$2))}' | xargs);
   # Assign property value to variable '_value'
   _value=`eval echo $_line | cut -d '=' -f2`;
   # Also declare shell variable and export to use as environment variable,
   declare $_key=$(echo $_value | xargs);
   echo "Setting environment variable: ${_key}=${!_key}";
   export ${_key}=${!_key};
done
if [[ $conttype == "bluemix" ]]; then
   # First log into CloudFoundry
   # cf login [-a API_URL] [-u USERNAME] [-p PASSWORD] [-o ORG] [-s SPACE]
   cf login -a ${API_ENDPOINT} -u ${username} -p ${password} -o ${ORG_NAME} -s ${SPACE_NAME};
   retSts=$?;
   if [ $retSts -ne 0 ]; then
      echo "Login to CloudFoundry failed with return code: "$retSts;
      exit $retSts;
   fi
   # then log into the IBM Container
   cf ic login
   retSts=$?;
   if [ $retSts -ne 0 ]; then
      echo "Login to IBM Container failed with return code: $retSts;"
      exit $retSts;
   fi
fi
# Stop and remove if container are running.
docker-compose ps | grep "Up";
retSts=$?;
if [ $retSts -eq 0 ]; then
   echo "Stopping existing docker-compose container...";
   docker-compose stop;
   sleep 5;
fi
docker-compose ps -q | grep "[a-z0-9]"
retSts=$?;
if [ $retSts -eq 0 ]; then
   echo "Removing existing docker-compose container...";
   docker-compose rm -f;
   sleep 5;
fi
# execute docker-compose
docker-compose up -d;
sleep 20;
# Make sure container built and running
docker-compose ps;


Now, it's time to test the logic above.

First, let's execute the script locally against native Docker.

$> ./autoDeploy.sh -t native
lc-mongo /entrypoint.sh mongod Up 27017/tcp
lets-chat /bin/sh -c (sleep 60; npm ... Up 5222/tcp, 0.0.0.0:8080->8080/tcp
Stopping existing docker-compose container...
Stopping lets-chat ... done
Stopping lc-mongo ... done
4afc9bc67f80fe0876fa2e5ce42af4616dbc64444c1c58128d0e63bf6007b55f
48beb1bb7423e103bfcdd4fc0ea8aa5e1ae766fcea70cf14a58df87a66e43f59
Removing existing docker-compose container...
Going to remove lets-chat, lc-mongo
Removing lets-chat ... done
Removing lc-mongo ... done
Creating lc-mongo
Creating lets-chat
Name Command State Ports
-------------------------------------------------------------------------------------
lc-mongo /entrypoint.sh mongod Up 27017/tcp
lets-chat /bin/sh -c (sleep 60; npm ... Up 5222/tcp, 0.0.0.0:8080->8080/tcp

As per the script's logic, it first checks whether any container instances of 'lc-mongo' and 'lets-chat' exist; if so, it stops and removes the existing containers, then creates new ones from the existing images, starts them, and checks that they are running successfully. Since the '-t native' option was passed on the command line, the script didn't set any environment variables, and Docker Compose used the default environment variables defined in the '.env' file.

Now it's time to test the same against the IBM Bluemix Container Service. See below:

$> ./autoDeploy.sh -t bluemix -u abc.def@xyz.com -p xxxxxxxxx
Reading line: bluemix_API_ENDPOINT=https://api.ng.bluemix.net from source file: ./depl.properties
Setting environment variable: API_ENDPOINT=https://api.ng.bluemix.net
Reading line: bluemix_DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443 from source file: ./depl.properties
Setting environment variable: DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
Reading line: bluemix_DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a from source file: ./depl.properties
Setting environment variable: DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a
Reading line: bluemix_DOCKER_TLS_VERIFY=1 from source file: ./depl.properties
Setting environment variable: DOCKER_TLS_VERIFY=1
Reading line: bluemix_REGISTRY=registry.ng.bluemix.net from source file: ./depl.properties
Setting environment variable: REGISTRY=registry.ng.bluemix.net
Reading line: bluemix_NAMESPACE=sysg from source file: ./depl.properties
Setting environment variable: NAMESPACE=sysg
Reading line: bluemix_ORG_NAME=porg from source file: ./depl.properties
Setting environment variable: ORG_NAME=porg
Reading line: bluemix_SPACE_NAME=ptest from source file: ./depl.properties
Setting environment variable: SPACE_NAME=ptest
Reading line: bluemix_MONGODB_IMAGE=${REGISTRY}/${NAMESPACE}/mongo from source file: ./depl.properties
Setting environment variable: MONGODB_IMAGE=registry.ng.bluemix.net/sysg/mongo
Reading line: bluemix_LETS_CHAT_IMAGE=${REGISTRY}/${NAMESPACE}/lets-chat from source file: ./depl.properties
Setting environment variable: LETS_CHAT_IMAGE=registry.ng.bluemix.net/sysg/lets-chat
API endpoint: https://api.ng.bluemix.net
Authenticating...
OK

Targeted org porg

Targeted space ptest



API endpoint: https://api.ng.bluemix.net (API version: 2.54.0)
User: purna.poudel@gmail.com
Org: porg
Space: ptest
Deleting old configuration file...
Retrieving client certificates for IBM Containers...
Storing client certificates in /home/osboxes/.ice/certs/...

Storing client certificates in /home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a...

OK
The client certificates were retrieved.

Checking local Docker configuration...
OK

Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
OK
You are authenticated with the IBM Containers registry.
Your organization's private Bluemix registry: registry.ng.bluemix.net/sysg

You can choose from two ways to use the Docker CLI with IBM Containers:


Option 1: This option allows you to use 'cf ic' for managing containers on IBM Containers while still using the Docker CLI directly to manage your local Docker host.
Use this Cloud Foundry IBM Containers plug-in without affecting the local Docker environment:


Example Usage:
cf ic ps
cf ic images

Option 2: Use the Docker CLI directly. In this shell, override the local Docker environment by setting these variables to connect to IBM Containers. Copy and paste the following commands:
Note: Only some Docker commands are supported with this option. Run cf ic help to see which commands are supported.
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a
export DOCKER_TLS_VERIFY=1

Example Usage:
docker ps
docker images

lc-mongo Up xxx.xx.0.xx:27017->27017/tcp
lets-chat Up xxx.xx.0.xx:8080->8080/tcp
Stopping existing docker-compose container...
Stopping lets-chat ... done
Stopping lc-mongo ... done
ea11eda5-9ebc-45df-beb0-80f2ba8c44e7
1996dc00-d4a6-4ecf-9309-62c986781b88
Removing existing docker-compose container...
Going to remove lets-chat, lc-mongo
Removing lets-chat ... done
Removing lc-mongo ... done
Creating lc-mongo
Creating lets-chat
Name Command State Ports
---------------------------------------------------------
lc-mongo Up xxx.xx.0.xx:27017->27017/tcp
lets-chat Up xxx.xx.0.xx:8080->8080/tcp

As you may have noticed, we passed the options '-t bluemix -u abc.def@xyz.com -p xxxxxxxxx' while executing autoDeploy.sh. This made the script read properties from the 'depl.properties' file and set the corresponding environment variables specific to Bluemix. Everything else, including docker-compose.yml and the .env file, remained unchanged.
Note: IPs, username and password masked.
In terms of defining properties specific to an environment, this post only shows the case for two environments: local native Docker and the IBM Bluemix Container Service. However, if you have more environments, you can define the corresponding properties with appropriate prefixes, for example:
dev_NAMESPACE=
tst_NAMESPACE=
qa_NAMESPACE=
prd_NAMESPACE=
Then, while running the build, pass the relevant container type option, like '-t|--conttype dev|tst|qa|prd', and the script will set the environment variables appropriately.
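
For example, a hypothetical QA deployment could then be invoked as:

$> ./autoDeploy.sh -t qa -u qa.user@xyz.com -p xxxxxxxxx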

Note: You may need to update the logic in autoDeploy.sh to fit your requirements.

There are a few other important aspects to remember while trying to make your code/script portable between native Docker and the IBM Bluemix Container Service. A few of them are listed below:

  • Currently, the IBM Bluemix Container Service supports only version 1 of the docker-compose.yml file format. Refer to https://docs.docker.com/compose/compose-file/compose-file-v1/ for details.
  • The IBM Bluemix Container Service may not support all Docker or Docker Compose commands, and it has other commands that are not found in native Docker. This means that in certain situations you may still need to use 'cf ic' commands instead of native Docker commands to perform tasks specific to the IBM Bluemix Container Service. See the supported Docker commands for the IBM Bluemix Container Service plug-in (cf ic). The best way to find out which native Docker commands are supported within IBM Bluemix, or which 'cf ic' commands are available, is to run 'cf ic --help'; the commands with '(Docker)' at the end are supported Docker commands.

Finally, let's talk about possible issues you may encounter.
1)
Error response from daemon:
400 The plain HTTP request was sent to HTTPS port
400 Bad Request
The plain HTTP request was sent to HTTPS port
nginx
/tmp/_MEI3n6jq4/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html

The above error was encountered while sending the build context to the IBM Bluemix Container Service. It occurred because 'DOCKER_TLS_VERIFY' was set with an empty value. You may encounter this error whenever you try to establish a secure connection but any one of the following environment variables is not set correctly (a quick check follows the list):
DOCKER_HOST
DOCKER_CERT_PATH
DOCKER_TLS_VERIFY
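
A quick way to inspect what the Docker client will actually use is to check those variables in your shell:

$> env | grep -E 'DOCKER_(HOST|CERT_PATH|TLS_VERIFY)'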

2)
ERROR: for lets-chat HTTPSConnectionPool(host='containers-api.ng.bluemix.net', port=8443): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

You may encounter the above error while executing 'docker-compose up' when a request times out. The default read timeout is 60 seconds. You can override this value either by defining it in the '.env' file or as an environment variable, e.g. 'export COMPOSE_HTTP_TIMEOUT=120'. Refer to https://docs.docker.com/compose/reference/envvars/ for all available environment variables.

That's it for this post. Try it out and let me know. You can find/download all the files from GitLab here: https://gitlab.com/pppoudel/public_shared/tree/master/container_autodeploy




Setting up CommandLine Environment for IBM® Bluemix® Container Service

Last week, I went through a couple of steps to create an account (free trial for 30 days) with IBM® Bluemix®, deploy my IBM Container (Docker based), and access my IBM Container running on Bluemix using the Bluemix/CloudFoundry command line tools set up locally on my laptop. I did it to prepare myself, and it also served as a kind of POC for upcoming project work. I've decided to share these steps so that other people in the same situation can benefit from them. Below are the steps:

  1. Make sure you have your IBM Container deployed and running on IBM Bluemix. If you don't have one follow the below sub steps:
    • Create a free account with IBM Bluemix (https://console.ng.bluemix.net/)
    • Once the account is created, you can create an IBM Container. See below quick steps:
      • From left hand menu, click on "Containers"
      • Then click on "Create Containers" link and follow the instruction.
        Note: you can select the container type from the available list or upload your own compatible container image. The screenshot below shows the container that I created:
        (Screenshot: running container)
  2. Now it's time to set up the command line tools on your local desktop. I have used Fedora v24.x running as a virtual machine.
    • Download Bluemix_CLI_0.4.6_amd64.tar.gz from http://clis.ng.bluemix.net/ui/home.html and extract it to some directory:
      $>tar -xvzf ~/Downloads/Bluemix_CLI_0.4.6_amd64.tar.gz
    • Among other files, you'll see the 'install_bluemix_cli' executable under /Bluemix_CLI:
      $>sudo ./install_bluemix_cli
    • Once it's installed, download CloudFoundry tool:
      $>sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
    • Install CloudFoundry CLI:
      $>sudo yum install cf-cli
      ...
      Installed:
      cf-cli.x86_64 0:6.26.0-1
    • Check the version:
      $>cf -v
      cf version 6.23.1+a70deb3.2017-01-13
    • Install the IBM Bluemix Container Service plug-in (cf ic) to use the native Docker CLI. More details can be found here.
      $>cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x64
    • Verify the plugins:
      $>cf plugins
      Listing Installed Plugins...
      OK

      Plugin Name Version Command Name Command Help
      IBM-Containers 0.8.964 ic IBM Containers plug-in
  3. It's time to log in to CloudFoundry and run your container commands to manage your container.
    • Login to Bluemix/CloudFoundry:
      $>cf login -a https://api.ng.bluemix.net
      Email> purna.poudel@gmail.com
      Password>
      Authenticating...
      OK
    • Login to Container:
      $> cf ic login
      Deleting old configuration file...
      Retrieving client certificates for IBM Containers...
      Storing client certificates in /home/osboxes/.ice/certs/...

      Storing client certificates in /home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846...

      OK
      The client certificates were retrieved.

      Checking local Docker configuration...
      OK

      Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
      OK
      You are authenticated with the IBM Containers registry.
      ...
  4. It's time to manage your container(s) from your desktop using the command line.
    • Let's check our running container process(es):
      $> cf ic ps 
      CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
      d476a406-ba9 registry.ng.bluemix.net/ibmliberty:webProfile7 "" 2 days ago Running 169.46.21.44:9080->9080/tcp, 169.46.21.44:9443->9443/tcp sysgLibertyCont
    • Let's inspect the running Container
      $> cf ic inspect d476a406-ba9
      [
      {
      "BluemixApp": null,
      "BluemixServices": null,
      "Config": {
      "AttachStderr": false,
      "AttachStdin": false,
      "AttachStdout": false,
      "Cmd": [],
      "Dns": "",
      "Env": [
      "logging_password=",
      "space_id=7b9e7846-0ec8-41da-83e6-209a02e1b14a",
      "logstash_target=logmet.opvis.bluemix.net:9091",
      "metrics_target=logmet.opvis.bluemix.net:9095"
      ],
      "Hostname": "instance-002eacfa",
      "Image": "registry.ng.bluemix.net/ibmliberty:webProfile7",
      "ImageArchitecture": "amd64",
      "Labels": {
      "doc.url": "/docs/images/docker_image_ibmliberty/ibmliberty_starter.html"
      },
      "Memory": 256,
      "MemorySwap": "",
      ....
A list of IBM Bluemix Container Service plug-in (cf ic) commands for managing containers is available at https://console.ng.bluemix.net/docs/containers/container_cli_reference_cfic.html




Quick start with IBM Datapower Gateway for Docker

Lately, IBM has done a great job of providing development versions of Datapower in different flavors. It's really useful for doing POC work as well as testing applications in a development environment. Today, I had a chance to play with the Docker edition of the Datapower Gateway. Within a few minutes, I was able to pull the Datapower Docker image from Docker Hub, create the Datapower Docker container, run it, and play with Datapower. The IBM Datapower Gateway for Docker web page (https://hub.docker.com/r/ibmcom/datapower/) has good information to start with. However, this image contains an unconfigured Datapower Gateway, and you'll not be able to access the Datapower Web Management Console or establish an SSH connection to Datapower even after the container is running. For that, you either have to access Datapower in interactive mode and enable the admin-state of Web Management and SSH, add configuration through a Docker build, or use an (externalized) configuration file at run time. Below we'll discuss two (out of three) options.
1) [optional] Create 'config' and 'local' directories on your host machine's file system, in the location from which you are going to execute the docker run command later in step #3. For example, I have created the following directory structure under /opt/docker_dev/datapowerbuild/:
$ ls -rl datapowerbuild
total 8
drwxrwxr-x. 2 osboxes osboxes 4096 Jan 31 20:19 local
drwxrwxr-x. 2 osboxes osboxes 4096 Jan 31 20:19 config
We are going to put the Datapower configuration file(s) in those directories. Docker is able to access this external configuration; it is a very powerful concept, explained as [data] volumes in the Docker documentation. Read the details here.
2) [optional] Create the auto-startup.cfg configuration file and put it under the 'config' directory created in step #1, so that the Datapower Web Management and SSH admin states are enabled when the Docker container runs.

top; co

ssh

web-mgmt
  admin enabled
  port 9090
exit

The above script is taken from https://github.com/ibm-datapower/datapower-tutorials/blob/master/using-datapower-in-ibm-container-service/src/auto-startup.cfg
3) Execute the docker run command. It's assumed that you already have Docker installed, configured, and running on your host machine. If not, follow the Docker installation steps here.

The following command is based on IBM's instructions available at https://hub.docker.com/r/ibmcom/datapower/
$ docker run -it \
-v $PWD/config:/drouter/config \
-v $PWD/local:/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 2222:22 \
ibmcom/datapower
Note: make sure your machine is connected to the internet; otherwise it will not be able to pull the Datapower Docker image from Docker Hub.
Some of the last lines you'll see before the logon prompt are:
20170201T014408.003Z [0x00350014][mgmt][notice] ssh(SSH Service): tid(111): Operational state up
20170201T014408.007Z [0x8100003b][mgmt][notice] domain(default): Domain configured successfully.
20170201T014408.073Z [0x00350014][mgmt][notice] web-mgmt(WebNGUI-Settings): tid(303): Operational state up
4) Log on to the console using the default userid 'admin' and password 'admin'.
5) Launch a browser and go to 'https://<host machine ip>:9090' to access the Web Management console.


6) Create an SSH connection to your host on port 2222 to access Datapower; see the example below.
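For example, with a standard SSH client (the host IP is a placeholder):

$ ssh -p 2222 admin@<host machine ip>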
Note: If you have not performed the optional steps #1 and #2, your web-mgmt and SSH connections will not be available. Perform the optional step #7.

7) [optional] Connect to the Datapower console interactively and turn on the Global Configuration mode:
configure terminal
web-mgmt
admin-state enabled
exit
ssh
exit
See the screenshots below for better clarity.

(Screenshots: enabling the web-mgmt admin-state, enabling the ssh admin-state, and showing the status)



