Custom Ant Task IsInList

I created this custom Ant task some time ago while working on a project where I needed to check whether an item exists in a list. As I did not find an efficient way to do it with any of the standard Ant tasks, I created one of my own. I'm publishing this custom Ant task's source code as open source (see the GitHub project location below). Feel free to use/modify/distribute it as per your need, or suggest a better way to do it.

What Does IsInList Contain?
1) One Java source file: com.sysgenius.tools.ant.customtask.IsInList.java

2) The GitHub project also contains build.xml, an Ant build file to build the project from source; sample-usage.xml, an Ant build file that shows a few usage scenarios of the 'isinlist' task; and README.txt, which explains how to use it.

How to Use It?
Follow the steps below:

1) Make sure the isinlist-<version>.jar file is in your build classpath. You can do this either by adding it to your $ANT_HOME/lib directory or by defining a custom library path as shown below and referencing it.

<path id="ant.opt.lib.path">
   <fileset dir="${basedir}/../target">
      <include name="isinlist-1.0.0.0.jar"/>
   </fileset>
</path>

2) Next, define the "isinlist" task; below is one of several ways:

<typedef classname="com.sysgenius.tools.ant.customtask.IsInList" name="isinlist" classpathref="ant.opt.lib.path"/>

3) Use it; see the examples below:

Example 1:
You have a list of items like "ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*" separated by ";". Here you need to find out whether the item "native_stdout.log" matches any entry in the list. Since some of the list entries are patterns, you can do the lookup using regular expressions (isRegEx="true"). In your build file, you'll need to have:

<property name="item.list" value="ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*"/>
<property name="regex.item.name" value="native_stdout.log"/>
<isinlist casesensitive="false" delimiter=";" value="${regex.item.name}" valueList="${item.list}" isRegEx="true"/>

Example 2:
You have a list of items like "ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*" separated by ";".
Here you need to find out whether an item called "release" exists in the given list. In this case you can use a plain (literal) lookup, meaning isRegEx="false".

<property name="item.list" value="ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*"/>
<property name="regular.item.name" value="release"/>
<isinlist casesensitive="false" delimiter=";" value="${regular.item.name}" valueList="${item.list}" isRegEx="false"/>

See sample-usage.xml for complete examples and more detailed usage scenarios.
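As an aside, the same kind of literal lookup can be sketched in plain shell; this is only an illustration of the idea, not part of the IsInList code:

```shell
# Split the ';'-delimited list into lines and look for an exact match.
item_list="ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*"
item="release"
if echo "$item_list" | tr ';' '\n' | grep -qx "$item"; then
  echo "found"     # prints: found
else
  echo "not found"
fi
```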

You can get/download the files from the GitHub location: https://github.com/pppoudel/customanttasks.

Making Your Container Deployment Portable

This post is a follow-up and extension of my previous post "Setting up CommandLine Environment for IBM® Bluemix® Container Service".
In this post, I'm further exploring ways of working with containers, whether locally deployed native Docker containers or containers created with the IBM® Bluemix® Container Service. I'm going to show a few basic scripting ideas, so that the same docker-compose.yml and other related files can be used no matter whether you are dealing with locally deployed native Docker container(s) or IBM container(s).
Going one step further, we will be working with multiple containers and employing Docker Compose. I have taken the basic steps for this exercise from the Bluemix tutorial (https://console.ng.bluemix.net/docs/containers/container_compose_intro.html#container_compose_config) and added a few steps and some logic for basic automation, making it portable so that it can be executed the same way regardless of environment.

Pre-requisites for this exercise:

  1. (native) Docker installed and running locally (e.g. on your laptop/desktop)
  2. Command-line environment set up for the IBM® Bluemix® Container Service. See the previous post "Setting up CommandLine Environment for IBM® Bluemix® Container Service".
  3. Docker Compose version 1.6.0 or later installed on your laptop/desktop. See the installation instructions here.
  4. lets-chat and mongo images available in both your local and Bluemix private registries.

As part of this exercise, we will be putting together 'docker-compose.yml' with replaceable variable(s), a '.env' file for environment variables with default values, a property file 'depl.properties' for environment-specific properties, and a script file 'autoDeploy.sh' with basic logic that can manage both native Docker and IBM Bluemix containers. We will be creating and linking the following two containers.

  1. lets-chat (basic chat application)
  2. mongo (database to store data)

At the end, we'll also look into a few possible issues that you may encounter.

Let's start by creating docker-compose.yml. Compose simplifies the definition and execution of multi-container Docker applications. See the Docker Compose documentation for details.
Below is our simple docker-compose.yml, which defines two containers: 'lets-chat' and 'lc-mongo'. As you see, a variable has been assigned as the value of the 'image' attribute. This is done to make the file portable between native Docker and containers deployed on IBM Bluemix, as the image registry differs. You can assign a variable this way to any attribute as its value, and it will be replaced by the value of the corresponding environment variable.

lets-chat:
   image: ${LETS_CHAT_IMAGE}
   container_name: lets-chat
   ports:
      - "8080:8080"
   links:
      - lc-mongo:mongo
lc-mongo:
   image: ${MONGODB_IMAGE}
   container_name: lc-mongo
   expose:
      - "27017"

Now, let's see where we can define the environment variables. Docker supports either defining them in the command shell as 'export VAR=VALUE' or defining them in a '.env' file. (Note: if you deploy your service using 'docker stack deploy --compose-file docker-compose.yml <service-name>' instead of 'docker-compose up ...', values in docker-compose.yml may not be replaced by the corresponding environment values defined in the .env file. See https://github.com/moby/moby/issues/29133.) An environment variable defined through 'export VAR=VALUE' takes precedence. See more detail on variable substitution and on declaring default environment variables in a file.

Below is our '.env' file:

# COMPOSE_HTTP_TIMEOUT default value is 60 seconds.
COMPOSE_HTTP_TIMEOUT=120
MONGODB_IMAGE=mongo
LETS_CHAT_IMAGE=lets-chat

Usually, it is best practice to define default variables with 'DEV/Development' environment-specific values in the '.env' file and to have a mechanism to override those values for higher environment(s); this helps boost developers' productivity. Following that principle, I've defined my local native Docker specific environment variables in my '.env' file and will use a separate property file to define environment variables and their values for other environments (Bluemix, in the case of this post).
Below is my property file 'depl.properties', which defines properties and their Bluemix-specific values:

# Define properties as <prefix>_VARIABLE_NAME=VALUE, where the prefix identifies the environment, e.g. 'bluemix', 'native'.
# Note: variables with default values can be placed directly into the '.env' file.
bluemix_API_ENDPOINT=https://api.ng.bluemix.net
bluemix_DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
bluemix_DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a
bluemix_DOCKER_TLS_VERIFY=1
bluemix_REGISTRY=registry.ng.bluemix.net
bluemix_NAMESPACE=sysg
bluemix_ORG_NAME=porg
bluemix_SPACE_NAME=ptest
# Reference properties without the '<prefix>_', as the script sets environment variables without the prefix. See autoDeploy.sh.
bluemix_MONGODB_IMAGE=${REGISTRY}/${NAMESPACE}/mongo
bluemix_LETS_CHAT_IMAGE=${REGISTRY}/${NAMESPACE}/lets-chat
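Note how the last two properties reference ${REGISTRY} and ${NAMESPACE} without the 'bluemix_' prefix; the script expands these references with 'eval' at the time it reads the file. A minimal sketch of that expansion, assuming the referenced variables have already been set:

```shell
# Assume the prefixed properties have already been processed and exported:
REGISTRY=registry.ng.bluemix.net
NAMESPACE=sysg
# A raw line as it appears in depl.properties (single quotes keep it unexpanded):
_line='bluemix_MONGODB_IMAGE=${REGISTRY}/${NAMESPACE}/mongo'
# 'eval echo' expands the embedded variable references, then 'cut' keeps the value part.
_value=$(eval echo $_line | cut -d '=' -f2)
echo "$_value"   # prints: registry.ng.bluemix.net/sysg/mongo
```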

Now we need a script with logic that can set the appropriate environment variables based on the target environment.
Below is the sample script (autoDeploy.sh):

#!/bin/bash
# Author: Purna Poudel
# Created on: 23 February, 2017

# project directory
pDir='.'
#property file
propFile=depl.properties
function usage {
   printf "Usage: %s\n" "$0";
   printf "Options:\n";
   printf "   -t|--conttype -u|--username -p|--password\n";
}
OPTS=$(getopt -o t:u:p: -l conttype:,username:,password: -- "$@");
if [ $? != 0 ]; then
   echo "Unrecognised command line option encountered!";
   usage;
   exit 1;
fi
eval set -- "$OPTS";
while true; do
   case "$1" in
      -t|--conttype)
         conttypearg="$1";
         conttype=$2;
         shift 2;;
      -u|--username)
         usernamearg="$1";
         username=$2;
         shift 2;;
      -p|--password)
         passwordarg="$1";
         password=$2;
         shift 2;;
    *)
         shift;
         break;;
   esac
done 

if [[ $conttype == "" ]]; then
   echo "Valid non empty value for '--conttype' or '-t' is required."
   usage;
   exit 1
fi
# Read each line whose prefix matches the supplied value of $conttype.
# Exclude all commented (starting with #) lines, all empty lines and unrelated properties.
for _line in `cat "${pDir}/${propFile}" | egrep '(^'$conttype'|^all)' | grep -v -e'#' | grep -v -e'^$'`; do
   echo "Reading line: $_line from source file: ${pDir}/${propFile}";
   # Assign the property name to variable '_key'.
   # Also remove the prefix, which identifies the particular environment in depl.properties.
   # The final 'xargs' removes leading and trailing blank spaces.
   _key=$(echo $_line | awk 'BEGIN {FS="="}{print $1}' | awk 'BEGIN {FS=OFS="_"}{print substr($0, index($0,$2))}' | xargs);
   # Assign property value to variable '_value'
   _value=`eval echo $_line | cut -d '=' -f2`;
   # Also declare shell variable and export to use as environment variable,
   declare $_key=$(echo $_value | xargs);
   echo "Setting environment variable: ${_key}=${!_key}";
   export ${_key}=${!_key};
done
if [[ $conttype == "bluemix" ]]; then
   # First log into CloudFoundry
   # cf login [-a API_URL] [-u USERNAME] [-p PASSWORD] [-o ORG] [-s SPACE]
   cf login -a ${API_ENDPOINT} -u ${username} -p ${password} -o ${ORG_NAME} -s ${SPACE_NAME};
   retSts=$?;
   if [ $retSts -ne 0 ]; then
      echo "Login to CloudFoundry failed with return code: "$retSts;
      exit $retSts;
   fi
   # then log into the IBM Container
   cf ic login
   retSts=$?;
   if [ $retSts -ne 0 ]; then
      echo "Login to IBM Container failed with return code: $retSts;"
      exit $retSts;
   fi
fi
# Stop and remove existing containers if they are running.
docker-compose ps | grep "Up";
retSts=$?;
if [ $retSts -eq 0 ]; then
   echo "Stopping existing docker-compose container...";
   docker-compose stop;
   sleep 5;
fi
docker-compose ps -q | grep "[a-z0-9]"
retSts=$?;
if [ $retSts -eq 0 ]; then
   echo "Removing existing docker-compose container...";
   docker-compose rm -f;
   sleep 5;
fi
# execute docker-compose
docker-compose up -d;
sleep 20;
# Make sure container built and running
docker-compose ps;
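The property-parsing pipeline in autoDeploy.sh (prefix filtering, prefix stripping, and value expansion) can be exercised in isolation. The sketch below uses a throwaway properties file with made-up values:

```shell
# Create a throwaway properties file with prefixed entries.
cat > /tmp/depl.test.properties <<'EOF'
# a comment
bluemix_REGISTRY=registry.ng.bluemix.net
native_REGISTRY=localhost:5000
EOF
conttype=bluemix
# Keep only the lines for the requested environment (same filter as the script).
for _line in $(cat /tmp/depl.test.properties | egrep '(^'$conttype'|^all)' | grep -v -e'#' | grep -v -e'^$'); do
   # Strip the environment prefix from the property name.
   _key=$(echo $_line | awk 'BEGIN {FS="="}{print $1}' | awk 'BEGIN {FS=OFS="_"}{print substr($0, index($0,$2))}' | xargs)
   # Expand any embedded variable references and keep the value part.
   _value=$(eval echo $_line | cut -d '=' -f2)
   echo "${_key}=${_value}"   # prints: REGISTRY=registry.ng.bluemix.net
done
```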


Now, it's time to test the logic above.

First, let's execute the script locally against native Docker.

$> ./autoDeploy.sh -t native
lc-mongo /entrypoint.sh mongod Up 27017/tcp
lets-chat /bin/sh -c (sleep 60; npm ... Up 5222/tcp, 0.0.0.0:8080->8080/tcp
Stopping existing docker-compose container...
Stopping lets-chat ... done
Stopping lc-mongo ... done
4afc9bc67f80fe0876fa2e5ce42af4616dbc64444c1c58128d0e63bf6007b55f
48beb1bb7423e103bfcdd4fc0ea8aa5e1ae766fcea70cf14a58df87a66e43f59
Removing existing docker-compose container...
Going to remove lets-chat, lc-mongo
Removing lets-chat ... done
Removing lc-mongo ... done
Creating lc-mongo
Creating lets-chat
Name Command State Ports
-------------------------------------------------------------------------------------
lc-mongo /entrypoint.sh mongod Up 27017/tcp
lets-chat /bin/sh -c (sleep 60; npm ... Up 5222/tcp, 0.0.0.0:8080->8080/tcp

As per the script logic, it first identifies whether any container instances of 'lc-mongo' and 'lets-chat' exist; if so, it stops and removes the existing containers, then creates new ones from the existing images, starts them, and checks that they are running successfully. Since the '-t native' option was passed on the command line, the script didn't set any environment variables; Docker Compose used the default environment variables defined in the '.env' file.

It is time to test the same against IBM Bluemix Container Service. See below:

$> ./autoDeploy.sh -t bluemix -u abc.def@xyz.com -p xxxxxxxxx
Reading line: bluemix_API_ENDPOINT=https://api.ng.bluemix.net from source file: ./depl.properties
Setting environment variable: API_ENDPOINT=https://api.ng.bluemix.net
Reading line: bluemix_DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443 from source file: ./depl.properties
Setting environment variable: DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
Reading line: bluemix_DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a from source file: ./depl.properties
Setting environment variable: DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a
Reading line: bluemix_DOCKER_TLS_VERIFY=1 from source file: ./depl.properties
Setting environment variable: DOCKER_TLS_VERIFY=1
Reading line: bluemix_REGISTRY=registry.ng.bluemix.net from source file: ./depl.properties
Setting environment variable: REGISTRY=registry.ng.bluemix.net
Reading line: bluemix_NAMESPACE=sysg from source file: ./depl.properties
Setting environment variable: NAMESPACE=sysg
Reading line: bluemix_ORG_NAME=porg from source file: ./depl.properties
Setting environment variable: ORG_NAME=porg
Reading line: bluemix_SPACE_NAME=ptest from source file: ./depl.properties
Setting environment variable: SPACE_NAME=ptest
Reading line: bluemix_MONGODB_IMAGE=${REGISTRY}/${NAMESPACE}/mongo from source file: ./depl.properties
Setting environment variable: MONGODB_IMAGE=registry.ng.bluemix.net/sysg/mongo
Reading line: bluemix_LETS_CHAT_IMAGE=${REGISTRY}/${NAMESPACE}/lets-chat from source file: ./depl.properties
Setting environment variable: LETS_CHAT_IMAGE=registry.ng.bluemix.net/sysg/lets-chat
API endpoint: https://api.ng.bluemix.net
Authenticating...
OK

Targeted org porg

Targeted space ptest



API endpoint: https://api.ng.bluemix.net (API version: 2.54.0)
User: purna.poudel@gmail.com
Org: porg
Space: ptest
Deleting old configuration file...
Retrieving client certificates for IBM Containers...
Storing client certificates in /home/osboxes/.ice/certs/...

Storing client certificates in /home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a...

OK
The client certificates were retrieved.

Checking local Docker configuration...
OK

Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
OK
You are authenticated with the IBM Containers registry.
Your organization's private Bluemix registry: registry.ng.bluemix.net/sysg

You can choose from two ways to use the Docker CLI with IBM Containers:


Option 1: This option allows you to use 'cf ic' for managing containers on IBM Containers while still using the Docker CLI directly to manage your local Docker host.
Use this Cloud Foundry IBM Containers plug-in without affecting the local Docker environment:


Example Usage:
cf ic ps
cf ic images

Option 2: Use the Docker CLI directly. In this shell, override the local Docker environment by setting these variables to connect to IBM Containers. Copy and paste the following commands:
Note: Only some Docker commands are supported with this option. Run cf ic help to see which commands are supported.
export DOCKER_HOST=tcp://containers-api.ng.bluemix.net:8443
export DOCKER_CERT_PATH=/home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846-0ec8-41da-83e6-209a02e1b14a
export DOCKER_TLS_VERIFY=1

Example Usage:
docker ps
docker images

lc-mongo Up xxx.xx.0.xx:27017->27017/tcp
lets-chat Up xxx.xx.0.xx:8080->8080/tcp
Stopping existing docker-compose container...
Stopping lets-chat ... done
Stopping lc-mongo ... done
ea11eda5-9ebc-45df-beb0-80f2ba8c44e7
1996dc00-d4a6-4ecf-9309-62c986781b88
Removing existing docker-compose container...
Going to remove lets-chat, lc-mongo
Removing lets-chat ... done
Removing lc-mongo ... done
Creating lc-mongo
Creating lets-chat
Name Command State Ports
---------------------------------------------------------
lc-mongo Up xxx.xx.0.xx:27017->27017/tcp
lets-chat Up xxx.xx.0.xx:8080->8080/tcp

As you noticed, we passed the options '-t bluemix -u abc.def@xyz.com -p xxxxxxxxx' when executing autoDeploy.sh. This forced the script to read properties from the 'depl.properties' file and set the corresponding Bluemix-specific environment variables. Everything else, including docker-compose.yml and the .env file, remained unchanged.
Note: IPs, username and password masked.
In terms of defining properties specific to an environment, this post only shows the case of two environments - local native Docker and the IBM Bluemix Container Service. However, if you have more environments, you can define corresponding properties with an appropriate prefix, for example:
dev_NAMESPACE=
tst_NAMESPACE=
qa_NAMESPACE=
prd_NAMESPACE=
Then, while running the build, pass the relevant container type option, like '-t|--conttype dev|tst|qa|prd', and the script will set the environment variables appropriately.

Note: You may need to update the logic in the autoDeploy.sh as per your requirement.

There are a few other important aspects to remember while trying to make your code/script portable between native Docker and the IBM Bluemix Container Service. A few of them are listed below:

  • Currently, the IBM Bluemix Container Service only supports version 1 of the docker-compose.yml file format. Refer to https://docs.docker.com/compose/compose-file/compose-file-v1/ for details.
  • The IBM Bluemix Container Service may not support all Docker or Docker Compose commands, and it has other commands that are not found in native Docker. This means that in certain situations you may still need to use 'cf ic' commands instead of native Docker commands to perform tasks specific to the IBM Bluemix Container Service. See the supported Docker commands for the IBM Bluemix Container Service plug-in (cf ic). The best way to find out which native Docker commands are supported within IBM Bluemix, and which 'cf ic' commands are available, is to run 'cf ic --help'; the commands with '(Docker)' at the end are supported Docker commands.

Finally, let's talk about the possible issue(s) that you may encounter.
1)
Error response from daemon:
400 The plain HTTP request was sent to HTTPS port
400 Bad Request
The plain HTTP request was sent to HTTPS port
nginx
/tmp/_MEI3n6jq4/requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html

The above error was encountered while sending the build context to the IBM Bluemix Container Service. It occurred because 'DOCKER_TLS_VERIFY' was set to an empty value. You may encounter this error whenever you are trying to establish a secure connection but one of the following environment variables is not set correctly:
DOCKER_HOST
DOCKER_CERT_PATH
DOCKER_TLS_VERIFY

2)
ERROR: for lets-chat HTTPSConnectionPool(host='containers-api.ng.bluemix.net', port=8443): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

You may encounter the above error while executing 'docker-compose up' when a request times out. The default read timeout is 60 seconds. You can override this value either by defining it in the '.env' file or as an environment variable, e.g. 'export COMPOSE_HTTP_TIMEOUT=120'. Refer to https://docs.docker.com/compose/reference/envvars/ for all available environment variables.

That's it for this post. Try it out and let me know. You can find/get/download all files from GitLab here: https://gitlab.com/pppoudel/public_shared/tree/master/container_autodeploy


Looks like you're really into Docker; see my other related blog posts below:


Setting up CommandLine Environment for IBM® Bluemix® Container Service

Last week, I went through a couple of steps to create an account (free trial for 30 days) with IBM® Bluemix®, deploy my IBM Container (Docker based), and access my IBM Container running on Bluemix using the Bluemix/CloudFoundry command line tools set up locally on my laptop. I did this to prepare myself, and it also served as a kind of POC for upcoming project work. I've decided to share these steps so that other people in the same situation can benefit from them. Below are the steps:

  1. Make sure you have your IBM Container deployed and running on IBM Bluemix. If you don't have one, follow these sub-steps:
    • Create a free account with IBM Bluemix (https://console.ng.bluemix.net/)
    • Once the account is created, you can create an IBM Container. See the quick steps below:
      • From the left-hand menu, click on "Containers"
      • Then click on the "Create Containers" link and follow the instructions.
        Note: you can select the container type from the available list or upload your own compatible container image. The screenshot below shows the container that I created:
        Running container
  2. Now it's time to set up the command line tools on your local desktop. I have used Fedora v24.x running as a virtual machine.
    • Download Bluemix_CLI_0.4.6_amd64.tar.gz from http://clis.ng.bluemix.net/ui/home.html and extract it to some directory:
      $>tar -xvzf ~/Downloads/Bluemix_CLI_0.4.6_amd64.tar.gz
    • Among other files, you'll see the 'install_bluemix_cli' executable file under /Bluemix_CLI
      $>sudo ./install_bluemix_cli
    • Once it's installed, download CloudFoundry tool:
      $>sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
    • Install CloudFoundry CLI:
      $>sudo yum install cf-cli
      ...
      Installed:
      cf-cli.x86_64 0:6.26.0-1
    • Check the version:
      $>cf -v
      cf version 6.23.1+a70deb3.2017-01-13
    • Install the IBM Bluemix Container Service plug-in (cf ic) to use the native Docker CLI. More details about it can be found here.
      $>cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x64
    • Verify the plugins:
      $>cf plugins
      Listing Installed Plugins...
      OK

      Plugin Name Version Command Name Command Help
      IBM-Containers 0.8.964 ic IBM Containers plug-in
  3. It's time to log in to CloudFoundry and run your container commands to manage your container.
    • Login to Bluemix/CloudFoundry:
      $>cf login -a https://api.ng.bluemix.net
      Email> purna.poudel@gmail.com
      Password>
      Authenticating...
      OK
    • Login to Container:
      $> cf ic login
      Deleting old configuration file...
      Retrieving client certificates for IBM Containers...
      Storing client certificates in /home/osboxes/.ice/certs/...

      Storing client certificates in /home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846...

      OK
      The client certificates were retrieved.

      Checking local Docker configuration...
      OK

      Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
      OK
      You are authenticated with the IBM Containers registry.
      ...
  4. It's time to manage your container(s) from your desktop using the command line
    • Let's check our running container process(es)
      $> cf ic ps 
      CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
      d476a406-ba9 registry.ng.bluemix.net/ibmliberty:webProfile7 "" 2 days ago Running 169.46.21.44:9080->9080/tcp, 169.46.21.44:9443->9443/tcp sysgLibertyCont
    • Let's inspect the running Container
      $> cf ic inspect d476a406-ba9
      [
      {
      "BluemixApp": null,
      "BluemixServices": null,
      "Config": {
      "AttachStderr": false,
      "AttachStdin": false,
      "AttachStdout": false,
      "Cmd": [],
      "Dns": "",
      "Env": [
      "logging_password=",
      "space_id=7b9e7846-0ec8-41da-83e6-209a02e1b14a",
      "logstash_target=logmet.opvis.bluemix.net:9091",
      "metrics_target=logmet.opvis.bluemix.net:9095"
      ],
      "Hostname": "instance-002eacfa",
      "Image": "registry.ng.bluemix.net/ibmliberty:webProfile7",
      "ImageArchitecture": "amd64",
      "Labels": {
      "doc.url": "/docs/images/docker_image_ibmliberty/ibmliberty_starter.html"
      },
      "Memory": 256,
      "MemorySwap": "",
      ....
A list of IBM Bluemix Container Service plug-in (cf ic) commands for managing containers is available at https://console.ng.bluemix.net/docs/containers/container_cli_reference_cfic.html


Quick start with IBM Datapower Gateway for Docker

Lately, IBM has done a great job of providing development versions of Datapower in different flavors. They are really useful for doing POC work as well as for testing applications in a development environment. Today, I had a chance to play with the Docker edition of the Datapower Gateway. Within a few minutes, I was able to pull the Datapower Docker image from Docker Hub, create the Datapower Docker container, run it, and play with Datapower. The IBM Datapower Gateway for Docker page (https://hub.docker.com/r/ibmcom/datapower/) has good information to start with. However, this image contains an unconfigured Datapower Gateway, and you will not be able to access the Datapower Web Management console or establish an SSH connection to Datapower even after the container is running. For that, you either have to access Datapower in interactive mode and enable the admin-state of the Web Management and SSH interfaces, add the configuration through a Docker build, or use an (externalized) configuration file at run time. Below we'll discuss two of these three options.
1) [optional] Create 'config' and 'local' directories on your host machine's file system, from where you are going to execute the docker run command later in step #3. For example, I have created the following directory structure under /opt/docker_dev/datapowerbuild/:
$ ls -rl datapowerbuild
total 8
drwxrwxr-x. 2 osboxes osboxes 4096 Jan 31 20:19 local
drwxrwxr-x. 2 osboxes osboxes 4096 Jan 31 20:19 config
We are going to put the configuration file(s) in those directories. Docker is able to access this external configuration; it is a very powerful concept, explained as [data] volumes in the Docker documentation. Read the details here.
2) [optional] Create the auto-startup.cfg configuration file and put it under the 'config' directory created in step #1, so that the Datapower Web Management and SSH admin states are enabled when the Docker container runs.

top; co

ssh

web-mgmt
  admin enabled
  port 9090
exit

The script above is taken from https://github.com/ibm-datapower/datapower-tutorials/blob/master/using-datapower-in-ibm-container-service/src/auto-startup.cfg
3) Execute the docker run command. It's assumed that you already have Docker installed, configured and running on your host machine. If not, follow the Docker installation steps here.

The following command is based on IBM's instructions available at https://hub.docker.com/r/ibmcom/datapower/
$ docker run -it \
-v $PWD/config:/drouter/config \
-v $PWD/local:/drouter/local \
-e DATAPOWER_ACCEPT_LICENSE=true \
-e DATAPOWER_INTERACTIVE=true \
-p 9090:9090 \
-p 2222:22 \
ibmcom/datapower
Note: make sure your machine is connected to the internet; otherwise it will not be able to pull the Datapower Docker image from Docker Hub.
Some of the last lines you'll see before the logon prompt are:
20170201T014408.003Z [0x00350014][mgmt][notice] ssh(SSH Service): tid(111): Operational state up
20170201T014408.007Z [0x8100003b][mgmt][notice] domain(default): Domain configured successfully.
20170201T014408.073Z [0x00350014][mgmt][notice] web-mgmt(WebNGUI-Settings): tid(303): Operational state up
4) Log on to the console using the default userid 'admin' and password 'admin'.
5) Launch a browser and go to 'https://<host machine ip>:9090' to access the Web Management console.


6) Create an ssh connection to your host on port 2222 to access Datapower.
Note: If you have not performed the optional steps #1 and #2, your web-mgmt and ssh connections will not be available. Perform the optional step #7.

7) [optional] Connect to the Datapower console interactively and turn on the Global Configuration mode.
configure terminal
web-mgmt
admin-state enabled
exit
ssh
exit
See the screen shots below for better clarity.

Enable web-mgmt admin-state

Enable ssh admin-state

Showing the status




Scripting Ideas

Important: This page will be continually updated as I find new workarounds or ideas while working on scripts (any script - shell, Windows, or others).

Running dos2unix in batch mode:


One of my teammates seemed pretty frustrated today while trying to run the 'dos2unix' command in batch mode. His script (see below) was almost doing the job; however, instead of updating the files, the content was displayed on screen (stdout).

His script (with issue) that sent output to stdout:
find . -type f -name "*.sh" | xargs -i dos2unix {};
Here is the corrected script, which correctly updates each file under the current directory by converting line endings from Windows format to Unix format.
find . -type f -name "*.sh" | xargs -i dos2unix {} {};
As you may have noticed, the only thing missing was the last set of '{}', which tells dos2unix to use the same filename for output as for input. Below is an example using 'exec' instead of 'xargs' to achieve the same.
find . -type f -name "*.sh" -exec dos2unix {} {} \;
Command reference links: find, xargs, dos2unix

Using variable in SED:


file="myfile.txt";
replaceme="iamnew";
sed 's/iamold/'"${replaceme}"'/g' < $file > $file".new";
OR
file="myfile.txt";
replaceme="iamnew";
sed "s/iamold/${replaceme}/g" < $file > $file".new";
Note: in the above examples, any occurrence of 'iamold' in 'myfile.txt' will be replaced by 'iamnew' and written to 'myfile.txt.new'. The important thing here is that the variable $replaceme must be inside double quotes. The variant below does not work; the variable '${replaceme}' will not be expanded.
file="myfile.txt";
replaceme="iamnew";
sed 's/iamold/${replaceme}/g' < $file > $file".new";
Command reference links: sed
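The quoting difference is easier to see on a plain pipeline, without intermediate files:

```shell
replaceme="iamnew"
# Double quotes: the shell expands ${replaceme} before sed sees the expression.
echo "iamold" | sed "s/iamold/${replaceme}/g"     # prints: iamnew
# Single quotes: sed receives the literal text '${replaceme}' as the replacement.
echo "iamold" | sed 's/iamold/${replaceme}/g'     # prints: ${replaceme}
```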


Finding which process owns/listens on which port


Here, I'm finding which process/process ID is listening on port 9080. Here is how to find out:
Note: the following has been tested on CentOS Linux.

1) Using 'netstat -lnp'
$> netstat -lnp | grep 9080
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.) tcp6 0 0 :::9080 :::* LISTEN 3840/java


# using sudo:
$> sudo netstat -lnp | grep 9080
tcp6 0 0 :::9080 :::* LISTEN 3840/java


# Or find all ports in use by certain process/PID
$> sudo netstat -lnp | grep java
tcp6 0 0 :::9080 :::* LISTEN 3840/java
tcp6 0 0 :::10010 :::* LISTEN 3840/java
tcp6 0 0 :::9443 :::* LISTEN 3840/java
tcp6 0 0 127.0.0.1:57576 :::* LISTEN 3840/java

#by PID
$> sudo netstat -lnp | grep 3840
tcp6 0 0 :::9080 :::* LISTEN 3840/java
tcp6 0 0 :::10010 :::* LISTEN 3840/java
tcp6 0 0 :::9443 :::* LISTEN 3840/java
tcp6 0 0 127.0.0.1:57576 :::* LISTEN 3840/java

2) Using 'lsof -i :<port>'
$> lsof -i :9080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 3840 osboxes 339u IPv6 40626 0t0 TCP *:glrpc (LISTEN)

3) Using ss -ntlp
$> ss -ntlp | grep 9080
LISTEN 0 128 :::9080 :::* users:(("java",pid=3840,fd=339))


Retrieving a Certificate and Updating the keystore file.


The following script shows an example of retrieving the Google certificate from www.google.com and adding it to a local key.jks file. Script file: retrieveAndUpdateCert.sh

#! /bin/bash
# Remote host to retrieve certificate from
RHOST=www.google.com
# Remote port
RPORT=443
# key store file path
KS_FILEPATH=/opt/secrets/key.jks
# Certificate Alias
CERT_ALIAS=googlecert

# Retrieve the certificate and put in temporary file '/tmp/cert.crt' in this case.
# Refer to https://www.openssl.org/docs/man1.0.2/apps/openssl.html for openssl command details.
true | openssl s_client -connect ${RHOST}:${RPORT} 2>/dev/null | openssl x509 -in /dev/stdin > /tmp/cert.crt
# Install certificate using keytool
# keytool comes with Java.
# Refer to https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html for keytool command details.
keytool -import -file /tmp/cert.crt -alias ${CERT_ALIAS} -keystore ${KS_FILEPATH} -storepass $1
# View certs in the keystore:
keytool -list -v -keystore ${KS_FILEPATH} -storepass $1

Run file as:
$> ./retrieveAndUpdateCert.sh <Your keystore password>


AWK numerical processing tricks



1. If you have a number with a thousands separator (,) like 84,959, AWK fails to process the number correctly unless you remove the separator (,) from the input. For example:
$> echo "84,959|34,600" | awk 'BEGIN{FS=OFS="|";}{print $1/1000,$2/1000}'
0.084|0.034
As seen from the result above, AWK only took the portion of each value before the comma. The fix is simple: just remove the "," from the input value. The following line gives the correct result:

$> echo "84,959|34,600" | awk 'BEGIN{FS=OFS="|";}{gsub(",","",$1);gsub(",","",$2); print $1/1000,$2/1000}'
84.959|34.6
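The two per-field gsub calls above can be generalized with a loop over NF, so the comma stripping works no matter how many columns the input has; a small sketch with made-up values:

```shell
# Strip the thousands separator from every field before doing arithmetic
echo "84,959|34,600|1,234" | \
  awk 'BEGIN{FS=OFS="|"}{for(i=1;i<=NF;i++) gsub(",","",$i); print $1/1000, $2/1000, $3/1000}'
# 84.959|34.6|1.234
```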

2. If you get a weird result while doing an AWK numeric comparison, make sure the value is treated as a number, not a string literal. For example:
$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2);if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is greater than 100
As seen above, the result is not correct. This is because the value of num is '99 ', i.e. there is a space character after 99, so AWK performs a string comparison. A simple fix is to multiply the value by 1 or add 0 to it before doing the numeric comparison.
$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2)*1;if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is less than 100
or
$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2)+0;if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is less than 100
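The same gotcha can be reproduced in isolation, without the substr extraction; this minimal sketch (behavior verified with gawk) shows the lexical comparison and the +0 fix side by side:

```shell
awk 'BEGIN{
  s = "99 ";   # string constant with a trailing space
  # lexical comparison: "99 " vs "100" compares character by character
  print (s   >= 100 ? "string: true" : "string: false");
  # +0 forces a numeric comparison: 99 vs 100
  print (s+0 >= 100 ? "numeric: true" : "numeric: false");
}'
```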

AWK printing from specific column/field to the end

In the following example, matrix.csv (a comma-delimited file) is piped to awk, which processes one row at a time (skipping the first, header row). The first column is a time in milliseconds, so it is converted into a displayable date; the remaining columns (from the 2nd on) require no processing and are printed as-is.

cat matrix.csv | awk 'BEGIN{FS=OFS=","}{if(NR > 1) {print strftime("%c", ($1 + 500)/1000), substr($0, index($0,$2))}}'
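Here is a self-contained illustration of the substr/index trick, using made-up sample rows rather than the real matrix.csv (and skipping the strftime conversion to keep it short):

```shell
# Print everything from the 2nd field to the end of each data row, untouched
printf '%s\n' 'ts,metric1,metric2' '1500000000000,12.5,7.8' | \
  awk 'BEGIN{FS=","}{ if (NR > 1) print substr($0, index($0, $2)) }'
# 12.5,7.8
```

One caveat: index($0, $2) finds the first occurrence of $2's text anywhere in the record, so if the same text also happens to appear inside the first field, the cut lands too early.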

Using comma as a delimiter in for loop

By default, a 'for' loop expects input delimited by space (or tab or newline) characters. If you need to use ',' (comma) instead, one of the easiest ways is to override the Internal Field Separator (IFS) value. However, make sure to set it back to the original value afterwards. See the script below; it opens a set of firewall ports delimited by comma ','. Before the for loop we set IFS="," and after the for loop we set the value back to space " ".

#!/bin/sh
tcp_ports="179,443,80,2375,2376,2377,2380,4001,4443,4789,6443,6444,7001,7946,8080,10250,12376-12387"
udp_ports="4789,7946"

openFW() {
  IFS=",";
  for _port in $1; do
    echo "Opening ${_port}/$2";
    sudo firewall-cmd --permanent --zone=public --add-port=${_port}/$2;
  done
  IFS=" ";
}

openFW "${tcp_ports}" tcp;
openFW "${udp_ports}" udp;

# Recycle firewall
sudo firewall-cmd --reload
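If you'd rather not mutate the global IFS at all (forgetting to restore it is an easy bug), a bash-only alternative is to scope the separator to a single read command and loop over the resulting array. A sketch with a shortened, hypothetical port list:

```shell
#!/bin/bash
ports="179,443,80"
# IFS="," applies only to this one read command; the global IFS is untouched.
IFS=',' read -ra port_arr <<< "${ports}"
for p in "${port_arr[@]}"; do
  echo "would open ${p}/tcp"
done
```

Note this needs bash (arrays and the <<< here-string are not in POSIX sh), so it won't work under the #!/bin/sh shebang of the script above.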

Reset Oracle Directory Manager's Password


Have you ever had a situation where you needed to execute a command against Oracle Directory Server that required the root/Directory Manager's password, and the password you had just did not work? I ran into one today and had to scramble to find a way to reset it. The 'pwdhash' tool that comes with Oracle Directory Server rescued me. Here is what I did:
  1. Before resetting the password, you may want to try a few of your guesses. Here is how you do it. Get the actual root/Directory Manager's password hash from the dse.ldif file. It is stored in the attribute 'nsslapd-rootpw:', something like: nsslapd-rootpw: {SSHA256}WYChc/pNA34fD8RKo//ReBCsGstkz0Ux54gfsMaruXhMP89tAnMtd
  2. Then compare each of your guesses with the encrypted password from dse.ldif using 'pwdhash'. It has a compare option, '-c'. Below is how you use it. If the password matches, you'll get the message "password ok."; otherwise "password does not match." is displayed.

    ./pwdhash -D <instance-location> -c "<encrypted-password>" <your-guess-password>
    # Actual example from my ODS instance
    $>cd /opt/ods/dsee7/bin
    $>./pwdhash -D /opt/ods/dsee7/instances/dsInst2 -c "{SSHA256}WYChc/pNA34fD8RKo//ReBCsGstkz0Ux54gfsMaruXhMP89tAnMtd" myPassw0rd
    ./pwdhash: password does not match.
     
  3. If none of your guesses matches, then it's time to reset the password the hard way. Here is how to do it:
    # Stop your Oracle Directory Instance
    $>cd /opt/ods/dsee7/bin
    $>./dsadm stop /opt/ods/dsee7/instances/dsInst2
    Directory Server instance '/opt/ods/dsee7/instances/dsInst2' stopped

    # Generate the encrypted password
    $>./pwdhash -D /opt/ods/dsee7/instances/dsInst2 -s SSHA256 myPassw0rd
    {SSHA256}qOjAyposbx1LzM/LB4vk1ZKS2yNs2Oh0yDjo66GIjnMpIVMJMhi6fw==
     
  4. Take the encrypted password generated in step #3, replace the value of the attribute 'nsslapd-rootpw:' in the dse.ldif file with it, and save the file.
  5. Restart the Oracle Directory Instance.
    # Start your Oracle Directory Instance
    $>cd /opt/ods/dsee7/bin
    $>./dsadm start /opt/ods/dsee7/instances/dsInst2
    Directory Server instance '/opt/ods/dsee7/instances/dsInst2' started: pid=2982
     

That's it, the password reset is done the hard way!

However, in the future, if you just want to change the root/Directory Manager's password, you can use the 'dsconf' command with the 'set-server-prop' option. Below are the details:
# Put new password in a temporary file.
$>echo "_0d3mG4_" > /tmp/odspwd.txt
# Now run the 'dsconf' command. You need to provide the current Directory Manager password when it prompts.
$>./dsconf set-server-prop -h localhost -p 1489 root-pwd-file:/tmp/odspwd.txt
Enter "cn=Directory Manager" password:
 

Experience Sharing - TOGAF® Part I and Part II Certification in Two Weeks through Self Study

I passed the combined TOGAF® 9.1 Part I and Part II exams on May 24th. I did it through self-study and wanted to share my experience so that it may be useful for others. Timeline-wise, I spent around 2 weeks: I started studying on May 11th by purchasing the official TOGAF® self-study package from the Open Group and took the exams on May 24th here in Toronto, with satisfactory results - I passed Part I with 87% and Part II with 85%.
In terms of preparation, I spent almost 60% of my time on the foundation (Part I) study, as I realized early on that in order to really crack the Part II exam, I had to have a good knowledge of the TOGAF® foundation. That said, I did not wait until the end to prepare for the Part II exam. After a few days of Part I study, I accommodated Part II as well, after every iteration of Part I, which actually helped me better understand the TOGAF® concepts, since Part II is based on knowledge, the TOGAF® way of thinking and analysing, and some related experience as well. Below are some details:

Registration:

I registered for TOGAF® 9 Combined Part I and II (Exam Number: OG0093; Exam details: http://certification.opengroup.org/examinations/TOGAF/TOGAF9-combined) through Prometric on May 11.
Note: the combined exam costs 495 USD. Also note that the total cost for the combined exams is a little less than taking them separately. Here is the Prometric site for registration: https://www.prometric.com/en-us/clients/opengroup/Pages/landing.aspx


Preparation:

  • Purchased TOGAF® 9 CERTIFICATION SELF-STUDY PACK, 3RD EDITION (SKU: B097; cost 59.90 USD) PDF version from the Open Group web site.
  • Started studying with the Pocket Guide (G117p.pdf) included in the self-study package. It provided a general understanding of TOGAF®. If you start with the official TOGAF® 9.1 documentation or the detailed study guide, you may easily get lost or bored. So my suggestion is to "start light", and for that purpose the pocket guide seems to be the best.
  • Reviewed the Reference Cards included in the self-study package. Reference cards really helped me to understand the TOGAF® concept and methodology visually. Once I reviewed all 4 reference cards, I created my own versions of reference cards to make sure that I understood them well. Note: There are 4 reference cards N111, N112, N113, and N114 included in the self-study package.
  • Once I completed the Pocket Guide and reference cards, I did the first iteration of the Part I practice test, consisting of 40 questions, and remember doing fairly well. I reviewed the explanations for the questions I answered incorrectly. That's when I also started looking into the official TOGAF® 9.1 documentation and reviewed the material related to each question in detail.
  • After doing the Part I practice test and taking notes from the official TOGAF® 9.1 documentation, I tried the first 8 questions from the Part II practice test. Again, I reviewed all the answers and referred back to either the official TOGAF® 9.1 documentation or the study guides included in the package.
  • A few YouTube tutorials also helped me shape my understanding of TOGAF®.
  • Before doing the 2nd set of 40 Part I test questions included in the self-study package, I did some tests from public sites, especially 3 sets of 40 questions from http://theopenarch.com/81-tests/72-TOGAF®-9-exam-tests.html . I found these questions relatively harder than the questions included in the official self-study package. I remember not doing very well, so I had to refer back to the official document and the study guide, go into depth and detail, and take notes.
  • On or around the 6th day, I felt somewhat comfortable with the overall TOGAF® document structure and had a clear high-level TOGAF® concept, so I decided to proceed with the Part I and Part II preparations in parallel, i.e. at any point in time I could be doing Part I or Part II tests and referring back to the TOGAF® 9.1 documentation for detail.
  • The self-study package contains bonus questions for both the Part I and Part II exams. I also did tests from a few other public sites.
  • I did two iterations of the practice tests and reviewed the answers. For any questions I answered wrong during each iteration, I did a detailed study of the official TOGAF® 9.1 document. I did a 3rd iteration of only those questions that were marked wrong in the 2nd iteration.
  • My study style:
    • Each day I started with the review of previous day's learning and any important notes.
    • On the last day before the test, I reviewed all my personal notes.


In summary (based on my experience), the Part I exam requires well-versed TOGAF® knowledge, so make sure you know all the TOGAF® terminology and methodologies. Part II, in addition to TOGAF® knowledge, also requires a very TOGAF® way of thinking and of analysing each scenario. By the way, the scenarios in the Part II exam are fairly long and it is easy to lose track, so make sure to note (in memory or on the provided pad) what the ask and the concerns are. Most of the choices for each scenario are equally long and pretty close to each other in meaning, so try to think the TOGAF® way to select the right one and avoid the distractors. Even though Part II is an open-book exam, avoid using the attached reference document in the beginning. If you are not confident in your selection, mark the question and move forward. Once you have answered all the questions, you can go back to the marked ones and open the PDF document provided as reference to fine-tune your answers. This way, you'll save time and also won't leave any question unanswered for lack of time.
That's it. All the best if any of you are taking the TOGAF® exam any time soon!