What a day - rainy, but beautiful and cheerful! 15,000 cyclists, runners, and walkers stormed the Gardiner Expressway and Don Valley Parkway (@TO_DVP) today, braving the rain for the Ride for Heart event (www.rideforheart.ca). It was still raining in Toronto when we were at the start line of the 10K run, but by the time we started running, Mother Nature smiled a little and the rain slowly stopped. It felt amazingly good running on a traffic-free Gardiner Expressway while watching downtown Toronto with fellow participants - all happy and cheering. I completed the race in just a few seconds under one hour (my actual time, recorded by BIB and provided by www.sportstats.ca, was 59 minutes and 23 seconds). This post is to thank YOU ALL who supported me and contributed to the good cause (life-saving research in the field of heart and stroke), and my family, who woke up early in the morning and came with me to Ontario Place (http://www.ontarioplace.com/) to cheer for me. Because of your help, I was able to raise $500.00 for the Heart and Stroke Foundation. Overall, today's event raised about 6 million dollars for heart and stroke related research. That's really great! Thank you all!
Purna Poudel's blog covering technology (Architecture, DevOps, DevSecOps, Security etc.), travel, and personal experiences.
Custom Ant Task IsInList
I created this Custom Ant Task some time ago while working on a project where I needed to check whether an item exists in a list. As I did not find any efficient way to do it using the standard Ant tasks, I created one of my own. I'm publishing this Custom Ant Task source code as open source (see the GitHub project location below). Feel free to use/modify/distribute it as per your need, or suggest any better way to do it.
What does IsInList contain?
1) It contains one Java source file: com.sysgenius.tools.ant.customtask.IsInList.java
2) The GitHub project also has an Ant build file, build.xml, to build the project from source; sample-usage.xml, an Ant build file that shows a few usage scenarios of the 'IsInList' task; and README.txt, which explains how to use it.
How to Use It?
Follow the steps below:
1) Make sure the isinlist-<version>.jar file is in your build classpath. You can do this either by adding it to your $ANT_HOME/lib directory or by defining a custom library path like below and making a reference to it.
<path id="ant.opt.lib.path">
  <!-- The directory below is illustrative; point it at wherever isinlist-<version>.jar lives. -->
  <fileset dir="${basedir}/lib">
    <include name="isinlist-*.jar"/>
  </fileset>
</path>
2) Next, define the "isinlist" task; below is one of a few ways:
<typedef classname="com.sysgenius.tools.ant.customtask.IsInList" name="isinlist" classpathref="ant.opt.lib.path"/>
3) Use it, see the examples below:
Example 1:
You have a list of items like "ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*" separated by ";". Here you need to find out whether any item matching "native_stdout.log" exists. In this case you can do the lookup using a regular expression (isRegEx="true"). In your build file, you'll need to have:
<property name="item.list" value="ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*"/>
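The property above defines the list; the task invocation itself is not shown in the collapsed snippet. A sketch of what it might look like follows - the attribute names here are assumptions based on the description, so check README.txt in the GitHub project for the actual ones:

```xml
<!-- Attribute names are assumed from the description above; see README.txt for the actual usage. -->
<isinlist value="native_stdout.log" valueList="${item.list}" delimiter=";" isRegEx="true"/>
```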
Example 2:
You have a list of items like "ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*" separated by ";".
Here you need to find out whether an item called "release" exists in the given list. In this case you can use a regular (literal) lookup, i.e. isRegEx="false".
<property name="item.list" value="ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*"/>
See sample-usage.xml for a complete example and more detailed usage scenarios.
You can get/download the files from the GitHub location: https://github.com/pppoudel/customanttasks.
Making Your Container Deployment Portable
This post is a follow-up and extension of my previous post "Setting up CommandLine Environment for IBM® Bluemix® Container Service".
In this post, I'm further exploring ways of working with containers, whether locally deployed native Docker containers or containers created with the IBM® Bluemix® Container Service. I'm going to show a few basic scripting ideas so that the same docker-compose.yml and other related files can be used no matter whether you are dealing with locally deployed native Docker container(s) or IBM container(s).
Going one step further, here we will be working with multiple containers and employing Docker Compose. I have taken the basic steps for this exercise from the Bluemix tutorial (https://console.ng.bluemix.net/docs/containers/container_compose_intro.html#container_compose_config) and added a few steps and some logic for basic automation and portability, so that the exercise can be executed the same way independent of environment.
Pre-requisites for this exercise:
- (Native) Docker installed and running locally (perhaps on your laptop/desktop).
- Command line environment set up for IBM® Bluemix® Container Service. See my previous post "Setting up CommandLine Environment for IBM® Bluemix® Container Service".
- Docker Compose version 1.6.0 or later installed on your laptop/desktop. See the installation instructions here.
- lets-chat and mongo images available in your local and Bluemix private registries.
As part of this exercise, we will put together 'docker-compose.yml' with replaceable variable(s), a '.env' file for environment variables with default values, a property file 'depl.properties' for environment-specific properties, and a script file 'autoDeploy.sh' with basic logic that can be executed to manage both native Docker and IBM Bluemix containers. We will create and link the following two containers:
- lets-chat (basic chat application)
- mongo (database to store data)
At the end, we'll also look into a few possible issues that you may encounter.
Let's start by creating docker-compose.yml. Compose simplifies the definition and execution of multi-container Docker applications. See the Docker Compose documentation for details.
Below is our simple docker-compose.yml, which defines two containers, 'lets-chat' and 'lc-mongo'. As you can see, a variable has been assigned as the value of the 'image' attribute. This is to make the file portable between native Docker and a container deployed on IBM Bluemix, as the image registry differs. You can assign a variable this way to any attribute as its value; it will be replaced by the value of the corresponding environment variable.
lets-chat:
  # Variable and port values below are illustrative (the original listing is collapsed);
  # the ${...} image variable is what makes the registry swappable per environment.
  image: ${LETS_CHAT_IMAGE}
  ports:
    - "8080:8080"
  links:
    - lc-mongo:mongo
lc-mongo:
  image: ${MONGO_IMAGE}
Now let's see where we can define environment variables. Docker supports defining them either through the command shell as 'export VAR=VALUE' or in a '.env' file. (Note: if you deploy your service using 'docker stack deploy --compose-file docker-compose.yml <service-name>' instead of 'docker-compose up ...', values in docker-compose.yml may not be replaced by the corresponding environment values defined in the .env file. See https://github.com/moby/moby/issues/29133.) An environment variable defined through 'export VAR=VALUE' takes precedence. See the Docker Compose documentation for more detail on variable substitution and on declaring default environment variables in a file.
Below is our '.env' file:
# COMPOSE_HTTP_TIMEOUT default value is 60 seconds.
COMPOSE_HTTP_TIMEOUT=120
# Default (local native Docker) values; variable names below are illustrative and
# must match the variables referenced in docker-compose.yml.
LETS_CHAT_IMAGE=lets-chat
MONGO_IMAGE=mongo
Usually, it is best practice to define default variables with 'DEV/Development' environment-specific values in the '.env' file and to have a mechanism to override those values for higher environment(s); it helps boost developers' productivity. Following that principle, I've defined my local native Docker environment variables in the '.env' file and will use a separate property file to define environment variables and their values for other environments (Bluemix, in the case of this post).
Below is my property file 'depl.properties', which defines properties and their Bluemix-specific values:
# Define each property as <conttype>_<VARIABLE_NAME>=<value>. The entries below are
# illustrative Bluemix overrides (replace <namespace> with your registry namespace).
bluemix_LETS_CHAT_IMAGE=registry.ng.bluemix.net/<namespace>/lets-chat
bluemix_MONGO_IMAGE=registry.ng.bluemix.net/<namespace>/mongo
Now, we need a script with logic that sets the appropriate environment variables based on the target environment. Below is the sample script (autoDeploy.sh):
#!/bin/sh
...
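The listing above is collapsed. As a rough sketch of what the script needs to do - the flag names and the depl.properties prefix convention here are assumptions based on the usage shown later, not the author's exact code (the real script is on GitLab, linked at the end of the post):

```shell
#!/bin/sh
# Sketch of autoDeploy.sh's environment-selection logic (assumed structure).

CONT_TYPE="native"
# Parse -t/--conttype, -u/--user, -p/--password options
while [ $# -gt 0 ]; do
  case "$1" in
    -t|--conttype) CONT_TYPE="$2"; shift 2 ;;
    -u|--user)     BM_USER="$2";   shift 2 ;;
    -p|--password) BM_PASS="$2";   shift 2 ;;
    *) shift ;;
  esac
done

# For non-native targets, export every "<conttype>_NAME=value" property as NAME=value,
# overriding the defaults docker-compose would otherwise read from .env
if [ "$CONT_TYPE" != "native" ] && [ -f depl.properties ]; then
  while IFS='=' read -r key val; do
    case "$key" in
      "${CONT_TYPE}_"*) export "${key#${CONT_TYPE}_}=$val" ;;
    esac
  done < depl.properties
fi

echo "Deploying with container type: $CONT_TYPE"
# ...followed by: stop/remove any existing containers, then 'docker-compose up -d'
```

Invoked as './autoDeploy.sh -t bluemix -u <user> -p <password>', the bluemix_-prefixed properties would be exported before Compose runs; with '-t native' nothing is exported and the '.env' defaults apply.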
Now, it's time to test the logic above.
First, let's execute the script locally against native Docker.
$> ./autoDeploy.sh -t native
As per the script's logic, it first checks whether any container instance of 'lc-mongo' or 'lets-chat' exists; if so, it stops and removes the existing container, then creates a new one from the existing image, starts it, and checks that it is running successfully. Since the '-t native' option was passed on the command line, the script did not set any environment variables, and Docker Compose used the default environment variables defined in the '.env' file.
It is time to test the same against IBM Bluemix Container Service. See below:
$> ./autoDeploy.sh -t bluemix -u abc.def@xyz.com -p xxxxxxxxx
As you may have noticed, we passed the options '-t bluemix -u abc.def@xyz.com -p xxxxxxxxx' while executing autoDeploy.sh. This forced the script to read properties from the 'depl.properties' file and set the corresponding environment variables specific to Bluemix. Everything else, including docker-compose.yml and the .env file, remained unchanged.
Note: IPs, username, and password are masked. In terms of defining environment-specific properties, this post only shows the case for two environments - local native Docker and IBM Bluemix Container Service. However, if you have more environments, you can define corresponding properties with an appropriate prefix, for example:
dev_NAMESPACE=
tst_NAMESPACE=
qa_NAMESPACE=
prd_NAMESPACE=
And while running the build, pass the relevant container-type option, e.g. '-t|--conttype dev|tst|qa|prd', and the script will set the environment variables appropriately.
Note: You may need to update the logic in the autoDeploy.sh as per your requirement.
There are a few other important aspects to remember while trying to make your code/script portable between native Docker and IBM Bluemix Container Service. A few of them are listed below:
- Currently IBM Bluemix Container Service only supports Docker Compose version 1 of the docker-compose.yml file format. Refer https://docs.docker.com/compose/compose-file/compose-file-v1/ for detail.
- IBM Bluemix Container Service may not support all Docker or Docker Compose commands, and it has other commands that are not found in native Docker. In certain situations, you may still need to use 'cf ic' commands instead of native Docker commands to perform tasks specific to IBM Bluemix Container Service. See the supported Docker commands for the IBM Bluemix Container Service plug-in (cf ic). The best way to find out which native Docker commands are supported within IBM Bluemix, and which 'cf ic' commands are available, is to run 'cf ic --help'; the commands with '(Docker)' at the end are supported Docker commands.
Finally, let's talk about the possible issue(s) that you may encounter.
1) Error response from daemon:
The above error was encountered while sending the build context to IBM Bluemix Container Service. It occurred because 'DOCKER_TLS_VERIFY' was set to an empty value. You may encounter this error whenever you are trying to establish a secure connection but any one of the following environment variables is not set correctly:
DOCKER_HOST
DOCKER_CERT_PATH
DOCKER_TLS_VERIFY
2) ERROR: for lets-chat HTTPSConnectionPool(host='containers-api.ng.bluemix.net', port=8443): Read timed out. (read timeout=60)
You may encounter the above error while executing 'docker-compose up' when the request times out. The default read timeout is 60 seconds. You can override this value either in the '.env' file or as an environment variable, e.g. 'export COMPOSE_HTTP_TIMEOUT=120'. Refer to https://docs.docker.com/compose/reference/envvars/ for all available environment variables.
That's it for this post. Give it a try and let me know. You can find/download all the files from GitLab here: https://gitlab.com/pppoudel/public_shared/tree/master/container_autodeploy
Looks like you're really into Docker, see my other related blog posts below:
- Using Docker Secrets with IBM WebSphere Liberty Profile Application Server
- Make your container deployment portable (this post)
- Experience sharing - Docker Datacenter
- Setting up CLI environment for IBM Bluemix
- Quick start with IBM Datapower Gateway Docker Edition
Setting up CommandLine Environment for IBM® Bluemix® Container Service
Last week, I went through a couple of steps to create an account (free 30-day trial) with IBM® Bluemix®, deploy my IBM Container (Docker based), and access my IBM Container running on Bluemix using the Bluemix/CloudFoundry command line tools set up locally on my laptop. I did this to prepare myself, and it is also a kind of POC for upcoming project work. I've decided to share these steps so that others in the same situation can benefit from them. Below are the steps:
- Make sure you have your IBM Container deployed and running on IBM Bluemix. If you don't have one, follow the sub-steps below:
- Create a free account with IBM Bluemix (https://console.ng.bluemix.net/).
- Once the account is created, you can create an IBM Container. See the quick steps below:
- Now it's time to set up the command line tools on your local desktop. I have used Fedora v24.x running as a virtual machine.
- Download Bluemix_CLI_0.4.6_amd64.tar.gz from http://clis.ng.bluemix.net/ui/home.html and extract it to some directory:
$>tar -xvzf ~/Downloads/Bluemix_CLI_0.4.6_amd64.tar.gz
- Among other files, you'll see the 'install_bluemix_cli' executable file under /Bluemix_CLI:
$>sudo ./install_bluemix_cli
- Once it's installed, download CloudFoundry tool:
$>sudo wget -O /etc/yum.repos.d/cloudfoundry-cli.repo https://packages.cloudfoundry.org/fedora/cloudfoundry-cli.repo
- Install CloudFoundry CLI:
$>sudo yum install cf-cli
...
Installed:
cf-cli.x86_64 0:6.26.0-1
- Check the version:
$>cf -v
cf version 6.23.1+a70deb3.2017-01-13
- Install the IBM Bluemix Container Service plug-in (cf ic) to use the native Docker CLI. More details about it can be found here.
$>cf install-plugin https://static-ice.ng.bluemix.net/ibm-containers-linux_x64
- Verify the plugins:
$>cf plugins
Listing Installed Plugins...
OK
Plugin Name Version Command Name Command Help
IBM-Containers 0.8.964 ic IBM Containers plug-in
- It's time to log in to CloudFoundry and run your container commands to manage your container.
- Login to Bluemix/CloudFoundry:
$>cf login -a https://api.ng.bluemix.net
Email> purna.poudel@gmail.com
Password>
Authenticating...
OK
- Login to Container:
$> cf ic login
Deleting old configuration file...
Retrieving client certificates for IBM Containers...
Storing client certificates in /home/osboxes/.ice/certs/...
Storing client certificates in /home/osboxes/.ice/certs/containers-api.ng.bluemix.net/7b9e7846...
OK
The client certificates were retrieved.
Checking local Docker configuration...
OK
Authenticating with the IBM Containers registry host registry.ng.bluemix.net...
OK
You are authenticated with the IBM Containers registry.
...
- It's time to manage your Container(s) from your desktop using the command line.
- Let's check our running container process(es):
$> cf ic ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d476a406-ba9 registry.ng.bluemix.net/ibmliberty:webProfile7 "" 2 days ago Running 169.46.21.44:9080->9080/tcp, 169.46.21.44:9443->9443/tcp sysgLibertyCont
- Let's inspect the running container:
$> cf ic inspect d476a406-ba9
[
{
"BluemixApp": null,
"BluemixServices": null,
"Config": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": [],
"Dns": "",
"Env": [
"logging_password=",
"space_id=7b9e7846-0ec8-41da-83e6-209a02e1b14a",
"logstash_target=logmet.opvis.bluemix.net:9091",
"metrics_target=logmet.opvis.bluemix.net:9095"
],
"Hostname": "instance-002eacfa",
"Image": "registry.ng.bluemix.net/ibmliberty:webProfile7",
"ImageArchitecture": "amd64",
"Labels": {
"doc.url": "/docs/images/docker_image_ibmliberty/ibmliberty_starter.html"
},
"Memory": 256,
"MemorySwap": "",
....
Looks like you're really into Docker, see my other related blog posts below:
- Using Docker Secrets with IBM WebSphere Liberty Profile Application Server
- Make your container deployment portable
- Experience sharing - Docker Datacenter
- Setting up CLI environment for IBM Bluemix (this post)
- Quick start with IBM Datapower Gateway Docker Edition
Quick start with IBM Datapower Gateway for Docker
Lately, IBM has done a great job of providing development versions of Datapower in different flavors. They are really useful for POC work as well as for testing applications in a development environment. Today, I had a chance to play with the Docker edition of Datapower Gateway. Within a few minutes, I was able to pull the Datapower Docker image from Docker Hub, create the Datapower Docker container, run it, and play with Datapower. The IBM Datapower Gateway for Docker page (https://hub.docker.com/r/ibmcom/datapower/) has good information to start with. However, this image contains an unconfigured Datapower Gateway, and you will not be able to access the Datapower Web Management console or establish an SSH connection to Datapower even after the container is running. For that, you either have to access Datapower in interactive mode and enable the admin-state of Web Management and SSH, add configuration through a Docker build, or use an (externalized) configuration file at run time. Below, we'll discuss two of these three options.
1) [optional] Create 'config' and 'local' directories on your host machine's file system, in the location from which you are going to execute the docker run command later in step #3. For example, I have created the following directory structure under /opt/docker_dev/datapowerbuild/:
$ ls -rl datapowerbuild
We are going to put the Datapower configuration file(s) in those directories. Docker is able to access this external configuration; it is a very powerful concept, explained as a [data] volume in the Docker documentation.
2) [optional] Create auto-startup.cfg configuration file and put under 'config' directory created in step #1, so that Docker Web Management and SSH admin state is enabled when Docker container runs.
top; co
# Sketch of auto-startup.cfg (commands assumed; see the source file linked below):
web-mgmt
  admin-state enabled
exit
ssh
Above script is taken from https://github.com/ibm-datapower/datapower-tutorials/blob/master/using-datapower-in-ibm-container-service/src/auto-startup.cfg
3) Execute the docker run command. It's assumed that you already have Docker installed, configured, and running on your host machine; if not, follow the Docker installation steps first.
The following command is based on IBM's instructions available at https://hub.docker.com/r/ibmcom/datapower/. Note: make sure your machine is connected to the internet; otherwise it will not be able to pull the Datapower Docker image from Docker Hub.
$ docker run -it \
   -v $(pwd)/config:/drouter/config \
   -v $(pwd)/local:/drouter/local \
   -e DATAPOWER_ACCEPT_LICENSE=true \
   -e DATAPOWER_INTERACTIVE=true \
   -p 9090:9090 \
   -p 2222:22 \
   --name datapower \
   ibmcom/datapower
# (Options sketched from the Docker Hub instructions; the port mappings match the
# 9090 web-mgmt and 2222 ssh access described below.)
Some of the last lines you'll see before the logon prompt are:
20170201T014408.003Z [0x00350014][mgmt][notice] ssh(SSH Service): tid(111): Operational state up
4) Log on to the console using the default user ID 'admin' and password 'admin'.
5) Launch a browser and go to 'https://<host machine ip>:9090' to access the Web Management console.
6) Create ssh connection to your host on port 2222 to access Datapower.
Note: If you have not performed optional steps #1 and #2, your web-mgmt and 'ssh' connections will not be available; perform optional step #7.
7) [optional] Connect to the Datapower console interactively and enter Global Configuration mode. See the screenshots below for better clarity.
configure terminal
# Then enable the interfaces (commands sketched; verify against your firmware):
web-mgmt
  admin-state enabled
exit
ssh
Enable web-mgmt admin-state
Enable ssh admin-state
Showing the status
Looks like you're really into Docker, see my other related blog posts below:
- Using Docker Secrets with IBM WebSphere Liberty Profile Application Server
- Make your container deployment portable
- Experience sharing - Docker Datacenter
- Setting up CLI environment for IBM Bluemix
- Quick start with IBM Datapower Gateway Docker Edition (this post)
Scripting Ideas
Important: This page will be continually updated as I find new workarounds or ideas while working on scripts (any script - shell, Windows, or others).
Running dos2unix in batch mode:
One of my teammates today seemed pretty frustrated while trying to run the 'dos2unix' command in batch mode. His script (see below) almost did the job; however, instead of updating the files, the converted content was displayed on screen (stdout).
His script (with the issue) that sent output to stdout:
find . -type f -name "*.sh" | xargs -i dos2unix {};
Here is the corrected script, which correctly updates each file under the current directory by converting line endings from Windows format to Unix format:
find . -type f -name "*.sh" | xargs -i dos2unix {} {};
As you may have noticed, the only thing missing was the last set of '{}', which tells dos2unix to use the same filename for the output as the input.
Below is an example using 'exec' instead of 'xargs' to achieve the same:
find . -type f -name "*.sh" -exec dos2unix {} {} \;
Command reference links: find, xargs, dos2unix
Using variable in SED:
file="myfile.txt";
replaceme="iamold";
sed "s/$replaceme/iamnew/g" "$file" > "$file".new

OR

file="myfile.txt";
replaceme="iamold";
sed 's/'"$replaceme"'/iamnew/g' "$file" > "$file".new

Note: in the above examples, any occurrence of 'iamold' in 'myfile.txt' will be replaced by 'iamnew' and written to 'myfile.txt.new'. The important thing here is that the variable $replaceme must be inside double quotes. The variant below does not work - the variable '$replaceme' will not be expanded because it sits inside single quotes:

file="myfile.txt";
replaceme="iamold";
sed 's/$replaceme/iamnew/g' "$file" > "$file".new

Command reference links: sed
Finding which process owns/listens on which port
Here, I'm finding which process/process ID is listening on port 9080. Here is how I can find out.
Note: the following has been tested on CentOS Linux.
1) Using 'netstat -lnp'
$> netstat -lnp | grep 9080
2) Using 'lsof -i :<port>'
$> lsof -i :9080
3) Using ss -ntlp
$> ss -ntlp | grep 9080
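Once any of these commands gives you a PID, you can look up the process details with ps. This is a generic follow-up, not from the original post; it uses the current shell's own PID ($$) just so the example is runnable:

```shell
# Look up full process details for a PID reported by netstat/lsof/ss.
# $$ (this shell's PID) stands in for the PID you actually found.
ps -fp $$
```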
Retrieving a Certificate and Updating the Keystore File
The following shows an example of retrieving the Google certificate from www.google.com and adding it to a local key.jks file. Script file: retrieveAndUpdateCert.sh
#! /bin/bash
...
Run file as:
$> ./retrieveAndUpdateCert.sh <Your keystore password>
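The listing of retrieveAndUpdateCert.sh above is collapsed. Below is a minimal sketch of what such a script could contain; the alias and file names are assumptions, while the openssl and keytool options used are standard ones. The sketch writes the script out via a heredoc so you can inspect it before running it as shown above:

```shell
# Write out a sketch of retrieveAndUpdateCert.sh (alias/file names are illustrative).
cat > retrieveAndUpdateCert.sh <<'EOF'
#! /bin/bash
# Usage: ./retrieveAndUpdateCert.sh <keystore-password>
STOREPASS="$1"
HOST=www.google.com
PORT=443
ALIAS=googlecert

# Retrieve the server certificate and keep only the PEM block
echo | openssl s_client -connect ${HOST}:${PORT} -servername ${HOST} 2>/dev/null \
  | openssl x509 -outform PEM > ${HOST}.pem

# Remove any stale entry, then import the fresh certificate into key.jks
keytool -delete -alias ${ALIAS} -keystore key.jks -storepass "${STOREPASS}" 2>/dev/null
keytool -importcert -noprompt -alias ${ALIAS} -file ${HOST}.pem \
  -keystore key.jks -storepass "${STOREPASS}"
EOF
chmod +x retrieveAndUpdateCert.sh
echo "wrote retrieveAndUpdateCert.sh"
```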
AWK numerical processing tricks
1. If you have a number with a thousands separator (,) like 84,959, AWK fails to process the number correctly unless you remove the separator (,) from the input. For example:
$> echo "84,959|34,600" | awk 'BEGIN{FS=OFS="|";}{print $1/1000,$2/1000}'
0.084|0.034

As seen from the above result, AWK only took the part of each input value before the comma. The fix is simple: just remove the "," from the input values. The following line gives the correct result:

$> echo "84,959|34,600" | awk 'BEGIN{FS=OFS="|";}{gsub(",","",$1);gsub(",","",$2); print $1/1000,$2/1000}'
84.959|34.6
2. If you get a weird result while doing an AWK numeric comparison, make sure the value is presented as a number, not a string literal. For example:
$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2);if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is greater than 100

Here 'num' is the result of substr(), i.e. a string, so AWK performs a string comparison ("99" vs "100" compares character by character, and '9' > '1'). Multiplying by 1 or adding 0 forces a numeric comparison:

$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2)*1;if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is less than 100

$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2)+0;if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is less than 100
AWK printing from specific column/field to the end
In the following example, matrix.csv (a comma-delimited file) is piped to awk, which processes one row at a time (excluding the first header row). The first column is a time in milliseconds, so awk converts it into a displayable date and prints it; the remaining columns (from the 2nd on) require no processing, so they are printed as-is.
cat matrix.csv | awk 'BEGIN{FS=OFS=","}{if(NR > 1) {print strftime("%c", ($1 + 500)/1000), substr($0, index($0,$2))}}'
Using comma as a delimiter in for loop
By default, a 'for' loop expects input delimited by whitespace (space, tab, or newline). However, if you need to use ',' (comma), one of the easiest ways is to override the Internal Field Separator (IFS) value. Make sure to set it back to the original value afterward. The script below opens a set of firewall ports delimited by commas: before the for loop we set IFS=",", and after the loop we set it back.
#!/bin/sh
...
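The collapsed script above can be sketched as follows; the port list and the firewall command are illustrative assumptions, and the point is the IFS save/override/restore pattern:

```shell
#!/bin/sh
# Loop over a comma-separated port list by temporarily overriding IFS.
PORTS="9080,9443,2222"   # illustrative port list

OLDIFS="$IFS"
IFS=","            # make the for loop split on commas instead of whitespace
COUNT=0
for port in $PORTS; do
  COUNT=$((COUNT + 1))
  echo "opening firewall port $port"
  # firewall-cmd --permanent --add-port=${port}/tcp    # assumed firewall command
done
IFS="$OLDIFS"      # restore the original field separator

echo "opened $COUNT port(s)"
```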
Reset Oracle Directory Manager's Password
Have you ever had a situation where you needed to execute a command against Oracle Directory Server that required the root/Directory Manager's password, and the password you had just did not work? I encountered one today and had to scramble to find a solution to reset it. The 'pwdhash' tool that comes with Oracle Directory Server rescued me. Here is what I did:
- Before resetting the password, you may want to try a few of your guesses. Here is how you do it. Get the actual root/Directory Manager's password hash from the dse.ldif file; it's the value of the attribute 'nsslapd-rootpw:', something like:
nsslapd-rootpw: {SSHA256}WYChc/pNA34fD8RKo//ReBCsGstkz0Ux54gfsMaruXhMP89tAnMtd
- Then compare each of your guesses with the encrypted password from dse.ldif using 'pwdhash', which has a compare option '-c'. Below is how you do it. If the password matches, you'll get the message "password ok."; otherwise, "password does not match." is displayed.
./pwdhash -D <instance-location> -c "<encrypted-password>" <your-guess-password>
# Actual example from my ODS instance
$>cd /opt/ods/dsee7/bin
$>./pwdhash -D /opt/ods/dsee7/instances/dsInst2 -c "{SSHA256}WYChc/pNA34fD8RKo//ReBCsGstkz0Ux54gfsMaruXhMP89tAnMtd" myPassw0rd
./pwdhash: password does not match.
- If none of your guesses matches, then it's time to reset the password the hard way. Here is how to do it:
# Stop your Oracle Directory Instance
$>cd /opt/ods/dsee7/bin
$>./dsadm stop /opt/ods/dsee7/instances/dsInst2
Directory Server instance '/opt/ods/dsee7/instances/dsInst2' stopped
# Generate the encrypted password
$>./pwdhash -D /opt/ods/dsee7/instances/dsInst2 -s SSHA256 myPassw0rd
{SSHA256}qOjAyposbx1LzM/LB4vk1ZKS2yNs2Oh0yDjo66GIjnMpIVMJMhi6fw==
- Take the generated encrypted password from the previous step, replace the value of the attribute 'nsslapd-rootpw:' in the dse.ldif file, and save it.
- Restart the Oracle Directory Instance.
# Start your Oracle Directory Instance
$>cd /opt/ods/dsee7/bin
$>./dsadm start /opt/ods/dsee7/instances/dsInst2
Directory Server instance '/opt/ods/dsee7/instances/dsInst2' started: pid=2982
However, in the future, if you just want to change the root/Directory Manager's password, you can use the 'dsconf' command with the 'set-server-prop' option, for example (property name sketched; verify it with 'dsconf help-properties'):
$>./dsconf set-server-prop -h <host> -p <port> 'root-pwd:<new-password>'