
How to Use OpenSSL to Create Keys, CSR and Cert Bundle, Review, Verify and Install

There are a number of tools available to create an SSL/TLS key pair and a CSR. Here I'm going to use OpenSSL.

1. Let's first create a key pair

In this example, we are creating an RSA key with a 2048-bit key length. It is recommended that you create a password-protected private key.

# Create a plain text private key (the public key can be derived from it)
openssl genrsa -out myserver.key 2048
# If you need it, extract the public key from the one generated above
openssl rsa -in myserver.key -pubout > myserver.pub
# Create a password-protected key (encrypted with AES-128; use -aes256 for AES-256)
openssl genrsa -aes128 -passout pass:<password> -out enc-myserver.key 2048
# Encrypt an existing plain text private key
openssl rsa -aes128 -in myserver.key -passout pass:<password> -out enc-myserver.key
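
Optionally, you can sanity-check the generated key before moving on; a quick sketch using the same file names as above:

# Verify the key structure is intact
openssl rsa -check -noout -in myserver.key
# For the encrypted key, supply the password
openssl rsa -check -noout -in enc-myserver.key -passin pass:<password>
# Print a digest of the modulus (handy later for matching the key against CSR/cert)
openssl rsa -noout -modulus -in myserver.key | openssl md5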


2. Let's create a CSR. 

OpenSSL allows you to provide the input information using a configuration file while creating a CSR. The good thing about the configuration file is that it can be stored in a version control system like Git and re-used. Look at the config file example below; let's call it mycsr.cnf.

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
countryName = CA
stateOrProvinceName = Ontario
localityName = Toronto
organizationName = IT
organizationalUnitName = ITWork
commonName = myexampleserver.ca
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = my1stdns.com
DNS.2 = my2nddns.com
DNS.3 = my3rddns.com

Here we are using mycsr.cnf to feed the information required to create the CSR. Since we are using an encrypted key, let's pass the password using the option -passin pass:<password>. If you don't use the -passin option, you will be prompted for the password. The command below generates myserver.csr:

openssl req -new -key enc-myserver.key -passin pass:<password> -out myserver.csr -config mycsr.cnf

Note: you can also generate a CSR using an existing private key and existing certificate. See the command below. OpenSSL prior to version 3.x may not support '-copy_extensions copyall'.

openssl x509 -x509toreq [-copy_extensions copyall] -in <existing certificate>.crt -signkey <existing private key> -out myserver.csr

Review the generated CSR. In the example below, we are reviewing the myserver.csr created above.

openssl req -noout -text -in myserver.csr
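
You can also confirm that the CSR really belongs to your private key by comparing modulus digests; the two outputs below should be identical (file names as used above):

openssl req -noout -modulus -in myserver.csr | openssl md5
openssl rsa -noout -modulus -in enc-myserver.key -passin pass:<password> | openssl md5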


3. Send your CSR to CA and Get the Signed Certs

Once your Certificate Authority (CA) receives the CSR, they process it and may send a link from which the signed certificate(s) can be downloaded. The provided link may contain download options for the root CA cert, one or more intermediate cert(s) and the server/domain cert. Depending on how and for which server/application you are installing the certificate, you may want to create a single PEM file from all the provided certs. Here is how you can do it:

cat server.crt intermediate.crt rootca.crt > cert-bundle.pem
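
Before installing the bundle, it's worth verifying that the chain actually validates; a quick check using the same file names (assuming a single intermediate):

# Verify the server cert against the root and intermediate
openssl verify -CAfile rootca.crt -untrusted intermediate.crt server.crt
# Review any individual cert
openssl x509 -noout -subject -issuer -dates -in server.crt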

Notes:
  1. Make sure the certificate files are in PEM format. To check, just open the file in a text editor like Notepad++ and see whether it starts with -----BEGIN and the content is ASCII. Certs can be converted from other formats to PEM using openssl as follows:


    # Convert DER to PEM
    openssl x509 -inform der -in mycert.der -out mycert.pem
    # Convert CER to PEM (.cer files are often DER-encoded; drop '-inform der' if the file is already PEM)
    openssl x509 -inform der -in mycert.cer -out mycert.pem
    # Convert CRT to PEM (.crt files are usually already PEM; this simply re-emits the cert)
    openssl x509 -in mycert.crt -out mycert.pem


  2. Open the merged file cert-bundle.pem in a text editor and make sure that each -----BEGIN starts on a new line.
  3. If you are not able to install the password protected key, remove the password as follows:

    openssl rsa -in enc-myserver.key -passin pass:<password> -out myserver.key


4. Install and Verify your Certificate

Installation really depends on what your target server/application is. Here I'm showing a quick example for nginx, with a configuration snippet that enables SSL/TLS:


     server {
         listen       443 ssl;
         server_name  myexampleserver.ca;

         ssl_certificate      <cert-location>/ssl-bundle.crt;
         ssl_certificate_key  <cert-location>/enc-myserver.key;
         ssl_password_file    <path-to-password-file>/key.pass;

         ssl_session_cache    shared:SSL:1m;
         ssl_session_timeout  5m;

         ssl_ciphers  HIGH:!aNULL:!MD5;
         ssl_prefer_server_ciphers  on;

         location / {
             root   html;
             index  index.html index.htm;
         }
     }

Once the configuration is updated, restart nginx and access the default page in a browser, e.g. 'https://myexampleserver.ca'.
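
To verify the installation, you can test the configuration first and then inspect the certificate the server actually presents; a minimal sketch:

# Check the nginx configuration before (re)starting
nginx -t
# View the certificate served on port 443
openssl s_client -connect myexampleserver.ca:443 -servername myexampleserver.ca </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates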

5. [Optional] Create .p12 key store for your Keys and Certs 


PKCS 12 is an industry-standard format for storing many cryptographic objects in a single file. Here is how you can create a PKCS 12 archive.


# openssl pkcs12 -export -in CertPath.cer [-certfile ssl-bundle.crt] -inkey privateKeyPath.key [-passin pass:<private key password>] -passout pass:<.p12 file password> -out key.p12

openssl pkcs12 -export -in ssl-bundle.crt -inkey enc-myserver.key -passin pass:<private key password> -passout pass:<p12 certstore password> -out mycertarchive.p12

Notes: 
  1. If the file passed using the -in option contains both the certs and the private key, the -inkey option is not required.
  2. If the file passed using the -in option has all the certs (server, intermediate and root CA) included, the -certfile option is not required. The usual practice is to pass the server cert file using the -in option, the private key using the -inkey option, and the root CA and intermediate certs using the -certfile option.
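
You can review the resulting archive directly with openssl; for example (it prompts for the archive password):

openssl pkcs12 -info -in mycertarchive.p12 -nokeys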


6. [Optional] Use .p12 with Java Keytool or KeyStore Explorer (KSE) 

You can open the .p12 file directly in KSE and use KSE's functionality. You can use the Java keytool as well. Here is an example of listing certs using the Java keytool:

  1. List certs using keytool

    keytool -v -list -storetype pkcs12 -keystore mycertarchive.p12


  2. Convert to JKS if necessary. You'll be prompted for passwords

    #keytool -importkeystore -srckeystore <.p12 file> -srcstoretype pkcs12 -destkeystore <.jks file> -deststoretype JKS

    keytool -importkeystore -srckeystore mycertarchive.p12 -srcstoretype pkcs12 -destkeystore mycertarchive.jks -deststoretype JKS

How to Write a Custom Ansible Callback Plugin to Post to MS Teams using Jinja2 Template as a Message Card

 

I spent several hours last weekend doing some research and putting together an Ansible callback plugin that posts messages to Microsoft Teams when specific events occur in an Ansible playbook. I could not find any really good documentation or example to follow. Don't get me wrong, there are documentation/blog posts for Slack, and some even related to sending messages to Teams, but not the way I wanted. I wanted to send custom messages using an Office 365 connector card written as a Jinja2 template, which could be customized using the value(s) of extra-vars and group_vars/host_vars for both success and failure events.

Finally, I've put together a fully functional callback plugin and wanted to share it with the community, so that people will not have to pull their hair out over the same problem. The plugin source code can be found on GitHub (see the links below), but here I'm explaining the details.

Why a callback plugin? You don't really need a callback plugin to post a message to an endpoint from Ansible. Ansible even has an Office 365 connector card module. But the problem starts when you want to capture a failure event and post a message accordingly. From an Ansible playbook it's not easy to capture failure events unless you wrap the entire playbook or every task in error handling (Ansible's block/rescue), because you can't possibly know which task will fail in the next run. With a callback plugin it becomes easy, as the corresponding method is automatically invoked by the particular event during playbook execution.

Why a Jinja2 template? It adds flexibility in creating custom messages. It also helps make your callback plugin more universal, as message customization is externalized to the Jinja2 template. To demonstrate all this, I've included a simple playbook that fakes an application deployment and posts success or failure deployment messages accordingly.

Now, let's dive a little into the details, starting with callback plugins. The Ansible documentation describes callback plugins as "Callback plugins enable adding new behaviors to Ansible when responding to events...". Refer to the Callback Plugins page for general information. In this blog post, I'm not going to explain how to develop your own plugin in general, but only provide specific information on how this msteam plugin has been developed. If you have not previously written an Ansible plugin, I'd suggest looking into the Developing plugins section of the Ansible documentation for general guidelines.

Class Import section:

from __future__ import (absolute_import, division, print_function)
from ansible.plugins.callback import CallbackBase
from jinja2 import Template
...

The 1st line above is required for any plugin, the 2nd line is required for callback plugins, and the 3rd line is needed to work with the Jinja2 template.

Class body section:

As you can see in the lines below, I'm creating msteam with CallbackModule(CallbackBase) as the parent, which means the methods defined in the parent class are available to override. Refer to __init__.py to see which methods are available. For the msteam plugin, I've overridden only specific version 2.0 methods, as the intention is to use it with Ansible version 2.0 or later. Note: the CallbackBase class defines regular as well as corresponding 'v2_*' methods.

class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'notification'
    CALLBACK_NAME = 'msteam'
    CALLBACK_NEEDS_WHITELIST = True
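
Since CALLBACK_NEEDS_WHITELIST is True, Ansible will not load the plugin unless it is whitelisted. Here is a minimal sketch of how to wire it up, assuming the plugin file is saved as msteam.py and the playbook is named deploy.yml (a hypothetical name):

# Place the plugin in a callback_plugins directory next to the playbook
mkdir -p callback_plugins
cp msteam.py callback_plugins/
# Whitelist the callback (Ansible 2.x) and run the playbook with verbose output
export ANSIBLE_CALLBACK_WHITELIST=msteam
ansible-playbook -vvv deploy.yml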

__init__ section:

See the comments in the code for details.

self.playbook_name = None

# Record the playbook start time. In this case I'm using Canada/Eastern
self.tz = timezone('Canada/Eastern')
self.dt_format = "%Y-%m-%d %H:%M:%S"
self.start_time = datetime.datetime.now(self.tz)

# Placeholder for extra-vars variable
self.extra_vars = None

# If you are executing your playbook from AWX/Tower
# Replace with your Ansible Tower/AWX base url
# self.v_at_base_url = "https://<Ansible tower host>:<port>"

# To record whether the playbook variables are retrieved, so that we retrieve them just once.
self.pb_vars_retrieved = False

# Here you can assign your default MS Teams webhook url
self.v_msteam_channel_url = "<replace with your own MS Team webhook URL>"

# default MS Teams message card template. Here I'm assigning the one included in the example playbook
self.v_message_template = "templates/msteam_default_msg.json.j2"

# default job status in the beginning
self.job_status = "successful"

# If you need to post through proxies, uncomment the following and replace with your proxy URLs.
# self.proxies = {
# "http": "<http-proxy-url>",
# "https": "<https-proxy-url>",
# }

v2_playbook_on_start:

def v2_playbook_on_start(self, playbook):
    display.vvv(u"v2_playbook_on_start method is being called")
    self.playbook = playbook
    self.playbook_name = playbook._file_name

v2_playbook_on_play_start:

def v2_playbook_on_play_start(self, play):
     display.vvv(u"v2_playbook_on_play_start method is being called")
     self.play = play
     # get variable manager and retrieve extra-vars
     vm = play.get_variable_manager()
     self.extra_vars = vm.extra_vars
     self.play_vars = vm.get_vars(self.play)
     # The following is used to retrieve variables defined under group_vars or host_vars.
     # If the same variable is defined under both with the same scope,
     # the one defined under host_vars takes precedence.
     self.host_vars = vm.get_vars()['hostvars']
     if not self.pb_vars_retrieved:
          self.get_pb_vars()

As you may have noticed above, you obtain the variable manager from the play to get the extra-vars object, use 'get_vars' to get the general playbook variables, and use get_vars()['hostvars'] to get the group_vars/host_vars. Refer to the get_pb_vars() method to see how I have obtained the extra-vars and playbook variables.
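
For reference, the extra-vars retrieved this way are simply whatever you pass on the command line; for example (these variable names match the ones the included message template expects, the playbook name is hypothetical):

ansible-playbook deploy.yml -e "v_app_file=myapp.war v_deployment_action=deploy v_environment=dev"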


v2_playbook_on_stats:

def v2_playbook_on_stats(self, stats):
     display.vvv(u"v2_playbook_on_stats method is being called")
     if not self.pb_vars_retrieved:
          self.get_pb_vars()
     hosts = sorted(stats.processed.keys())
     self.hosts = hosts
     self.summary = {}
     self.end_time = datetime.datetime.now(self.tz)
     self.duration_time = int((self.end_time - self.start_time).total_seconds())
     # Iterate through all hosts to check for failures
     for host in hosts:
          summary = stats.summarize(host)
          self.summary = summary
          if summary['failures'] > 0:
              self.job_status = "failed"
    
          if summary['unreachable'] > 0:
              self.job_status = "failed"
    
          display.vvv(u"summary for host %s :" % host)
          display.vvv(str(summary))
    
     # Add code here if you want to post to MS Teams per host
    
     # Just send a single notification whether it is a failure or success
     # Post message to MS Teams
     if not self.disable_msteam_post:
          self.notify_msteam()
     else:
          display.vvv(u"Posting to MS Teams has been disabled.")

As you may have noticed above, I'm calling the notify_msteam() method to post to MS Teams, sending a single message at the end of the playbook execution. However, if you'd like to post for each host, see the comment in the code above (you have to call notify_msteam() within the 'for' loop).

notify_msteam: 

I'm not going to post the entire code here; you can see it in the GitHub repository. Here are a few important lines. The basic idea is first to load the Jinja2 template from the given file, then render the template with values retrieved from extra-vars, playbook variables and group_vars/host_vars, and finally post the message (see the commented section if you are using a proxy).

 
try:
     with open(self.v_message_template) as j2_file:
          template_obj = Template(j2_file.read())
except Exception as e:
     print("ERROR: Exception occurred while reading MS Teams message template %s. Exiting... %s" % (
          self.v_message_template, str(e)))
     sys.exit(1)
    
rendered_template = template_obj.render(
     v_ansible_job_status=self.job_status,
     v_ansible_job_id=self.tower_job_id,
     v_ansible_scm_revision=self.scm_revision,
     v_ansible_job_name=self.tower_job_template_name,
     v_ansible_job_started=self.start_time.strftime(self.dt_format),
     v_ansible_job_finished=self.end_time.strftime(self.dt_format),
     v_ansible_job_elapsed_time=self.duration_time,
     v_ansible_host_list=self.hosts,
     v_ansible_web_url=web_url,
     v_ansible_app_file=self.v_app_file,
     v_ansible_deployment_action=self.v_deployment_action,
     v_ansible_environment=self.v_environment,
     v_ansible_instance_name=self.v_instance_name,
     v_ansible_executed_from_tower=self.executed_from_tower
)

try:
     # 'w+' opens the spooled file for writing the rendered template and reading it back
     with SpooledTemporaryFile(max_size=0, mode='w+') as tmpfile:
          tmpfile.write(rendered_template)
          tmpfile.seek(0)
          json_payload = json.load(tmpfile)
          display.vvv(json.dumps(json_payload))
except Exception as e:
     print("ERROR: Exception occurred while reading or writing the rendered MS Teams message template. Exiting... %s" % str(e))
     sys.exit(1)
    
try:
     # using proxy
     # response = requests.post(url=self.v_msteam_channel_url,
     #      data=json.dumps(json_payload), headers={'Content-Type': 'application/json'}, timeout=10, proxies=self.proxies)

     # without proxy
     response = requests.post(url=self.v_msteam_channel_url,
          data=json.dumps(json_payload), headers={'Content-Type': 'application/json'}, timeout=10)

     if response.status_code != 200:
          raise ValueError('Request to msteam returned an error %s, the response is:\n%s' % (
               response.status_code, response.text))
except Exception as e:
     print("WARN: Exception occurred while sending notification to MS Teams. %s" % str(e))

Message card as Jinja2 template:

{
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "themeColor": "{{ '#008000' if(v_ansible_job_status != 'failed') else '#FF0000' }}",
    "title": "Deployment of {{v_ansible_app_file}} on {{ v_ansible_environment }} environment {{'completed successfully' if(v_ansible_job_status == 'successful') else 'failed.' }}",
    "summary": "Ansible Job Summary",
    "sections": [{
        "activityTitle": "Job {{ v_ansible_job_id }} summary: ",
        "facts": [
        {% if v_ansible_executed_from_tower is sameas true %}
        {
            "name": "Playbook revision",
            "value": "{{ v_ansible_scm_revision }}"
        }, {
            "name": "Job name",
            "value": "{{ v_ansible_job_name }}"
        },
        {% endif %}
        {
            "name": "Job status",
            "value": "{{ v_ansible_job_status }}"
        }, {
            "name": "Job started at",
            "value": "{{ v_ansible_job_started }}"
        }, {
            "name": "Job finished at",
            "value": "{{ v_ansible_job_finished }}"
        }, {
            "name": "Job elapsed time (sec)",
            "value": "{{ v_ansible_job_elapsed_time }}"
        }, {
            "name": "Application (v_app_file)",
            "value": "{{ v_ansible_app_file }}"
        }, {
            "name": "Action (v_deployment_action)",
            "value": "{{ v_ansible_deployment_action }}"
        }, {
            "name": "Environment (v_environment)",
            "value": "{{ v_ansible_environment }}"
        }, {
            "name": "Hosts",
            "value": "{{ v_ansible_host_list | join(',') }}"
        },{
            "name": "Instance name(v_instance_name)",
            "value": "{{ v_ansible_instance_name | default('na') }}"
        }],
        "markdown": false
    }]
    {% if v_ansible_executed_from_tower is sameas true %}
    ,"potentialAction": [{
        "@context": "http://schema.org",
        "@type": "ViewAction",
        "name": "View on Ansible Tower",
        "target": [
            "{{ v_ansible_web_url }}"
        ]        
    }]
  {% endif %}
}

Note: In the example above, the card adds the "View on Ansible Tower" button if the playbook is executed from Ansible Tower/AWX, as shown below.
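
If you want to sanity-check a rendered card outside of Ansible, you can post a saved JSON payload straight to the webhook with curl (payload.json being a hypothetical file holding the rendered template):

curl -X POST "https://<your MS Teams webhook URL>" -H 'Content-Type: application/json' -d @payload.json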



Success message posted by playbook executed on Ansible Tower




Failure message posted by playbook executed from command line

That's it. Hope it helps. Here are the GitHub links:

1. Callback plugin: https://github.com/pppoudel/callback_plugins

2. Example playbook: https://github.com/pppoudel/ansible_msteam_callback_plugin_using_jinja2_template_example

Custom Ant Task IsInList

I created this custom Ant task some time ago while working on a project where I needed to check whether an item exists in a list. As I did not find an efficient way to do it using any of the standard Ant tasks, I created one of my own. I'm publishing this custom Ant task's source code as open source (see the GitHub project location below). Feel free to use/modify/distribute it per your needs, or suggest any better way to do it.

What does IsInList contain?
1) It contains one Java source file: com.sysgenius.tools.ant.customtask.IsInList.java


2) The GitHub project also has an Ant build file, build.xml, to build the project from source; sample-usage.xml, an Ant build file that shows a few usage scenarios of the 'IsInList' task; and README.txt, which explains how to use it.

How to Use It?
Follow the steps below:

1) Make sure the isinlist-<version>.jar file is in your build classpath. You can do this either by adding it to your $ANT_HOME/lib directory or by defining a custom library path as below and making a reference to it.

<path id="ant.opt.lib.path">
   <fileset dir="${basedir}/../target">
      <include name="isinlist-1.0.0.0.jar"/>
   </fileset>
</path>

2) Next, define the "isinlist" task; below is one of a few ways:

<typedef classname="com.sysgenius.tools.ant.customtask.IsInList" name="isinlist" classpathref="ant.opt.lib.path"/>

3) Use it, see the examples below:

Example 1:
You have a list of items like "ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*" separated by ";". You need to find out whether any item matching "native_stdout.log" exists. In this case you can do the lookup using a regular expression (isRegEx="true"). In your build file, you'll need to have:

<property name="item.list" value="ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*"/>
<property name="regex.item.name" value="native_stdout.log"/>
<isinlist casesensitive="false" delimiter=";" value="${regex.item.name}" valueList="${item.list}" isRegEx="true"/>

Example 2:
You have a list of items like "ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*" separated by ";". Here you need to find out whether an item called "release" exists in the given list. In this case you can use a regular lookup, meaning isRegEx="false".

<property name="item.list" value="ci;Inting.*;release;SystemOut_16.01.23.log;native_stdout.*;native_stderr.*"/>
<property name="regular.item.name" value="release"/>
<isinlist casesensitive="false" delimiter=";" value="${regular.item.name}" valueList="${item.list}" isRegEx="false"/>

See sample-usage.xml for complete examples and more detailed usage scenarios.

You can get/download the files from GitHub: https://github.com/pppoudel/customanttasks.

Scripting Ideas

Important: This page will be continually updated as I find new workarounds or ideas while working on scripts (any script - shell, Windows, or others).

Running dos2unix in batch mode:


One of my teammates seemed pretty frustrated today while trying to run the 'dos2unix' command in batch mode. His script (see below) was almost doing the job; however, instead of updating the files, the content was written to the screen (stdout).

His script (with issue) that sent output to stdout:
find . -type f -name "*.sh" | xargs -i dos2unix {};
Here is the corrected script, which updates each file under the current directory by converting line endings from Windows format to Unix format.
find . -type f -name "*.sh" | xargs -i dos2unix {} {};
As you may have noticed, the only thing missing was the last set of '{}', which tells dos2unix to use the input filename as the output filename. Below is an example using 'exec' instead of 'xargs' to achieve the same.
find . -type f -name "*.sh" -exec dos2unix {} {} \;
Command reference links: find, xargs, dos2unix
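
As a quick sanity check before and after the conversion, the 'file' utility reports whether a script still has Windows line endings:

# Files still in Windows format are reported "with CRLF line terminators"
find . -type f -name "*.sh" -exec file {} \; | grep CRLF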

Using a variable in SED:


file="myfile.txt";
replaceme="iamnew";
sed 's/iamold/'"${replaceme}"'/g' < $file > $file".new";
OR
file="myfile.txt";
replaceme="iamnew";
sed "s/iamold/${replaceme}/g" < $file > $file".new";
Note: in the above example, any occurrence of 'iamold' in 'myfile.txt' will be replaced by 'iamnew' and written to 'myfile.txt.new'. The important thing here is that the variable ${replaceme} must be inside double quotes. The variant below does not work; the variable '${replaceme}' will not be expanded because it sits inside single quotes.
file="myfile.txt";
replaceme="iamnew";
sed 's/iamold/${replaceme}/g' < $file > $file".new";
Command reference links: sed


Finding which process owns/listens on which port


Here, I'm finding which process/process ID is listening on port 9080.
Note: the following has been tested on CentOS Linux.

1) Using 'netstat -lnp'
$> netstat -lnp | grep 9080
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.) tcp6 0 0 :::9080 :::* LISTEN 3840/java


# using sudo:
$> sudo netstat -lnp | grep 9080
tcp6 0 0 :::9080 :::* LISTEN 3840/java


# Or find all ports in use by certain process/PID
$> sudo netstat -lnp | grep java
tcp6 0 0 :::9080 :::* LISTEN 3840/java
tcp6 0 0 :::10010 :::* LISTEN 3840/java
tcp6 0 0 :::9443 :::* LISTEN 3840/java
tcp6 0 0 127.0.0.1:57576 :::* LISTEN 3840/java

#by PID
$> sudo netstat -lnp | grep 3840
tcp6 0 0 :::9080 :::* LISTEN 3840/java
tcp6 0 0 :::10010 :::* LISTEN 3840/java
tcp6 0 0 :::9443 :::* LISTEN 3840/java
tcp6 0 0 127.0.0.1:57576 :::* LISTEN 3840/java

2) Using 'lsof -i :<port>'
$> lsof -i :9080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 3840 osboxes 339u IPv6 40626 0t0 TCP *:glrpc (LISTEN)

3) Using ss -ntlp
$> ss -ntlp | grep 9080
LISTEN 0 128 :::9080 :::* users:(("java",pid=3840,fd=339))


Retrieving a Certificate and Updating a keystore file.


The following shows an example of retrieving the Google certificate from www.google.com and adding it to the local key.jks file. Script file: retrieveAndUpdateCert.sh

#! /bin/bash
# Remote host to retrieve certificate from
RHOST=www.google.com
# Remote port
RPORT=443
# key store file path
KS_FILEPATH=/opt/secrets/key.jks
# Certificate Alias
CERT_ALIAS=googlecert

# Retrieve the certificate and put in temporary file '/tmp/cert.crt' in this case.
# Refer to https://www.openssl.org/docs/man1.0.2/apps/openssl.html for openssl command details.
true | openssl s_client -connect ${RHOST}:${RPORT} 2>/dev/null | openssl x509 -in /dev/stdin > /tmp/cert.crt
# Install certificate using keytool
# keytool comes with Java.
# Refer to https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html for keytool command details.
keytool -import -file /tmp/cert.crt -alias ${CERT_ALIAS} -keystore ${KS_FILEPATH} -storepass $1
# View certs in the keystore:
keytool -list -v -keystore ${KS_FILEPATH} -storepass $1

Run file as:
$> ./retrieveAndUpdateCert.sh <Your keystore password>


AWK numerical processing tricks



1. If you have a number with a thousands separator (,) like 84,959, AWK fails to process the number correctly unless you remove the separator (,) from the input. For example:
$> echo "84,959|34,600" | awk 'BEGIN{FS=OFS="|";}{print $1/1000,$2/1000}'
0.084|0.034
As seen from the result above, AWK only took the digits before the comma in each input value. The fix is simple: just remove the ',' from the input values. The following line gives the correct result:

$> echo "84,959|34,600" | awk 'BEGIN{FS=OFS="|";}{gsub(",","",$1);gsub(",","",$2); print $1/1000,$2/1000}'
84.959|34.6

2. If you get a weird result while doing an AWK numeric comparison, make sure the value is treated as a number, not a string literal. For example:
$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2);if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is greater than 100
As seen from the above, the result is not correct/expected. This is because substr() returns a string, so num holds the string "99" rather than a number, and AWK falls back to a string comparison ("99" sorts after "100" lexicographically). The simple fix is to multiply the value by 1 or add 0 before doing the numeric comparison.
$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2)*1;if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is less than 100
or
$> echo "Is 99 ( ninety nine) higher than 100?" | awk 'BEGIN{FS="(";}{num=substr($1,4,2)+0;if(num >= 100){ print num" is greater than 100"}else{print num" is less than 100"}}'
99 is less than 100

AWK printing from specific column/field to the end

In the following example, matrix.csv (a comma-delimited file) is piped to awk, which processes one row at a time (excluding the first header row). The first column is a time in milliseconds, so awk converts it into a displayable date (using gawk's strftime) and prints it; the rest of the columns (starting from the 2nd) require no processing, so they are printed as-is.

cat matrix.csv | awk 'BEGIN{FS=OFS=","}{if(NR > 1) {print strftime("%c", ($1 + 500)/1000), substr($0, index($0,$2))}}'

Using comma as a delimiter in for loop

By default, the 'for' loop expects input delimited by space (or tab or newline) characters. However, if you need to use ',' (comma), one of the easiest ways is to override the Internal Field Separator (IFS) value. Just make sure to set it back to the original value afterwards. See the script below: it opens a set of firewall ports delimited by ','. Before the for loop we set IFS="," and after the for loop we set the value back to a space " ".

#!/bin/sh
tcp_ports="179,443,80,2375,2376,2377,2380,4001,4443,4789,6443,6444,7001,7946,8080,10250,12376-12387"
udp_ports="4789,7946"

openFW() {
  IFS=",";
  for _port in $1; do
    echo "Opening ${_port}/$2";
    sudo firewall-cmd --permanent --zone=public --add-port=${_port}/$2;
  done
  IFS=" ";
}

openFW "${tcp_ports}" tcp;
openFW "${udp_ports}" udp;

# Recycle firewall
sudo firewall-cmd --reload

Updating Property Files using Shell Script

Recently, one of my colleagues asked me for some hints on how to read updated properties from one property file, search for the same property in the target property file(s), and update/replace the corresponding property value(s) in the target file(s) if the property key matches. He wanted to do this operation using just a regular shell script: no Ant, Java or Perl. I've put together a simple script for him and decided to publish it here so that other people with similar needs can benefit, as I did not find any similar posting.


Let's say here is what the source property file looks like:

--------- sFile.properties --------
# Updated property values
connection.username=mssqluser
connection.password=*********
connection.hostname=thishost.domain
connection.time.format=yyyy-MM-dd HH:mm:ss

Let's say here is what one of the target property files looks like:

--------- tFile.properties --------
# Connection properties
connection.username=xyz
connection.password=abc456789
connection.hostname=localhost

connection.time.format =yyyy-MMM-dd HH:mm:ss 

 

Here is what the shell script file looks like:

---------- propertyUpdater.sh ------

#!/bin/bash
# Source input property file that provides new/updated property values.
sFile='/cygdrive/c/temp/Bscripts/sFile.properties'

# Target directory where multiple property files whose property value(s) need to be updated/replaced.
tDir='/cygdrive/c/temp/Bscripts/props'

# Read each line into variable _line, excluding all comment lines (starting with #) and all empty lines.
# A 'while read' loop is used so that values containing spaces (e.g. date formats) stay intact.
grep -v -e'^#' -e'^$' "$sFile" | while read -r _line; do
    echo "Reading line: $_line from source file: $sFile"
    _key=`echo $_line | cut -d '=' -f1`
    _value=`echo $_line | cut -d '=' -f2`
    echo "Retrieved property key: $_key with value: $_value"
    # Comment out the following 'for' loop if you are using the 'tFile' variable.
    for _file in `find $tDir -type f -print | grep ".properties$"`; do
       echo "Updating target property file: $_file"
       sed -i "s/^$_key=.*/$_line/g" "$_file"
       # for properties that have whitespace between the key and the '=' sign
       sed -i "s/^$_key[ \t]*=.*/$_line/g" "$_file"
    done
done
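
A quick way to run it and spot-check the result (paths are the ones hard-coded in the script above):

chmod +x propertyUpdater.sh
./propertyUpdater.sh
# Spot-check that the values were replaced in the target file
grep '^connection\.' /cygdrive/c/temp/Bscripts/props/tFile.properties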


--------- tFile.properties after the update -----

# Connection properties
connection.username=mssqluser
connection.password=*********
connection.hostname=thishost.domain
connection.time.format=yyyy-MM-dd HH:mm:ss

 

Hope it helps!

Quick Data Verification Tool

Recently, I had to put together a tool to verify data (in Microsoft SQL Server database tables) in no time. Even though the time was very short, I had a few requirements to follow as guidance while creating the tool:
  1. Flexibility - so that the tool is usable in more than one project with little or no modification.
  2. Lightweight - no use of third-party scripting frameworks like Ant/NAnt and no use of programming languages like Java, C#, etc.
Since the tool was intended specifically for Microsoft SQL Server on Windows environments, I decided to use batch scripting, SQLCMD (http://technet.microsoft.com/en-us/library/ms162773.aspx), and SQL scripting.

In order to make it flexible and avoid hard-coding anything (like server name, database name, table name, etc.), so that the same SQL script could be used from one project to another for similar types of verification with little or no modification, I decided to externalize these attributes in a ';'-delimited .csv file. Each line of the .csv file follows this format:
<db-connection-info>;<db-name>.<table-name>;[<verification-criteria-expected-row-count>];[<other-criteria-1>];[<other-criteria-2>];[<column-name-1>];[<column-name-2>];[…]

Next, I had to put together a batch script that could read each line (row) from the .csv file, split it into columns, and assign those values to the corresponding SQLCMD variables to pass to the SQL script during execution.
I've used a 'for' loop in the batch script to realize it. 'For' allows you to choose the delimiter character, skip line(s) at the top of the file, and select which columns (tokens) to read. Here is a fragment of the batch script file:
for /F "tokens=1-7 skip=1 delims=;" %%c in (%dataFile%) do (
  
sqlcmd -S %%c -i %iSqlFiles% -v tableName="%%d" -v xRowCnt=%%e  -v col1-cond1=%%f -v col2-cond1=%%g -v col1Name="%%h" -v col2Name="%%i"  >> %oLogFile%
)

In this example, it skips the first line of the .csv file (passed as variable %dataFile%), reads columns (tokens) 1 to 7 from each line delimited by ';', and assigns them to variables starting from the letter 'c'. It then executes the SQLCMD command with the given options and the provided SQL file '%iSqlFiles%'.

One catch here is the output file. Even though SQLCMD has the -o <output-file> option, I did not use it because that option does not allow appending to an existing file. If you have multiple sqlcmd commands executing in a loop, they either recreate the file each time, deleting the previous one, or create multiple files provided the output file name is unique for each execution. The easy way out is just to redirect using the stream redirection symbol '>>'.


The final part was to create the re-usable (template-like) SQL script file. The fragment below shows how the variables passed from the SQLCMD command can be used in the SQL script without hard-coding the database name, table name, column names, etc.

SELECT @CurrRowCount=count (*) FROM $(tableName) where $(tableName).$(col1Name) NOT LIKE '$(col1-cond1)' and $(tableName).$(col2Name) NOT LIKE '$(col2-cond1)';
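
To make the mapping concrete, here is how a single test case from the .csv below would translate into one standalone sqlcmd call (values taken from the sample testcases.csv):

sqlcmd -S tcp:mysqlserver1\mysqlinstance1,20234 -i RowCnt.sql -v tableName="mydatabase1.mytable1" -v xRowCnt=25 -v col1-cond1="col-cond1" -v col2-cond1="col-cond-2" -v col1Name="mycolumn1" -v col2Name="mycolumn2"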


I guess the idea is clear; below you can find more details of my .csv file, batch script file and SQL script file.

Below are the complete files, which you may be able to use:
  • .csv file: testcases.csv
#<db-connection-info>;<db-name>.<table-name>;[<verification-criteria-expected-row-count>];[<other-criteria-1>];[<other-criteria-2>];[<column-name-1>];[<column-name-2>];[…]
tcp:mysqlserver1\mysqlinstance1,20234;mydatabase1.mytable1;25;col-cond1;col-cond-2;mycolumn1;mycolumn2
.;mydatabase2.mytable1;0;col-cond1;col-cond-2;mycolumn1;mycolumn2
mysqlserver1;mydatabase3.mytable1;10;col-cond1;col-cond-2;mycolumn1;mycolumn2
...


  • Batch script file: DataVerification.cmd

SETLOCAL
@echo off
REM ==============================================
REM DataVerification.cmd - executes a given SQL script containing logic.
REM ==============================================
cls
set currDir=%~dp0
set dataFile=%currDir%\testcases.csv
REM SQLCMD can read (if required) multiple input sql files delimited by comma.
set iSqlFiles=%currDir%\RowCnt.sql
set oLogFile=%currDir%\DVerification.log
echo ============================================== > %oLogFile%
echo ============= Verification Details =========== >> %oLogFile%
echo ============================================== >> %oLogFile%
REM The following logic within the for loop can be customized per project requirements.
REM Currently it skips the first line in the csv file, and reads each semicolon-delimited line as a row of columns.
for /F "tokens=1-7 skip=1 delims=;" %%c in (%dataFile%) do (
   echo ============================================== >> %oLogFile%
   echo Processing Database:Table: %%d >> %oLogFile%
   sqlcmd -S %%c -i %iSqlFiles% -v tableName="%%d" -v xRowCnt=%%e  -v col1-cond1=%%f -v col2-cond1=%%g -v col1Name="%%h" -v col2Name="%%i"  >> %oLogFile%
   echo ============================================== >> %oLogFile%
)
find "verification: failed" %oLogFile% > null
if "%ERRORLEVEL%" EQU "0" (
    echo Combined Verification FAILED. Please see the %oLogFile% for details.
) else (
    echo Combined Verification PASSED. Please see the %oLogFile% for details.
)


  • SQL file: RowCnt.sql
PRINT N'Generated as of: '+RTRIM(CAST(GETDATE() AS nvarchar(30)));
Declare @CurrRowCount int;
SELECT @CurrRowCount=count(*) FROM $(tableName) where $(tableName).$(col1Name) NOT LIKE '$(col1-cond1)' and $(tableName).$(col2Name) NOT LIKE '$(col2-cond1)';
PRINT N'Total row count: '+CAST(@CurrRowCount AS nvarchar(5))+N'.'
if (@CurrRowCount > $(xRowCnt)) PRINT N'verification: passed.';
else PRINT N'verification: failed.';


Finally, when all test cases defined in testcases.csv are executed, you'll get either "Passed" (if all test cases pass) or "Failed" (even if a single test case fails). You can see the details in the generated log file, similar to the following ...

==============================================
============= Verification Details ===========
==============================================

==============================================
Table: mydatabase1.mytable1
Generated as of: <timestamp>
Total row count: xx.
verification: passed[failed].

==============================================
Table: mydatabase2.mytable1
Generated as of: <timestamp>
Total row count: yy.
verification: passed[failed].
==============================================

Obviously, this is just an idea; you can develop your own SQL file(s) per your data processing requirements, modify the .csv file per your input requirements, and change the 'for' loop as the .csv file changes.