Updating Property Files using Shell Script

Recently, one of my colleagues asked me for hints on how to read updated properties from a source property file, search for the same property keys in one or more target property files, and update/replace the corresponding property values in those target files whenever a key matches. He wanted to do this with just a regular shell script, so no ANT, Java, or Perl. I put together a simple script for him and decided to publish it here, since I did not find any similar posting and other people with the same need may benefit from it.


Let's say this is what the source property file looks like:

--------- sFile.properties --------
# Updated property values
connection.username=mssqluser
connection.password=*********
connection.hostname=thishost.domain
connection.time.format=yyyy-MM-dd HH:mm:ss

And this is what one of the target property files looks like:

--------- tFile.properties --------
# Connection properties
connection.username=xyz
connection.password=abc456789
connection.hostname=localhost

connection.time.format =yyyy-MMM-dd HH:mm:ss 

 

Here is what the shell script looks like:

---------- propertyUpdater.sh ------

#!/bin/bash
# Source input property file that provides new/updated property values.
sFile='/cygdrive/c/temp/Bscripts/sFile.properties'

# Target directory containing the property files whose property value(s) need to be updated/replaced.
tDir='/cygdrive/c/temp/Bscripts/props'

# Read each line into _line, skipping comment lines (starting with #) and empty lines.
grep -v '^#' "$sFile" | grep -v '^$' | while IFS= read -r _line; do
    echo "Reading line: $_line from source file: $sFile"
    _key=`echo "$_line" | cut -d '=' -f1`
    _value=`echo "$_line" | cut -d '=' -f2-`
    echo "Retrieved property key: $_key with value: $_value"
    # If you target a single file through a 'tFile' variable instead, comment out the following 'for' loop.
    for _file in `find "$tDir" -type f -name '*.properties'`; do
       echo "Updating target property file: $_file"
       sed -i "s/^$_key=.*/$_line/g" "$_file"
       # Also handle properties that have whitespace between the key and the '=' sign.
       sed -i "s/^$_key[ \t]*=.*/$_line/g" "$_file"
    done
done
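To run it (under Cygwin in this example, matching the paths above), make the script executable and invoke it; the source file and target directory are hard-coded at the top of the script:

chmod +x propertyUpdater.sh
./propertyUpdater.sh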


--------- tFile.properties after the update -----

# Connection properties
connection.username=mssqluser
connection.password=*********
connection.hostname=thishost.domain
connection.time.format=yyyy-MM-dd HH:mm:ss

 

Hope it helps!

SQL Server - a Few Tips & Tricks

These days, I’ve been working on a Business Intelligence project powered by Microsoft technologies like SQL Server 2012 (both 64-bit and 32-bit versions), Reporting Services, Analysis Services, etc. In this blog, I’m going to continually post issues encountered during implementation (mostly deployment, integration, and security related), troubleshooting ideas, and possible solutions/work-arounds.

Issues with SQL Server Alias

Aliases are a great way to make connections portable, as they encapsulate connection details like host, port, protocol, etc. and provide a connection string ready to be used. However, there are a few basic things that need to be taken care of; otherwise, you’ll be troubleshooting your connection problems forever, especially in an environment where you have both 32-bit and 64-bit client applications running.
  • Aliases are used by the client (not the server) to establish a connection to the server. Confusing? Take, for example, a situation where a SQL job run by SQL Agent on a 64-bit SQL Server connects to a database hosted on a 32-bit SQL Server to extract data. In this situation, the 64-bit SQL Server is considered the client, whereas the 32-bit SQL Server is considered the server. Similarly, if you are trying to connect to a 64-bit SQL Server using SQL Server Management Studio (SSMS), then SSMS is the client and the 64-bit SQL Server is the server. Since aliases are used by the client, you need to make sure the aliases you create are compatible with their respective clients.
  • When I say compatible, I mean that if your client is 32-bit, the alias needs to be created as a 32-bit alias; otherwise it needs to be created as a 64-bit alias. If you don’t know whether your client runs as 32-bit or 64-bit and you assign an incompatible alias, you may encounter an error something like: “... Source: "Microsoft SQL Server Native Client 11.0"  Hresult: 0x80004005  Description: ... Named Pipes Provider: Could not open a connection to SQL Server [53].... Extract ExceptionDetails [29] ... Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. …” The exact error message may vary depending on the specific client trying to establish a connection to the specific server.

  • So, how do you create compatible alias(es)? You can use “cliconfg.exe” to create a SQL Server alias. Just be careful: in a 64-bit environment, the default “cliconfg.exe” is 64-bit and creates aliases for 64-bit client applications, so in order to create an alias for a 32-bit client like SSMS, you need to use “%windir%\SysWOW64\cliconfg.exe”. Apart from “cliconfg.exe”, you can also use SQL Server Configuration Manager to create an alias. Unlike “cliconfg.exe”, SQL Server Configuration Manager provides clearly visible and distinct areas for defining 32-bit vs 64-bit aliases. See this screenshot (Scr_1.0).

Scr_1.0

Note: If you are not sure whether your client is 32-bit or 64-bit, you can always create the same alias (with the same name) for both 64-bit and 32-bit clients.
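A quick way to verify that an alias actually resolves and connects is to open a connection through it with a command-line client such as sqlcmd (this only exercises whichever bitness the sqlcmd client itself runs as; the alias name below is just a placeholder):

sqlcmd -S MyAliasName -E -Q "SELECT @@SERVERNAME, @@VERSION"

If the alias is missing or not visible to that client, you'll get a connection error similar to the ones described above.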

Issue with DNS  

Just like an alias, a DNS (Domain Name System) name adds great value in making connections portable: IPs can change and host names can change, but the DNS name may remain the same, letting the application keep running through the change. You can use a DNS name in the server name field while creating an alias. However, a DNS name may not work if you are trying to connect to a local SQL Server using that DNS name locally. You may get an error (seen from SSMS) saying, “Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)". See screenshot Scr_2.0.
Scr_2.0

For details and suggestions on this issue, please see point #5 of Dan Benediktson’s blog.


Quick Data Verification Tool

Recently, I had to put together a tool to verify data (in Microsoft SQL Server database tables) in no time. Even though the time was very short, I had a few requirements to follow as guidance while creating the tool. Those requirements were:
  1. Flexibility - so that the tool is usable for more than one project with little or no modification.
  2. Lightweight - no use of third-party scripting frameworks like Ant/NAnt and no use of programming languages like Java, C#, etc.
Since the tool was intended specifically for Microsoft SQL Server on Windows environments, I decided to use batch scripting, SQLCMD (http://technet.microsoft.com/en-us/library/ms162773.aspx), and SQL scripting.

In order to make it flexible and not hard-code anything (like server name, database name, table name, etc.), so that the same SQL script could be used from one project to another for similar types of verification with little or no modification, I decided to externalize these attributes in a ';'-delimited .csv file. Each line of my .csv file followed this format:
<db-connection-info>;<db-name>.<table-name>;[<verification-criteria-expected-row-count>];[<other-criteria-1>];[<other-criteria-2>];[<column-name-1>];[<column-name-2>];[…]

Now, I had to put together a batch script that could read each line of the .csv file as a row, split it into columns, and pass those values as corresponding variables to SQLCMD, which in turn passes them to the SQL script during execution.
I used a 'for' loop in the batch script to do this. The 'for' loop lets you choose the delimiter character, skip line(s) at the top of the file, and pick which columns (tokens) to read. Here is a fragment of the batch script:
for /F "tokens=1-7 skip=1 delims=;" %%c in (%dataFile%) do (
  
sqlcmd -S %%c -i %iSqlFiles% -v tableName="%%d" -v xRowCnt=%%e  -v col1-cond1=%%f -v col2-cond1=%%g -v col1Name="%%h" -v col2Name="%%i"  >> %oLogFile%
)

In this example, it skips the first line of the .csv file (passed as variable %dataFile%), reads columns (tokens) 1 to 7 from each ';'-delimited line, and assigns them to variables starting from the letter 'c' (%%c, %%d, and so on). It then executes the SQLCMD command with the given options and the provided SQL file(s) '%iSqlFiles%'.

One catch here is the output file. Even though SQLCMD has a -o <output-file> option, I did not use it, because this option does not allow appending to an existing file. If you have multiple sqlcmd commands executing in a loop, they either overwrite the file on each execution or create multiple files, provided the output file name is unique for each execution. The easy way out is just to append using the stream redirection symbol '>>'.


The final part was to create the reusable (template-like) SQL script file. The fragment below shows how variables passed from the SQLCMD command can be used in the SQL script without hard-coding the database name, table name, column names, etc.

SELECT @CurrRowCount=count (*) FROM $(tableName) where $(tableName).$(col1Name) NOT LIKE '$(col1-cond1)' and $(tableName).$(col2Name) NOT LIKE '$(col2-cond1)';


I guess the idea is clear. Below are the complete files, which you may be able to use:
  • .csv file: testcases.csv
#<db-connection-info>;<db-name>.<table-name>;[<verification-criteria-expected-row-count>];[<other-criteria-1>];[<other-criteria-2>];[<column-name-1>];[<column-name-2>];[…]
tcp:mysqlserver1\mysqlinstance1,20234;mydatabase1.mytable1;25;col-cond1;col-cond-2;mycolumn1;mycolumn2
.;mydatabase2.mytable1;0;col-cond1;col-cond-2;mycolumn1;mycolumn2
mysqlserver1;mydatabase3.mytable1;10;col-cond1;col-cond-2;mycolumn1;mycolumn2
...


  • Batch script file: DataVerification.cmd

SETLOCAL
@echo off
REM ==============================================
REM DataVerification.cmd - executes a given SQL script containing the verification logic.
REM ==============================================
cls
set currDir=%~dp0
set dataFile=%currDir%\testcases.csv
REM SQLCMD can read (if required) multiple input sql files delimited by comma.
set iSqlFiles=%currDir%\RowCnt.sql
set oLogFile=%currDir%\DVerification.log
echo ============================================== > %oLogFile%
echo ============= Verification Details =========== >> %oLogFile%
echo ============================================== >> %oLogFile%
REM The following logic within the for loop can be customized per project requirements.
REM Currently it skips the first line in the .csv file and reads each ';'-delimited line as a row of columns (tokens).
for /F "tokens=1-7 skip=1 delims=;" %%c in (%dataFile%) do (
   echo ============================================== >> %oLogFile%
   echo Processing Database:Table: %%d >> %oLogFile%
   sqlcmd -S %%c -i %iSqlFiles% -v tableName="%%d" -v xRowCnt=%%e  -v col1-cond1=%%f -v col2-cond1=%%g -v col1Name="%%h" -v col2Name="%%i"  >> %oLogFile%
   echo ============================================== >> %oLogFile%
)
find "verification: failed" %oLogFile% > null
if "%ERRORLEVEL%" EQU "0" (
    echo Combined Verification FAILED. Please see the %oLogFile% for details.
) else (
    echo Combined Verification PASSED. Please see the %oLogFile% for details.
)


  • SQL file: RowCnt.sql
PRINT N'Generated as of: '+RTRIM(CAST(GETDATE() AS nvarchar(30)));
Declare @CurrRowCount int;
SELECT @CurrRowCount=count(*) FROM $(tableName)
where $(tableName).$(col1Name) NOT LIKE '$(col1-cond1)' and $(tableName).$(col2Name) NOT LIKE '$(col2-cond1)';
PRINT N'Total row count: '+CAST(@CurrRowCount AS nvarchar(5))+N'.'
if (@CurrRowCount > $(xRowCnt)) PRINT N'verification: passed.';
else PRINT N'verification: failed.';


Finally, when all test cases defined in testcases.csv are executed, you'll get either "PASSED" (if all test cases pass) or "FAILED" (if even a single test case fails). You can see the details in the generated log file, similar to the following:

==============================================
============= Verification Details ===========
==============================================

==============================================
Table: mydatabase1.mytable1
Generated as of: <timestamp>
Total row count: xx.
verification: passed[failed].

==============================================
==============================================
Table: mydatabase2.mytable1
Generated as of: <timestamp>
Total row count: yy.
verification: passed[failed].
==============================================

Obviously, this is just an idea; you can develop your own SQL file(s) per your data processing requirements, modify the .csv file per your input requirements, and adjust the 'for' loop as the .csv file format changes.

ArcSight Logger Installation Issue

We were trying to install & configure ArcSight Logger on 64-bit Red Hat Enterprise Linux Server release 6.1 (Santiago) and encountered a strange issue with the following versions of Logger:
  • ArcSight-logger-5.3.0.6684.0.bin
  • ArcSight-logger-5.3.1.6838.0.bin 
Let me explain below what the issue was and what custom solution we found after a few hours/days of investigation. We had to do the investigation ourselves, as the software vendor did not provide any satisfactory solution even though we had opened a ticket with them; their answers were very generic and did not lead to any solution.



Problem Details:
While installing the ArcSight Logger in console mode (with '-i console' option):

  • Logger initialization failed while installing as “root” (we typically become root via 'sudo su -'). Error message: “The initialization of Logger software was unsuccessful. Reason: Platform initializer failed”.
  • Logger installation and initialization went okay while installing as a non-root user like “arcsight”. Please note that when installed as a non-root user, Logger cannot listen on the privileged SSL port 443 and cannot create a Linux service for the application.

Since our requirement was for Logger to be accessible on port 443, we had to install & initialize it as the root user.
Note: Even though “root” is used during installation, the ArcSight HTTP processes are owned by the non-privileged user ‘nobody’ for security reasons.

Investigation details/result:
As we found, in both cases above the installation itself was successful, but it failed during initialization. We found that it was failing while executing the script file <Logger-Install-Dir>/current/arcsight/service/functions, in the following function:
runAsOtherUser() {
        curr_user=`whoami`
        if [ -f "$CURRENT_HOME/arcsight/service/user.config" -a "X$curr_user" = "Xroot" ]; then
                source $CURRENT_HOME/arcsight/service/user.config
                su -c "$1" $CUSTOM_NONROOT_USER
        else
                eval $1
        fi
}

We got a segmentation fault at this point, as seen in <Logger-Install-Dir>/current/arcsight/logger/logs/logger_init_driver.log:
../service/functions: line 159: 23437 Segmentation fault      (core dumped) su -c "$1" $CUSTOM_NONROOT_USER

The actual reason seemed to be that the “NSS_3.12.9” version symbols could not be found, as we noticed from the error message below in the /var/log/secure log:

su: PAM unable to dlopen(/lib64/security/pam_ldap.so): /opt/arcsight/logger/current/local/nss/lib/libnss3.so: version `NSS_3.12.9' not found (required by /lib64/libldap-2.4.so.2)
su: PAM adding faulty module: /lib64/security/pam_ldap.so


We checked the installed RPM packages related to nss* on our Logger server and found that we actually had a slightly higher version (nss 3.14.3) installed. See below:

nss-3.14.3-4.el6_4.x86_64
nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
nss-softokn-3.14.3-3.el6_4.x86_64
nss-softokn-freebl-3.14.3-3.el6_4.i686
nss-softokn-freebl-3.14.3-3.el6_4.x86_64
nss-sysinit-3.14.3-4.el6_4.x86_64
nss-tools-3.14.3-4.el6_4.x86_64
nss-util-3.14.3-3.el6_4.x86_64


Since it was not failing when using a non-root user, it's obvious that the NSS library bundled with Logger (<Logger-Install-Dir>/current/local/nss/lib/libnss3.so) is only picked up while doing 'su ...' in the script. And it's also obvious that ArcSight's bundled libnss code expects (hard-coded?) the NSS 3.12.9 version of the NSS libraries on the server.
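To see the mismatch directly, you can compare the NSS version symbols that the system LDAP library references with those that the bundled libnss3.so provides. A quick (hypothetical) check, using the paths from the log messages above:

# Version symbols referenced by the system LDAP library
strings /lib64/libldap-2.4.so.2 | grep '^NSS_3'
# Version symbols provided by ArcSight's bundled libnss3.so
strings /opt/arcsight/logger/current/local/nss/lib/libnss3.so | grep '^NSS_3'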


Resolution:

  1. I guess the simple resolution of this problem would be to downgrade the nss libraries to version 3.12.9; however, in our case that was not possible because of an operational policy to never downgrade and to always maintain the latest version of libraries. Many of your organizations may have a similar policy in effect, so read below what work-around option you have until the software vendor provides a fix.
  2. Here’s how we were able to successfully install and configure ArcSight Logger as the ‘root’ user without downgrading the nss libraries. As previously discussed, the issue seemed to be an incompatibility with the nss* libraries (the software seems to be hard-coded to require the 3.12.9 version of the nss libraries) and required some application-specific script changes.
         Run the installer as root in console mode. As soon as you see the message "Begin Initialization ... The installation of Logger software was successful ... Initialization will begin after pressing [Enter]...", open another terminal/command window and remove the entry <Logger-Install-Dir>/current/local/nss/lib from the LD_LIBRARY_PATH variable in the affected files:
  •  /opt/arcsight/logger/current/arcsight/logger/bin/scripts/relative_paths_env.sh
  • /opt/arcsight/logger/current/arcsight/service/functions
  • /opt/arcsight/logger/current/arcsight/logger/bin/scripts/web.sh
Once the files are updated, save them, then hit [Enter] in the first console window and continue the initialization/configuration.
It seems that once you remove the nss/lib path entry from LD_LIBRARY_PATH, the application uses the (newer) nss libraries installed on the server instead.
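Here is a sketch of that edit as a shell snippet, assuming GNU sed and the install paths listed above (the exact LD_LIBRARY_PATH lines may differ between Logger versions, so back up the files first):

for f in /opt/arcsight/logger/current/arcsight/logger/bin/scripts/relative_paths_env.sh \
         /opt/arcsight/logger/current/arcsight/service/functions \
         /opt/arcsight/logger/current/arcsight/logger/bin/scripts/web.sh; do
    cp "$f" "$f.bak"
    # Drop the bundled nss/lib directory (and a trailing ':' if present) wherever it appears.
    sed -i 's|/opt/arcsight/logger/current/local/nss/lib:\{0,1\}||g' "$f"
done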

A quick note on subnet mask

     Like myself, a lot of you may not be everyday network guys but sometimes need to understand and work on network setup, configuration, etc., especially while dealing with infrastructure projects. I had some confusion about how subnet masks and subnets are represented in IPv4 and had to refresh my memory by going through documents and articles. I put together an easy-to-understand note and thought I'd post it, as it might be helpful for someone else too.


In this note, I am not going to explain the definition or other details of subnets or subnet masks. If any of you are interested in those details, there are tons of materials available on the web, including this Wikipedia post.
     In IPv4, the subnet mask or subnet is usually presented in two forms. The example below uses a 22-bit network mask: 22 bits represent the network part, the remaining 10 (32-22) bits represent the host part, and only the network part is masked. The number of available IP addresses is calculated from how many unmasked bits are available.

  • Classful network way: The routing prefix or subnet mask is usually specified in quad-dotted decimal representation, like
          255.255.252.0 (decimal format) 
          11111111.11111111.11111100.00000000 (binary format)
        
      Hopefully you still remember how to convert binary to decimal and vice versa; if not, you may find this tool handy: http://acc6.its.brooklyn.cuny.edu/~gurwitz/core5/nav2tool.html
      
  • Classless Inter-Domain Routing (CIDR) way: The routing prefix is presented as the network address followed by a slash character (/) followed by the bit-length of the prefix, i.e. <network address>/<bit-length-of-prefix>
          xxx.xxx.xx.0/22

          So, in the above example, 22 bits are masked and only the remaining 10 bits are available for host IP addresses (a small helper for this arithmetic is sketched after this list). Another example below:

           192.168.55.0 (IP) ; 255.255.255.0 (netmask) or 192.168.55.0/24 applies to IPs in the range of 192.168.55.0 - 192.168.55.255
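For a quick sanity check of this arithmetic, here is a small bash helper (hypothetical, not part of any tool referenced above) that derives the dotted-decimal netmask and block size from a prefix length:

#!/bin/bash
# Print the dotted-decimal netmask and block size for a given CIDR prefix length.
prefix=${1:-22}
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
printf '/%d -> %d.%d.%d.%d (%d addresses in the block)\n' "$prefix" \
    $(( (mask >> 24) & 255 )) $(( (mask >> 16) & 255 )) \
    $(( (mask >> 8) & 255 ))  $(( mask & 255 )) \
    $(( 2 ** (32 - prefix) ))

Running it with 22 prints /22 -> 255.255.252.0 (1024 addresses in the block), matching the example above; with 24 it prints /24 -> 255.255.255.0 (256 addresses).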

The University of Wisconsin maintains a very handy CIDR Conversion Table which, along with other information, gives you the CIDR prefix length, the dotted-decimal netmask, and the corresponding netmask in binary format.

 
Hope this quick note and the referenced information will be helpful.