A Few Tips on Secure vs. Non-Secure WAS Web Server Plug-in Connections

In this blog, I am focusing on just one aspect of the WebSphere Application Server (WAS) web server plug-in to application server connection: secure vs. non-secure, what has changed in recent versions of WAS, and a few tips and tricks.
By default, when the plug-in configuration file (plugin-cfg.xml) is generated, it contains configuration for both the secure (HTTPS) and non-secure (HTTP) transport channels for communication to the application server(s). Below is a fragment of plugin-cfg.xml:
...
<ServerCluster CloneSeparatorChange="false" ... >
      <Server ConnectTimeout="0" ExtendedHandshake="false" ...>
            <!-- non-secure (HTTP) transport channel -->
            <Transport Hostname="MyHost" Port="9080" Protocol="http"/>
            <!-- secure (HTTPS) transport channel -->
            <Transport Hostname="MyHost" Port="9443" Protocol="https">
                  <Property Name="keyring" Value="c:\IBM\Plugins\plugin-key.kdb"/>
                  <Property Name="stashfile" Value="c:\IBM\Plugins\plugin-key.sth"/>
            </Transport>
      </Server>
</ServerCluster>
...

When you have two transport channels defined, it often creates confusion, because people may not know which channel will actually be used, or whether they really need both channels in their configuration.
When both channels are defined (as in the example above), if the incoming traffic is secure (HTTPS), the plug-in automatically chooses the secure channel (MyHost:9443 in the example) to create the connection to the back-end application server; if the incoming traffic is non-secure (HTTP), it by default chooses the non-secure channel (MyHost:9080 in the example).

  • What about the reverse scenarios? 
  • What if only one type of back-end transport is defined/available and the opposite type of incoming traffic is encountered? 

It turns out there have been some recent changes, and there is a gotcha here.


Incoming secure (HTTPS) traffic with only a non-secure (HTTP) transport channel defined/available:

  • Version 8.5.5.0 and later:


In version 8.5.5.0 or later, the plug-in won't create any connection to the application server in this case, because it interprets the situation as a security risk, and the request fails. If plug-in trace is enabled, you'll see something like the following in plugin.log:

[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ws_common: websphereFindTransport: Nosecure transports available
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ws_common: websphereWriteRequestReadResponse: Failed to find a transport
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ESI: getResponse: failed to get response: rc = 4
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - DEBUG: ESI: esiHandleRequest: failed to get response
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - DEBUG: ESI: esiRequestUrlStackDestroy
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ws_common: websphereHandleRequest: Failed to handle request

However, you can set the plug-in custom property UseInsecure=true in the plugin-cfg.xml file; in that case the plug-in will use the non-secure (HTTP) transport channel to establish the connection to the application server despite the secure incoming request. You can add the custom property in two ways:
1. You can directly modify the plugin-cfg.xml as follows:
<Config ASDisableNagle="false" AcceptAllContent="true" AppServerPortPreference="HostHeader" ... UseInsecure="true">
...
</Config>
2. Or add this property through the WebSphere Administration Console (Servers > Web Servers > Web_server_name > Plug-in properties > Custom properties) and regenerate plugin-cfg.xml.

Once the UseInsecure=true custom property takes effect, the above-mentioned scenario creates the connection successfully. Below are some relevant lines from plugin.log (trace enabled):

[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - TRACE: ws_common: websphereFindTransport: Finding the transport for server 021313E0
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - TRACE: ws_common: websphereFindTransport: Setting the transport(case 3): OND2C00981304.cihs.ad.gov.on.ca on 9080
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereExecute: Executing the transaction with the app server reqInfo is e671f78 useExistingStream=0, client->stream=00000000
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: Getting the stream to the app server (keepalive 28)
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_transport: transportStreamDequeue: Checking for existing stream from the queue
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: calling blocking connect
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: Setting socket to non-block for ServerIOTimeout over HTTP
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: socket 3564 connected to OND2C00981304.cihs.ad.gov.on.ca:9080 timeout=900
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: lib_stream: openStream: Opening the  stream soc=3564

  • Previous versions:


By default, in previous versions of the WAS plug-in for web servers, if the plug-in received a secure (HTTPS) request but could not create a secure connection to the application server (either the secure transport channel was not defined or a secure connection could not be established), it would create a non-secure (HTTP) connection instead (if one was defined and available). If no HTTP transport was defined, then no connection would be created.
This behaviour in older versions, silently defaulting from HTTPS to HTTP when a secure connection was not available, was a real problem from a security perspective. Read more about the problem as documented here:
http://www-01.ibm.com/support/docview.wss?uid=swg1PM85452.

As mentioned above, this has been fixed in WAS version 8.5.5.0 and later: the plug-in only defaults from HTTPS to HTTP if it is explicitly configured to do so when an HTTPS connection cannot be established. There was still a logging issue in version 8.5.5 with UseInsecure="true"; the fix is available in fix pack 8.5.5.2. Read the details here: http://www-01.ibm.com/support/docview.wss?uid=swg1PM96173

Incoming non-secure (HTTP) traffic with only a secure (HTTPS) transport channel to the back end configured/available:

   In this case, there should be no issue as long as the keyring that contains the certificate of the back-end application server and the stash file are properly configured for the transport channel.

Note: To minimize confusion, if you are sure only secure (HTTPS) connections should be allowed for your implementation, you can simply comment out the non-secure transport channel configuration in plugin-cfg.xml (or vice-versa), as sketched below.
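For example, here is a sketch of an HTTPS-only Server element based on the fragment shown earlier (hostname, ports, and paths are illustrative):

<Server ConnectTimeout="0" ExtendedHandshake="false" ...>
      <!-- non-secure (HTTP) transport channel deliberately disabled:
      <Transport Hostname="MyHost" Port="9080" Protocol="http"/>
      -->
      <Transport Hostname="MyHost" Port="9443" Protocol="https">
            <Property Name="keyring" Value="c:\IBM\Plugins\plugin-key.kdb"/>
            <Property Name="stashfile" Value="c:\IBM\Plugins\plugin-key.sth"/>
      </Transport>
</Server>

Keep in mind that manual edits like this are lost whenever plugin-cfg.xml is regenerated.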

Hope these tips help to clear up any confusion you might have related to the plug-in to application server connection from a security perspective.

Read more about the Web server plug-in connections in IBM Knowledge Center here: http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/cwsv_plugin_connections.html?lang=en.

Accelerated Cryptography Using On-Chip Instructions: From a Java Application Perspective

     In this blog, I am exploring accelerated crypto processing options using on-chip instructions, looking at these options from an application (specifically Java) perspective. Let me start with one of my favourite application servers - WebSphere Application Server (WAS). One of the new features in IBM WebSphere Application Server version 8.5.0.1 lets you take advantage of the Intel Advanced Encryption Standard (AES) New Instructions (AES-NI) built in (on-chip) to the Intel Westmere and successor families of processors when dealing with AES cryptography. As per Intel, if exploited correctly, AES-NI not only boosts the performance of cryptographic operations but also has security advantages: it may help to eliminate timing- and cache-based attacks. Since AES is currently one of the most popular block ciphers, a wide range of applications can benefit from these built-in instructions. Enabling this feature for WAS is easy: define the system property com.ibm.crypto.provider.doAESInHardware and assign it the value true. You can do this under the Generic JVM arguments setting in the WebSphere Administration Console. However, here are a few prerequisites for it to work:

  • Java version: IBM SDK version 7 SR 3 or higher.
  • WAS version: 8.5.0.1 or higher.
  • JCE provider: IBM JCE provider.
       Note: The IBM PKCS11 provider does not use the Intel AES-NI instructions.
       Note: Update the JCE provider in the java.security file located under $JAVA_HOME/jre/lib/security.
  • Processor: Intel Westmere and successor family of processors.
To verify whether the underlying processor supports the AES-NI instructions, you can use the following system property to generate the appropriate JCE tracing:
com.ibm.crypto.provider.AESNITrace=true
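For example, with both properties enabled (the trace property is only needed while verifying), the Generic JVM arguments field would contain something like:

-Dcom.ibm.crypto.provider.doAESInHardware=true -Dcom.ibm.crypto.provider.AESNITrace=true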
Basically, there is no harm in setting com.ibm.crypto.provider.doAESInHardware=true: if it is set and the underlying Intel processor supports AES-NI, the IBM JCE provider attempts to use it; otherwise, it falls back to its software module for cryptographic operations. Refer to IBM Knowledge Center for more information. For details on AES-NI, refer to the article Intel® Advanced Encryption Standard (Intel® AES) Instructions Set - Rev 3.01 by Shay Gueron (Intel) at https://software.intel.com/en-us/articles/intel-advanced-encryption-standard-aes-instructions-set
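Note that application code does not change at all; the provider decides internally whether to use AES-NI. As a minimal sketch (class name and plaintext are illustrative), any standard JCE AES usage like the following would benefit transparently when run on an IBM JDK with the prerequisites above:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class AesNiSample {
    public static void main(String[] args) throws Exception {
        // Generate a 128-bit AES key; nothing AES-NI specific in the code itself.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // Standard JCE cipher; with doAESInHardware=true and a capable CPU,
        // the IBM JCE provider performs the AES rounds using AES-NI.
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ct = cipher.doFinal("sample plaintext".getBytes("UTF-8"));

        System.out.println("Encrypted " + ct.length + " bytes using provider "
                + cipher.getProvider().getName());
    }
}

On an IBM JDK, the printed provider name would typically be IBMJCE.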

     If your platform is not WAS but, let's say, WebLogic on Solaris, you're well covered there as well. Starting with version 10 update 8, Solaris supports AES-NI through the Solaris Cryptographic Framework (SCF). Java applications that use the SunPKCS11 JCE provider get AES acceleration for encryption/decryption through Intel AES-NI on Solaris. Very detailed information about Java cryptography using AES-NI on Solaris can be found in Ramesh Nagappan's web log here. If you are looking for Intel AES-NI support on Solaris in general, see Dan Anderson's blog Intel AES-NI Optimization on Solaris. Obviously, AES-NI support on Solaris is available only for 64-bit Solaris x86 running on an Intel microprocessor that supports the AES-NI instruction set. What about a similar feature on Solaris powered by Sun/Oracle T-series processors? Guess what? Sun/Oracle SPARC processors are actually the leader in supporting hardware-accelerated crypto processing. Even though all T-series chips supported some level of it, starting with the T4, crypto became part of the core instruction set, accessible via non-privileged instructions; it is now one of the basic services offered by the CPU. A very interesting blog about Java EE Application Servers, SPARC T4, Solaris Containers, and Resource Pools by Jeff Taylor can be found here. If you are interested in utilizing the SPARC on-chip crypto processor for applications hosted on WebSphere Application Server, this OTN white paper (http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/ibm-websphere-sparc-t5-2332327.pdf) gives a lot of information. Read the section Impact of SPARC T5 Hardware Encryption for Secure Traffic. Specifically, this section talks about the Oracle Solaris kernel module called Kernel SSL proxy (KSSL), which can be used for offloading operations such as SSL/TLS. KSSL processes SSL traffic via the Oracle SCF in the kernel and thus improves performance. The white paper also clearly shows the performance comparison between on-chip and software crypto modules.

      This is just a small attempt to put together a few options available for on-chip accelerated crypto processing for Java applications. Obviously, there are a number of other solutions in the market not covered here. If you are researching a suitable cryptographic solution for your next project, you may start by reviewing the Validated FIPS 140-1 and FIPS 140-2 Cryptographic Modules list maintained by the National Institute of Standards and Technology (NIST) here.
      As seen from various tests available from individual testers and from vendors, an on-chip crypto accelerator really does perform well in comparison to a software module if implemented correctly. So, if your platform supports it, consider this option for your next project and get both performance and security benefits.

Possible Issues Caused by the CA Wily Introscope APM BRTM Feature

After going live with CA APM over a weekend, on the very next Monday morning when the site opened for business, the help desk started getting a string of complaints. Either one or both of the following were reported:
1) Users experiencing spinning wheel when clicking submit on their web page.
2) Some users were losing their sessions and were prompted to re-login to their web application.

Initially, we were completely lost, as we did not know what was causing this. Oddly, not all users were experiencing the difficulties; server resource (CPU, memory) usage was normal, and performance was also normal for those users who did not experience the above-mentioned issues. After analyzing the web server access log, we were able to find a pattern based on the affected users' User-Agent information. Somehow, all the affected users were using Internet Explorer (IE) browsers: IE 9/Trident 5.0, IE 10/Trident 6.0. Even though more than 60% of our users were using Firefox (different versions), none of them reported any such issues; only IE users were affected. With this information in hand, we asked QA to test with different browsers, and they were able to reproduce the issues with IE but not with Firefox.

     The next step was to identify what was causing the issues. Again, looking into the access log, we were able to identify that the request URL coming from those users just before they were kicked out had the query parameter 'WilyCmd=cmdMetrics', like '/abc/def.jsp?WilyCmd=cmdMetrics'.
We also noticed that the number of HTTP(S) requests was considerably higher than usual: almost every request URL had a duplicate request with the above-mentioned query parameter appended. This information allowed us to pinpoint the newly installed monitoring tool's BRTM (Browser Response Time Monitor) feature, which injects JavaScript into the response so that it executes in the user's browser and makes another request with the query parameter 'WilyCmd=cmdMetrics' appended, as the primary suspect. In our case, we would normally see on average 450,000 requests/day, but with this feature in place we recorded on average 750,000 requests per day.
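If you suspect a similar pattern, a quick way to quantify the extra traffic is to count the injected requests in the web server access log (a sketch; the access log path and name depend on your web server):

grep -c "WilyCmd=cmdMetrics" access.log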
You can read more about BRTM feature here:
https://wiki.ca.com/display/APMDEVOPS98/APM+for+Browser+Response+Time+Monitor

In the above-mentioned link, you can also see a short statement about IE. It states: "... The CA BRTM JavaScript is loaded asynchronously so it does not block the loading and execution of any application JavaScript files and other components. However, Internet Explorer (6 through 9) does not wait on asynchronous loads before it generates the load event. This limitation can affect metric generation for Internet Explorer."

From our own investigation, we concluded that the BRTM feature caused the above-mentioned issues when using certain versions of the IE browser. As a work-around, we had to turn off this feature until it is fixed by the vendor. Disabling the BRTM feature is easy: assign 'false' to the 'introscope.agent.brtm.enabled' property in the 'IntroscopeAgent.profile' file and restart the application server.
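The relevant line in IntroscopeAgent.profile then looks like this:

introscope.agent.brtm.enabled=false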

Lesson learned: until the issue is fixed, make sure you test your web application with all supported browsers before enabling this feature in your production environment.

WebSphere Application Server (WAS) High Performance Extensible Logging (HPEL) - some tips and tricks

As we know, WebSphere Application Server generally supports two types of logging mechanisms - Basic and High Performance Extensible Logging (HPEL). If your application(s) log with high frequency, there is a real possibility that the system will benefit (performance-wise) if you enable HPEL. On the performance benefit of HPEL, the IBM documentation says:
"HPEL has been designed and tested to significantly outperform the existing basic log and trace facility. One result is that the application server can run with trace enabled while causing less impact to performance than tracing the same components using basic logging. Another result is that applications that frequently write to the logs might run faster with HPEL ..."
Reference: https://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/ctrb_HPELOverview.html

I have recently worked on a project to change logging from Basic to HPEL, and based on my experience, I'm listing a few tips and tricks below.


1. Enabling HPEL:

If you have just a few App Servers in your server farm, then you can do this task manually. However, if you need to enable this for dozens of App Servers, it's better to automate the task with a script. WAS InfoCenter provides some script fragments here and there, but nothing that you can take and execute directly. Below is a simple Jython script that I put together. It enables or disables HPEL, based on the passed argument, for all servers within the given WAS Cell. You can also exclude specific server(s) as needed; the script below excludes 'nodeagent' and 'dmgr'.

enableHPEL.py
import sys
global AdminConfig
global AdminControl

action = sys.argv[0]

if action == 'true':
    enableHPEL = 'true'
    enableRAS = 'false'
else:
    enableHPEL = 'false'
    enableRAS = 'true'
#endElse

# Get cell, nodes, and servers
cellName = AdminControl.getCell()
cellID = AdminConfig.getid("/Cell:" + cellName + "/")
nodes = AdminConfig.list("Node", cellID).split()

for nodeID in nodes:
    nodeName = AdminConfig.showAttribute(nodeID, "name")
    servers = AdminConfig.list("Server", nodeID).split()
    for serverID in servers:
        srvName = AdminConfig.showAttribute(serverID, "name")
        print "/Cell:" + cellName + "/Node:" + nodeName + "/Server:" + srvName
        if srvName == 'nodeagent' or srvName == 'dmgr':
            continue
        #endIf
        HPELService = AdminConfig.getid("/Cell:" + cellName + "/Node:" + nodeName + "/Server:" + srvName + "/HighPerformanceExtensibleLogging:/")
        RASLogService = AdminConfig.getid("/Cell:" + cellName + "/Node:" + nodeName + "/Server:" + srvName + "/RASLoggingService:/")
        if enableHPEL == 'true':
            print "Enabling HPELService for /Cell:" + cellName + "/Node:" + nodeName + "/Server:" + srvName
            AdminConfig.modify(HPELService, "[[enable true]]")
            AdminConfig.modify(RASLogService, "[[enable false]]")
        #endIf
        if enableRAS == 'true':
            print "Enabling RASLogService for /Cell:" + cellName + "/Node:" + nodeName + "/Server:" + srvName
            AdminConfig.modify(HPELService, "[[enable false]]")
            AdminConfig.modify(RASLogService, "[[enable true]]")
        #endIf
        print "Saving configuration for /Cell:" + cellName + "/Node:" + nodeName + "/Server:" + srvName
        AdminConfig.save()
        print "Synchronizing configuration for /Cell:" + cellName + "/Node:" + nodeName + "/Server:" + srvName
        Sync1 = AdminControl.completeObjectName('type=NodeSync,process=nodeagent,node=' + nodeName + ',*')
        synced = AdminControl.invoke(Sync1, 'sync')
        print "Synchronization completed: " + synced
    #endFor
#endFor

The above script is for the full WAS profile. For the WAS Liberty profile, you have to enable HPEL manually for one Liberty server and may copy the configuration to other Liberty servers (see the sketch below). You can find the details here: https://www-01.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/twlp_confHPEL.html?cp=SSEQTP_8.5.5
Or use the Eclipse WAS Liberty tools as outlined here: https://www-01.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/t_edit_server_config.html?cp=SSEQTP_8.5.5
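For reference, per the linked Liberty documentation, switching a Liberty server to binary (HPEL) logging comes down to one line in the server's bootstrap.properties (restart the server afterwards):

websphere.log.provider=binaryLogging-1.0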

If you need to customize the HPEL settings for the full WAS profile further (like changing the size of the log repository, the location of the repository, buffering, etc.) using wsadmin scripting, you can follow this link:
http://www-01.ibm.com/support/knowledgecenter/SSD28V_8.5.5/com.ibm.websphere.base.doc/ae/ttrb_confHPELwsadmin.html
Similar options for WAS Liberty profile can be found here:
https://www-01.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/twlp_confHPEL.html?cp=SSEQTP_8.5.5

You can execute the above-mentioned script as follows, using wsadmin.sh[bat].
Enable HPEL:
wsadmin -username <username> -password <password> -lang jython -f ~/enableHPEL.py true

Disable HPEL:
wsadmin -username <username> -password <password> -lang jython -f ~/enableHPEL.py false

Note: the provided script changes the configuration using wsadmin's AdminConfig object, which requires the App Server to be restarted for the change to take effect. You can use the AdminControl object to do the same at runtime, which takes effect immediately.


2. Usability:

One of the major concerns I found with HPEL was usability. Once HPEL is enabled, the system logs and traces are written in binary, so people can no longer just open the log files in a text editor as they used to. There is an option to also write a plain-text log, but enabling that option costs some of HPEL's performance benefit. IBM provides pretty good tools ('logViewer.sh[bat]' with the full WAS profile, and 'binaryLog' with the WAS Liberty profile) either to convert the binary logs into text format or to monitor the logs in real time, but it's not easy to convince people to change their way of doing things; it takes time. One catch, though: IBM's tools require WAS to be installed on the machine where they run. Not everybody who needs to look at logs has WAS installed, and the log repository server where you copy your binary log repositories for processing may not have WAS installed either. So the alternative is to convert logs on the App Server where they are generated (possibly not a good practice, as it uses CPU cycles of the production server) and provide text logs to whoever needs to review them. Typical conversion and monitoring invocations are sketched below.
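A sketch of typical usage on the server (the repository path is illustrative; -repositoryDir, -outLog, and -monitor are standard logViewer options):

cd WAS_INSTALLED_DIR/bin
# convert a binary repository to a plain-text log file
./logViewer.sh -repositoryDir /opt/IBM/WebSphere/profiles/AppSrv01/logs/server1 -outLog /tmp/server1.log
# or follow the repository in real time
./logViewer.sh -repositoryDir /opt/IBM/WebSphere/profiles/AppSrv01/logs/server1 -monitor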
I thought it would be really convenient to have a tool that could process these binary logs without WAS. And indeed, with a little customization, you can use IBM's tool without WAS, which I am going to explain here:

2.1. Preparing a custom 'logViewer' tool so that it can be run independently of WAS:

2.1.A) Create the following directory structure under /tmp and copy the corresponding files from your WAS installation:

/tmp/hpelviewer
    bin
        logViewer.sh (copy from WAS_INSTALLED_DIR/bin directory)
        logViewer.bat (copy from WAS_INSTALLED_DIR/bin directory)
    lib
        bootstrap.jar (copy from WAS_INSTALLED_DIR/lib directory)
        com.ibm.hpel.logging.jar (copy from WAS_INSTALLED_DIR/plugins directory)
        startup.jar (copy from WAS_INSTALLED_DIR/lib directory)
    properties
        WsHeader (copy from WAS_INSTALLED_DIR/properties directory)
        WsLevels.properties (copy from WAS_INSTALLED_DIR/properties directory)


2.1.B) Once the files are copied, open logViewer.sh from /tmp/hpelviewer/bin and change it as follows (the actual script lines to keep or add are shown; guidance is given in the # comments):
#Set JAVA_HOME at the top:
JAVA_HOME=/cygdrive/c/java/jdk1.7.0_45
#Leave the following line as it is:
binDir=`dirname "$0"`
#Comment out the following line:
#. "$binDir/setupCmdLine.sh"
#Instead, set WAS_HOME:
setWASHome()
{
    CUR_DIR=$(pwd)
    WAS_DIR="${binDir}"/..
    cd "${WAS_DIR}"
    WAS_HOME=$(pwd)
    cd "${CUR_DIR}"
}
setWASHome

#Leave the following lines as they are:
if [ -f ${JAVA_HOME}/bin/java ]; then
    JAVA_EXE="${JAVA_HOME}/bin/java"
else
    JAVA_EXE="${JAVA_HOME}/jre/bin/java"
fi


#Comment out the following lines, as we need to make them compatible with both Unix and Cygwin:
#WAS_HEADER="-Dlogviewer.custom.header=${WAS_HOME}/properties/WsHeader"
#WAS_LEVELS="-Dlogviewer.custom.levels=${WAS_HOME}/properties/WsLevels.properties"
#Instead, add the following lines:
PATH=.:"$JAVA_HOME"/bin/:"$JAVA_HOME"/jre/bin:$PATH
WAS_CLASSPATH="$WAS_HOME/properties:$WAS_HOME/lib/bootstrap.jar:$WAS_HOME/lib/com.ibm.hpel.logging.jar"
WAS_HEADER_PROP="${WAS_HOME}/properties/WsHeader"
WAS_LEVELS_PROP="${WAS_HOME}/properties/WsLevels.properties"
WAS_LOGGING="-Djava.util.logging.manager=com.ibm.ws.bootstrap.WsLogManager -Djava.util.logging.configureByServer=true"
# to be used as default log repository path
LOG_ROOT="$WAS_HOME/logs"

##
## Determine the platform and convert path accordingly
##
case $OSTYPE in
  cygwin)
    uname=CYGWIN_NT
    ;;
  *)
    uname=`uname`
esac

case $uname in
  CYGWIN_*)
    WAS_HEADER_PROP=$(cygpath -w "$WAS_HEADER_PROP")
    WAS_LEVELS_PROP=$(cygpath -w "$WAS_LEVELS_PROP")
    LOG_ROOT=$(cygpath -w "$LOG_ROOT")
    WAS_HOME=$(cygpath -w "$WAS_HOME")
    WAS_CLASSPATH=$(cygpath -pw "$WAS_CLASSPATH")
    PATH=$(cygpath -pw "$PATH")
    ;;
  *)
    ;;
esac
OSGI_INSTALL="-Dosgi.install.area=$WAS_HOME"
WAS_HEADER="-Dlogviewer.custom.header=$WAS_HEADER_PROP"
WAS_LEVELS="-Dlogviewer.custom.levels=$WAS_LEVELS_PROP"

#Finally, modify the Java execution command and options as follows:

${JAVA_EXE} \
-Dwas.install.root="$WAS_HOME" \
-Duser.install.root="$WAS_HOME" \
-Dlog.repository.root="$LOG_ROOT" \
$WAS_LOGGING \
$WAS_HEADER \
$WAS_LEVELS \
-classpath $WAS_CLASSPATH  com.ibm.ws.bootstrap.WSLauncher \
com.ibm.ws.logging.hpel.viewer.LogViewer "$@"

2.1.C) Save the changes.

Note: In the above script, 'cygpath' has been used to make it compatible with Cygwin. For more information about 'cygpath', see http://www.cygwin.com/cygwin-ug-net/cygpath.html

2.2. Preparing a custom 'binaryLog' tool (for the Liberty profile) so that it can be run independently of WAS:

2.2.A) Create the following directory structure under /tmp and copy the corresponding files from your WAS Liberty installation:
/tmp/hpelviewer
    bin
        binaryLog
        binaryLog.bat
        tools
            ws-binarylogviewer.jar
    lib
        com.ibm.ws.kernel.cmdline_1.0.10.jar
        com.ibm.ws.logging.hpel_1.0.10.jar

2.2.B) Customization of binaryLog is easier, as IBM has written it in such a way that it can run under Cygwin as well as other Unix-like platforms. Just add the following lines at the beginning of the script (JAVA_HOME is optional):
WLP_INSTALL_DIR=..
#optionally set JAVA_HOME
JAVA_HOME=/cygdrive/c/java/jdk1.8.0

2.2.C) Save the file.

Note: follow similar tricks if you need to modify the corresponding BAT files for Windows.
Once your customization is complete, just create a zip file (for example hpelviewer.zip) with hpelviewer as the root directory of the zip, and give it to your users.
They can extract the zip and run the tools as follows; the only thing they need is Java installed on their machine.
Here is an example:
cd <hpelviewer_extract_dir>/hpelviewer/bin
./logViewer.sh -repositoryDir <repositoryDir>
Or:
cd <hpelviewer_extract_dir>/hpelviewer/bin
./binaryLog view  {serverName | repositoryPath}

Note: Since the property files, jar files, and script files for the WAS full profile and the WAS Liberty profile do not overlap, you can actually create a single zip file by combining the files for both. Here is how the file structure for the combined tool would look:

hpelviewer
    bin
        logViewer.sh
        logViewer.bat
        binaryLog
        binaryLog.bat
        tools
            ws-binarylogviewer.jar
    lib
        bootstrap.jar
        com.ibm.hpel.logging.jar
        startup.jar
        com.ibm.ws.kernel.cmdline_1.0.10.jar
        com.ibm.ws.logging.hpel_1.0.10.jar
    properties
        WsHeader
        WsLevels.properties

Create a zip and give it to your users, so that they can use it to view binary logs generated by either the WAS full profile or the Liberty profile.

Happy HPEL !!!

Updating Property Files using Shell Script

Recently, one of my colleagues asked me for hints on how to read updated properties from one property file, search for the same property in one or more target property files, and update/replace the corresponding property value(s) in the target file(s) when the property key matches. He wanted to do this using just a regular shell script: no ANT, Java, or Perl. I put together a simple script for him and decided to publish it here so that other people with similar needs can benefit, as I did not find any similar posting elsewhere.


Let's say this is how the source property file looks:

--------- sFile.properties --------
# Updated property values
connection.username=mssqluser
connection.password=*********
connection.hostname=thishost.domain
connection.time.format=yyyy-MM-dd HH:mm:ss

And let's say this is how one of the target property files looks:

--------- tFile.properties --------
# Connection properties
connection.username=xyz
connection.password=abc456789
connection.hostname=localhost

connection.time.format =yyyy-MMM-dd HH:mm:ss


Here is how the shell script file looks:

---------- propertyUpdater.sh ------

#!/bin/bash
# Source property file that provides the new/updated property values.
sFile='/cygdrive/c/temp/Bscripts/sFile.properties'

# Target directory containing the property files whose value(s) need to be updated/replaced.
tDir='/cygdrive/c/temp/Bscripts/props'

# Read each line of the source file, skipping comment lines (starting with #) and empty lines.
# A while/read loop is used instead of `for` so that values containing spaces
# (like connection.time.format above) are kept intact.
grep -v -e '^#' -e '^$' "$sFile" | while IFS= read -r _line; do
    echo "Reading line: $_line from source file: $sFile"
    _key=`echo "$_line" | cut -d '=' -f1`
    _value=`echo "$_line" | cut -d '=' -f2-`
    echo "Retrieved property key: $_key with value: $_value"
    # Update every .properties file found under the target directory.
    for _file in `find "$tDir" -type f -name '*.properties'`; do
       echo "Updating target property file: $_file"
       sed -i "s/^$_key=.*/$_line/g" "$_file"
       # For those properties which have whitespace between the key and the '=' sign.
       sed -i "s/^$_key[ \t]*=.*/$_line/g" "$_file"
    done
done


--------- tFile.properties after the update -----

# Connection properties
connection.username=mssqluser
connection.password=*********
connection.hostname=thishost.domain
connection.time.format=yyyy-MM-dd HH:mm:ss

Hope it helps!

SQL Server - a Few Tips & Tricks

These days, I’ve been working on a Business Intelligence project powered by Microsoft technologies like SQL Server 2012 (both 64-bit and 32-bit versions), Reporting Services, Analysis Services, etc. In this blog, I’m going to continually post issues encountered during implementation (mostly deployment, integration, and security related), troubleshooting ideas, and possible solutions/work-arounds.

Issues with SQL Server Alias

Aliases are a great way to make connections portable, as they encapsulate connection details like host, port, protocol, etc., and provide a connection string ready to be used. However, there are a few basic things that need to be taken care of; otherwise, you’ll be troubleshooting your connection problems forever, especially in an environment where you have both 32-bit and 64-bit client applications running.
  • Aliases are used by the client (not the server) to establish the connection to the server. Confusing? Consider a situation where your SQL job, run by SQL Agent on a 64-bit SQL Server, connects to a database hosted on a 32-bit SQL Server to extract data. In this situation, the 64-bit SQL Server is considered the client, whereas the 32-bit SQL Server is considered the server. Similarly, if you are trying to connect to a 64-bit SQL Server using SQL Server Management Studio (SSMS), then SSMS is considered the client and the 64-bit SQL Server is considered the server. Since aliases are used by the client, you need to make sure the aliases you create are compatible with their respective clients.
  • When I say compatible, I mean that if your client is 32-bit, your alias needs to be created as a 32-bit alias; otherwise, it needs to be created as 64-bit. If you don’t know whether your client is running as 32-bit or 64-bit and you assign an incompatible alias, you may encounter an error something like: “... Source: "Microsoft SQL Server Native Client 11.0"  Hresult: 0x80004005  Description: ... Named Pipes Provider: Could not open a connection to SQL Server [53].... Extract ExceptionDetails [29] ... Description: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. …” The exact error message may vary depending upon the specific client trying to establish a connection to the specific server.

  • So, how do you create compatible alias(es)? You can use “cliconfg.exe” to create a SQL Server alias. Just be careful: on a 64-bit environment, the default “cliconfg.exe” is 64-bit and creates aliases for 64-bit client applications, so in order to create an alias for a 32-bit client like SSMS, you need to use “%windir%\SysWOW64\cliconfg.exe". Apart from “cliconfg.exe”, you can also use SQL Server Configuration Manager to create an alias. Unlike “cliconfg.exe”, SQL Server Configuration Manager provides clearly visible and distinct areas for defining 32-bit vs. 64-bit aliases. See the screen shot (Scr_1.0). 

[Scr_1.0: SQL Server Configuration Manager - separate 32-bit and 64-bit alias configuration nodes]

Note: If you are not sure whether your client is 32 bit or 64 bit, you can always create same alias (with same name) for both 64 bit and 32 bit clients.
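As an aside, aliases ultimately live in the Windows registry under the SQL Server client 'ConnectTo' keys, so you can quickly list what is defined for each bitness (usual key locations shown; verify them for your SQL Server version):

reg query "HKLM\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo"
reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo"

The first key holds the 64-bit aliases; the second holds the 32-bit ones.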

Issue with DNS  

Just like an alias, DNS (Domain Name System) adds great value in making connections portable: IPs can change and host names can change, but the DNS name may remain the same, letting the application keep running through the change. You can use a DNS name in the server name field while creating an alias. However, DNS may not work if you are trying to connect to a local SQL Server locally using its DNS name. You may get an error (seen from SSMS) saying, “Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)". See screen shot Scr_2.0.
[Scr_2.0: SSMS showing the error 18452 "login is from an untrusted domain" message]

For details and suggestions on this issue, please see point #5 of Dan Benediktson’s blog.


Quick Data Verification Tool

Recently, I had to put together a tool to verify data (in Microsoft SQL Server database tables) in no time. Even though the time was very short, I had a few requirements to follow while creating the tool:
  1. Flexibility - so that the tool is usable in more than one project with little or no modification.
  2. Lightweight - no use of third-party scripting frameworks like Ant/NAnt and no use of programming languages like Java, C#, etc.
Since the tool was intended specifically for Microsoft SQL Server on Windows environments, I decided to use batch scripting, SQLCMD (http://technet.microsoft.com/en-us/library/ms162773.aspx), and SQL scripting.

In order to make it flexible and avoid hard-coding anything (like server name, database name, table name, etc.), so that the same SQL script could be used from one project to another for similar types of verification with little or no modification, I decided to externalize these attributes in a ';'-delimited .csv file. Each line of my .csv file follows this format:
<db-connection-info>;<db-name>.<table-name>;[<verification-criteria-expected-row-count>];[<other-criteria-1>];[<other-criteria-2>];[<column-name-1>];[<column-name-2>];[…]

Next, I had to put together a batch script that could read each line of the .csv file as a row, split it into columns, and assign those values to corresponding variables to pass to the SQL script through SQLCMD during execution.
I used a 'for' loop in the batch script for this. 'For' allows you to choose the delimiter character, skip line(s) at the top of the file, and select which columns (tokens) to read. Here is a fragment of the batch script:
for /F "tokens=1-7 skip=1 delims=;" %%c in (%dataFile%) do (
  
sqlcmd -S %%c -i %iSqlFiles% -v tableName="%%d" -v xRowCnt=%%e  -v col1-cond1=%%f -v col2-cond1=%%g -v col1Name="%%h" -v col2Name="%%i"  >> %oLogFile%
)

In this example, it skips the first line of the .csv file (passed as variable %dataFile%), reads columns (tokens) 1 to 7 from each line delimited by ';', and assigns them to variables starting from letter 'c'. It then executes the SQLCMD command with the given options and the provided SQL file '%iSqlFiles%'.

One catch here is the output file. Even though SQLCMD has a -o <output-file> option, I did not use it here because that option does not allow appending to an existing file. If you have multiple SQLCMD commands executing in a loop, they either overwrite the file each time or create multiple files, given that the output file name is unique for each execution. The easy way out is to redirect using the stream redirection operator '>>'.


The final part was to create the re-usable (template-like) SQL script file. The fragment below shows how variables passed from the SQLCMD command can be used in the SQL script without hard-coding the database name, table name, column names, etc.

SELECT @CurrRowCount=count (*) FROM $(tableName) where $(tableName).$(col1Name) NOT LIKE '$(col1-cond1)' and $(tableName).$(col2Name) NOT LIKE '$(col2-cond1)';


I guess the idea is clear; below you can find more details of my .csv file, batch script file, and SQL script file.

Below are the complete files, which you may be able to use:
  • .csv file: testcases.csv
#<db-connection-info>;<db-name>.<table-name>;[<verification-criteria-expected-row-count>];[<other-criteria-1>];[<other-criteria-2>];[<column-name-1>];[<column-name-2>];[…]
tcp:mysqlserver1\mysqlinstance1,20234;mydatabase1.mytable1;25;col-cond1;col-cond-2;mycolumn1;mycolumn2
.;mydatabase2.mytable1;0;col-cond1;col-cond-2;mycolumn1;mycolumn2
mysqlserver1;mydatabase3.mytable1;10;col-cond1;col-cond-2;mycolumn1;mycolumn2
...


  • Batch script file: DataVerification.cmd

SETLOCAL
@echo off
REM ==============================================
REM DataVerification.cmd - executes a given SQL script containing the verification logic.
REM ==============================================
cls
set currDir=%~dp0
set dataFile=%currDir%\testcases.csv
REM SQLCMD can read (if required) multiple input sql files delimited by comma.
set iSqlFiles=%currDir%\RowCnt.sql
set oLogFile=%currDir%\DVerification.log
echo ============================================== > %oLogFile%
echo ============= Verification Details =========== >> %oLogFile%
echo ============================================== >> %oLogFile%
REM The following logic within the for loop can be customized as per project requirements.
REM Currently it skips the first line in the csv file, and reads each ';'-delimited line as a row of columns.
for /F "tokens=1-7 skip=1 delims=;" %%c in (%dataFile%) do (
   echo ============================================== >> %oLogFile%
   echo Processing Database:Table: %%d >> %oLogFile%
   sqlcmd -S %%c -i %iSqlFiles% -v tableName="%%d" -v xRowCnt=%%e  -v col1-cond1=%%f -v col2-cond1=%%g -v col1Name="%%h" -v col2Name="%%i"  >> %oLogFile%
   echo ============================================== >> %oLogFile%
)
find "verification: failed" %oLogFile% > null
if "%ERRORLEVEL%" EQU "0" (
    echo Combined Verification FAILED. Please see the %oLogFile% for details.
) else (
    echo Combined Verification PASSED. Please see the %oLogFile% for details.
)


  • SQL file: RowCnt.sql
PRINT N'Generated as of: '+RTRIM(CAST(GETDATE() AS nvarchar(30)));
Declare @CurrRowCount int;
SELECT @CurrRowCount=count(*) FROM $(tableName) where $(tableName).$(col1Name) NOT LIKE '$(col1-cond1)' and $(tableName).$(col2Name) NOT LIKE '$(col2-cond1)';
PRINT N'Total row count: '+CAST(@CurrRowCount AS nvarchar(5))+N'.'
if (@CurrRowCount > $(xRowCnt)) PRINT N'verification: passed.';
else PRINT N'verification: failed.';


Finally, when all test cases defined in testcases.csv are executed, you'll get either "Combined Verification PASSED" (if all test cases pass) or "Combined Verification FAILED" (if even a single test case fails). You can see the details in the generated log file, similar to the following ...

==============================================
============= Verification Details ===========
==============================================

==============================================
Table: mydatabase1.mytable1
Generated as of: <timestamp>
Total row count: xx.
verification: passed[failed].

==============================================
==============================================
Table: mydatabase2.mytable1
Generated as of: <timestamp>
Total row count: yy.
verification: passed[failed].
==============================================

Obviously, this is just an idea: you can develop your own SQL file(s) as per your data processing requirements, modify the .csv file as your input requires, and adjust the 'for' loop as the .csv format changes.