It's in You to Give

Today (March 27, 2018), I proudly wore my jacket with the Canadian Blood Services lapel pin on it again. And yes, I donated blood for the second time, and I hope to do so much more often going forward. Don't get me wrong, this post is not really to celebrate my donation, but to encourage others like me who are just starting to donate or thinking about it. We all need to come forward and do this noble thing, because our people, communities, and countries need blood all the time.

It does not cost anything. As Canadian Blood Services puts it in simple words, "It's in you to give." Surprisingly, blood donors get some health benefits as well; see the Donor health benefits section on Wikipedia.
Finally, voluntary blood donation is a very important concept and we need to support it. See the World Health Organization's paper entitled "Towards 100% Voluntary Blood Donation: A Global Framework for Action".
Believe me, it's not hard. If I can do it, anyone can. Just make sure you're well hydrated before sitting down for the donation. If you are in Canada, call Canadian Blood Services at 1 888 2 DONATE (1-888-236-6283) or visit their website at www.blood.ca to schedule your appointment. If you are in any other jurisdiction, contact your national blood service to donate!

Update as of July 05, 2019:
Cheers again! Donated today for the third time and felt awesome.

How to Parse Apache error_log for Troubleshooting & Reporting


Note: if you haven't already, see Log Parsing, Analysis, Correlation, and Reporting Engine post first.

The Apache error_log can be useful while troubleshooting production problems, so parsing and analyzing its content regularly helps in maintaining the overall health of the system. If mod_mpmstats is enabled, error_log also contains Multi-Processing Module (MPM) stats data, which can be used for both troubleshooting and performance tuning. See http://publib.boulder.ibm.com/httpserv/manual70/mod/mod_mpmstats.html for more details about MPM stats.
Since error_log does not contain the Web server name, in order to correlate the data to the corresponding Web server, it is advisable to put the error_log files for each Web server under a directory named after that Web server. This is especially important when you are parsing logs from multiple Web servers. The script takes the directory name as the Web server name for the purpose of reporting and analysis. For example, if you have Web servers webSrv01, webSrv02, webSrv03, etc., put the logs from each Web server under corresponding directories as shown below:

 /tmp/webSrv01
    error_log
    error_log_2017.09.05.log
    access_log
 /tmp/webSrv02
    error_log
    error_log_2017.09.05.log
    access_log

The naming suffix for historical files can differ from one environment to another, so if you have a different suffix for historical files, you can tweak the find logic. Currently, the fragment of the script that finds error_log looks like this:

find $rootcontext -name "error_log*" -type f | grep "$logFileName"
where $rootcontext is the root path.
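For instance, if your rotated files used a suffix like error_log.20170905 instead, a tweak to that fragment might look roughly like this (illustrative only, not part of the published script):

find $rootcontext -name "error_log*" -type f | grep -E "error_log(\.[0-9]{8})?$"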

Review the actual script available in github - https://github.com/pppoudel/log-parser/blob/master/webErrorLogParser.sh for details.

Note: the script is written to parse a date format like '[Thu Dec 14 08:13:08 2017]' in error_log. If your error_log uses a different date format, you may need to tweak the section of the script that parses the date.

How to execute:
You can see all the available options by just launching:
$> ./webErrorLogParser.sh

See below for a few examples:
# processing current day's logs
$> ./webErrorLogParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./webErrorLogParser.sh --rootcontext <log-path> --rpttype daily

# processing logs for a specific day
$> ./webErrorLogParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>
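# For instance, with the directory layout shown earlier under /tmp and the rotated file
# error_log_2017.09.05.log, a run for that specific day could look like this (the path and
# date are illustrative, and the rotated-file suffix still has to match what the find logic expects):
$> ./webErrorLogParser.sh --rootcontext /tmp --recorddate 2017-09-05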


Output
Report/Output files:
  • $rptDir/00_Alert.txt
  • $rptDir/03_WebErrorLogSummaryRpt.txt
  • $rptDir/WebErrorLogMpmStatsRpt_all.csv
  • $rptDir/WebErrorLogRpt_all.csv
Where $rptDir is the report directory. The default value is $TMP/$recDate.

History Report/Output files:
# These are historical reports. Each run appends records to the existing report files.
  • $pDir/RecycleHistoryRpt_all.csv
  • $pDir/MPMStatsHistoryRpt.csv
Where $pDir is the parent of $rptDir.

See sample summary report in github - https://github.com/pppoudel/log-parser/blob/master/sample_reports/03_WebErrorLogSummaryRpt.txt
And here is a sample MPM stats report https://github.com/pppoudel/log-parser/blob/master/sample_reports/WebErrorLogMpmStatsRpt_all.csv

See my other posts in this series
  1. websphereLogParser.sh for parsing, analyzing and reporting WebSphere Application Server (WAS) SystemOut.log
  2. webAccessLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) access_log
  3. javaGCStatsParser.sh for parsing, analyzing and reporting Java verbose Garbage Collection (GC) log

Log Parsing, Analysis, Correlation, and Reporting Engine

 
   In the last few months, I have been helping to identify and resolve production issues (both performance and product related). I had to analyze vast amounts of logs, identify performance degradation and deviation, and track down issues related to the Java heap and Garbage Collection (GC), as well as different issues affecting the health of WebSphere Application Server (WAS). In order to do the above-mentioned tasks efficiently, I have employed different tools (both open source and commercial). Even though these tools are readily available and usually good at what they do, they may not be as effective as we would like for our particular circumstances, and we end up writing our own custom tool or script to complement them in certain areas. Same story here: I ended up writing a custom tool (let me call it a Log Parser) for log parsing, analysis, correlation, and reporting. I'm sharing my custom Log Parser here, hoping that it may be useful for other people as well. It is written in AWK and Shell script. It processes the following logs:
  • SystemOut.log generated by IBM WebSphere Application Server (WAS)  
  • access_log and error_log generated by Apache or IBM HTTP Server (IHS)
  • native_stdout.log or verbose GC logs generated by Java Virtual Machine (JVM).
Let me shed some light on the internal functioning of the Parser visually. See Diagram 1.0 below.




Diagram 1.0

As depicted in the diagram, the Parser is made up of a set of script files (a collection of different parsers) and a wrapper script, together acting like a suite. Each parser can be executed independently or invoked by the wrapper script. The Parser is driven by the logic in the script and is controlled by the input parameters and their values (control parameters, threshold parameters, correlation parameters, and transaction baseline values). It consumes the logs and writes different reports as output.
The most interesting part here is the input. The feedback loop/mechanism shown in the diagram is there to remind the analyst to continually refine the threshold and other applicable input parameter values based on analysis of the output. This feedback mechanism makes the Parser a kind of expert engine, so it is very important to regularly update your threshold values and filter keywords and to maintain a well-established performance baseline. The Parser helps you maintain this feedback mechanism because it collects vital statistics and updates historical data files; in doing so, it is not only collecting important data but also quantifying system events. Quantification helps to compare, generate alerts, and make decisions. For example, you quantify on average how many occurrences of a particular error you get per day, per hour, per server, or per transaction, and based on that you define your threshold value. Let's say, based on a month-long observation, the number of daily transaction errors from server A fluctuates from 10 to 30 in a normal situation, so your high-water mark for the normal situation is 30. Based on this data, you can define a threshold value of 35 for that particular error for that server.
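As a rough illustration of this kind of quantification (the error string is hypothetical, and the awk field positions assume Apache's default error_log timestamp format like '[Thu Dec 14 08:13:08 2017]'), you could count daily occurrences of a particular error straight from error_log before settling on a threshold:

# count occurrences of a hypothetical error per day (printed as "count year month day")
grep "Connection refused" error_log | awk '{gsub(/\]/,"",$5); print $5, $2, $3}' | sort | uniq -c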
Here are the key benefits of using this Parser:
  1. Make troubleshooting faster and more effective with built-in intelligence from lessons learned and baseline data. The Parser identifies critical errors, their frequency and location, key performance numbers, and the current state of the environment (how many users, sessions, and transactions, and any anomalies in the system), which helps to narrow down the issue(s).
  2. Automatically collect key statistical data (performance, error, or usage) and build a data mart. The Parser collects vital statistics like performance numbers, performance ranges, hourly user/session statistics, heap snapshots, etc., and updates historical data files. These data can be used to generate history reports and to support the decision-making process.
  3. Auto-generate key summary reports for internal consumption and create delimited data files, which can be imported into a spreadsheet like Excel to prepare management reports. Basically, it can provide visibility into your entire application infrastructure.
  4. Create correlation. The Parser creates correlations so that it becomes easier to identify and map the transaction path (from the Web server to the Application server).
  5. Generate warnings for possible future incidents/events. The Parser can provide early warning of possible future events. Here is an example of a generated warning: "2.18383e+06 : average of Perm Generation After Full GC exceeds threshold 2097152 (K).  There is a possibility of OutOfMemory in near future because of Not sufficient PermGen Space for AppSrv04"

Getting started is very simple; no big-bang installation or configuration is required. If you are running in a Unix-like environment, you just download the scripts and launch the Parser from the directory where they were downloaded. If you are on Windows, you need Cygwin or the Bash shell that comes with MinGW to execute it.
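For example, on a Unix-like system with git available, getting and launching the suite could be as simple as the following (the log path /tmp/logs is just a placeholder for wherever your logs live):

$> git clone https://github.com/pppoudel/log-parser.git
$> cd log-parser
$> chmod +x *.sh
$> ./masterLogParser.sh --rootcontext /tmp/logs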

How to execute?

You can see all the available options by just executing:

$> ./masterLogParser.sh

Manadatory option '--rootcontext' or '-c' missing

-c|--rootcontext: Required. Source path from where log files are read.
-t|--rpttype: Optional. Values are: 'daily' or 'ondemand'. 'ondemand' is default value.
It is used to control logic - like whether or not to update historical data files.
Only 'daily' option creates and updates historical data files.
-d|--recorddate: Optional. It is the log entry date. Meaning log entries with that date will be processed.
It takes the format 'YYYY-MM-DD'. Default is to use current date. However, if 'daily' is chosen as 2nd argument, and log entry date is not provided, it defaults to 'date - 1 day'.
-l|--rptloc: Optional. It is report directory where all generated reports are written.
Default value is /tmp/
-o|--procoption: Optional. It represents the processing option. Values can be 'full' or 'partial'.
Default value is 'partial'. This option is currently being used only for Verbose GC log parser.

Here are a few examples:

# processing current day's logs
$> ./masterLogParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./masterLogParser.sh --rootcontext <log-path> --rpttype daily

# processing logs for a specific day
$> ./masterLogParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>
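# And an example combining the options described above (the paths are placeholders):
# a daily run that reads logs from /tmp/logs, writes reports to /tmp/reports,
# and uses the full GC processing option
$> ./masterLogParser.sh --rootcontext /tmp/logs --rpttype daily --rptloc /tmp/reports --procoption full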

See masterLogParser.sh in github: https://github.com/pppoudel/log-parser/blob/master/masterLogParser.sh

Input

1. thresholdValues.csv

As the name implies, this file contains pre-defined name and threshold value pairs for certain conditions or events. The Parser looks up these pre-defined conditions, and when it detects one in a log file, it compares it with the threshold value and triggers/writes a notification into the output file (00_Alert.txt) if the logged event exceeds the threshold. A threshold can be performance based, like 'notify if maximum response time exceeds 9 seconds', or event based, like 'notify if maximum fatal count for a JVM exceeds 5'.
Format:
Each line in thresholdValues.csv has multiple columns separated by a pipe '|' and represents the threshold definition for one complete event condition. See below:

event-name|value|server-identifier|event-description
e.g.
httpAvgRespTimeTh|2.5|http|Threshold for Average response time in sec.



Where:
event-name: the name of the event, e.g. httpAvgRespTimeTh, the HTTP average response time threshold.
value: the threshold value for this specific event; in this case, 2.5 seconds.
server-identifier: which log/server this value belongs to; in this case, the 'http' server.
event-description: a short description of what this threshold is about.
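As another illustration, an event-based entry matching the 'maximum fatal count' example above might look like the following (the event name here is made up; check the sample file on GitHub for the exact names the parser expects):

wasMaxFatalCountTh|5|was|Threshold for maximum fatal count per JVM.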

See a sample thresholdValues.csv in github: https://github.com/pppoudel/log-parser/blob/master/thresholdValues.csv

2. perfBaseLine.csv

This file contains pre-defined transactions (request URIs) and their baseline response times (in seconds). You can typically get the content for this file from your performance test results.

Format:
Each line in perfBaseLine.csv has two columns separated by a pipe '|', which represent the performance value for a given transaction (request/response). See below:

request-name|response-time (in seconds)
e.g.
finManagement/account_add.do|1.57756

Where:
request-name: represents the request/response URI or transaction name, whatever you call it; in this case, finManagement/account_add.do.
response-time: the time for the transaction to complete, in seconds; 1.57756 seconds in this case.
See a sample perfBaseLine.csv in github: https://github.com/pppoudel/log-parser/blob/master/perfBaseLine.csv

3. WASCustomFilter.txt

Currently, this input file is consumed only by the websphereLogParser. It defines custom errors/keywords; it tells the parser that you are interested in knowing whether certain keywords or strings (logged because of certain conditions) appear in a log file, which may be non-standard and specific to your environment/application.

Format:
It uses regular expressions to define custom errors/keywords. Each new error definition goes on a new line. See below:

Error.*Getting.*Folder
503.*Service.*Temporarily.*Unavailable
CORBA.*NO_RESPONSE
ORA-01013:

See a sample WASCustomFilter.txt in github: https://github.com/pppoudel/log-parser/blob/master/WASCustomFilter.txt

4. WAS_CloneIDs.csv

This file contains information that defines the relationship (mapping) between the HTTP session clone ID and the WAS name. The clone ID constitutes part of the HTTP session ID and can be logged in the Web server access_log. With this relationship in hand, we can generate helpful analytical data that helps to identify the transaction/request path end to end. The easiest way to find the clone ID for each WAS is to look at your plugin-cfg.xml file, as shown below.
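For example, the following one-liner lists the clone IDs defined in the plug-in configuration (the plugin-cfg.xml path below is a typical location and may differ in your environment); each match corresponds to one application server entry:

grep -o 'CloneID="[^"]*"' /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml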

Format:
Each line in WAS_CloneIDs.csv has three columns separated by a pipe '|'. See below:

cloneID|WAS-name|hostname
e.g.
23532em3r|AppSrv01|washost082

Where:
cloneID: part of the JSESSIONID; 23532em3r in the above example. Refer to https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/txml_httpsessionclone.html
WAS-name: the WebSphere Application Server (WAS) name; AppSrv01 in the above example.
hostname: the hostname of the machine/server where the particular WAS resides; washost082 in the above example.


See a sample WAS_CloneIDs.csv in github: https://github.com/pppoudel/log-parser/blob/master/WAS_CloneIDs.csv

Output:

Each parser updates the Alert file and the history reports (only if the report type is 'daily'), and generates a summary report and other report files. For the complete list, see the '#--------- Report/Output files -------#' and '#--------- History Report/Output files -------#' sections in each script file.

For further details on each individual parser, visit the following blog posts:
  1. websphereLogParser.sh for parsing, analyzing and reporting WebSphere Application Server (WAS) SystemOut.log
  2. webAccessLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) access_log
  3. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log
  4. javaGCStatsParser.sh for parsing, analyzing and reporting Java verbose Garbage Collection (GC) log

How to Parse WebSphere Application Server Logs for Troubleshooting & Reporting


Note: if you haven't already, see Log Parsing, Analysis, Correlation, and Reporting Engine post first.

The websphereLogParser parses the IBM WebSphere Application Server SystemOut.log and is one of the parsers included in the suite that I have posted. This particular parser script expects the SystemOut log to follow the default/basic message formats outlined in IBM's JVM log interpretation document. Since SystemOut.log does not contain the WAS server name, in order to relate the data to the corresponding WAS JVM, it is advisable to put the SystemOut logs for each WAS under a directory named after that WAS. This is especially important when you are parsing logs from multiple WAS servers. The script takes the directory name as the WAS name for the purpose of reporting. For example, if you have Application Servers appSrv01, appSrv02, appSrv03, etc., put the logs from each Application Server under corresponding directories like:

 /tmp/appSrv01
    SystemOut.log
    SystemOut_2017.09.05.log
    SystemErr.log
 /tmp/appSrv02
    SystemOut.log
    SystemOut_2017.09.05.log
    SystemErr.log

It parses both zipped and regular files. By default, it finds and processes the following files in a given path:

SystemOut.log
SystemOut.log.zip
SystemOut.zip
SystemOut_'$recYY'.'$rec0MM'.'$rec0DD'_.*
SystemOut_'$recNYY'.'$recN0MM'.'$recN0DD'_.*
Where:
recYY is the year, like 17 (17 represents the year 2017)
rec0MM is the month, like 01 (01 represents January)
rec0DD is the day, like 01 (01 represents the first day of a month)
recNYY/recN0MM/recN0DD = (recYY/rec0MM/rec0DD) + 1 day

The naming suffix for historical files can differ from one environment to another, so if you have a different suffix for historical files, you need to tweak the find logic. Currently, it looks like this:

find $rootcontext -name "SystemOut*" -type f | \
  egrep '(SystemOut.log$|SystemOut.log.zip$|SystemOut.zip$|SystemOut_'$recYY'.'$rec0MM'.'$rec0DD'_.*|SystemOut_'$recNYY'.'$recN0MM'.'$recN0DD'_.*)'
where $rootcontext is the root path.

Review the actual script available in github - https://github.com/pppoudel/log-parser/blob/master/websphereLogParser.sh for details.

Note: the script is written to parse a date format like '[4/23/17 8:13:22:137 EDT]' in SystemOut.log. If your SystemOut.log uses a different date format, you may need to tweak the section of the script that parses the date.

How to execute:

You can see all the available options by just launching:
$> ./websphereLogParser.sh

Here are a few examples:
# processing current day's logs
$> ./websphereLogParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./websphereLogParser.sh --rootcontext <log-path> --rpttype daily

# processing logs for a specific day
$> ./websphereLogParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>


Output
Report/Output files:
  • $rptDir/00_Alert.txt
  • $rptDir/01_WASLogSummaryRpt.txt
  • $rptDir/WASLogErrRpt_all.csv
  • $rptDir/WASLogFilteredErrRpt.csv
  • $rptDir/WASLogSummaryByErrCmpRpt.csv
  • $rptDir/WASLogSummaryByErrClassRpt.csv
  • $rptDir/WASLogSummaryByErrExpRpt.csv
  • $rptDir/WASLogSummaryByErrMsgRpt.csv
  • $rptDir/WASLogSummaryByWarnCmpRpt.csv
  • $rptDir/WASLogSummaryByWarnClassRpt.csv
  • $rptDir/WASLogSummaryByWarnExpRpt.csv
  • $rptDir/WASLogSummaryByWarnMsgRpt.csv
Where $rptDir is the report directory. The default value is $TMP/$recDate.

History Report/Output files:
# These are historical reports. Each run appends records to the existing report files.
  • $pDir/RecycleHistoryRpt_all.csv
  • $pDir/WASOutOfMemoryHistoryRpt.csv
  • $pDir/WASTransactionTimeOutHistoryRpt.csv
  • $pDir/WASSHungThreadHistoryRpt.csv
Where $pDir is the parent of $rptDir.

See sample summary report in github - https://github.com/pppoudel/log-parser/blob/master/sample_reports/01_WASLogSummaryRpt.txt
See my other posts in this series
  1. webAccessLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) access_log
  2. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log
  3. javaGCStatsParser.sh for parsing, analyzing and reporting Java verbose Garbage Collection (GC) log

How to Parse Java GC logs for Troubleshooting & Reporting

Note: if you haven't already, see Log Parsing, Analysis, Correlation, and Reporting Engine post first.

The Java Garbage Collection (GC) log format may depend on the Java version, Java Virtual Machine (JVM) settings, the JVM provider, etc. This particular parser has been tested with verbose GC output from WebSphere Application Server 8.5.x, configured to use IBM Java version 7.0.4.0 with the following JVM configuration:

<jvmEntries xmi:id="JavaVirtualMachine_12315382660776" verboseModeGarbageCollection="true" verboseModeJNI="false" initialHeapSize="8192" maximumHeapSize="8192" runHProf="false" hprofArguments="" debugMode="false" debugArgs="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777" genericJvmArguments="-XX:MaxPermSize=2560m -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=16 -XX:-TraceClassUnloading -XX:+UseCompressedOops -XX:+AlwaysPreTouch -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 " executableJarFileName="" disableJIT="false">

If your log format is different, you may need to tweak the parser script a little bit.

Note: this parser is not designed to replace any of your existing parsers, but rather to complement them in terms of data gathering and visualization. See the sample summary report.

Also, it may generate alert messages like the one seen below:
12 : number of Full GC exceeds threshold of 6 for AppSrv04 on 2016-11-29 Old Generation Heap space after Full GC exceeded threshold of 4700000(K) for AppSrv03. There is possibility of OutOfMemory in near future because of Not sufficient Heap space 

Other than the summary report and the alert messages written to 00_Alert.txt, it also produces GCstats_all.csv. It is a pipe ('|') delimited file, which captures all relevant data for each day and each JVM. Data written to GCstats_all.csv can be imported into a spreadsheet like Excel to create graphs, charts, and tables to present to management.
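As a quick illustration (the column positions below are hypothetical, since the actual layout is defined by the script), you could preview the first few columns of the pipe-delimited file from the report directory before importing it into Excel:

cut -d'|' -f1-4 GCstats_all.csv | head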
Review the actual script available in github - https://github.com/pppoudel/log-parser/blob/master/javaGCStatsParser.sh for details.

Note: the script is written to parse a date format like '2017-05-25T08:11:50.666-0400' in native_stdout.log. If your native_stdout.log uses a different date format, you may need to tweak the section of the script that parses the date.


How to execute:

You can see all the available options by just launching:
$> ./javaGCStatsParser.sh

Here are a few examples:
# processing current day's logs
$> ./javaGCStatsParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./javaGCStatsParser.sh --rootcontext <log-path> --rpttype daily

# processing logs for a specific day
$> ./javaGCStatsParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>
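# Additionally, since the --procoption flag of the master parser is described as applying only to
# the verbose GC parser, a full (rather than partial) processing run presumably looks like the
# following; verify against the script's own usage output:
$> ./javaGCStatsParser.sh --rootcontext <log-path> --procoption full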


Output

Report/Output files:
  • $rptDir/00_Alert.txt
  • $rptDir/04_GCSummaryRpt.txt
  • $rptDir/GCstatsRpt_all.csv
Where $rptDir is the report directory. The default value is $TMP/$recDate.

History Report/Output files:
# These are historical reports. Each run appends records to the existing report files.
  • $pDir/GCHistoryRpt_all.csv
Where $pDir is the parent of $rptDir.

See sample summary report in github - https://github.com/pppoudel/log-parser/blob/master/sample_reports/04_GCSummaryRpt.txt
See my other posts in this series
  1. websphereLogParser.sh for parsing, analyzing and reporting WebSphere Application Server (WAS) SystemOut.log
  2. webAccessLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) access_log
  3. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log

How to Parse Apache access_Log for Troubleshooting & Reporting


Note: if you haven't already, see Log Parsing, Analysis, Correlation, and Reporting Engine post first.

The access log is a great source of information (for troubleshooting, performance analysis, user trend reporting, etc.) as it records all requests processed by the Apache Web server. What information is captured in the access log is controlled using the CustomLog and LogFormat directives. Visit the Apache site (https://httpd.apache.org/docs/2.4/logs.html#accesslog) for more information about the access log.
The particular log parser discussed here is written to parse an access_log generated using the following log format:
LogFormat "%h %l %u %t \"%r\" %>s %b JSESSIONID=\"%{JSESSIONID}C\" UID=\"%{UID}C\" %D %I %O \"%{User-agent}i\" %v" common

Note: if your access_log is generated using a different LogFormat, you may need to tweak the script a little bit.

Finding log files: currently the parser finds all files named access_log in the given path if $recDate == $currDate, or files named access_log.$rec0MM$rec0DD$recYY if $recDate < $currDate (a rough sketch of this selection logic follows the definitions below).
Where:
recDate: the log entry date, meaning log entries with that date will be processed. It takes the format 'YYYY-MM-DD'. The default is the current date; however, if 'daily' is chosen as the report type and a log entry date is not provided, it defaults to 'date - 1 day'.
currDate: the current date at the time the parser runs.
rec0MM: the month, like 01 (01 represents January)
rec0DD: the day, like 01 (01 represents the first day of a month)
recYY: the year, like 17 (17 represents the year 2017)
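Put roughly into shell terms, the selection logic described above behaves like this (an illustrative sketch only, not the actual fragment from webAccessLogParser.sh):

# illustrative sketch of the file-selection logic
if [ "$recDate" == "$currDate" ]; then
   find $rootcontext -name "access_log" -type f
else
   find $rootcontext -name "access_log.$rec0MM$rec0DD$recYY" -type f
fi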

Review the actual script available in github - https://github.com/pppoudel/log-parser/blob/master/webAccessLogParser.sh for details.

Note: the script is written to parse a date format like '13/Jun/2015:10:32:04 -0400' in access_log. If your access_log uses a different date format, you may need to tweak the section of the script that parses the date.

How to execute:

You can see all the available options by just launching:
$> ./webAccessLogParser.sh

Here are a few examples:
# processing current day's logs
$> ./webAccessLogParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./webAccessLogParser.sh --rootcontext <log-path> --rpttype daily

# processing logs for a specific day
$> ./webAccessLogParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>

Output

Report/Output files:
  • $rptDir/00_Alert.txt
  • $rptDir/02_WebAccessLogSummaryRpt.txt
  • $rptDir/WebAccessLogRpt_all.csv
  • $rptDir/WebAccessLog_discardedRpt.csv
  • $rptDir/WebAccessLogSummaryByDomainRpt.csv
  • $rptDir/WebAccessLogSummaryByTransactionRpt.csv
  • $rptDir/WebAccessLogSummaryByUIDRpt.csv
  • $rptDir/WebAccessLogSummaryByRC400PlusURLRpt.csv
  • $rptDir/WebAccessLogSummaryByUidSessionRpt.csv
  • $rptDir/WebAccessLogSummaryUnknowUARpt.csv
  • $rptDir/WebHourlyDomainUsageByUid.csv
  • $rptDir/WebHourlyDomainUsageBySess.csv
  • $rptDir/WebDlyDomainUsage.csv

Where $rptDir is the report directory. The default value is $TMP/$recDate.

History Report/Output files:
# These are historical reports. Each run appends records to the existing report files.
  • $pDir/WebPerfHistoryRpt.csv
  • $pDir/WebHourlyAvgRespTimeHistoryRpt.csv
  • $pDir/WebUniqueUsersHourlyHistoryRpt_all.csv
  • $pDir/WebRequestTypeHistoryRpt.csv
  • $pDir/WebResponseCodeHistoryRpt.csv
  • $pDir/WebStatsByIHSHistoryRpt.csv
  • $pDir/WebStatsByWASHistoryRpt.csv
Where $pDir is the parent of $rptDir.

See sample summary report in github - https://github.com/pppoudel/log-parser/blob/master/sample_reports/02_WebAccessLogSummaryRpt.txt
See my other posts in this series
  1. websphereLogParser.sh for parsing, analyzing and reporting WebSphere Application Server (WAS) SystemOut.log
  2. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log
  3. javaGCStatsParser.sh for parsing, analyzing and reporting Java verbose Garbage Collection (GC) log

How to use Docker Swarm Configs service with WebSphere Liberty Profile


   In order to make your dockerized application portable, you can externalize the configuration that changes from one environment to another (from DEV to QA, UAT, Prod, etc.), so that the Docker container picks up its configuration from outside the Docker image. This helps you maintain a generic Docker image for your application and also get rid of most of the bind-mount configuration files and/or environment variables used by your container. The following Docker Swarm services are extremely useful for externalizing configuration:
  • Docker Secrets (available in Docker 1.13 and higher version)
  • Docker Configs (available in Docker 17.06 and higher version)
You can use Docker Secrets to externalize configuration that is confidential in nature, and Docker Configs for general configuration that may change from one environment to another, as sketched below.
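For instance, the confidential items and the general server.xml used later in this post would be created along these lines (the docker config create command is shown again in step 2 below; the keystore source path here is just a placeholder):

$> docker secret create keystore.jks /path/to/keystore.jks
$> docker config create dev_wlp_server_config_v1.0 /mnt/nfs/dockershared/wlpapp/server.xml_v1.0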
In this blog post, I will use a dockerized application powered by the WebSphere Application Server Liberty Profile (WLP) to show how to use the Docker Configs service to externalize server.xml. You can look at my other blog post, Using Docker Secrets with IBM WebSphere Liberty Profile Application Server, to learn how to use Docker Secrets.


So, here is the server.xml for my WLP application, used as an example in this post.

<server description="TestWLPApp">
   <featureManager>
      <feature>javaee-7.0</feature>
      <feature>localConnector-1.0</feature>
      <feature>ejbLite-3.2</feature>
      <feature>jaxrs-2.0</feature>
      <feature>jpa-2.1</feature>
      <feature>jsf-2.2</feature>
      <feature>json-1.0</feature>
      <feature>cdi-1.2</feature>
      <feature>ssl-1.0</feature>
   </featureManager>
   <include location="/run/secrets/app_enc_key.xml"/>
   <httpEndpoint host="*" httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
   <ssl clientAuthenticationSupported="true" id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore"/>
   <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <applicationMonitor updateTrigger="mbean"/>
   <dataSource id="wlpappDS" jndiName="wlpappDS">
      <jdbcDriver libraryRef="OracleDBLib"/>
      <properties.oracle password="{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==" url="jdbc:oracle:thin:@192.168.xx.xxx:1752:WLPAPPDB" user="wlpappuser"/>
   </dataSource>
   <library id="OracleDBLib">
      <fileset dir="/apps/wlpapp/shared_lib" includes="ojdbc6-11.2.0.1.0.jar"/>
   </library>
   <webApplication contextRoot="wlpappctx" id="wlpapp" location="/apps/wlpapp/war/wlptest.war" name="wlpapp"/>
</server>

As you can see in the above server.xml, the following items were created as Docker Secrets:


  • <include location="/run/secrets/app_enc_key.xml"/>

  • <keystore id="defaultKeyStore" location="/run/secrets/keystore.jks" ...

  • <keystore id="defaultTrustStore" location="/run/secrets/truststore.jks" ...


See the Create Docker Secrets paragraph of Using Docker Secrets with IBM WebSphere Liberty Profile Application Server to create these confidential configuration items.

Once confidential configuration items are created using Docker Secrets, follow the steps below to create general configuration items using Docker Configs.
  1. Connect to Docker UCP using client bundle. 
  2. Create a configuration item for server.xml using the docker config create ... command.
    Important: both the client and the daemon API must be at least at version 1.30 to use this command.

    $> docker config create dev_wlp_server_config_v1.0 /mnt/nfs/dockershared/wlpapp/server.xml_v1.0

    9i5edohyzyrvopuz988caxw4r

    Note: here dev_wlp_server_config_v1.0 is the configuration item name, and it gets its content from /mnt/nfs/dockershared/wlpapp/server.xml_v1.0. I've decided to version the configuration item so that it becomes easier to update the configuration in the future.

  3. Verify that the configuration item has been created:

     $> docker config ls

    ID                        NAME                       CREATED        UPDATED
    9i5edohyzyrvopuz988caxw4r dev_wlp_server_config_v1.0 18 seconds ago 18 seconds ago
    geuerj6t98d8eeu8nqvvxgtw9 com.docker.license-0       5 days ago     5 days ago
    vdzwhpe91iptvuiro654u3lue com.docker.ucp.config-1    5 days ago     5 days ago

  4. Use the configuration item. The example below shows how to use it in a YAML (docker-compose) file.

    docker-compose.yml
    version: "3.3"
    services:
       wlpappsrv: 

          image: 192.168.56.102/osboxes/wlptest:1.0
          networks:
             - my_hrm_network
          secrets:
             - keystore.jks
             - truststore.jks
             - app_enc_key.xml
          ports:
             - 9080
             - 9443
          configs:
             - source: dev_wlp_server_config_v1.0
               target: /opt/ibm/wlp/usr/servers/defaultServer/server.xml
               mode: 0444

           deploy:
              mode: replicated
              replicas: 4
              placement:
                 constraints: [node.role == worker]
              resources:
                 limits:
                    memory: 2048M
              restart_policy:
                 condition: on-failure
                 max_attempts: 3
                 window: 6000s
              labels:
                 - "com.docker.ucp.mesh.http.9080=external_route=http://mydockertest.com:8080,internal_port=9080"
                 - "com.docker.ucp.mesh.http.9443=external_route=sni://mydockertest.com:8443,internal_port=9443"
    networks:
       my_hrm_network:
          external:
             name: my_hrm_network
    secrets:
       keystore.jks:
          external: true
       truststore.jks:
          external: true
       app_enc_key.xml:
          external: true
    configs:
        dev_wlp_server_config_v1.0:
         external: true

    Note: if you don't want to create the configuration item in advance (step #2 above), you can also specify the configuration file in the YAML file itself: replace external: true in the above example with file: /mnt/nfs/dockershared/wlpapp/server.xml_v1.0

    If you want to use the docker service create ... command instead of a YAML file, here is how you can use the config:

    docker service create \
     --name wlpappsrv \
     --config  source=dev_wlp_server_config_v1.0,target=/opt/ibm/wlp/usr/servers/defaultServer/server.xml,mode=0444 \
     ... \
     192.168.56.102/osboxes/wlptest:1.0

  5. Validate compose file:
    $> docker-compose -f docker-compose.yml config
    WARNING: Some services (opal) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm. WARNING: Some services (opal) use the 'configs' key, which will be ignored. Compose does not support 'configs' configuration - use `docker stack deploy` to deploy to a swarm.

  6. Deploy the service as a stack:
    $> docker stack deploy --compose-file docker-compose.yml dev_WLPAPP


How to refresh/update or rotate configuration


A configuration item created by the Docker Configs service is immutable; however, there is a way to rotate the configuration. Let's say you need to update some configuration value in server.xml, for example to reference a new version of the JDBC driver. See the steps below:
  1. Create another configuration item that references the updated server.xml:
    $> docker config create dev_wlp_server_config_v2.0 \
      /mnt/nfs/var/dockershared/dev_PAL/server.xml_v2.0



    o4173tet99vuwuz1fma4dqd2j

  2. Update the service so that it references the newly created configuration item:
    $> docker service update \
       --config-rm dev_wlp_server_config_v1.0 \
       --config-add source=dev_wlp_server_config_v2.0,target=/opt/ibm/wlp/usr/servers/defaultServer/server.xml \
       wlpappsrv

  3. [optional] Once the service is fully updated, you can remove the old configuration item:
    $> docker config rm dev_wlp_server_config_v1.0

  4. [optional] If you need to see which configuration item is attached to the service, you can run the 'docker service inspect <service-name>' command.
    $>docker service inspect wlpappsrv

    ...
    "Configs": [
      {
        "File": {
          "Name": "/opt/ibm/wlp/usr/servers/defaultServer/server.xml",
          "UID": "0",
          "GID": "0",
          "Mode": 292
        },
         "ConfigID": "o4173tet99vuwuz1fma4dqd2j",
         "ConfigName": "dev_wlp_server_config_v2.0"
      }
    ]
    ...


For more information about the Docker Swarm Configs service, review the following Docker documentation: