Log Parsing, Analysis, Correlation, and Reporting Engine

 
   In the last few months, I have been helping to identify and resolve production issues (both performance and product related). I had to analyze vast amounts of logs; identify performance degradation and deviations; and troubleshoot issues related to the Java heap and Garbage Collection (GC), as well as other issues affecting the health of WebSphere Application Server (WAS). To do these tasks efficiently, I employed different tools (both open source and commercial). Even though such tools are readily available and usually good at what they do, they may not be as effective as we would like for our particular circumstances, and we end up writing our own custom tool or script to complement them in certain areas. Same story here: I ended up writing a custom tool (let me call it a Log Parser) for log parsing, analysis, correlation, and reporting. I'm sharing my custom Log Parser here, hoping that it may be useful for other people as well. It is written in AWK and shell script. It processes the following logs:
  • SystemOut.log generated by IBM WebSphere Application Server (WAS)  
  • access_log and error_log logs generated by Apache or IBM HTTP Server(IHS)
  • native_stdout.log or verbose GC logs generated by Java Virtual Machine (JVM).
Let me shed some light on the internal functioning of the Parser visually. See diagram 1.0 below.




Diagram 1.0

As depicted in the diagram, the Parser is made up of a set of script files (a collection of different parsers) and a wrapper script, together acting as a suite. Each parser can be executed independently or invoked by the wrapper script. The Parser is driven by the logic in the scripts and is controlled by the input parameters and their values (control parameters, threshold parameters, correlation parameters, and transaction baseline values). It consumes the logs and writes different reports as output.
The most interesting part here is the input. The feedback loop shown in the diagram indicates that the analyst should continually refine the threshold and other applicable input parameter values based on output analysis. This feedback mechanism makes the Parser a kind of expert engine, so it is very important to regularly update your threshold values and filter keywords, and to maintain a well-established performance baseline. The Parser helps you maintain this feedback mechanism because it collects vital statistics and updates historical data files; in doing so, it is not only collecting important data but also quantifying system events. Quantification helps you compare, generate alerts, and make decisions. For example, you quantify on average how many occurrences of a particular error you get per day, per hour, per server, or per transaction, and based on that you define your threshold value. Let's say that, based on a month of observation, the number of daily transaction errors from server A fluctuates between 10 and 30 in a normal situation, so your high-water mark for a normal situation is 30. Based on this data, you can define a threshold value of 35 for that particular error on that server.
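To make the quantification concrete, here is a minimal sketch (not part of the suite itself) of how a threshold could be derived from historical counts; the file name, column layout, and 15% buffer are assumptions for illustration:

```shell
# Hypothetical history file (date|server|dailyErrorCount); layout is illustrative only
cat > /tmp/errHistory.csv <<'EOF'
2017-08-01|AppSrv01|12
2017-08-02|AppSrv01|30
2017-08-03|AppSrv01|18
EOF

# Find the high-water mark and suggest a threshold roughly 15% above it
awk -F'|' '$3 > max { max = $3 } END { printf "high=%d suggested=%d\n", max, max * 1.15 }' /tmp/errHistory.csv
```

With the sample data, the high-water mark is 30 and the suggested threshold 34, which mirrors the 30-to-35 reasoning described above.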
What are the key benefits of using this Parser?
  1. Makes troubleshooting faster and more effective with built-in intelligence from lessons learned and baseline data. The Parser identifies critical errors, their frequency and location, key performance numbers, and the current state of the environment (how many users, sessions, and transactions, plus any anomalies in the system), which helps to narrow down the issue(s).
  2. Automatically collects key statistical data (performance, error, or usage) and builds a data mart. The Parser collects all vital statistics, such as performance numbers, performance ranges, hourly user/session statistics, and heap snapshots, and updates historical data files. These data can be used to generate history reports and also in the decision-making process.
  3. Auto-generates key summary reports for internal consumption and creates delimited data files, which can be imported into a spreadsheet application like Excel to prepare management reports. Basically, it can provide visibility into your entire application infrastructure.
  4. Creates correlations. The Parser creates correlations so that it becomes easier to identify and map the transaction path (Web server to Application server).
  5. Generates warnings for possible future incidents/events. The Parser can provide early warning of possible future events. Here is an example of a generated warning: "2.18383e+06 : average of Perm Generation After Full GC exceeds threshold 2097152 (K).  There is a possibility of OutOfMemory in near future because of Not sufficient PermGen Space for AppSrv04"

Getting started is very simple; no big-bang installation or configuration is required. If you are running in a Unix-like environment, you just download the scripts and launch the Parser from the directory where they were downloaded. If you are on Windows, you need Cygwin or the Bash shell that comes with MinGW to execute it.

How to execute?

You can see all the available options by just executing:

$> ./masterLogParser.sh

Mandatory option '--rootcontext' or '-c' missing

-c|--rootcontext: Required. Source path from where log files are read.
-t|--rpttype: Optional. Values are: 'daily' or 'ondemand'. 'ondemand' is default value.
It is used to control logic - like whether or not to update historical data files.
Only 'daily' option creates and updates historical data files.
-d|--recorddate: Optional. It is the log entry date. Meaning log entries with that date will be processed.
It takes the format 'YYYY-MM-DD'. Default is to use current date. However, if 'daily' is chosen as 2nd argument, and log entry date is not provided, it defaults to 'date - 1 day'.
-l|--rptloc: Optional. It is report directory where all generated reports are written.
Default value is /tmp/
-o|--procoption: Optional. It represents the processing option. Values can be 'full' or 'partial'.
Default value is 'partial'. This option is currently being used only for Verbose GC log parser.

Here are a few examples:

# processing current day's logs
$> ./masterLogParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./masterLogParser.sh --rootcontext <log-path> --rpttype daily

# processing a specific day's logs
$> ./masterLogParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>

See masterLogParser.sh in github: https://github.com/pppoudel/log-parser/blob/master/masterLogParser.sh

Input

1. thresholdValues.csv

As the name implies, this file contains pre-defined name and threshold value pairs for certain conditions or events. The Parser looks up these pre-defined conditions, and when it detects one in a log file, it compares the logged event with the threshold value and triggers/writes a notification into the output file (00_Alert.txt) if the event exceeds the threshold. A threshold can be performance based, like 'notify if maximum response time exceeds 9 seconds', or event based, like 'notify if maximum fatal count for a JVM exceeds 5'.
Format:
Each line in thresholdValues.csv has multiple columns separated by a pipe '|' and represents the threshold definition for one complete event condition. See below:

event-name|value|server-identifier|event-description
e.g.
httpAvgRespTimeTh|2.5|http|Threshold for Average response time in sec.



Where:
event-name: name of the event, like httpAvgRespTimeTh (HTTP Average Response Time threshold).
value: threshold value for this specific event. In this case it is 2.5 seconds.
server-identifier: which log/server this value belongs to. In this case it is the 'http' server.
event-description: provides some details about what this threshold is for.
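As a sketch of how such a definition can be consumed, the following awk one-liner (an illustration, not the suite's actual lookup code) compares an observed average against the threshold line above; the observed value 3.1 is made up:

```shell
# Recreate the sample threshold line shown above
cat > /tmp/thresholdValues.csv <<'EOF'
httpAvgRespTimeTh|2.5|http|Threshold for Average response time in sec.
EOF

# Observed average response time (example value)
avgResp=3.1

# Emit an alert-style line when the observation exceeds the threshold
awk -F'|' -v obs="$avgResp" '$1 == "httpAvgRespTimeTh" && obs > $2 {
    printf "%s : average response time exceeds threshold %s sec (http)\n", obs, $2
}' /tmp/thresholdValues.csv
```

With these values, it prints `3.1 : average response time exceeds threshold 2.5 sec (http)`.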

See a sample thresholdValues.csv in github: https://github.com/pppoudel/log-parser/blob/master/thresholdValues.csv

2. perfBaseLine.csv

This file contains pre-defined transactions (request URIs) and their baseline response times (in seconds). You can typically get the content for this file from your performance test results.

Format:
Each line in perfBaseLine.csv has two columns separated by a pipe '|', which represent the performance value for a given transaction (request/response). See below:

request-name|response-time (in seconds)
e.g.
finManagement/account_add.do|1.57756

Where:
request-name: represents the request/response URI or transaction name, whatever you call it. In this case it is finManagement/account_add.do
response-time: time for the transaction to complete, in seconds. About 1.58 seconds in this case.
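A baseline line like this lends itself to a simple deviation check. Here is an illustrative sketch (not the suite's actual code); the observed time of 2.9 s and the 50% tolerance are arbitrary choices for the example:

```shell
# Recreate the sample baseline line shown above
cat > /tmp/perfBaseLine.csv <<'EOF'
finManagement/account_add.do|1.57756
EOF

# Flag the transaction if an observed time exceeds its baseline by more than 50%
awk -F'|' -v uri="finManagement/account_add.do" -v obs=2.9 '
    $1 == uri && obs > $2 * 1.5 {
        printf "%s: %.2fs observed vs %.2fs baseline\n", $1, obs, $2
    }' /tmp/perfBaseLine.csv
```

With these values, it prints `finManagement/account_add.do: 2.90s observed vs 1.58s baseline`.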
See a sample perfBaseLine.csv in github: https://github.com/pppoudel/log-parser/blob/master/perfBaseLine.csv

3. WASCustomFilter.txt

Currently this input file is consumed only by the websphereLogParser. It defines custom error strings/keywords. It tells the parser that you are interested in knowing whether certain keywords or strings, which may be non-standard and specific to your environment/application, are logged (because of certain conditions) in a log file.

Format:
It uses regular expressions to define custom errors/keywords. Each new error definition goes on a new line. See below:

Error.*Getting.*Folder
503.*Service.*Temporarily.*Unavailable
CORBA.*NO_RESPONSE
ORA-01013:
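Filters like these are just extended regular expressions, so you can try them directly against a log with grep -E before handing them to the parser; the sample log lines below are made up for illustration:

```shell
# A subset of the filter patterns shown above
cat > /tmp/WASCustomFilter.txt <<'EOF'
CORBA.*NO_RESPONSE
ORA-01013:
EOF

# Fabricated SystemOut-style sample lines
cat > /tmp/SystemOut.sample.log <<'EOF'
[4/23/17 8:13:22:137 EDT] 0000004b SystemErr R ORA-01013: user requested cancel
[4/23/17 8:13:25:002 EDT] 0000004c Servlet    I request completed normally
EOF

# -E: extended regex, -f: read one pattern per line from the filter file
grep -Ef /tmp/WASCustomFilter.txt /tmp/SystemOut.sample.log
```

Only the ORA-01013 line matches; the normal request line is ignored.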

See a sample WASCustomFilter.txt in github: https://github.com/pppoudel/log-parser/blob/master/WASCustomFilter.txt

4. WAS_CloneIDs.csv

This file contains information that defines the relationship (mapping) between an HTTP session clone ID and a WAS name. The clone ID constitutes part of the HTTP session ID and can be logged in the Web server access_log. With this relationship in hand, we can generate helpful analytical data that helps to identify the transaction/request path end to end. The easiest way to find the clone ID for each WAS is to look at your plugin-cfg.xml file.

Format:
Each line in WAS_CloneIDs.csv has three columns separated by a pipe '|'. See below:

cloneID|WAS-name|hostname
e.g.
23532em3r|AppSrv01|washost082

Where:
cloneID: part of the JSESSIONID. 23532em3r in the above example. Refer to https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/txml_httpsessionclone.html
WAS-name: WebSphere Application Server (WAS) name. AppSrv01 in the above example.
hostname: hostname of the machine/server where the particular WAS resides. washost082 in the above example.
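In access_log entries, the clone ID appears as the suffix of the JSESSIONID value (after the ':'). A minimal sketch of extracting it and resolving it against the mapping file; the session ID fragment below is fabricated, and this is an illustration rather than the parser's actual code:

```shell
# Recreate the sample mapping line shown above
cat > /tmp/WAS_CloneIDs.csv <<'EOF'
23532em3r|AppSrv01|washost082
EOF

# Hypothetical access_log fragment; the clone ID follows ':' inside JSESSIONID
line='JSESSIONID="0000SesSionIdPart:23532em3r"'
cloneID=$(echo "$line" | sed 's/.*JSESSIONID="[^:]*:\([^"]*\)".*/\1/')

# Resolve the clone ID to a WAS name and host
awk -F'|' -v id="$cloneID" '$1 == id { print "request served by", $2, "on", $3 }' /tmp/WAS_CloneIDs.csv
```

For the fabricated line, this prints `request served by AppSrv01 on washost082`, mapping the Web server request to its back-end WAS.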


See a sample WAS_CloneIDs.csv in github: https://github.com/pppoudel/log-parser/blob/master/WAS_CloneIDs.csv

Output:

Each parser updates the alert file and the history reports (only if the report type is 'daily'), as well as generating a summary report and other report files. For the complete list, see the '#--------- Report/Output files -------#' and '#--------- History Report/Output files -------#' sections in each script file.

For further details on each individual parser, see the following blog posts:
  1. websphereLogParser.sh for parsing, analyzing and reporting WebSphere Application Server (WAS) SystemOut.log
  2. webAccessLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) access_log
  3. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log
  4. javaGCStatsParser.sh for parsing, analyzing and reporting Java verbose Garbage Collection (GC) log

How to Parse WebSphere Application Server Logs for Troubleshooting & Reporting


Note: if you haven't already, see Log Parsing, Analysis, Correlation, and Reporting Engine post first.

The websphereLogParser parses the IBM WebSphere Application Server SystemOut.log. It is one of the parsers included in the suite that I have posted. This particular parser script expects the SystemOut log to follow the default/basic message formats outlined in IBM's JVM log interpretation document. Since SystemOut.log does not contain the WAS server name, in order to relate the data to the corresponding WAS JVM, it is advisable to put the SystemOut logs for each WAS under a corresponding directory named after the WAS. This is especially important when you are parsing logs from multiple WAS servers. The script takes the directory name as the WAS name for reporting purposes. For example, if you have application servers appSrv01, appSrv02, appSrv03, etc., put the logs from each application server under corresponding directories like:

 /tmp/appSrv01
    SystemOut.log
    SystemOut_2017.09.05.log
    SystemErr.log
 /tmp/appSrv02
    SystemOut.log
    SystemOut_2017.09.05.log
    SystemErr.log

It parses both zipped and regular files. By default, it finds and processes the following files in a given path:

SystemOut.log
SystemOut.log.zip
SystemOut.zip
SystemOut_'$recYY'.'$rec0MM'.'$rec0DD'_.*
SystemOut_'$recNYY'.'$recN0MM'.'$recN0DD'_.*
Where:
recYY is the year, like 17 (17 represents the year 2017)
rec0MM is the month, like 01 (01 represents January)
rec0DD is the day, like 01 (01 represents the first day of a month)
recNYY/recN0MM/recN0DD = (recYY/rec0MM/rec0DD) + 1 day
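These date components can be derived with GNU date (available on Linux, and shipped with Cygwin and MinGW); a sketch, with 2017-09-05 as an example record date:

```shell
recDate=2017-09-05                                  # requested record date
recYY=$(date -d "$recDate" +%y)                     # 17
rec0MM=$(date -d "$recDate" +%m)                    # 09
rec0DD=$(date -d "$recDate" +%d)                    # 05
nextDate=$(date -d "$recDate + 1 day" +%Y-%m-%d)    # 2017-09-06
recNYY=$(date -d "$nextDate" +%y)
recN0MM=$(date -d "$nextDate" +%m)
recN0DD=$(date -d "$nextDate" +%d)

# The two file-name patterns the parser searches for
echo "SystemOut_${recYY}.${rec0MM}.${rec0DD}_.* and SystemOut_${recNYY}.${recN0MM}.${recN0DD}_.*"
```

This prints `SystemOut_17.09.05_.* and SystemOut_17.09.06_.*`. Note that `date -d` is GNU-specific; on other platforms (e.g. AIX, macOS) the syntax differs.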

The naming suffix for historical files can differ from one environment to another, so if you have a different suffix for your historical files, you need to tweak the find command. Currently it looks like this:

find $rootcontext -name "SystemOut*" -type f | \
  egrep '(SystemOut.log$|SystemOut.log.zip$|SystemOut.zip$|SystemOut_'$recYY'.'$rec0MM'.'$rec0DD'_.*|SystemOut_'$recNYY'.'$recN0MM'.'$recN0DD'_.*)'
where $rootcontext is root path.

Review the actual script available in github - https://github.com/pppoudel/log-parser/blob/master/websphereLogParser.sh for details.

Note: the script is written to parse date formats like '[4/23/17 8:13:22:137 EDT]' in SystemOut.log. If your SystemOut.log uses a different date format, you may need to tweak the section of the script that parses the date.
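For reference, here is a small awk sketch (not the parser's actual code) that isolates entries for one date in that bracketed format; the sample lines are fabricated:

```shell
# Fabricated SystemOut-style sample lines spanning two days
cat > /tmp/SystemOut.sample.log <<'EOF'
[4/23/17 8:13:22:137 EDT] 0000004b SystemErr R some error
[4/24/17 9:01:05:001 EDT] 0000004c Servlet    I next day entry
EOF

# Split on '[', ']' and space so the date lands in $2, then filter on it
awk -F'[][ ]' '$2 == "4/23/17"' /tmp/SystemOut.sample.log
```

Only the 4/23/17 entry is printed; the real script builds the date pattern from the requested record date instead of hard-coding it.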

How to execute:

You can see all the available options by just launching:
$> ./websphereLogParser.sh

Here are a few examples:
# processing current day's logs
$> ./websphereLogParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./websphereLogParser.sh --rootcontext <log-path> --rpttype daily

# processing a specific day's logs
$> ./websphereLogParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>


Output
Report/Output files:
  • $rptDir/00_Alert.txt
  • $rptDir/01_WASLogSummaryRpt.txt
  • $rptDir/WASLogErrRpt_all.csv
  • $rptDir/WASLogFilteredErrRpt.csv
  • $rptDir/WASLogSummaryByErrCmpRpt.csv
  • $rptDir/WASLogSummaryByErrClassRpt.csv
  • $rptDir/WASLogSummaryByErrExpRpt.csv
  • $rptDir/WASLogSummaryByErrMsgRpt.csv
  • $rptDir/WASLogSummaryByWarnCmpRpt.csv
  • $rptDir/WASLogSummaryByWarnClassRpt.csv
  • $rptDir/WASLogSummaryByWarnExpRpt.csv
  • $rptDir/WASLogSummaryByWarnMsgRpt.csv
Where $rptDir is report directory. Default value is $TMP/$recDate

History Report/Output files:
# These are historical reports. Each run appends records to the existing report file.
  • $pDir/RecycleHistoryRpt_all.csv
  • $pDir/WASOutOfMemoryHistoryRpt.csv
  • $pDir/WASTransactionTimeOutHistoryRpt.csv
  • $pDir/WASSHungThreadHistoryRpt.csv
Where $pDir is parent of $rptDir.

See sample summary report in github - https://github.com/pppoudel/log-parser/blob/master/sample_reports/01_WASLogSummaryRpt.txt
See my other posts in this series
  1. webAccessLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) access_log
  2. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log
  3. javaGCStatsParser.sh for parsing, analyzing and reporting Java verbose Garbage Collection (GC) log

How to Parse Java GC logs for Troubleshooting & Reporting

Note: if you haven't already, see Log Parsing, Analysis, Correlation, and Reporting Engine post first.

Java Garbage Collection (GC) log formats may depend on the Java version, Java Virtual Machine (JVM) settings, the JVM provider, etc. This particular parser has been tested with verbose GC output from WebSphere Application Server 8.5.x, configured to use IBM Java version 7.0.4.0 with the following JVM configuration.

<jvmEntries xmi:id="JavaVirtualMachine_12315382660776" verboseModeGarbageCollection="true" verboseModeJNI="false" initialHeapSize="8192" maximumHeapSize="8192" runHProf="false" hprofArguments="" debugMode="false" debugArgs="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7777" genericJvmArguments="-XX:MaxPermSize=2560m -XX:+PrintTenuringDistribution -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=16 -XX:-TraceClassUnloading -XX:+UseCompressedOops -XX:+AlwaysPreTouch -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 " executableJarFileName="" disableJIT="false">

If your log format is different, you may need to tweak the parser script a little bit.

Note: this parser is not designed to replace any of your existing parsers, but rather to complement them in terms of data gathering and visualization. See the sample summary report.

Also, it may generate alert messages like the one seen below:
12 : number of Full GC exceeds threshold of 6 for AppSrv04 on 2016-11-29 Old Generation Heap space after Full GC exceeded threshold of 4700000(K) for AppSrv03. There is possibility of OutOfMemory in near future because of Not sufficient Heap space 

Other than the summary report and the alert messages written to 00_Alert.txt, it also produces GCstats_all.csv, a pipe ('|') delimited file which captures all relevant data for each day for each JVM. Data written to GCstats_all.csv can be imported into a spreadsheet application like Excel to create graphs, charts, and tables to present to management.
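The alert logic itself boils down to comparing a per-JVM daily count against a threshold. A stripped-down sketch, where the sample file's column layout (date|server|fullGCCount) is an assumption made for illustration:

```shell
# Hypothetical per-day GC stats: date|server|fullGCCount
cat > /tmp/GCstats.sample.csv <<'EOF'
2016-11-29|AppSrv04|12
2016-11-29|AppSrv03|4
EOF

# Alert when the daily Full GC count exceeds the threshold
awk -F'|' -v th=6 '$3 > th {
    printf "%d : number of Full GC exceeds threshold of %d for %s on %s\n", $3, th, $2, $1
}' /tmp/GCstats.sample.csv
```

With the sample data, this prints `12 : number of Full GC exceeds threshold of 6 for AppSrv04 on 2016-11-29`, which mirrors the alert message shown above.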
Review the actual script available in github - https://github.com/pppoudel/log-parser/blob/master/javaGCStatsParser.sh for details.

Note: the script is written to parse date formats like '2017-05-25T08:11:50.666-0400' in native_stdout.log. If your native_stdout.log uses a different date format, you may need to tweak the section of the script that parses the date.


How to execute:

You can see all the available options by just launching:
$> ./javaGCStatsParser.sh

Here are a few examples:
# processing current day's logs
$> ./javaGCStatsParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./javaGCStatsParser.sh --rootcontext <log-path> --rpttype daily

# processing a specific day's logs
$> ./javaGCStatsParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>


Output

Report/Output files:
  • $rptDir/00_Alert.txt
  • $rptDir/04_GCSummaryRpt.txt
  • $rptDir/GCstatsRpt_all.csv
Where $rptDir is report directory. Default value is $TMP/$recDate

History Report/Output files:
# These are historical reports. Each run appends records to the existing report file.
  • $pDir/GCHistoryRpt_all.csv
Where $pDir is parent of $rptDir.

See sample summary report in github - https://github.com/pppoudel/log-parser/blob/master/sample_reports/04_GCSummaryRpt.txt
See my other posts in this series
  1. websphereLogParser.sh for parsing, analyzing and reporting WebSphere Application Server (WAS) SystemOut.log
  2. webAccessLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) access_log
  3. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log

How to Parse Apache access_Log for Troubleshooting & Reporting


Note: if you haven't already, see Log Parsing, Analysis, Correlation, and Reporting Engine post first.

The access log is a great source of information (for troubleshooting, performance analysis, user trend reporting, etc.), as it records all requests processed by the Apache Web server. What information is captured in the access log is controlled using the CustomLog and LogFormat directives. Visit the Apache site (https://httpd.apache.org/docs/2.4/logs.html#accesslog) for more information about the access log.
The particular log parser discussed here is written to parse an access_log generated using the following log format:
LogFormat "%h %l %u %t \"%r\" %>s %b JSESSIONID=\"%{JSESSIONID}C\" UID=\"%{UID}C\" %D %I %O \"%{User-agent}i\" %v" common

Note: if your access_log is generated using a different LogFormat, you may need to tweak the script a little bit.
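With that LogFormat, %D (the elapsed time in microseconds) lands as the first token of the seventh quote-delimited chunk of each line, so a quick average response time can be computed as below. This is an illustration, not the parser's actual code, and the sample log lines are fabricated:

```shell
# Fabricated access_log lines matching the LogFormat above
cat > /tmp/access_log.sample <<'EOF'
192.168.1.10 - - [13/Jun/2015:10:32:04 -0400] "GET /finManagement/account_add.do HTTP/1.1" 200 5120 JSESSIONID="0000Ab:23532em3r" UID="user1" 1573222 510 5300 "Mozilla/5.0" www.example.com
192.168.1.11 - - [13/Jun/2015:10:32:09 -0400] "GET /finManagement/account_add.do HTTP/1.1" 200 4800 JSESSIONID="0000Cd:23532em3r" UID="user2" 2426778 498 4990 "Mozilla/5.0" www.example.com
EOF

# Split each line on '"'; %D is the first token of the 7th chunk (microseconds)
awk -F'"' '{ split($7, a, " "); sum += a[1]; n++ }
           END { printf "avg resp time: %.2f sec over %d requests\n", sum / (n * 1000000), n }' /tmp/access_log.sample
```

With the sample data, this prints `avg resp time: 2.00 sec over 2 requests`.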

Finding log files: currently the parser finds all files named access_log in the given path if $recDate == $currDate, or files named access_log.$rec0MM$rec0DD$recYY if $recDate < $currDate.
Where:
recDate: the log entry date, meaning log entries with that date will be processed. It takes the format 'YYYY-MM-DD'. The default is the current date; however, if 'daily' is chosen as the report type and the log entry date is not provided, it defaults to 'date - 1 day'.
currDate: the current system date, used to decide whether the live access_log or a rolled-over file should be read.
rec0MM: the month, like 01 (01 represents January)
rec0DD: the day, like 01 (01 represents the first day of a month)
recYY: the year, like 17 (17 represents the year 2017)
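Put together, that file-selection logic looks roughly like the sketch below; the /tmp/ihslogs directory and file names are fabricated for the example:

```shell
# Set up a throwaway sample directory with a live and a rolled log
rootcontext=/tmp/ihslogs
mkdir -p "$rootcontext"
touch "$rootcontext/access_log" "$rootcontext/access_log.061315"

recDate=2015-06-13
currDate=$(date +%Y-%m-%d)
rec0MM=$(date -d "$recDate" +%m)
rec0DD=$(date -d "$recDate" +%d)
recYY=$(date -d "$recDate" +%y)

if [ "$recDate" = "$currDate" ]; then
    find "$rootcontext" -name "access_log" -type f                         # current day: live log
else
    find "$rootcontext" -name "access_log.$rec0MM$rec0DD$recYY" -type f    # past day: rolled log
fi
```

Since 2015-06-13 is in the past, this prints `/tmp/ihslogs/access_log.061315` (GNU date's `-d` option is assumed, as elsewhere in the suite).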

Review the actual script available in github - https://github.com/pppoudel/log-parser/blob/master/webAccessLogParser.sh for details.

Note: the script is written to parse date formats like '13/Jun/2015:10:32:04 -0400' in access_log. If your access_log uses a different date format, you may need to tweak the section of the script that parses the date.

How to execute:

You can see all the available options by just launching:
$> ./webAccessLogParser.sh

Here are a few examples:
# processing current day's logs
$> ./webAccessLogParser.sh --rootcontext <log-path>

# processing yesterday's logs with historical report updates
$> ./webAccessLogParser.sh --rootcontext <log-path> --rpttype daily

# processing a specific day's logs
$> ./webAccessLogParser.sh --rootcontext <log-path> --recorddate <date in (YYYY-MM-DD) format>

Output

Report/Output files:
  • $rptDir/00_Alert.txt
  • $rptDir/02_WebAccessLogSummaryRpt.txt
  • $rptDir/WebAccessLogRpt_all.csv
  • $rptDir/WebAccessLog_discardedRpt.csv
  • $rptDir/WebAccessLogSummaryByDomainRpt.csv
  • $rptDir/WebAccessLogSummaryByTransactionRpt.csv
  • $rptDir/WebAccessLogSummaryByUIDRpt.csv
  • $rptDir/WebAccessLogSummaryByRC400PlusURLRpt.csv
  • $rptDir/WebAccessLogSummaryByUidSessionRpt.csv
  • $rptDir/WebAccessLogSummaryUnknowUARpt.csv
  • $rptDir/WebHourlyDomainUsageByUid.csv
  • $rptDir/WebHourlyDomainUsageBySess.csv
  • $rptDir/WebDlyDomainUsage.csv

Where $rptDir is report directory. Default value is $TMP/$recDate

History Report/Output files:
# These are historical reports. Each run appends records to the existing report file.
  • $pDir/WebPerfHistoryRpt.csv
  • $pDir/WebHourlyAvgRespTimeHistoryRpt.csv
  • $pDir/WebUniqueUsersHourlyHistoryRpt_all.csv
  • $pDir/WebRequestTypeHistoryRpt.csv
  • $pDir/WebResponseCodeHistoryRpt.csv
  • $pDir/WebStatsByIHSHistoryRpt.csv
  • $pDir/WebStatsByWASHistoryRpt.csv
Where $pDir is parent of $rptDir.

See sample summary report in github - https://github.com/pppoudel/log-parser/blob/master/sample_reports/02_WebAccessLogSummaryRpt.txt
See my other posts in this series
  1. websphereLogParser.sh for parsing, analyzing and reporting WebSphere Application Server (WAS) SystemOut.log
  2. webErrorLogParser.sh for parsing, analyzing and reporting Apache/IBM HTTP Server (IHS) error_log
  3. javaGCStatsParser.sh for parsing, analyzing and reporting Java verbose Garbage Collection (GC) log

How to use Docker Swarm Configs service with WebSphere Liberty Profile


   In order to make your dockerized application portable, you can externalize the configuration that changes from one environment to another (from DEV to QA, UAT, Prod, etc.), so that the Docker container reads its configuration from outside the Docker image. This helps you maintain a generic Docker image for your application and also get rid of most of the bind-mounted configuration files and/or environment variables used by your container. The following Docker Swarm services are extremely useful for externalizing configuration:
  • Docker Secrets (available in Docker 1.13 and higher version)
  • Docker Configs (available in Docker 17.06 and higher version)
You can use Docker Secrets to externalize configuration that is confidential in nature, and Docker Configs for general configuration that may change from one environment to another.
In this blog post, I will use a dockerized application powered by the WebSphere Application Server Liberty Profile (WLP) to show how to use the Docker Configs service to externalize server.xml. See my other blog post, Using Docker Secrets with IBM WebSphere Liberty Profile Application Server, to learn how to use Docker Secrets.


So, here is the server.xml for my WLP application, used as the example in this post.

<server description="TestWLPApp">
   <featureManager>
      <feature>javaee-7.0</feature>
      <feature>localConnector-1.0</feature>
      <feature>ejbLite-3.2</feature>
      <feature>jaxrs-2.0</feature>
      <feature>jpa-2.1</feature>
      <feature>jsf-2.2</feature>
      <feature>json-1.0</feature>
      <feature>cdi-1.2</feature>
      <feature>ssl-1.0</feature>
   </featureManager>
   <include location="/run/secrets/app_enc_key.xml"/>
   <httpEndpoint host="*" httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
   <ssl clientAuthenticationSupported="true" id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore"/>
   <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <applicationMonitor updateTrigger="mbean"/>
   <dataSource id="wlpappDS" jndiName="wlpappDS">
      <jdbcDriver libraryRef="OracleDBLib"/>
      <properties.oracle password="{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==" url="jdbc:oracle:thin:@192.168.xx.xxx:1752:WLPAPPDB" user="wlpappuser"/>
   </dataSource>
   <library id="OracleDBLib">
      <fileset dir="/apps/wlpapp/shared_lib" includes="ojdbc6-11.2.0.1.0.jar"/>
   </library>
   <webApplication contextRoot="wlpappctx" id="wlpapp" location="/apps/wlpapp/war/wlptest.war" name="wlpapp"/>
</server>

As you can see in the above server.xml, the following items were created as Docker Secrets:


  • <include location="/run/secrets/app_enc_key.xml"/>

  • <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" ...

  • <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" ...


See the Create Docker Secrets section of Using Docker Secrets with IBM WebSphere Liberty Profile Application Server to create these confidential configuration items.

Once the confidential configuration items have been created using Docker Secrets, follow the steps below to create general configuration items using Docker Configs.
  1. Connect to Docker UCP using client bundle. 
  2. Create a configuration item for server.xml using the docker config create ... command.
    Important: both the client and daemon API must be at least at version 1.30 to use this command.

    $> docker config create dev_wlp_server_config_v1.0 /mnt/nfs/dockershared/wlpapp/server.xml_v1.0

    9i5edohyzyrvopuz988caxw4r

    Note: here dev_wlp_server_config_v1.0 is the configuration item name, which gets its configuration from /mnt/nfs/dockershared/wlpapp/server.xml_v1.0. I've decided to version the configuration item so that if I need to update the configuration in the future, it is easier to do.

  3. Verify that the configuration item was created:

     $> docker config ls

    ID                        NAME                       CREATED        UPDATED
    9i5edohyzyrvopuz988caxw4r dev_wlp_server_config_v1.0 18 seconds ago 18 seconds ago
    geuerj6t98d8eeu8nqvvxgtw9 com.docker.license-0       5 days ago     5 days ago
    vdzwhpe91iptvuiro654u3lue com.docker.ucp.config-1    5 days ago     5 days ago

  4. Use the configuration item. The example below uses a YAML compose file.

    docker-compose.yml
    version: "3.3"
    services:
       wlpappsrv: 

          image: 192.168.56.102/osboxes/wlptest:1.0
          networks:
             - my_hrm_network
          secrets:
             - keystore.jks
             - truststore.jks
             - app_enc_key.xml
          ports:
             - 9080
             - 9443
          configs:
             - source: dev_wlp_server_config_v1.0
               target: /opt/ibm/wlp/usr/servers/defaultServer/server.xml
               mode: 0444

          deploy:
             mode: replicated
             replicas: 4
             placement:
                constraints: [node.role == worker]
             resources:
                limits:
                   memory: 2048M
             restart_policy:
                condition: on-failure
                max_attempts: 3
                window: 6000s
             labels:
                - "com.docker.ucp.mesh.http.9080=external_route=http://mydockertest.com:8080,internal_port=9080"
                - "com.docker.ucp.mesh.http.9443=external_route=sni://mydockertest.com:8443,internal_port=9443"
    networks:
       my_hrm_network:
          external:
             name: my_hrm_network
    secrets:
       keystore.jks:
          external: true
       truststore.jks:
          external: true
       app_enc_key.xml:
          external: true
    configs:
        dev_wlp_server_config_v1.0:
         external: true

    Note: if you don't want to create the configuration item in advance (step 2 above), you can also specify the configuration file in the YAML file itself: replace external: true in the above example with file: /mnt/nfs/dockershared/wlpapp/server.xml_v1.0

    If you want to use the docker service create ... command instead of a YAML file, here is how you can specify the config:

    docker service create \
     --name wlpappsrv \
     --config  source=dev_wlp_server_config_v1.0,target=/opt/ibm/wlp/usr/servers/defaultServer/server.xml,mode=0444 \
     ... \
     192.168.56.102/osboxes/wlptest:1.0

  5. Validate compose file:
    $> docker-compose -f docker-compose.yml config
    WARNING: Some services (opal) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm. WARNING: Some services (opal) use the 'configs' key, which will be ignored. Compose does not support 'configs' configuration - use `docker stack deploy` to deploy to a swarm.

  6. Deploy the service as a stack:
    $> docker stack deploy --compose-file docker-compose.yml dev_WLPAPP


How to refresh/update or rotate configuration


A configuration item created by the Docker Configs service is immutable; however, there is a way to rotate a configuration. Let's say you need to update some configuration value in server.xml, such as referencing a new version of the JDBC driver. See the steps below:
  1. Create another configuration item that references the updated server.xml:
    $>docker config create dev_wlp_server_config_v2.0 \
      /mnt/nfs/var/dockershared/dev_PAL/server.xml_v2.0



    o4173tet99vuwuz1fma4dqd2j

  2. Update the service so that it references the newly created configuration item:
    $>docker service update \
     --config-rm dev_wlp_server_config_v1.0 \
     --config-add  source=dev_wlp_server_config_v2.0,target=/opt/ibm/wlp/usr/servers/defaultServer/server.xml
    \
     wlpappsrv

  3. [optional] Once the service is fully updated, you can remove the old configuration item:
    $> docker config rm dev_wlp_server_config_v1.0

  4. [optional] If you need to see which configuration item is attached to the service, run the 'docker service inspect <service-name>' command:
    $>docker service inspect wlpappsrv

    ...
    "Configs": [
      {
        "File": {
          "Name": "/opt/ibm/wlp/usr/servers/defaultServer/server.xml",
          "UID": "0",
          "GID": "0",
          "Mode": 292
        },
         "ConfigID": "o4173tet99vuwuz1fma4dqd2j",
         "ConfigName": "dev_wlp_server_config_v2.0"
      }
    ]
    ...
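If you only want the Configs section rather than scanning the full inspect output, you can filter it with a Go template (a sketch; `--format` is a standard `docker service inspect` flag, and the service name is the one used above):

```shell
$> docker service inspect \
   --format '{{json .Spec.TaskTemplate.ContainerSpec.Configs}}' \
   wlpappsrv
```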


For more information about the Docker Swarm configs service, review the following Docker documentation:

How to Deploy Application in WebSphere Liberty Cluster



I'm writing a series of blog posts on How to Create and Configure WebSphere Liberty Cluster End-to-End. This particular post focuses on application deployment. Listed below are all the posts in this series:
  1. How to Create and Configure WebSphere Liberty Cluster End-to-End
  2. How to Deploy Application in WebSphere Liberty Cluster
  3. How to Setup Front-End Web Server for WebSphere Liberty Cluster
In order to explain it better, I've created an example topology of a WebSphere Liberty Profile (WLP) Collective with a Collective Controller and two Collective/Cluster member servers. The example topology contains IBM HTTP Server (IHS) as a Front-End and also a Deployment/Tool server. See diagram 1.0 for details.

Diagram 1.0
Example  Topology: WLP Collective with Front-End & Deployment Server

Note: in order to complete all the steps in this blog post, you first need to complete the required steps outlined in How to Create and Configure WebSphere Liberty Cluster End-to-End

Deploying an application in WLP server or cluster

There are a few ways to deploy an application to a WLP server or cluster. An application can simply be dropped into the pre-defined dropins (${server.config.dir}/dropins) directory, or deployed by adding an application definition entry to the server configuration.
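For illustration, the second option boils down to an entry like this in server.xml (the application name and location here are hypothetical):

```xml
<application name="myApp" location="${server.config.dir}/apps/myApp.war" type="war"/>
```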
In this blog post, we will explore script driven, remote application deployment for distributed platforms (multiple WLP servers or cluster managed by WLP Collective Controller) by utilizing secure JMX REST connection. Refer to Configuring secure JMX connection to Liberty for more details.
We'll be using the manageAppOnCluster.py script and libraries available on IBM developerWorks to manage application(s) on the WLP cluster.

Download manageAppOnCluster.zip

You can download it from https://developer.ibm.com/wasdev/downloads/#asset/scripts-jython-Install_or_uninstall_an_application_to_cluster. Here, I have downloaded and extracted manageAppOnCluster.zip on the Deployment/Tool server (Machine: 01 in the example topology) under the /opt/workspace/wlpDeployment directory.

Download and Install Jython. 

You can download Jython from http://www.jython.org/downloads.html
General installation instruction can be found at https://wiki.python.org/jython/InstallationInstructions
Console-based installation is very straightforward. See below; I'm installing standalone Jython under the /opt/jython directory on Machine: 01

# Create /opt/jython directory
$> sudo mkdir -p /opt/jython

# Change the ownership if needed
$> sudo chown -R wasadmin:wasgrp /opt/jython

# Install Jython
$> java -jar jython-installer-2.7.0.jar --console

Welcome to Jython !
You are about to install Jython version 2.7.0
(at any time, answer c to cancel the installation)
For the installation process, the following languages are available: English, German
Please select your language [E/g] >>> E
Do you want to read the license agreement now ? [y/N] >>> N
Do you accept the license agreement ? [Y/n] >>> Y
The following installation types are available:
   1. All (everything, including sources)
   2. Standard (core, library modules, demos and examples, documentation)
   3. Minimum (core)
   9. Standalone (a single, executable .jar)
Please select the installation type [ 1 /2/3/9] >>> 9
Please enter the target directory >>> /opt/jython
Your java version to start Jython is: Oracle Corporation / 1.8.0_102
Your operating system version is: Linux / 3.10.0-514.el7.x86_64
Summary:
  - standalone
Please confirm copying of files to directory /opt/jython [Y/n] >>> Y
  10 %
20 %
  ...
  ...
Packing standalone jython.jar ...
  90 %
  100 %
Do you want to show the contents of README ? [y/N] >>> N
Congratulations! You successfully installed Jython 2.7.0 to directory /opt/jython.

You'll also need restConnector.jar and restConnector.py in order to be able to make a remote JMX REST connection. If you have WLP installed on the machine, you should be good, as these files are installed as part of the WLP installation. If you don't have WLP installed on the machine, or you don't want to install WLP, you can just copy these files from another machine where it is installed. The easiest way is to tar the contents of the entire ${wlp.install.dir}/clients directory, copy the tar to the Deployment/Tool server, and untar it there.
Here, I'm going to create a wlpclient.tar file on Machine: 02, as WLP is installed there, and copy it to Machine: 01.

$> cd /opt/ibm/wlp
$> tar -cvf /tmp/wlpclient.tar clients/
tar: Removing leading `/' from member names
clients/
clients/restConnector.jar
clients/jython/
clients/jython/restConnector.py
clients/jython/README

Copy the wlpclient.tar to Deployment/Tool server and extract it.

# Create directory /opt/ibm/wlp on Machine: 01 (if it doesn't exist)
$> sudo mkdir -p /opt/ibm/wlp

# Change ownership as required.
$> sudo chown -R wasadmin:wasgrp /opt/ibm/wlp

# Extract under /opt/ibm/wlp
$> tar -xvf wlpclient.tar -C /opt/ibm/wlp

In order to create a secure connection, you'll also need the signer certificate from the Collective Controller. Just copy the trust.jks file (located under the ${server.config.dir}/resources/security/ directory) from the Collective Controller server and put it on the Deployment/Tool server.
Now, for convenience, create a script file (name it wlpMgmt.sh) that looks something like this:
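One way to copy it is with scp (a sketch; the controller host name is from the example topology, and <controllerServer> is a placeholder for your controller's server name):

```shell
# Pull trust.jks from the Collective Controller to the Deployment/Tool server
$> scp wasadmin@waslibctlr01:/opt/ibm/wlp/usr/servers/<controllerServer>/resources/security/trust.jks \
     /opt/workspace/wlpDeployment/resources/security/trust.jks
```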

wlpMgmt.sh
#! /bin/bash
# wlpMgmt.sh
export JYTHONPATH=/opt/ibm/wlp/clients/jython/restConnector.py
java -cp /opt/jython/jython.jar:/opt/ibm/wlp/clients/restConnector.jar:/opt/ibm/wlp/clients/jython/:/opt/workspace/wlpDeployment/lib/ org.python.util.jython "$@"
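After saving the script, you can give it execute permission and do a quick smoke test (a sketch assuming the paths above; `-c` is standard Jython command-line syntax):

```shell
$> chmod +x wlpMgmt.sh
# If the classpath is right, Jython starts and prints the message
$> ./wlpMgmt.sh -c "print('Jython OK')"
```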


Prepare your application

For this exercise, I'm using a sample JavaHelloWorldApp.war application downloaded from https://github.com/IBM-Bluemix/java-helloworld/tree/master/target
I've put the application under the /opt/workspace/wlpDeployment/apps directory on Machine: 01.

Deploy the Application

Here, we are going to use wlpMgmt.sh (created above) to set up the environment and call Jython, which executes manageAppOnCluster.py to create a secure connection to the Liberty Controller and deploy the application remotely.


$> cd /opt/workspace/wlpDeployment


$> ./wlpMgmt.sh manageAppOnCluster.py \
   --install=/opt/workspace/wlpDeployment/apps/JavaHelloWorldApp.war \
   --truststore=/opt/workspace/wlpDeployment/resources/security/trust.jks \
   --truststorePassword=<replace_with_your_password> \
   --host=waslibclnt01 \
   --port=9443 \
   --user=wasadmin \
   --password=<replace_with_your_password> \
   --clusterName=wlpCluster

Connecting to the server...
Successfully connected to the server "waslibclnt01:9443"
Uploading application JavaHelloWorldApp.war to waslibmem01,/opt/ibm/wlp/usr,wlpSrv01
Updating server config for waslibmem01,/opt/ibm/wlp/usr,wlpSrv01
Complete
Uploading application JavaHelloWorldApp.war to waslibmem02,/opt/ibm/wlp/usr,wlpSrv02
Updating server config for waslibmem02,/opt/ibm/wlp/usr,wlpSrv02
Complete


Verify application deployed successfully

  1. Verify the application binary is copied to each member server (in this case, wlpSrv01 on Machine: 03 and wlpSrv02 on Machine: 04). By default, it is copied to the ${server.config.dir}/apps directory of each cluster member server. Check and make sure it is copied.
  2. Verify that the server.xml of each member server is updated. In my case, the following line was added to each member server's server.xml:

    <application name="JavaHelloWorldApp" location="${server.config.dir}/apps/JavaHelloWorldApp.war"/>

  3. Access the application and make sure it is available. In our case, it is accessible at:
    • Machine: 03 URL: http://waslibmem01:9081/JavaHelloWorldApp/
    • Machine: 04 URL: http://waslibmem02:9081/JavaHelloWorldApp/
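A quick way to check both member URLs in one go (a sketch using curl, with the URLs listed above; run it from any machine that can reach the members):

```shell
# Expect HTTP 200 from each member
$> for url in http://waslibmem01:9081/JavaHelloWorldApp/ \
              http://waslibmem02:9081/JavaHelloWorldApp/; do
     curl -s -o /dev/null -w "%{http_code}  $url\n" "$url"
   done
```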

Note: If for any reason you need to undeploy/uninstall the application, here is how to do it:


$> ./wlpMgmt.sh manageAppOnCluster.py \
  --uninstall=JavaHelloWorldApp.war \
  --truststore=/opt/workspace/wlpDeployment/resources/security/trust.jks \
  --truststorePassword=<replace_with_your_password> \
  --host=waslibctlr01 \
  --port=9443 \
  --user=wasadmin \
  --password=<replace_with_your_password> \
  --clusterName=wlpCluster
Connecting to the server...
Successfully connected to the server "waslibctlr01:9443"
Removing application JavaHelloWorldApp.war from waslibmem01,/opt/ibm/wlp/usr,wlpSrv01
Updating server config for waslibmem01,/opt/ibm/wlp/usr,wlpSrv01
Complete
Removing application JavaHelloWorldApp.war from waslibmem01,/opt/ibm/wlp/usr,wlpSrv02
Updating server config for waslibmem01,/opt/ibm/wlp/usr,wlpSrv02
Complete

The uninstall option removes the application binary as well as the <application> definition from server.xml.

Next: proceed to How to Setup Front-End Web Server for WebSphere Liberty Cluster.


Looks like you're really interested in WebSphere Liberty Profile, see my other related blog posts below:


How to Setup Front-End Web Server for WebSphere Liberty Cluster



I'm writing a series of blog posts on How to Create and Configure WebSphere Liberty Cluster End-to-End. This particular post focuses on setting up the Front-End Web Server. Listed below are all the posts in this series:
  1. How to Create and Configure WebSphere Liberty Cluster End-to-End
  2. How to Deploy Application in WebSphere Liberty Cluster
  3. How to Setup Front-End Web Server for WebSphere Liberty Cluster
In order to explain it better, I've created an example topology of a WebSphere Liberty Profile (WLP) Collective with a Collective Controller and two Collective/Cluster member servers. The example topology contains IBM HTTP Server (IHS) as a Front-End and also a Deployment/Tool server. See diagram 1.0 for details.

Diagram 1.0
Example  Topology: WLP Collective with Front-End & Deployment Server

Note: in order to complete all the steps in this blog post, you first need to complete the required steps in blog posts 1 and 2 of this series, as listed above.

IHS, powered by Apache with the Plug-in for WebSphere Application Server (WAS), uses pre-configured workload management (WLM) policies to dispatch web requests to the appropriate cluster members and their containers. Having a front-end web server like IHS also helps to boost security. To prevent a single point of failure at the web server level, deploy an additional (backup or active) web server.
If you'd like to compare or review other available front-end options, refer to Selecting a front end for your WebSphere Application Server topology.


Install IHS and IHS Plug-in for WAS 

Note: In the example topology above, IHS and the Plug-in are installed and configured on Machine: 05.

Providing installation instructions for IHS/Apache is out of scope for this post. This post assumes that you have an Apache/IHS instance available to configure and integrate with the WLP servers. If you need to install IHS and the WAS plug-in for IHS, refer to the following:

For IHS:
https://www.ibm.com/support/knowledgecenter/en/SS7JFU_8.5.5/com.ibm.websphere.ihs.doc/ihs/tihs_silentinstall.html.

For Plug-in:
Installing the Web Server Plug-ins using the command line

Generate WAS plugin for IHS.

Generating the WAS plug-in configuration for IHS (plugin-cfg.xml) is easy, but slightly trickier with WLP. Starting with version 16.0.0.3, the pluginUtility command ships with WLP, which makes it easy to generate plugin-cfg.xml. In fact, from version 16.0.0.3, a plugin-cfg.xml for each WLP server is generated automatically (triggered by different server events). You can see something like this in messages.log:
com.ibm.ws.webcontainer.osgi.mbeans.PluginGenerator I SRVE9103I: A configuration file for a web server plugin was automatically generated for this server at /opt/ibm/wlp/usr/servers/wlpSrv02/logs/state/plugin-cfg.xml.

See Automatic generation of the plugin-cfg.xml file in IBM Knowledge Center for more details.
With the help of pluginUtility, we can either merge individual plugin-cfg.xml files or generate a brand-new merged plugin-cfg.xml. As part of this exercise, we are going to generate a merged plugin-cfg.xml. See the command in action below:

$> cd /opt/ibm/wlp/bin
$> ./pluginUtility generate \
  --server=wasadmin:<replace_with_your_password>@waslibctlr01:9443 \
  --cluster=wlpCluster \
  --targetPath=/tmp/plugin-cfg.xml

The remote target server waslibctlr01:9443 will be used to generate the webserver plugin configuration file.

SSL trust has not been established with the target server.

Certificate chain information:
Certificate [0]
Subject DN: CN=waslibctlr01, OU=wlpCntlr, O=ibm, C=us
...
Do you want to accept the above certificate chain? (y/n) y

The generation of the webserver plugin configuration file was successful for the target collective controller.

As per the command above, it generates plugin-cfg.xml under /tmp; we will review it later.
For pluginUtility command details and all available options, refer to the pluginUtility command page in the IBM Knowledge Center.
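For instance, if you already have the automatically generated per-server files, the merge action can combine them. Below is a sketch; check the pluginUtility documentation for the exact option syntax, and note that the source file names here are hypothetical:

```shell
$> cd /opt/ibm/wlp/bin
$> ./pluginUtility merge \
   --sourcePath=/tmp/plugin-cfg-wlpSrv01.xml,/tmp/plugin-cfg-wlpSrv02.xml \
   --targetPath=/tmp/plugin-cfg.xml
```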

Note: If you are using WLP version 8.5.5.x where pluginUtility command is not available, you can call the generateClusterPluginConfig operation on the ClusterManager MBean to generate a merged plugin-cfg.xml file for all started cluster members. For working code example, refer to topic Generating a merged plug-in configuration for Liberty servers at IBM Knowledge Center.

Troubleshooting:
If your plugin-cfg.xml file generation fails with the following error:
The generation of the webserver plugin configuration file failed for the target collective controller.
Analyze the logs of the target collective controller to diagnose the problem.

And you see the following message in your Collective Controller's messages.log
... FFDC1015I: An FFDC Incident has been created: "java.net.NoRouteToHostException: No route to host (Host unreachable) com.ibm.ws.collective.repository.internal.ClusterManager 323" at ffdc_17.11.11_18.08.48.0.log
...
It is possibly caused by the firewall on the server machine(s). Make sure to open the port(s) (listed in server.xml) for communication on all member servers as well as the controller.
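For example, on RHEL/CentOS 7 with firewalld (an assumption; the el7 kernel string earlier suggests this platform), ports can be opened like so. The port numbers below are examples; use the ones from your own server.xml:

```shell
$> sudo firewall-cmd --permanent --add-port=9443/tcp
$> sudo firewall-cmd --permanent --add-port=9444/tcp
$> sudo firewall-cmd --reload
```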

Here is what our generated plugin-cfg.xml looks like:
<?xml version="1.0" encoding="UTF-8"?>
<!-- This config file was generated by plugin's merge tool v1.0.0.2 on 2017.11.13 at 16:50:31 GMT -->
<Config ASDisableNagle="false" AcceptAllContent="false" AppServerPortPreference="HostHeader" ChunkedResponse="false" FIPSEnable="false" IISDisableNagle="false" IISPluginPriority="High" IgnoreDNSFailures="false" RefreshInterval="60" ResponseChunkSize="64" SSLConsolidate="false" TrustedProxyEnable="false" VHostMatchingCompat="false">
  <Log LogLevel="Debug" Name="/opt/IBM/WebSphere/Plugins/logs/webserver1/http_plugin.log"/>
  <Property Name="ESIEnable" Value="true"/>
  <Property Name="ESIMaxCacheSize" Value="1024"/>
  <Property Name="ESIInvalidationMonitor" Value="false"/>
  <Property Name="ESIEnableToPassCookies" Value="false"/>
  <Property Name="PluginInstallRoot" Value="/opt/IBM/WebSphere/Plugins"/>
  <!-- Server Clusters -->
  <ServerCluster CloneSeparatorChange="false" GetDWLMTable="false" IgnoreAffinityRequests="true" LoadBalance="Round Robin" Name="Shared_2_Cluster_0" PostBufferSize="0" PostSizeLimit="-1" RemoveSpecialHeaders="true" RetryInterval="60" ServerIOTimeoutRetry="-1">
    <Server CloneID="8b2ad3bf-c30d-4c7e-9fab-ec856dfc20b7" ConnectTimeout="5" ExtendedHandshake="false" LoadBalanceWeight="20" MaxConnections="-1" Name="default_node_defaultServer_1" ServerIOTimeout="900" WaitForContinue="false">
      <Transport Hostname="waslibmem01" Port="9081" Protocol="http"/>
      <Transport Hostname="waslibmem01" Port="9444" Protocol="https">
        <Property Name="keyring" Value="/opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.kdb"/>
        <Property Name="stashfile" Value="/opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.sth"/>
      </Transport>
    </Server>
    <Server CloneID="9c9bb9b9-d4b9-4bf8-98ef-336e3c5c436d" ConnectTimeout="5" ExtendedHandshake="false" LoadBalanceWeight="20" MaxConnections="-1" Name="default_node_defaultServer_0" ServerIOTimeout="900" WaitForContinue="false">
      <Transport Hostname="waslibmem02" Port="9081" Protocol="http"/>
      <Transport Hostname="waslibmem02" Port="9444" Protocol="https">
        <Property Name="keyring" Value="/opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.kdb"/>
        <Property Name="stashfile" Value="/opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.sth"/>
      </Transport>
    </Server>
    <PrimaryServers>
      <Server Name="default_node_defaultServer_1"/>
      <Server Name="default_node_defaultServer_0"/>
    </PrimaryServers>
  </ServerCluster>
  <!-- Virtual Host Groups -->
  <VirtualHostGroup Name="/cell/sharedCell_2/vHostGroup/shared_host_0">
    <VirtualHost Name="*:443"/>
    <VirtualHost Name="*:80"/>
  </VirtualHostGroup>
  <!-- URI Groups -->
  <UriGroup Name="/cell/sharedCell_2/application/default_host_defaultServer_default_node_Cluster_URIs">
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/IBMJMXConnectorREST/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/ibm/api/*"/>
    <Uri AffinityCookie="JSESSIONID" AffinityURLIdentifier="jsessionid" Name="/JavaHelloWorldApp/*"/>
  </UriGroup>
  <!-- Routes -->
  <Route ServerCluster="Shared_2_Cluster_0" UriGroup="/cell/sharedCell_2/application/default_host_defaultServer_default_node_Cluster_URIs" VirtualHostGroup="/cell/sharedCell_2/vHostGroup/shared_host_0"/>
</Config>

As you can see, I've highlighted a few lines above; let's discuss those.
  1. pluginUtility does not create the keystore-related files (plugin-key.kdb and plugin-key.sth) referenced in the plugin-cfg.xml file. Later in the post, we will cover how to create these files manually.
  2. The generated plugin-cfg.xml has default values. If you need to generate plugin-cfg.xml with specific values, define a pluginConfiguration element (see below) in server.xml and regenerate the plugin-cfg.xml.

    <pluginConfiguration webserverPort="80"
      webserverSecurePort="443"
      sslKeyringLocation="path/to/sslkeyring"
      sslStashfileLocation="path/to/stashfile"
      sslCertlabel="definedbyuser"/>

    For all available configuration attributes of pluginConfiguration, see the IBM Knowledge Center


Establishing Secure communication between IHS/Plug-in and WLP servers

As you can see in the generated plugin-cfg.xml, it has both http and https transport definitions. The https transport references the plugin-key.kdb/.sth files; however, these files are not generated by pluginUtility, so we need to create them manually. Below, you'll find two ways to create them:

1) Create a key database (.kdb) file and import the signer certificate. For this purpose, you can use 'gskcmd' or 'IKEYMAN', which are installed as part of IHS. Below, I'll show how to use 'gskcmd'.

$> cd /opt/IBM/HTTPServer/bin
# creates .kdb, .sth, and .rdb files
$> ./gskcmd -keydb -create -db /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.kdb \
  -pw <replace_with_your_password> \
  -type kdb -expire 7300 -stash

# Add the signer certificate. If you are using WAS-generated keys on WAS Liberty,
# the member root certificate can be extracted from the trust.jks file under the
# ${server.config.dir}/resources/security directory.

$> ./gskcmd -cert -add -db /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.kdb \
  -pw <replace_with_your_password> \
  -file /tmp/myWASSigner.crt


# List the added certificates
$> ./gskcmd -cert -list -db /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.kdb \
   -pw <replace_with_your_password>

2) Convert a JKS file into a KDB file. The following command takes a .jks file as input and creates the plugin-key.kdb, plugin-key.rdb, and plugin-key.sth files.
If you are using WAS-generated keys, you can find the signer certificate in the trust.jks file on any member server under the ${server.config.dir}/resources/security directory.

$> ./gskcmd -keydb -convert -db /tmp/trust.jks \
  -pw <replace_with_your_password> \
  -new_format kdb \
  -target /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-key.kdb \
  -stash


IHS Certificate

In order to secure communication between the client/browser and IHS, you need either a CA-signed certificate (preferred) or a self-signed certificate for IHS. For this exercise, we are going to use a self-signed certificate. Please note that if your front-end communication is not secure, then the back-end communication (from the plug-in to WAS WLP) is also going to be non-secure (by default). If you'd like to know more about how this works, see Few Tips on Secure vs. Non-Secure WAS Web Server Plug-in Connection.
The command below shows how to create a self-signed certificate using gskcmd:

$> cd /opt/IBM/HTTPServer/bin

# run gskcmd to create kdb file to store private and public keys for IHS
$> ./gskcmd -keydb -create -db /opt/IBM/HTTPServer/ssl/ihs.kdb \
  -pw <replace_with_your_password> \
  -type kdb -expire 7300 -stash


# create self signed cert:
$> ./gskcmd -cert -create -db /opt/IBM/HTTPServer/ssl/ihs.kdb \
  -label ihs_cert -pw <replace_with_your_password> \
  -type kdb -size 2048 -expire 7300 -default_cert yes \
  -dn "CN=waslibhihs01, OU=WebSphere, O=SysGenius, C=CA"

IHS and plug-in Configuration

Now, we have everything needed to configure IHS and the WAS plug-in for IHS. This configuration is straightforward and can be done manually. For details, refer to https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.wlp.doc/ae/twlp_admin_webserver_plugin.html

Verify the content of plugin-cfg.xml:
  • Make sure the .kdb/.rdb/.sth file locations are correct.
  • Make sure the log file location is correct.
  • Make sure both (http and https) transport definitions (host and port) are correct and resolvable from the IHS server. You can do a telnet test for both transports to verify.
Update plugin-cfg.xml if required.
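As a lightweight alternative to telnet, you can probe each transport endpoint with bash's built-in /dev/tcp (a sketch; the hosts and ports are taken from the generated plugin-cfg.xml shown earlier):

```shell
# TCP probe of each transport endpoint defined in plugin-cfg.xml
$> for hp in waslibmem01:9081 waslibmem01:9444 waslibmem02:9081 waslibmem02:9444; do
     if timeout 3 bash -c "</dev/tcp/${hp%:*}/${hp#*:}" 2>/dev/null; then
       echo "$hp reachable"
     else
       echo "$hp NOT reachable"
     fi
   done
```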



Update httpd.conf

1) Add plug-in related configuration in httpd.conf:

LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so
WebSpherePluginConfig /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml

2) Add configuration for front-end secure communication

  • Uncomment/enable the SSL module configuration directive.
  • Create an SSL virtual host stanza in the httpd.conf file using the following examples and directives.

LoadModule ibm_ssl_module modules/mod_ibm_ssl.so
Listen 443

<VirtualHost *:443>
   SSLEnable
</VirtualHost>

SSLDisable
KeyFile "/opt/IBM/HTTPServer/ssl/ihs.kdb"
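Before starting IHS, it's worth validating the updated httpd.conf with apachectl's standard configtest action:

```shell
# Validate httpd.conf syntax before (re)starting IHS
$> /opt/IBM/HTTPServer/bin/apachectl configtest
Syntax OK
```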

Start IHS:

# Start IHS
$> cd /opt/IBM/HTTPServer/bin
$> ./apachectl -k start

# See if it's started
$> ps -ef | grep httpd
root 9 1 0 14:39 ? 00:00:00 /opt/IBM/HTTPServer/bin/httpd -d /opt/IBM/HTTPServer -k start
nobody 13 9 0 14:39 ? 00:00:00 /opt/IBM/HTTPServer/bin/httpd -d /opt/IBM/HTTPServer -k start
nobody 14 9 0 14:39 ? 00:00:00 /opt/IBM/HTTPServer/bin/httpd -d /opt/IBM/HTTPServer -k start

Review the error_log (under /opt/IBM/HTTPServer/logs) and make sure it does not have any errors and that the plug-in loaded successfully

[Sat Nov 18 14:39:21.325566 2017] [ibm_ssl:notice] [pid 9:tid 139878623614720] Using GSKit version 8.0.50.77
[Sat Nov 18 14:39:21.391142 2017] [was_ap24:notice] [pid 9:tid 139878623614720] ---------------------------------------------------
[Sat Nov 18 14:39:21.391164 2017] [was_ap24:notice] [pid 9:tid 139878623614720] WebSphere Plugins loaded.
[Sat Nov 18 14:39:21.391167 2017] [was_ap24:notice] [pid 9:tid 139878623614720] Bld version: 9.0.0.4
[Sat Nov 18 14:39:21.391170 2017] [was_ap24:notice] [pid 9:tid 139878623614720] Bld date: Apr 11 2017, 00:11:26
[Sat Nov 18 14:39:21.391172 2017] [was_ap24:notice] [pid 9:tid 139878623614720] Webserver: IBM_HTTP_Server/9.0.0.4 (Unix)
[Sat Nov 18 14:39:21.391174 2017] [was_ap24:notice] [pid 9:tid 139878623614720] ---------------------------------------------------
[Sat Nov 18 14:39:21.391234 2017] [:notice] [pid 9:tid 139878623614720] Using config file /opt/IBM/HTTPServer/conf/httpd.conf
[Sat Nov 18 14:39:21.391840 2017] [mpm_event:notice] [pid 9:tid 139878623614720] CoreDumpDirectory not set; core dumps may not be written for child process crashes
[Sat Nov 18 14:39:21.391841 2017] [mpm_event:notice] [pid 9:tid 139878623614720] AH00489: IBM_HTTP_Server/9.0.0.4 (Unix) configured -- resuming normal operations

And from http_plugin.log:

[18/Nov/2017:14:39:21.39104] 00000009 07ac1700 - PLUGIN: Plugins loaded.
[18/Nov/2017:14:39:21.39105] 00000009 07ac1700 - PLUGIN: --------------------System Information-----------------------
[18/Nov/2017:14:39:21.39106] 00000009 07ac1700 - PLUGIN: Bld version: 9.0.0.4
[18/Nov/2017:14:39:21.39107] 00000009 07ac1700 - PLUGIN: Bld date: Apr 11 2017, 00:11:45
[18/Nov/2017:14:39:21.39107] 00000009 07ac1700 - PLUGIN: Webserver: IBM_HTTP_Server/9.0.0.4 (Unix)
[18/Nov/2017:14:39:21.39108] 00000009 07ac1700 - PLUGIN: OS : Linux x86_64
[18/Nov/2017:14:39:21.39109] 00000009 07ac1700 - PLUGIN: Hostname = waslibhihs01
[18/Nov/2017:14:39:21.39110] 00000009 07ac1700 - PLUGIN: NOFILES = hard: 65536, soft: 65536
[18/Nov/2017:14:39:21.39110] 00000009 07ac1700 - PLUGIN: MAX COREFILE SZ = hard: INFINITE, soft: INFINITE
[18/Nov/2017:14:39:21.39111] 00000009 07ac1700 - PLUGIN: DATA = hard: INFINITE, soft: INFINITE
[18/Nov/2017:14:39:21.39112] 00000009 07ac1700 - PLUGIN: --------------------------------------------------------------


Access the application through IHS:

https://waslibhihs01/JavaHelloWorldApp

Hurray! Here is JavaHelloWorldApp

And check the access_log; you'll see something like this:

192.168.56.1 - - [18/Nov/2017:14:40:17 +0000] "GET /JavaHelloWorldApp/ HTTP/1.1" 200 705 13237 192.168.56.109:9444 +
192.168.56.1 - - [18/Nov/2017:14:40:17 +0000] "GET /JavaHelloWorldApp/style.css HTTP/1.1" 200 1157 973 +
192.168.56.1 - - [18/Nov/2017:14:40:17 +0000] "GET /JavaHelloWorldApp/SimpleServlet HTTP/1.1" 200 12 15625 192.168.56.108:9444 +
192.168.56.1 - - [18/Nov/2017:14:40:17 +0000] "GET /JavaHelloWorldApp/ HTTP/1.1" 200 705 13397 192.168.56.109:9444 +
192.168.56.1 - - [18/Nov/2017:14:40:17 +0000] "GET /JavaHelloWorldApp/SimpleServlet HTTP/1.1" 200 12 12708 192.168.56.108:9444 +

If you encounter any issue, make sure Web Server Plug-in, WebSphere Liberty and IHS (powered by Apache) are compatible. Refer to Supported combinations of IBM HTTP Server, WebSphere Application Server, and the WebSphere WebServer Plug-in for more details.

Thank you for coming this far! This is the end of the series!


Looks like you're really interested in WebSphere Liberty Profile, see my other related blog posts below: