
How to Use Openssl to Create Keys, CSR and Cert Bundle, Review and Verify and Install

There are a number of tools available to create an SSL/TLS key pair and a CSR. Here I'm going to use openssl.

1. Let's first create a key pair

In this example, we are creating an RSA key with a 2048-bit key length. It is recommended that you create a password-protected private key.

# Create a plain-text private key (the public key is embedded in it)
openssl genrsa -out myserver.key 2048
# If needed, extract the public key from the one generated above
openssl rsa -in myserver.key -pubout > myserver.pub
# Create a password-protected key (encrypted with AES-128; use -aes256 for AES-256)
openssl genrsa -aes128 -passout pass:<password> -out enc-myserver.key 2048
# Encrypt an existing plain-text private key
openssl rsa -aes128 -in myserver.key -passout pass:<password> -out enc-myserver.key
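As a quick sanity check, you can have openssl validate the key you just generated (a small sketch; the file name follows the example above):

```shell
# Verify that the RSA private key is internally consistent
openssl rsa -in myserver.key -check -noout
# A valid key prints: RSA key ok
```

For the encrypted key, add -passin pass:&lt;password&gt; (or let openssl prompt you).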


2. Let's create a CSR. 

Openssl allows you to provide input information using a configuration file when creating a CSR. A good thing about the configuration file is that it can be stored in a version control system like git and re-used. Look at the config file example below. Let's call it mycsr.cnf.

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
countryName = CA
stateOrProvinceName = Ontario
localityName = Toronto
organizationName = IT
organizationalUnitName = ITWork
commonName = myexampleserver.ca
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = my1stdns.com
DNS.2 = my2nddns.com
DNS.3 = my3rddns.com

Here we are using mycsr.cnf to feed in the information required to create the CSR. Since we are using an encrypted key, let's pass the password using the option -passin pass:<password>. If you don't use the -passin option, it will prompt you for the password. The command below generates myserver.csr.

openssl req -new -key enc-myserver.key -passin pass:<password> -out myserver.csr -config mycsr.cnf

Note: you can also generate a CSR from an existing private key and existing certificate. See the command below. OpenSSL versions prior to 3.x may not support '-copy_extensions copyall'.

openssl x509 -x509toreq [-copy_extensions copyall] -in <existing certificate>.crt -signkey <existing private key> -out myserver.csr

Review the generated CSR. In the example below, we are reviewing the myserver.csr created above.

openssl req -noout -text -in myserver.csr
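It's also worth confirming that the CSR really was generated from your private key. One way (a sketch using the file names from this example) is to compare digests of the public key each one carries; the two digests must be identical:

```shell
# Public key as seen inside the CSR
openssl req -in myserver.csr -noout -pubkey | openssl md5
# Public key derived from the private key
openssl rsa -in myserver.key -pubout 2>/dev/null | openssl md5
```

For the encrypted key, add -passin pass:&lt;password&gt; to the second command.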


3. Send your CSR to CA and Get the Signed Certs

Once your Certificate Authority (CA) receives the CSR, they process it and may send a link from where signed certificate(s) can be downloaded. The provided link may contain download options for Root CA cert, one or more intermediate cert(s) and server/domain cert. Depending upon how and for which server/application you are installing certificate, you may want to create a single PEM file from all provided certs. Here is how you can do it:

cat server.crt intermediate.crt rootca.crt >> cert-bundle.pem

Notes:
  1. Make sure the certificate files are in PEM format. To check, open the file in a text editor like Notepad++ and see whether it starts with -----BEGIN and the content is ASCII. Certs can be converted from other formats to PEM using openssl commands as follows:


    # Convert DER to PEM
    openssl x509 -inform der -in mycert.der -out mycert.pem
    # Convert CER to PEM (.cer files are often already PEM; add -inform der if DER-encoded)
    openssl x509 -in mycert.cer -out mycert.pem
    # Convert CRT to PEM (.crt files are often already PEM; add -inform der if DER-encoded)
    openssl x509 -in mycert.crt -out mycert.pem


  2. Open the merged file cert-bundle.pem above in a text editor and make sure that each -----BEGIN starts on a new line.
  3. If you are not able to install the password-protected key, remove the password as follows:

    openssl rsa -in enc-myserver.key -passin pass:<password> -out myserver.key
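Before installing, you may also want to verify that the server certificate chains correctly to the CA certs and matches your private key (a sketch; the file names follow the examples in this post):

```shell
# Verify the server cert chains to the root via the intermediate
openssl verify -CAfile rootca.crt -untrusted intermediate.crt server.crt
# Confirm the cert and the private key share the same RSA modulus
openssl x509 -in server.crt -noout -modulus | openssl md5
openssl rsa  -in myserver.key -noout -modulus | openssl md5
```

The verify command prints "server.crt: OK" on success, and the two modulus digests must be identical.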


4. Install and Verify your Certificate

Installation really depends on what your target server/application is. Here is a quick example for nginx, with a configuration snippet to enable SSL/TLS:


     server {
         listen       443 ssl;
         server_name  myexampleserver.ca;

         ssl_certificate      <cert-location>/ssl-bundle.crt;
         ssl_certificate_key  <cert-location>/enc-myserver.key;
         ssl_password_file    <path-to-password-file>/key.pass;

         ssl_session_cache    shared:SSL:1m;
         ssl_session_timeout  5m;

         ssl_ciphers  HIGH:!aNULL:!MD5;
         ssl_prefer_server_ciphers  on;

         location / {
             root   html;
             index  index.html index.htm;
         }
     }

Once the configuration is updated, start nginx and access the default page in a browser, e.g. 'https://myexampleserver.ca'.
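To check what the server is actually serving, you can fetch the certificate over TLS with openssl s_client (a sketch; substitute your own host name):

```shell
# Connect, grab the served certificate, and print its subject and validity dates
openssl s_client -connect myexampleserver.ca:443 -servername myexampleserver.ca </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```

The -servername flag sends SNI, which matters when several virtual hosts share one IP address.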

5. [Optional] Create .p12 key store for your Keys and Certs 


PKCS #12 is an industry standard for storing many cryptographic objects in a single file. Here is how you can create a PKCS #12 archive.


# openssl pkcs12 -export -in CertPath.cer [-certfile ssl-bundle.crt] -inkey privateKeyPath.key [-passin pass:<private key password>] -passout pass:<.p12 file password> -out key.p12

openssl pkcs12 -export -in ssl-bundle.crt -inkey enc-myserver.key -passin pass:<private key password> -passout pass:<p12 certstore password> -out mycertarchive.p12

Notes: 
  1. If the file passed using the -in option has both the certs and the private key, then the -inkey option is not required.
  2. If the file passed using the -in option has all the certs (server, intermediate, and root CA) included, then the -certfile option is not required. The usual practice is to pass the server cert file using the -in option, the private key using the -inkey option, and the root CA and intermediate certs using the -certfile option.
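Going the other direction, i.e. pulling the key and certs back out of the .p12 archive, looks like this (a sketch; the file name and password placeholders follow the example above):

```shell
# Extract all certificates from the archive
openssl pkcs12 -in mycertarchive.p12 -passin pass:<p12 certstore password> -nokeys -out certs.pem
# Extract the private key without a passphrase (-nodes)
openssl pkcs12 -in mycertarchive.p12 -passin pass:<p12 certstore password> -nocerts -nodes -out key.pem
```

If an archive created by an older OpenSSL fails to open under OpenSSL 3.x, try adding the -legacy option.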


6. [Optional] Use .p12 with Java Keytool or KeyStore Explorer (KSE) 

You can open the .p12 file directly into KSE and use KSE functionalities. You can use the Java keytool as well. Here is an example of listing certs using Java keytool:

  1. List certs using keytool

    keytool -v -list -storetype pkcs12 -keystore mycertarchive.p12


  2. Convert to JKS if necessary. You'll be prompted for passwords

    #keytool -importkeystore -srckeystore <.p12 file> -srcstoretype pkcs12 -destkeystore <.jks file> -deststoretype JKS

    keytool -importkeystore -srckeystore mycertarchive.p12 -srcstoretype pkcs12 -destkeystore mycertarchive.jks -deststoretype JKS

Using Docker Secrets with IBM WebSphere Liberty Profile Application Server


In this blog post, I'm discussing how to utilize Docker Secrets (a Docker Swarm service feature) to manage sensitive data (like password encryption keys, SSH private keys, SSL certificates etc.) for a Dockerized application powered by the IBM WebSphere Liberty Profile (WLP) application server. Docker Secrets helps to centrally manage this sensitive information at rest and in transit (encrypted, and securely transmitted only to those containers that need access to it and have explicit access to it). It is out of scope for this post to go deep into Docker Secrets; if you need to familiarize yourself with them, refer to https://docs.docker.com/engine/swarm/secrets/.
Note: if you'd like to know how to program encryption/decryption within your Java application using the passwordutilities-1.0 feature of WLP, see my blog How to use WLP passwordUtilities feature for encryption/decryption
I'm going to write this post in a tutorial style, so that anyone interested to try can follow the steps.

Pre-requisite: In order to follow the steps outlined here, you need the following:

  1. Good working knowledge of Docker
  2. Configured Docker Swarm environment (using Docker 1.13 or higher version) with at least one manager and one worker node or Docker Datacenter with Universal Control Plane (UCP) having manager node, worker node(s). It's good to have a separate Docker client node, so that you can remotely connect to manager and execute commands. 
  3. Good working knowledge of IBM WebSphere Liberty Profile (https://developer.ibm.com/wasdev/blog/2013/03/29/introducing_the_liberty_profile/).

Here is a brief description of how we are going to utilize Docker Secrets with WLP.

  1. The password encryption key used to encrypt passwords for the WLP keystore, truststore and any other password(s) used by WLP applications will be externalized and stored as a Docker secret.
  2. Private keys, such as the one stored in the keystore (used to enable secure communication in WLP), will be externalized and stored as Docker secrets.

Here are some obvious benefits:

  1. Centrally manage all sensitive data. Since Docker enforces access control, only people with the right privilege(s) have access to sensitive data.
  2. Only those container(s) and service(s) with explicit access get private/sensitive data, on an as-needed basis.
  3. Private information remains private at rest and in transit.
  4. A new Docker image created by 'docker commit' will not contain any sensitive data, and a dump/package created by the WLP server dump or package command will not contain the encryption key, as it's externalized. See more insights about WLP password encryption here: https://www.ibm.com/support/knowledgecenter/en/SS7K4U_8.5.5/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/cwlp_pwd_encrypt.html and managing Docker Secrets here: https://docs.docker.com/engine/swarm/secrets/

Enough talk, now, let's start the real work. Below are the major steps that we'll carry out:

  1. Create Docker secrets for the following items used by WLP:
    • KeyStore
    • Truststore
    • Password Encryption key
  2. Build Docker image based on websphere-liberty:webProfile7
  3. Create network
  4. Put together docker-compose.yml for deployment.
  5. Deploy application as Docker service.


Create Docker Secrets

Here, we're going to use the Docker command line (CLI) and execute Docker commands from a Docker client node remotely. You need to have the following three environment variables correctly set up in order to execute commands remotely. Refer to https://docs.docker.com/engine/reference/commandline/cli/#description for details.
  • DOCKER_TLS_VERIFY
  • DOCKER_CERT_PATH
  • DOCKER_HOST
If you are using Docker Datacenter, you can use the GUI-based UCP Admin Console to do the same. Note: the label com.docker.ucp.access.label="<value>" is not mandatory unless you have access constraints defined. For details refer to Authentication and authorization
1) Create a Docker secret with the name keystore.jks, which is the key database storing the private key to be used by WLP.

#Usage: docker secret create [OPTIONS] SECRET file|- 
#Create a secret from a file or STDIN as content 

$> docker secret create keystore.jks /mnt/nfs/dockershared/wlpapp/keystore.jks --label com.docker.ucp.access.label="dev"
 idc9em1u3fki8k0z77ol91sh4 

2) The following command creates a secret called truststore.jks from the physical Java keystore file which contains the trust certificates:

$> docker secret create truststore.jks /mnt/nfs/dockershared/wlpapp/truststore.jks --label com.docker.ucp.access.label="dev"
w8qs1o7pwrvl96nuamv97sb9t

3) Finally, create the Docker secret called app_enc_key.xml, which refers to the fragment of XML that contains the definition of the password encryption key:

$> docker secret create app_enc_key.xml /mnt/nfs/dockershared/wlpapp/app_enc_key.xml --label com.docker.ucp.access.label="dev"
kj3hcw4ss71hnudfgr6g32mxm

Note: Docker secrets are available under '/run/secrets/' at runtime to any container which has explicit access to that secret.
Here is what /mnt/nfs/dockershared/wlpapp/app_enc_key.xml looks like:

<server> 
   <variable name="wlp.password.encryption.key" value="#replaceMe#">
   </variable>
</server>

Note: Make sure to replace the string '#replaceMe#' with your own password encryption key.

Let's check and make sure all our secrets are properly created and listed:

$> docker secret ls
ID                        NAME             CREATED        UPDATED
idc9em1u3fki8k0z77ol91sh4 keystore.jks     3 hours ago    3 hours ago
kj3hcw4ss71hnudfgr6g32mxm app_enc_key.xml  21 seconds ago 21 seconds ago
w8qs1o7pwrvl96nuamv97sb9t truststore.jks   3 hours ago    3 hours ago


Building Docker Image:

Now, let's first encrypt our keystore and truststore passwords using the pre-defined encryption key and put together the server.xml for the WLP server. We are going to use the securityUtility tool that ships with IBM WLP to encrypt our passwords.
Note: make sure your password encryption key matches the one defined by the 'wlp.password.encryption.key' property in app_enc_key.xml.
Here I'm encoding my example password '#myStrongPassw0rd#' using encryption key '#replaceMe#' with encoding option 'aes'.
Please note that encoding option 'xor' ignores the encryption key and uses the default.

$> cd /opt/ibm/wlp/bin
$> ./securityUtility encode #myStrongPassw0rd# --encoding=aes --key=#replaceMe#
{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==

Now we have our Docker secrets created and we have encrypted our passwords. It's time to put together server.xml for the WLP application server and build the Docker image. Here is what my server.xml looks like.

<server description="TestWLPApp">
   <featureManager>
      <feature>javaee-7.0</feature>
      <feature>localConnector-1.0</feature>
      <feature>ejbLite-3.2</feature>
      <feature>jaxrs-2.0</feature>
      <feature>jpa-2.1</feature>
      <feature>jsf-2.2</feature>
      <feature>json-1.0</feature>
      <feature>cdi-1.2</feature>
      <feature>ssl-1.0</feature>
   </featureManager>
   <include location="/run/secrets/app_enc_key.xml"/>
   <httpEndpoint host="*" httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
   <ssl clientAuthenticationSupported="true" id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore"/>
   <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <applicationMonitor updateTrigger="mbean"/>
   <dataSource id="wlpappDS" jndiName="wlpappDS">
      <jdbcDriver libraryRef="OracleDBLib"/>
      <properties.oracle password="{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==" url="jdbc:oracle:thin:@192.168.xx.xxx:1752:WLPAPPDB" user="wlpappuser"/>
   </dataSource>
   <library id="OracleDBLib">
      <fileset dir="/apps/wlpapp/shared_lib" includes="ojdbc6-11.2.0.1.0.jar"/>
   </library>
   <webApplication contextRoot="wlpappctx" id="wlpapp" location="/apps/wlpapp/war/wlptest.war" name="wlpapp"/>
</server>

As you can see, the location of defaultKeyStore, defaultTrustStore, and app_enc_key.xml points to the directory '/run/secrets'. As mentioned before, this is because all secrets created by Docker Secrets are made available to the assigned services under '/run/secrets' in the corresponding container.

Now let's put together Dockerfile.

FROM websphere-liberty:webProfile7
# COPY sources are relative to the build context (here: /mnt/nfs/dockershared/wlpapp)
COPY server.xml /opt/ibm/wlp/usr/servers/defaultServer/
RUN installUtility install --acceptLicense defaultServer
COPY wlptest.war /apps/wlpapp/war/
COPY ojdbc6-11.2.0.1.0.jar /apps/wlpapp/shared_lib/
CMD ["/opt/ibm/java/jre/bin/java","-javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar","-Djava.awt.headless=true","-jar","/opt/ibm/wlp/bin/tools/ws-server.jar","defaultServer"]

Note: above, I'm copying my server.xml into /opt/ibm/wlp/usr/servers/defaultServer/ before running installUtility, as I'm adding a few features required by my application, including ssl-1.0.

Finally, we're going to build the Docker image.

$> docker build -t 192.168.56.102/osboxes/wlptest:1.0 .
Sending build context to Docker daemon 56.9 MB
Step 1/7 : FROM websphere-liberty:webProfile7
---> c035090355f5
...
Step 4/7 : RUN installUtility install --acceptLicense defaultServer
---> Running in 2bce0d02e253
Checking for missing features required by the server ...
...
Successfully built 07fef794348e

Note: 192.168.56.102 is my local Docker Trusted Registry (DTR).

Once the image is successfully built, make sure it is available on all nodes of the Docker Swarm. I'm not going to show the details of how you distribute the image.
  • If you are using DTR, you can first push the image to the registry (using 'docker push ...'), then connect to each Docker Swarm host and execute 'docker pull ...'.
  • Another option is to use 'docker save ...' to save the image as a tar file, then load the image into the Swarm using 'docker load ...'.
Here, I'm deploying this into Docker Datacenter, which has two UCP worker nodes, one UCP manager node and a DTR node. I'm also going to use the HTTP Routing Mesh (HRM) and a user-defined overlay network in swarm mode.
Note: a user-defined Docker network and HRM are NOT necessary to utilize Docker secrets.


Create Overlay network:

$> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
naf8hvyx22n6lsvb4bq43z968

Note: Label 'com.docker.ucp.mesh.http=true' is required while creating network in order to utilize HRM.

Put together docker-compose.yml

Here is my compose file. Yours may look different.

version: "3.1"
services:
   wlpappsrv: 

      image: 192.168.56.102/osboxes/wlptest:1.0
      volumes:
         - /mnt/nfs/dockershared/wlpapp/server.xml:/opt/ibm/wlp/usr/servers/defaultServer/server.xml
      networks:
         - my_hrm_network
      secrets:
         - keystore.jks
         - truststore.jks
         - app_enc_key.xml
      ports:
         - 9080
         - 9443
      deploy:
         mode: replicated
         replicas: 4
         placement:
            constraints: [node.role == worker]
         resources:
            limits:
               memory: 2048M
         restart_policy:
            condition: on-failure
            max_attempts: 3
            window: 6000s
         labels:
            - "com.docker.ucp.mesh.http.9080=external_route=http://mydockertest.com:8080,internal_port=9080"
            - "com.docker.ucp.mesh.http.9443=external_route=sni://mydockertest.com:8443,internal_port=9443"
            - "com.docker.ucp.access.label=dev"
networks:
   my_hrm_network:
      external:
         name: my_hrm_network
secrets:
   keystore.jks:
      external: true
   truststore.jks:
      external: true
   app_enc_key.xml:
      external: true

A few notes about the docker-compose.yml:
  1. Volume definition that maps server.xml in the container with the one in the NFS file system is optional. This mapping gives additional flexibility to update the server.xml. You can achieve similar or even better flexibility/portability by using Docker Swarm Config service. See my blog post - How to use Docker Swarm Configs service with WebSphere Liberty Profile for details.
  2. The secrets definition under the service 'wlpappsrv' refers to the secrets definition at the root level, which in turn refers to externally defined secrets.
  3. "com.docker.ucp.mesh.http." labels are totally optional and only required if you are using HRM. 
  4. "com.docker.ucp.access.label" is also optional and required only if you have defined access constraints.
  5. Since, I'm using Swarm and HRM, I don't need to explicitly map the internal container ports to host port. If you need to map, you can use something like below for your port definition:
    ports:
       - 9080:9080
       - 9443:9443
  6. You may encounter a situation where your container application is not able to access the secrets created under /run/secrets. It may be related to bug #31006. In order to resolve the issue, use 'mode: 0444' while defining your secrets, something like this:
    secrets:
       - source: keystore.jks
         mode: 0444
       ...   

Deploy the service 

Here I'm using "docker stack deploy..." to deploy the service:
$> docker stack deploy --compose-file docker-compose.yml dev_WLPAPP

Note: In certain cases, you may get a "secrets Additional property secrets is not allowed" error message. In order to resolve it, make sure your compose file version is 3.1. In my case, where it's working fine, I have Docker version 17.03.2-ee4, API version 1.27, and Docker Compose version 1.14.0.

Once the service is deployed, you can list it using the 'docker service ls' command:

$> docker service ls
ID           NAME                 MODE       REPLICAS IMAGE
28xhhnbcnhfg dev_WLPAPP_wlpappsrv replicated 4/4      192.168.56.102/osboxes/wlptest:1.0

And list the replicated containers:
$> docker ps
CONTAINER ID IMAGE                              COMMAND CREATED STATUS PORTS NAMES
2052806bbae3 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.3.m7apci6i1ks218ddnv4qsdbwv
541cf0f39b6e 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.4.wckec2jcjbmrhstftajh2zotr
ccdd7275fd7f 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.2.oke0fz2sifs5ej0vy63250wo9
7d5668a4d851 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.1.r9gi0qllnh8r9u8popqg5mg5b

And here is what the WLP messages.log shows (taken from one of the containers log file):
********************************************************************************
product = WebSphere Application Server 17.0.0.2 (wlp-1.0.17.cl170220170523-1818)
wlp.install.dir = /opt/ibm/wlp/
server.output.dir = /opt/ibm/wlp/output/defaultServer/
java.home = /opt/ibm/java/jre
java.version = 1.8.0
java.runtime = Java(TM) SE Runtime Environment (pxa6480sr4fp7-20170627_02 (SR4 FP7))
os = Linux (3.10.0-514.el7.x86_64; amd64) (en_US)
process = 1@e086b8c54a8d
********************************************************************************
[7/24/17 19:44:29:275 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0001I: The server defaultServer has been launched.
...
[7/24/17 19:44:30:533 UTC] 00000017 com.ibm.ws.config.xml.internal.XMLConfigParser A CWWKG0028A: Processing included configuration resource: /run/secrets/app_enc_key.xml
[7/24/17 19:44:31:680 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 2.763 seconds
[7/24/17 19:44:31:990 UTC] 0000001f com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[7/24/17 19:44:45:877 UTC] 00000017 com.ibm.ws.security.ready.internal.SecurityReadyServiceImpl I CWWKS0007I: The security service is starting...
[7/24/17 19:44:47:262 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager I CWWKS4103I: Creating the LTPA keys. This may take a few seconds.
[7/24/17 19:44:47:295 UTC] 00000017 ibm.ws.security.authentication.internal.jaas.JAASServiceImpl I CWWKS1123I: The collective authentication plugin with class name NullCollectiveAuthenticationPlugin has been activated.
[7/24/17 19:44:48:339 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager A CWWKS4104A: LTPA keys created in 1.065 seconds. LTPA key file: /opt/ibm/wlp/output/defaultServer/resources/security/ltpa.keys
[7/24/17 19:44:48:365 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyCreateTask I CWWKS4105I: LTPA configuration is ready after 1.107 seconds.
[7/24/17 19:44:57:514 UTC] 00000017 com.ibm.ws.app.manager.internal.monitor.DropinMonitor A CWWKZ0058I: Monitoring dropins for applications.
[7/24/17 19:44:57:651 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host * (IPv6) port 9080.
[7/24/17 19:44:57:675 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint-ssl has been started and is now listening for requests on host * (IPv6) port 9443.
[7/24/17 19:44:57:947 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302 has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7276.
[7/24/17 19:44:57:951 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302-ssl has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7286.
...

As you can see from the messages above (CWWKG0028A and CWWKO0219I), the server is able to include the configuration from /run/secrets/app_enc_key.xml, and defaultHttpEndpoint-ssl has started and is listening on port 9443; meaning it successfully loaded and opened the /run/secrets/keystore.jks and /run/secrets/truststore.jks files using the encrypted passwords with the encryption key defined in /run/secrets/app_enc_key.xml.

Now it's time to access the application. In my case, since I'm using HRM, I access it as: https://mydockertest.com:8443/wlpappctx
If you are not using HRM; you may access it using:
https://<docker-container-host>:9443/<application-context>


Example using Load-Balancer


If you have a load balancer in front and want to set up pass-through SSL, you can use SNI-based routing (aka SSL routing). Below is a simple example using HAProxy. You can also refer to the HAProxy documentation for details.

Here is haproxy.cfg for our example PoC:
# /etc/haproxy/haproxy.cfg, version 1.7
global
   maxconn 4096

defaults
   timeout connect 5000ms
   timeout client 50000ms
   timeout server 50000ms

frontend frontend_ssl_tcp
   bind *:8443
   mode tcp
   tcp-request inspect-delay 5s
   tcp-request content accept if { req_ssl_hello_type 1 }
   default_backend bckend_ssl_default

backend bckend_ssl_default
   mode tcp
   balance roundrobin
   server worker1 192.168.56.103:8443 check
   server worker2 192.168.56.104:8443 check           

Here is a Dockerfile for custom image:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Build the image:
Note: execute the 'docker build ...' command from the same directory where the Dockerfile is located.

$> docker build -t my_haproxy:1.7 .

Once you have built the image, start the HAProxy container like below:
$> docker run -d --name ddchaproxy -p 8443:8443 my_haproxy:1.7

Note: In this case, HAProxy is listening on port 8443.

Access the application:

https://mydockertest.com:8443/wlpappctx

Note: Make sure mydockertest.com resolves to the IP address of ha-proxy.


Looks like you're really into Docker, see my other related blog posts below:

Few Tips on Secure vs. Non-Secure WAS Web Server Plug-in Connection

In this blog, I am focusing on only one aspect of the WebSphere Application Server (WAS) web server plug-in to application server connection, i.e. secure vs. non-secure: what has changed in recent versions of WAS, and some tips and tricks.
By default, when the plug-in configuration file (plugin-cfg.xml) is generated, it creates configuration for both secure (HTTPS) and non-secure (HTTP) transport channels for communication to the application server(s). Below is a fragment of plugin-cfg.xml:
...
<ServerCluster CloneSeparatorChange="false" ... >
   <Server ConnectTimeout="0" ExtendedHandshake="false" ...>
      <!-- non-secure (HTTP) transport channel -->
      <Transport Hostname="MyHost" Port="9080" Protocol="http"/>
      <!-- secure (HTTPS) transport channel -->
      <Transport Hostname="MyHost" Port="9443" Protocol="https">
         <Property Name="keyring" Value="c:\IBM\Plugins\plugin-key.kdb"/>
         <Property Name="stashfile" Value="c:\IBM\Plugins\plugin-key.sth"/>
      </Transport>
   </Server>
</ServerCluster>
...

When you have two transport channels defined, it often creates confusion, because people may not know which channel will actually be used, or whether they really need both channels in their configuration.
When both channels are defined (as in the example above), if the incoming traffic is secure (HTTPS), the plug-in automatically chooses the secure channel (MyHost:9443 in the example) to create the connection to the back-end application server; if the incoming traffic is non-secure (HTTP), it by default chooses the non-secure channel (MyHost:9080 in the example).

  • What about the reverse scenarios?
  • What if only one type of back-end connection is defined/available and the opposite type of incoming traffic is encountered?

It looks like there are some recent changes and gotchas here.


Incoming secure (HTTPS) traffic and only a non-secure (HTTP) transport channel defined/available:

  • version 8.5.5.0 and later:


In version 8.5.5.0 or later, in this particular case, the plug-in won't create any connection to the application server, because it interprets this situation as a security risk, and the request fails. If plug-in trace is enabled, you'll see something like the following in the plugin.log:

[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ws_common: websphereFindTransport: Nosecure transports available
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ws_common: websphereWriteRequestReadResponse: Failed to find a transport
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ESI: getResponse: failed to get response: rc = 4
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - DEBUG: ESI: esiHandleRequest: failed to get response
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - DEBUG: ESI: esiRequestUrlStackDestroy
[Thu Nov 26 15:36:52 2015] 00003578 00004774 - ERROR: ws_common: websphereHandleRequest: Failed to handle request

However, you can use the plug-in custom property UseInsecure=true in the plugin-cfg.xml file; in this case the plug-in will use the non-secure (HTTP) transport channel to establish the connection to the application server despite the secure incoming request. You can add the custom property in two ways:
1. You can directly modify the plugin-cfg.xml as follows:
<Config ASDisableNagle="false" AcceptAllContent="true" AppServerPortPreference="HostHeader" ... UseInsecure="true">
...
</Config>
2. Or add this property through WebSphere Administration Console (Servers > Web Servers > Web_server_name > Plug-in properties > Custom properties page) and regenerate the plugin-cfg.xml.

Once the UseInsecure=true custom property becomes effective, the above-mentioned scenario can create the connection successfully. Below are some relevant lines from plugin.log (trace enabled):

[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - TRACE: ws_common: websphereFindTransport: Finding the transport for server 021313E0
websphereFindTransport: Setting the transport(case 3): OND2C00981304.cihs.ad.gov.on.ca on 080
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereExecute: Executing the transaction with the app server reqInfo is e671f78 useExistingStream=0, client->stream=00000000
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: Getting the stream to the app server (keepalive 28)
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_transport: transportStreamDequeue: Checking for existing stream from the queue
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: calling blocking connect
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: Setting socket to non-block for ServerIOTimeout over HTTP
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: ws_common: websphereGetStream: socket 3564 connected to OND2C00981304.cihs.ad.gov.on.ca:9080 timeout=900
[Thu Nov 26 15:46:33 2015] 0000379c 000014b8 - DEBUG: lib_stream: openStream: Opening the  stream soc=3564

  • Previous versions:


By default, in previous versions of the WAS plug-in for web servers, if the plug-in received a secure (HTTPS) request but could not create a secure connection to the application server (either a secure transport channel was not defined or a secure connection could not be established), it would create a non-secure (HTTP) connection, if one was defined and available. If no HTTP transport was defined, then no connection would be created.
So the behaviour in older versions of WAS that defaulted HTTPS to HTTP when a secure connection was not available was a real problem from a security perspective. Read more about the problem as documented here:
http://www-01.ibm.com/support/docview.wss?uid=swg1PM85452.

As mentioned above, this has been fixed in WAS version 8.5.5.0 and later: the plug-in now falls back from HTTPS to HTTP only when it is explicitly configured to do so. There was still a logging issue in version 8.5.5 with UseInsecure="true"; the fix is available in fix pack 8.5.5.2. Read the details here: http://www-01.ibm.com/support/docview.wss?uid=swg1PM96173

Incoming HTTP traffic and only secure (HTTPS) transport channel to back end configured/available:

   In this case, there should be no issue as long as the keyring that contains the certificate of the back-end application server and the stash file are properly configured for the transport channel.

Note: To minimize confusion, if you are sure that only secure (HTTPS) connections should be allowed for your implementation, you can simply comment out the non-secure transport channel configuration in the plugin-cfg.xml (or vice versa).
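For reference, the two transport channels appear in plugin-cfg.xml roughly as sketched below (hostnames, ports, and key file paths are placeholders, not values from our installation); commenting out the http Transport leaves only the secure channel:

```xml
<ServerCluster Name="myCluster">
  <Server Name="myServer">
    <!-- Non-secure channel: comment this out to force HTTPS only -->
    <Transport Hostname="appserver.example.com" Port="9080" Protocol="http"/>
    <!-- Secure channel: needs the keyring and stash file configured -->
    <Transport Hostname="appserver.example.com" Port="9443" Protocol="https">
      <Property Name="keyring" Value="/opt/IBM/Plugins/config/webserver1/plugin-key.kdb"/>
      <Property Name="stashfile" Value="/opt/IBM/Plugins/config/webserver1/plugin-key.sth"/>
    </Transport>
  </Server>
</ServerCluster>
```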

Hope these tips help clear up any confusion you might have about plug-in to application server connections from a security perspective.

Read more about the Web server plug-in connections in IBM Knowledge Center here: http://www-01.ibm.com/support/knowledgecenter/SSAW57_8.5.5/com.ibm.websphere.nd.doc/ae/cwsv_plugin_connections.html?lang=en.

Accelerated Cryptography using On-Chip Instructions : from Java Application Perspective

     In this blog, I am exploring accelerated crypto processing options using on-chip instructions, looking at these options from an application (specifically Java) perspective. Let me start with one of my favourite application servers - WebSphere Application Server (WAS). One of the new features in IBM WebSphere Application Server version 8.5.0.1 lets you
take advantage of the Intel Advanced Encryption Standard (AES) New Instructions (AES-NI) set built in (on-chip) to the Intel Westmere and successor families of processors when dealing with AES cryptography. As per Intel, if exploited correctly, AES-NI not only boosts the performance of cryptographic operations but also has security advantages: it may help to eliminate timing- and cache-based attacks. Since AES is currently one of the most popular block ciphers, a wide range of applications can benefit from these built-in instructions. Enabling this feature for WAS is easy: define the system property com.ibm.crypto.provider.doAESInHardware
and assign it the value true. You can do this under the Generic JVM arguments setting in the WebSphere Administration Console. However, there are a few prerequisites for it to work:

  • Java version: IBM SDK version 7 SR 3 or higher
  • WAS version: 8.5.0.1 or higher
  • JCE provider: IBM JCE provider
       Note: The IBM PKCS11 provider does not use the Intel AES-NI instructions.
       Note: Update the JCE provider in the java.security file located under $JAVA_HOME/jre/lib/security.
  • Processor: Intel Westmere and successor family of processors
To verify whether the underlying processor supports the AES-NI instructions, you can use the following system property to generate appropriate JCE tracing:
com.ibm.crypto.provider.AESNITrace=true
Basically, setting com.ibm.crypto.provider.doAESInHardware=true does no harm: if it is set and supported by the underlying Intel processor, the IBM JCE provider attempts to use AES-NI; otherwise
it uses the software module for cryptographic operations. Refer to IBM Knowledge Center for more information. For details on AES-NI, refer to the
article Intel® Advanced Encryption Standard (Intel® AES) Instructions Set - Rev 3.01 by Shay Gueron (Intel) at https://software.intel.com/en-us/articles/intel-advanced-encryption-standard-aes-instructions-set
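As a quick sanity check, you can confirm at the OS level that the processor advertises AES-NI before enabling the property. A minimal sketch for Linux follows; the class name MyApp is a placeholder, while the two system properties are the IBM JCE ones discussed above:

```shell
# Does the CPU advertise the "aes" flag? (Linux exposes CPU flags in /proc/cpuinfo)
if grep -q -m1 -w aes /proc/cpuinfo 2>/dev/null; then
    echo "AES-NI available"
else
    echo "AES-NI not available"
fi

# Then start the JVM with the IBM JCE properties (MyApp is a placeholder):
# java -Dcom.ibm.crypto.provider.doAESInHardware=true \
#      -Dcom.ibm.crypto.provider.AESNITrace=true \
#      -cp . MyApp
```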

     If your platform is not WAS but, let's say, WebLogic on Solaris, you're well covered there as well. Starting from version 10 update 8, Solaris supports AES-NI through the Solaris Cryptographic Framework (SCF). Java applications that use the SunPKCS11 JCE provider benefit from AES acceleration for encryption/decryption through Intel AES-NI on Solaris. Very detailed information about Java cryptography using AES-NI on Solaris can be found in Ramesh Nagappan's web log here. If you are looking for Intel AES-NI support on Solaris in general, see Dan Anderson's blog Intel AES-NI Optimization on Solaris.

     Obviously, AES-NI support on Solaris is available only for Solaris x86 64-bit running on an Intel microprocessor that supports the AES-NI instruction set, so what about a similar feature on Solaris powered by Sun/Oracle T series processors? Guess what? Sun/Oracle SPARC processors are actually the leader in supporting hardware-accelerated crypto processing. Even though all T series chips supported some level of it, starting with the T4, crypto became part of the core instruction set, accessible via non-privileged instructions; it is now one of the basic services offered by the CPU. A very interesting blog about Java EE Application Servers, SPARC T4, Solaris Containers, and Resource Pools by Jeff Taylor can be found here.

     If you are interested in utilizing the SPARC on-chip crypto processor for applications hosted on WebSphere Application Server, this OTN white paper (http://www.oracle.com/technetwork/server-storage/sun-sparc-enterprise/documentation/ibm-websphere-sparc-t5-2332327.pdf) gives a lot of information. Read the section "Impact of SPARC T5 Hardware Encryption for Secure Traffic". Specifically, it talks about the Oracle Solaris kernel module called Kernel SSL proxy (KSSL), which can be used for offloading operations such as SSL/TLS. KSSL processes SSL traffic via the Oracle SCF in the kernel and thus improves performance. The white paper also clearly shows the performance comparison between on-chip and software crypto modules.

      This is just a small attempt to put together a few of the options available for on-chip accelerated crypto processing for Java applications. Obviously, there are a number of other solutions in the market not covered here. If you are researching a suitable cryptographic solution for your next project, you may start by reviewing the Validated FIPS 140-1 and FIPS 140-2 Cryptographic Modules list maintained by the National Institute of Standards and Technology (NIST) here.
      As seen from various tests, whether from individual testers or from vendors, an on-chip crypto accelerator performs very well in comparison to a software module if implemented correctly. So, if your platform supports it, consider this option for your next project and get both performance and security benefits.

ArcSight Logger Installation Issue

We were trying to install and configure ArcSight Logger on a 64-bit Red Hat Enterprise Linux Server release 6.1 (Santiago) and encountered a strange issue with the following versions of Logger:
  • ArcSight-logger-5.3.0.6684.0.bin
  • ArcSight-logger-5.3.1.6838.0.bin
Let me explain below what the issue was and what custom solution we found after a few hours/days of investigation. We had to investigate ourselves, as the software vendor did not provide any satisfactory solution even though we had opened a ticket with them; their answers were very generic and did not lead to any solution.



Problem Details:
While installing the ArcSight Logger in console mode (with '-i console' option):

  • Logger initialization failed while installing as "root" (we typically become root via 'sudo su -'), with the error message: "The initialization of Logger software was unsuccessful. Reason: Platform initializer failed".
  • Logger installation and initialization went okay while installing as a non-root user such as "arcsight". Please note that when installed by a non-root user, Logger cannot listen on the privileged SSL port 443 and cannot create a Linux service for the application.

 Since our requirement was for Logger to be accessible on port 443, we had to install and initialize it as the root user.
Note: Even though "root" is used during installation, the ArcSight HTTP processes are owned by the non-privileged user 'nobody' for security reasons.

Investigation details/result:
As we found, in both cases above the installation itself was successful, but it failed during initialization. It was failing while executing the script <Logger-Install-Dir>/current/arcsight/service/functions, in the following function:
runAsOtherUser() {
        curr_user=`whoami`
        # If a user.config exists and we are root, run the command as the
        # configured non-root user; otherwise run it in the current shell.
        if [ -f "$CURRENT_HOME/arcsight/service/user.config" -a "X$curr_user" = "Xroot" ]; then
                source $CURRENT_HOME/arcsight/service/user.config
                su -c "$1" $CUSTOM_NONROOT_USER
        else
                eval $1
        fi
}

We got the segmentation fault at this point, as seen in the <Logger-Install-Dir>/current/arcsight/logger/logs/logger_init_driver.log
../service/functions: line 159: 23437 Segmentation fault      (core dumped) su -c "$1" $CUSTOM_NONROOT_USER

and the actual reason seemed to be that the NSS_3.12.9 libraries were NOT found, as we noticed the error message below in the /var/log/secure log:

su: PAM unable to dlopen(/lib64/security/pam_ldap.so): /opt/arcsight/logger/current/local/nss/lib/libnss3.so: version `NSS_3.12.9' not found (required by /lib64/libldap-2.4.so.2)
su: PAM adding faulty module: /lib64/security/pam_ldap.so


We checked the installed RPM packages related to nss* on our Logger server and found that we actually had a slightly higher version (nss-3.14.3) installed. See below:

nss-3.14.3-4.el6_4.x86_64
nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
nss-softokn-3.14.3-3.el6_4.x86_64
nss-softokn-freebl-3.14.3-3.el6_4.i686
nss-softokn-freebl-3.14.3-3.el6_4.x86_64
nss-sysinit-3.14.3-4.el6_4.x86_64
nss-tools-3.14.3-4.el6_4.x86_64
nss-util-3.14.3-3.el6_4.x86_64


Since it was not failing for the non-root user, it is obvious that the NSS library bundled with Logger (<Logger-Install-Dir>/current/local/nss/lib/libnss3.so) is used only when the script does 'su *', and that ArcSight's libnss3 library requires (hard-coded?) version NSS_3.12.9 of the NSS libraries installed on the server.


Resolution:

  1. I guess the simple resolution to this problem would be to downgrade the nss libraries to 3.12.9. In our case, however, that was not possible because of an operational policy not to downgrade and to always maintain the latest versions of libraries. Many of your organizations may have a similar policy in effect, so read below what work-around option you have until the software vendor provides a fix.
  2. Here's how we were able to successfully install and configure ArcSight Logger as the 'root' user without downgrading the nss libraries. As previously discussed, the issue seemed to be an incompatibility with the nss* libraries (the software appears to be hard-coded to require version 3.12.9) and required some application-specific script changes.
         Run the installer as root in console mode. As soon as you see the message, "Begin Initialization ... The installation of Logger software was successfull ... Initialization will begin after pressing [Enter]...", open another terminal/command window and remove the entry <Logger-Install-Dir>/current/local/nss/lib from the LD_LIBRARY_PATH variable in the affected files:
  • /opt/arcsight/logger/current/arcsight/logger/bin/scripts/relative_paths_env.sh
  • /opt/arcsight/logger/current/arcsight/service/functions
  • /opt/arcsight/logger/current/arcsight/logger/bin/scripts/web.sh
Once the files are updated, save them, press [Enter] in the first console window, and continue the initialization/configuration.
It seems that once you remove the nss/lib path entry from LD_LIBRARY_PATH, the application uses the latest version of the nss libraries installed on the server.
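The manual edit in those three files boils down to filtering one directory out of LD_LIBRARY_PATH. A minimal sketch of the equivalent shell filter (the NSS path is the one from our installation; adjust it to yours):

```shell
# Directory to strip (the Logger-bundled NSS libraries)
NSS_DIR="/opt/arcsight/logger/current/local/nss/lib"

# Rebuild a colon-separated path list without one exact entry
strip_dir() {
    # $1 = path list, $2 = directory to remove
    echo "$1" | tr ':' '\n' | grep -v -x "$2" | paste -s -d ':' -
}

LD_LIBRARY_PATH=$(strip_dir "$LD_LIBRARY_PATH" "$NSS_DIR")
export LD_LIBRARY_PATH
```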