
How to Configure PostgreSQL with SSL/TLS support on Kubernetes

SSL is disabled in the default PostgreSQL configuration, and I had to struggle a bit to get PostgreSQL running on Kubernetes with SSL/TLS support. After some research and a few trials, I was able to resolve the issues, and here I'm sharing what I did to make it work for me.


    
    

High Level Steps:
  1. Customize postgresql.conf (to add/edit the SSL/TLS configuration) and create a configMap object. This way, we don't need to rebuild the Postgres image to apply a custom postgresql.conf, because a ConfigMap allows us to decouple configuration artifacts from image content. 
  2. Create secret type objects for server.key, server.crt, root.crt (plus ca.crt if you use a CA-signed certificate), and the password file.
  3. Define and use NFS type PersistentVolume (PV) and PersistentVolumeClaim (PVC)
  4. Use securityContext to resolve permission issues.
  5. Use '-c config_file=<config-volume-location>/postgresql.conf' to override the default postgresql.conf
Note: all files used in this post can be cloned/downloaded from GitHub https://github.com/pppoudel/postgresql-with-ssl-on-kubernetes.git
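
If you want to follow along, you can clone the repo first (the directory name below is assumed from the repo URL):

   $> git clone https://github.com/pppoudel/postgresql-with-ssl-on-kubernetes.git
   $> cd postgresql-with-ssl-on-kubernetes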

Let's get started

In this example, I'm using a namespace called 'shared-services' and a service account called 'shared-svc-accnt'. You can create your own namespace and service account or use the 'default' ones. In any case, I have listed the necessary steps here, and the yaml files can be downloaded from GitHub.

Create namespace and service account


# Create namespace shared-services

   $> kubectl create -f shared-services-ns.yml

# Create Service Account shared-svc-accnt

   $> kubectl create -f shared-svc-accnt.yml

# Create a grant for service account shared-svc-accnt. Do this step as per your platform.
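
For reference, here is a minimal sketch of what those two yaml files might contain (the exact files are in the GitHub repo; the names below are inferred from the commands above):

## shared-services-ns.yml (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: shared-services
---
## shared-svc-accnt.yml (sketch)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: shared-svc-accnt
  namespace: shared-services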


Create configMap object

I have put both postgresql.conf and pg_hba.conf under the config directory and updated postgresql.conf as follows:

ssl = on
ssl_cert_file = '/etc/postgresql-secrets-vol/server.crt'
ssl_key_file = '/etc/postgresql-secrets-vol/server.key'

Note: the locations '/etc/postgresql-secrets-vol' (key and certificates from the secret) and '/etc/postgresql-config-vol' (configuration files from the configMap) need to be mounted via 'volumeMounts', which we will discuss later in the post.

The three items listed above are the main configuration settings that must have proper values in order for PostgreSQL to support SSL/TLS. If you are using a CA-signed certificate, you also need to provide a value for 'ssl_ca_file' and optionally 'ssl_crl_file'. Read Secure TCP/IP Connections with SSL for more details.
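If you do use CA-signed certificates, the extra postgresql.conf entries might look like the sketch below (the paths are assumptions matching the secrets volume used later in this post; root.crl is a hypothetical file name):

ssl_ca_file = '/etc/postgresql-secrets-vol/root.crt'
#ssl_crl_file = '/etc/postgresql-secrets-vol/root.crl'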
You also need to update pg_hba.conf (HBA stands for host-based authentication) as necessary. pg_hba.conf is used to manage connections: it controls access by connection type, client IP address range, database name, user name, and authentication method.

# Sample entries in pg_hba.conf
# Trust local connection - no password required.
local    all             all                                     trust
# Only secured remote connection from given IP-Range accepted and password are encoded using MD5
#hostssl  all             all             < Cluster IP-Range >/< Prefix length >         md5
hostssl  all             all             10.96.0.0/16         md5


$> ls -l config/

-rw-------. 1 osboxes osboxes  4535 Sep 22 17:33 pg_hba.conf
-rw-------. 1 osboxes osboxes 22781 Sep 23 03:03 postgresql.conf

# Create configMap object
$> kubectl create configmap postgresql-config --from-file=config/ -n shared-services
configmap "postgresql-config" created

# Review created object
$> kubectl describe configMap/postgresql-config -n shared-services
Name:         postgresql-config
Namespace:    shared-services
...

Create secrets

I've created server.key and a self-signed certificate using OpenSSL. You can either do the same or use CA-signed certificates. Here, we are not going to use a client certificate. Read section 18.9.3 Creating Certificates of the PostgreSQL documentation if you need help creating certificates.
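Here is a minimal sketch using OpenSSL, assuming a self-signed certificate and the local 'secrets' directory layout used in this post (adjust the CN to the hostname your clients will use; the service DNS name below is an assumption):

# Generate a private key and a self-signed certificate
$> openssl req -new -x509 -days 365 -nodes -text \
     -out secrets/server.crt -keyout secrets/server.key \
     -subj "/CN=sysg-postgres-svc.shared-services.svc.cluster.local"
# Restrict permissions on the key
$> chmod 0400 secrets/server.key
# For a self-signed setup, the server certificate can double as root.crt
$> cp secrets/server.crt secrets/root.crt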

# Create MD5 hashed password to be used with postgresql
$> POSTGRES_USER=postgres
$> POSTGRES_PASSWORD=myp3qlpwD
$> echo "md5$(echo -n $POSTGRES_PASSWORD$POSTGRES_USER | md5sum | cut -d ' ' -f1)" > secrets/postgresql-pwd.txt

# Here are all files under secrets directory
$> ls -la secrets/
-rw-rw-r--. 1 osboxes osboxes  13 Sep 22 23:42 postgresql-pwd.txt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:51 root.crt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:49 server.crt
-r--------. 1 osboxes osboxes 887 Sep 22 16:43 server.key

# Create secret postgresql-secrets
$> kubectl create secret generic postgresql-secrets --from-file=secrets/ -n shared-services
secret "postgresql-secrets" created

# Verify

$> kubectl describe secrets/postgresql-secrets -n shared-services
Name:         postgresql-secrets
Namespace:    shared-services
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
server.key:          887 bytes
postgresql-pwd.txt:  13 bytes
root.crt:            891 bytes
server.crt:          891 bytes

Note: As seen above, I created the MD5 hash of "<password><username>" and prefixed the result with the string "md5". The reason for the "md5" prefix is that when Postgres sees it, it recognizes that the string is already hashed, does not hash it again, and stores it as is.

Create PersistentVolume (PV) and PersistentVolumeClaim (PVC) 

Let's go ahead and create the PV and PVC. We will use 'Retain' as the persistentVolumeReclaimPolicy, so that data is retained even when the Postgresql pod is destroyed and recreated.
Sample PV yaml file:
## shared-nfs-pv-postgresql.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv-postgresql
  namespace: shared-services
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /var/postgresql/
    server: 192.168.56.101
  persistentVolumeReclaimPolicy: Retain

Sample PVC yaml file:
## shared-nfs-pvc-postgresql.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc-postgresql
  namespace: shared-services
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

PV and PVC creation and verification steps:
# Create persistentvolume
$> kubectl create -f yaml/shared-nfs-pv-postgresql.yml
persistentvolume "shared-nfs-pv-postgresql" created

# Create persistentvolumeclaim
$> kubectl create -f yaml/shared-nfs-pvc-postgresql.yml
persistentvolumeclaim "shared-nfs-pvc-postgresql" created

# Verify and make sure status of persistentvolumeclaim/shared-nfs-pvc-postgresql is Bound
$> kubectl get pv,pvc -n shared-services
NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                       STORAGECLASS   REASON    AGE
persistentvolume/shared-nfs-pv-postgresql   5Gi        RWX            Retain           Bound     shared-services/shared-nfs-pvc-postgresql                            32s

NAME                                              STATUS    VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/shared-nfs-pvc-postgresql   Bound     shared-nfs-pv-postgresql   5Gi        RWX                           20s

Create deployment manifest file

Here is the one I have put together. You can customize it further per your needs.

---
# Service definition
apiVersion: v1
kind: Service
metadata:
  name: sysg-postgres-svc
  namespace: shared-services
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: tcp-5432
  selector:
      app: sysg-postgres-app
---
# Deployment definition
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sysg-postgres-dpl
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: sysg-postgres-app
  replicas: 1
  template:
    metadata:
      labels:
        app: sysg-postgres-app
    spec:
      serviceAccountName: shared-svc-accnt
      securityContext:
        runAsUser: 70
        supplementalGroups: [999,1000]
        fsGroup: 70
      volumes:
        - name: shared-nfs-pv-postgresql
          persistentVolumeClaim:
            claimName: shared-nfs-pvc-postgresql
        - name: secret-vol
          secret:
            secretName: postgresql-secrets
            defaultMode: 0640
        - name: config-vol
          configMap:
            name: postgresql-config
      containers:
      - name: sysg-postgres-cnt
        image: postgres:10.5-alpine
        imagePullPolicy: IfNotPresent
        args:
          - -c
          - hba_file=/etc/postgresql-config-vol/pg_hba.conf
          - -c
          - config_file=/etc/postgresql-config-vol/postgresql.conf
        env:
          - name: POSTGRES_USER
            value: postgres
          - name: PGUSER
            value: postgres
          - name: POSTGRES_DB
            value: mmdb
          - name: PGDATA
            value: /var/lib/postgresql/data/pgdata
          - name: POSTGRES_PASSWORD_FILE
            value: /etc/postgresql-secrets-vol/postgresql-pwd.txt
        ports:
         - containerPort: 5432
        volumeMounts:
          - name: config-vol
            mountPath: /etc/postgresql-config-vol
          - mountPath: /var/lib/postgresql/data/pgdata
            name: shared-nfs-pv-postgresql
          - name: secret-vol
            mountPath: /etc/postgresql-secrets-vol
      nodeSelector:
        kubernetes.io/hostname: centosddcwrk01

Deploy the Postgresql

The steps below show the creation of the service and deployment, as well as how to verify that Postgres is running in SSL-enabled mode.
# Deploy
$> kubectl apply -f yaml/postgres-deploy.yml
service "sysg-postgres-svc" created
deployment.apps "sysg-postgres-dpl" created

# Verify
$> kubectl get pods,svc -n shared-services
NAME                                     READY     STATUS    RESTARTS   AGE
pod/sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          1h

NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/sysg-postgres-svc   ClusterIP   10.96.90.30   <none>        5432/TCP   1h

# sh into the postgresql pod:
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services
/ $

# Launch psql
/ $ psql -U postgres
psql (10.5)
Type "help" for help.

# Verify SSL is enabled
postgres=# SHOW ssl;
 ssl
-----
 on
(1 row)

# Check the stored password. It should match the hashed value of "<password><user>" with "md5" prepended.
postgres=#  select usename,passwd from pg_catalog.pg_shadow;
 usename  |               passwd
----------+-------------------------------------
 postgres | md5db59316e90b1afb5334a331081618af6

# Connect remotely. You need to provide password.
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm -n shared-services -- psql "sslmode=require host=10.96.90.30 port=5432 dbname=mmdb" --username=postgres
Password for user postgres:
psql (10.5)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

A few key points

1) Using a customized configuration file:
As you have seen above, I created the configMap object postgresql-config and used it via the option '-c config_file=/etc/postgresql-config-vol/postgresql.conf'. The configMap object postgresql-config is mounted at path '/etc/postgresql-config-vol' in the volumeMounts definition.

containers:
- name: sysg-postgres-cnt
  imagePullPolicy: IfNotPresent
  args:
    - -c
    - hba_file=/etc/postgresql-config-vol/pg_hba.conf
    - -c
    - config_file=/etc/postgresql-config-vol/postgresql.conf

volumeMounts:
  - name: config-vol
    mountPath: /etc/postgresql-config-vol


2) Creating an environment variable that points to a secret file:

env:
  - name: POSTGRES_PASSWORD_FILE
    value: /etc/postgresql-secrets-vol/postgresql-pwd.txt

And the secret is mapped to path /etc/postgresql-secrets-vol
volumeMounts:
  - name: secret-vol
    mountPath: /etc/postgresql-secrets-vol

3) PGDATA environment variable: 
The default value is '/var/lib/postgresql/data'. However, the official Postgres image documentation recommends: "... if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata ) be created to contain the data." Refer to https://hub.docker.com/_/postgres/

Here we assign /var/lib/postgresql/data/pgdata:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata


Troubleshooting

1) Make sure server.key, server.crt, and root.crt all have appropriate permissions, that is, 0600 or less (if owned by the postgres process owner) or 0640 or less (if owned by root). If proper permissions are not applied, Postgresql will not start, and in the log you will see the following FATAL message.

2018-09-22 18:26:22.391 UTC [1] FATAL:  private key file "/etc/postgresql-secrets-vol/server.key" has group or world access
2018-09-22 18:26:22.391 UTC [1] DETAIL: File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
2018-09-22 18:26:22.391 UTC [1] LOG:  database system is shut down

In order to apply proper permissions at the file level, you can use 'defaultMode'. I'm using defaultMode: 0640 as shown below (fragment from postgres-deploy.yml):

- name: secret-vol
  secret:
    secretName: postgresql-secrets
    defaultMode: 0640

2) Make sure the ownership is right - whether the files/directories belong to the secret volume, the config volume, or the persistent storage volume. Below is an error related to the PV path:

initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted

In order to resolve the above issue, you need to use Kubernetes' securityContext options like 'runAsUser', 'fsGroup', 'supplementalGroups' and/or capabilities. A securityContext can be defined at both the pod level and the container level. In my case, I've defined it at the pod level as shown below (fragment from postgres-deploy.yml):

securityContext:
  runAsUser: < specify your run as user >
  fsGroup: < specify group >
  supplementalGroups: [< comma delimited list of supplementalGroups >]

Read Configure a Security Context for a Pod or Container chapter from official Kubernetes site. I've also given some troubleshooting tips while using NFS type Persistent Volume and Claim in my previous blog How to Create, Troubleshoot and Use NFS type Persistent Storage Volume in Kubernetes

Below, I'm showing the file permissions per my configuration. Files are owned by root:postgres.

# Get running Kubernetes pod
$> kubectl get pods -n shared-services
NAME                                 READY     STATUS    RESTARTS   AGE
sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          12s

# sh to running Kubernetes pod
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services

# Explore files
/ $ cd /etc/postgresql-secrets-vol/..data/
/etc/postgresql-secrets-vol/..2018_09_22_19_24_52.289379695 $ ls -la

drwxr-sr-x    2 root     postgres       120 Sep 23 01:19 .
drwxrwsrwt    3 root     postgres       160 Sep 23 01:19 ..
-rw-r-----    1 root     postgres        13 Sep 23 01:19 postgresql-pwd.txt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 root.crt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 server.crt
-rw-r-----    1 root     postgres       887 Sep 23 01:19 server.key

3) psql: FATAL:  no pg_hba.conf entry for host "10.0.2.15", user "postgres" ... This FATAL message usually appears when you are trying to establish a connection to Postgres, but the way you are trying to authenticate is not defined in pg_hba.conf: either the source IP (from where the connection originates) is out of the allowed range, or the security option is not supported. Check your pg_hba.conf file and make sure the right entry has been added.
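For example, a sketch of a pg_hba.conf entry that would admit the client shown in the message above (the /24 range is an assumption; scope it to your actual client network):

# Allow SSL connections from the 10.0.2.x client network with MD5 passwords
hostssl  all             all             10.0.2.0/24          md5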

[Optional]  Creating custom Postgres Docker image with customized postgresql.conf

If you prefer to create a custom Docker image with a custom postgresql.conf rather than creating a configMap and using the '-c config_file' option, you can do so. Here is how:

Create Dockerfile:


FROM postgres:10.5-alpine
COPY config/postgresql.conf /tmp/postgresql.conf
COPY scripts/_updateConfig.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/_updateConfig.sh && chmod 644 /tmp/postgresql.conf

My custom postgresql.conf is located under the config directory locally. It will be copied to /tmp when the Docker image is built. _updateConfig.sh is located under the scripts directory locally and copied to /docker-entrypoint-initdb.d/ at build time.

Create the script file _updateConfig.sh as shown below. It assumes that the default PGDATA value '/var/lib/postgresql/data' is being used.

#!/usr/bin/env bash
cat /tmp/postgresql.conf > /var/lib/postgresql/data/postgresql.conf

Important: we cannot directly copy the custom postgresql.conf into the $PGDATA directory at build time because that directory does not exist yet.

Build the image:


Directory and files shown below are local:
$> ls -la postgresql/

drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 13:01 config
-rwxr-xr--.  1 osboxes osboxes  227 Sep 22 19:14 Dockerfile
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 18:39 scripts
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 23:42 secrets
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 12:11 yaml
$>cd postgresql

# docker build -t <image tag> .
# In my case I am using osboxes/postgres:10.5-sysg as image name and tag.
$> docker build -t osboxes/postgres:10.5-sysg .

If you use a custom Docker image built this way, you don't need to define a configMap to use a custom postgresql.conf.
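In that case, the deployment fragment might look like the sketch below: reference the custom image and drop the '-c config_file=...' args and the config-vol volume/volumeMount (the image name/tag is the one built above; everything else stays as in postgres-deploy.yml):

containers:
- name: sysg-postgres-cnt
  image: osboxes/postgres:10.5-sysg
  imagePullPolicy: IfNotPresent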



Using Docker Secrets with IBM WebSphere Liberty Profile Application Server


In this blog post, I'm discussing how to utilize Docker Secrets (a Docker Swarm service feature) to manage sensitive data (like password encryption keys, SSH private keys, SSL certificates, etc.) for a Dockerized application powered by the IBM WebSphere Liberty Profile (WLP) application server. Docker Secrets helps to centrally manage this sensitive information at rest and in transit (encrypted and securely transmitted only to those containers that need access to it and have explicit access to it). It is out of scope for this post to go deep into Docker Secrets; however, if you need to familiarize yourself with them, refer to https://docs.docker.com/engine/swarm/secrets/.
Note: if you'd like to know how to program encryption/decryption within your Java application using the passwordUtilities-1.0 feature of WLP, see my blog How to use WLP passwordUtilities feature for encryption/decryption
I'm going to write this post in a tutorial style, so that anyone interested can follow the steps.

Pre-requisite: In order to follow the steps outlined here, you need the following:

  1. Good working knowledge of Docker
  2. A configured Docker Swarm environment (using Docker 1.13 or a higher version) with at least one manager and one worker node, or Docker Datacenter with Universal Control Plane (UCP) having a manager node and worker node(s). It's good to have a separate Docker client node so that you can connect to the manager remotely and execute commands. 
  3. Good working knowledge of IBM WebSphere Liberty Profile (https://developer.ibm.com/wasdev/blog/2013/03/29/introducing_the_liberty_profile/).

Here is a brief description of how we are going to utilize Docker Secrets with WLP.

  1. The password encryption key that is used to encrypt passwords for the WLP KeyStore, TrustStore and any other password(s) used by WLP applications will be externalized and stored as Docker Secrets.
  2. Private keys, such as the one stored in the KeyStore (used to enable secure communication in WLP), will be externalized and stored as Docker Secrets.

Here are some obvious benefits:

  1. Centrally manage all sensitive data. Since Docker enforces access control, only people with enough/right privilege(s) will have access to sensitive data.
  2. Only those container(s) and service(s) that have explicit access on an as-needed basis will have access to private/sensitive data.
  3. Private information remains private at rest and in transit.
  4. A new Docker image created by 'docker commit' will not contain any sensitive data, and a dump/package created by the WLP server dump or package command will not contain the encryption key, as it's externalized. See more insights about WLP password encryption here: https://www.ibm.com/support/knowledgecenter/en/SS7K4U_8.5.5/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/cwlp_pwd_encrypt.html and managing Docker Secrets here: https://docs.docker.com/engine/swarm/secrets/

Enough talk, now, let's start the real work. Below are the major steps that we'll carry out:

  1. Create Docker secrets for the following items used by WLP:
    • KeyStore
    • Truststore
    • Password Encryption key
  2. Build Docker image based on websphere-liberty:webProfile7
  3. Create network
  4. Put together docker-compose.yml for deployment.
  5. Deploy application as Docker service.


Create Docker Secrets

Here, we're going to use the Docker command line (CLI), and we'll execute Docker commands from the Docker client node remotely. You need to have the following three environment variables correctly set up in order to execute commands remotely (a minimal sketch follows the list below). Refer to https://docs.docker.com/engine/reference/commandline/cli/#description for details.
  • DOCKER_TLS_VERIFY
  • DOCKER_CERT_PATH
  • DOCKER_HOST
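For example, a minimal sketch of setting these variables on the client node (the host, port, and certificate path are placeholders for your environment):

# Point the Docker CLI at the remote Swarm/UCP manager
$> export DOCKER_HOST=tcp://<manager-host>:2376
$> export DOCKER_TLS_VERIFY=1
$> export DOCKER_CERT_PATH=<path-to-your-client-cert-bundle>
# Quick check - should report the remote engine/Swarm details
$> docker info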
If you are using Docker Datacenter, you can use the GUI-based UCP Admin Console to create the same. Note: the label com.docker.ucp.access.label="<value>" is not mandatory unless you have access constraints defined. For details, refer to Authentication and authorization.
1) Create a Docker secret named keystore.jks, which is basically the key database that stores the private key to be used by WLP.

#Usage: docker secret create [OPTIONS] SECRET file|- 
#Create a secret from a file or STDIN as content 

$> docker secret create keystore.jks /mnt/nfs/dockershared/wlpapp/keystore.jks --label com.docker.ucp.access.label="dev"
 idc9em1u3fki8k0z77ol91sh4 

2) The following command creates a secret called truststore.jks using the physical Java keystore file which contains the trust certificates.

$> docker secret create truststore.jks /mnt/nfs/dockershared/wlpapp/truststore.jks --label com.docker.ucp.access.label="dev"
w8qs1o7pwrvl96nuamv97sb9t

3) Finally, create the Docker secret called app_enc_key.xml, which refers to an XML fragment containing the definition of the password encryption key.

$> docker secret create app_enc_key.xml /mnt/nfs/dockershared/wlpapp/app_enc_key.xml --label com.docker.ucp.access.label="dev"
kj3hcw4ss71hnudfgr6g32mxm

Note: Docker secrets are available under '/run/secrets/' at runtime to any container which has explicit access to that secret.
Here is what /mnt/nfs/dockershared/wlpapp/app_enc_key.xml looks like:

<server> 
   <variable name="wlp.password.encryption.key" value="#replaceMe#">
   </variable>
</server>

Note: Make sure to replace the string '#replaceMe#' with your own password encryption key.

Let's check and make sure all our secrets are properly created and listed:

$> docker secret ls
ID                        NAME             CREATED        UPDATED
idc9em1u3fki8k0z77ol91sh4 keystore.jks     3 hours ago    3 hours ago
kj3hcw4ss71hnudfgr6g32mxm app_enc_key.xml  21 seconds ago 21 seconds ago
w8qs1o7pwrvl96nuamv97sb9t truststore.jks   3 hours ago    3 hours ago


Building Docker Image:

Now, let's first encrypt our keystore and truststore passwords using the pre-defined encryption key and put together the server.xml for the WLP server. We are going to use the securityUtility tool that ships with IBM WLP to encrypt our passwords.
Note: make sure your password encryption key matches to the one that is defined by 'wlp.password.encryption.key' property in app_enc_key.xml.
Here I'm encoding my example password '#myStrongPassw0rd#' using the encryption key '#replaceMe#' with the encoding option 'aes'.
Please note that the encoding option 'xor' ignores the encryption key and uses the default.

$> cd /opt/ibm/wlp/bin
$> ./securityUtility encode #myStrongPassw0rd# --encoding=aes --key=#replaceMe#
{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==

Now we have our Docker secrets created and we have encrypted our password. It's time to put together the server.xml for the WLP application server and build the Docker image. Here is what my server.xml looks like.

<server description="TestWLPApp">
   <featureManager>
      <feature>javaee-7.0</feature>
      <feature>localConnector-1.0</feature>
      <feature>ejbLite-3.2</feature>
      <feature>jaxrs-2.0</feature>
      <feature>jpa-2.1</feature>
      <feature>jsf-2.2</feature>
      <feature>json-1.0</feature>
      <feature>cdi-1.2</feature>
      <feature>ssl-1.0</feature>
   </featureManager>
   <include location="/run/secrets/app_enc_key.xml"/>
   <httpEndpoint host="*" httpPort="9080" httpsPort="9443" id="defaultHttpEndpoint"/>
   <ssl clientAuthenticationSupported="true" id="defaultSSLConfig" keyStoreRef="defaultKeyStore" trustStoreRef="defaultTrustStore"/>
   <keyStore id="defaultKeyStore" location="/run/secrets/keystore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <keyStore id="defaultTrustStore" location="/run/secrets/truststore.jks" password="{aes}ANGkm5cIca4hoPMh4EUeA4YYqVPAbo4HIqlB9zOCXp1n"/>
   <applicationMonitor updateTrigger="mbean"/>
   <dataSource id="wlpappDS" jndiName="wlpappDS">
      <jdbcDriver libraryRef="OracleDBLib"/>
      <properties.oracle password="{aes}AAj/El4TFm/8+9UFzWu5kCtURUiDIV/XKbGY/lT2SVKFij/+H38b11uhjh+Peo/rBA==" url="jdbc:oracle:thin:@192.168.xx.xxx:1752:WLPAPPDB" user="wlpappuser"/>
   </dataSource>
   <library id="OracleDBLib">
      <fileset dir="/apps/wlpapp/shared_lib" includes="ojdbc6-11.2.0.1.0.jar"/>
   </library>
   <webApplication contextRoot="wlpappctx" id="wlpapp" location="/apps/wlpapp/war/wlptest.war" name="wlpapp"/>
</server>

As you can see, the locations of defaultKeyStore, defaultTrustStore, and app_enc_key.xml point to the directory '/run/secrets'. That is because, as mentioned before, all private data created by Docker Secrets is made available to the assigned services under '/run/secrets' of the corresponding container.

Now let's put together Dockerfile.

FROM websphere-liberty:webProfile7
COPY /mnt/nfs/dockershared/wlpapp/server.xml /opt/ibm/wlp/usr/servers/defaultServer/
RUN installUtility install --acceptLicense defaultServer
COPY /mnt/nfs/dockershared/wlpapp/wlptest.war /apps/wlpapp/war/
COPY /mnt/nfs/dockershared/wlpapp/ojdbc6-11.2.0.1.0.jar /apps/wlpapp/shared_lib/
CMD ["/opt/ibm/java/jre/bin/java","-javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar","-Djava.awt.headless=true","-jar","/opt/ibm/wlp/bin/tools/ws-server.jar","defaultServer"]

Note: above, I'm copying my server.xml into /opt/ibm/wlp/usr/servers/defaultServer/ before running installUtility, as I'm adding a few features required by my application, including ssl-1.0.

Finally, we're going to build the Docker image.

$> docker build -t 192.168.56.102/osboxes/wlptest:1.0 .
Sending build context to Docker daemon 56.9 MB
Step 1/7 : FROM websphere-liberty:webProfile7
---> c035090355f5
...
Step 4/7 : RUN installUtility install --acceptLicense defaultServer
---> Running in 2bce0d02e253
Checking for missing features required by the server ...
...
Successfully built 07fef794348e

Note: 192.168.56.102 is my local Docker Trusted Registry (DTR).

Once the image is successfully built, make sure the image is available on all nodes of the Docker Swarm. I'm not going to show in detail how you distribute the image; the two common options are outlined below (see the sketch that follows):
> If you are using DTR, you can first push the image to the registry (using 'docker push ...'), then connect to each Docker Swarm host and execute 'docker pull ...'.
> The other option is to use 'docker save ...' to save the image as a tar file and then load it into the Swarm nodes using 'docker load ...'.
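
Roughly, the two options look like this (sketch; the registry address and tag are the ones used in this post):

# Option 1: push to DTR, then pull on each Swarm node
$> docker push 192.168.56.102/osboxes/wlptest:1.0
$> docker pull 192.168.56.102/osboxes/wlptest:1.0

# Option 2: save the image to a tar file and load it on each node
$> docker save -o wlptest_1.0.tar 192.168.56.102/osboxes/wlptest:1.0
$> docker load -i wlptest_1.0.tar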
Here, I'm deploying this into a Docker Datacenter setup which has two UCP worker nodes, one UCP manager node, and a DTR node. I'm also going to use the HTTP routing mesh (HRM) and user-defined overlay networks in swarm mode.
Note: a user-defined Docker network and HRM are NOT necessary to utilize Docker secrets.


Create Overlay network:

$> docker network create -d overlay --label com.docker.ucp.access.label="dev" --label com.docker.ucp.mesh.http=true my_hrm_network
naf8hvyx22n6lsvb4bq43z968

Note: Label 'com.docker.ucp.mesh.http=true' is required while creating network in order to utilize HRM.

Put together docker-compose.yml

Here is my compose file. Yours may look different.

version: "3.1"
services:
   wlpappsrv: 

      image: 192.168.56.102/osboxes/wlptest:1.0
      volumes:
         - /mnt/nfs/dockershared/wlpapp/server.xml:/opt/ibm/wlp/usr/servers/defaultServer/server.xml
      networks:
         - my_hrm_network
      secrets:
         - keystore.jks
         - truststore.jks
         - app_enc_key.xml
      ports:
         - 9080
         - 9443
      deploy:
         mode: replicated
         replicas: 4
         placement:
            constraints: [node.role == worker]
         resources:
            limits:
               memory: 2048M
         restart_policy:
            condition: on-failure
            max_attempts: 3
            window: 6000s
         labels:
            - "com.docker.ucp.mesh.http.9080=external_route=http://mydockertest.com:8080,internal_port=9080"
            - "com.docker.ucp.mesh.http.9443=external_route=sni://mydockertest.com:8443,internal_port=9443"
            - "com.docker.ucp.access.label=dev"
networks:
   my_hrm_network:
      external:
         name: my_hrm_network
secrets:
   keystore.jks:
      external: true
   truststore.jks:
      external: true
   app_enc_key.xml:
      external: true

A few notes about the docker-compose.yml:
  1. Volume definition that maps server.xml in the container with the one in the NFS file system is optional. This mapping gives additional flexibility to update the server.xml. You can achieve similar or even better flexibility/portability by using Docker Swarm Config service. See my blog post - How to use Docker Swarm Configs service with WebSphere Liberty Profile for details.
  2. The secrets definition under the service 'wlpappsrv' refers to the secrets definition at the root level, which in turn refers to the externally defined secrets.
  3. "com.docker.ucp.mesh.http." labels are totally optional and only required if you are using HRM. 
  4. "com.docker.ucp.access.label" is also optional and required only if you have defined access constraints.
  5. Since, I'm using Swarm and HRM, I don't need to explicitly map the internal container ports to host port. If you need to map, you can use something like below for your port definition:
    ports:
       - 9080:9080
       - 9443:9443
  6. You may encounter a situation where your container application is not able to access the secrets created under /run/secrets. It may be related to bug #31006. In order to resolve the issue, use 'mode: 0444' while defining your secrets, something like this:
    secrets:
       - source: keystore.jks
         mode: 0444
       ...   

Deploy the service 

Here I'm using "docker stack deploy..." to deploy the service:
$> docker stack deploy --compose-file docker-compose.yml dev_WLPAPP

Note: In certain cases, you may get a "secrets Additional property secrets is not allowed" error message. To resolve it, make sure your compose file version is 3.1. In my case, where it's working fine, I have Docker version 17.03.2-ee4, API version 1.27, and Docker Compose version 1.14.0.

Once the service is deployed, you can list it using the 'docker service ls' command:

$> docker service ls
ID           NAME                 MODE       REPLICAS IMAGE
28xhhnbcnhfg dev_WLPAPP_wlpappsrv replicated 4/4      192.168.56.102/osboxes/wlptest:1.0

And list the replicated containers:
$> docker ps
CONTAINER ID IMAGE                              COMMAND CREATED STATUS PORTS NAMES
2052806bbae3 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.3.m7apci6i1ks218ddnv4qsdbwv
541cf0f39b6e 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk01/dev_WLPAPP_wlpappsrv.4.wckec2jcjbmrhstftajh2zotr
ccdd7275fd7f 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.2.oke0fz2sifs5ej0vy63250wo9
7d5668a4d851 192.168.56.102/osboxes/wlptest:1.0 "/opt/ibm/java/jre..." 3 minutes ago Up 3 minutes 9080/tcp, 9443/tcp centosddcwrk02/dev_WLPAPP_wlpappsrv.1.r9gi0qllnh8r9u8popqg5mg5b
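
If you want to confirm that the secrets really are exposed to the containers, a quick check is to list /run/secrets inside one of them (sketch; run it on the node hosting the container, using a container ID from the 'docker ps' output above):

$> docker exec -it 2052806bbae3 ls -l /run/secrets
# Expect to see app_enc_key.xml, keystore.jks and truststore.jks listed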

And here is what the WLP messages.log shows (taken from one of the containers log file):
********************************************************************************
product = WebSphere Application Server 17.0.0.2 (wlp-1.0.17.cl170220170523-1818)
wlp.install.dir = /opt/ibm/wlp/
server.output.dir = /opt/ibm/wlp/output/defaultServer/
java.home = /opt/ibm/java/jre
java.version = 1.8.0
java.runtime = Java(TM) SE Runtime Environment (pxa6480sr4fp7-20170627_02 (SR4 FP7))
os = Linux (3.10.0-514.el7.x86_64; amd64) (en_US)
process = 1@e086b8c54a8d
********************************************************************************
[7/24/17 19:44:29:275 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0001I: The server defaultServer has been launched.
...
[7/24/17 19:44:30:533 UTC] 00000017 com.ibm.ws.config.xml.internal.XMLConfigParser A CWWKG0028A: Processing included configuration resource: /run/secrets/app_enc_key.xml
[7/24/17 19:44:31:680 UTC] 00000001 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 2.763 seconds
[7/24/17 19:44:31:990 UTC] 0000001f com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started.
[7/24/17 19:44:45:877 UTC] 00000017 com.ibm.ws.security.ready.internal.SecurityReadyServiceImpl I CWWKS0007I: The security service is starting...
[7/24/17 19:44:47:262 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager I CWWKS4103I: Creating the LTPA keys. This may take a few seconds.
[7/24/17 19:44:47:295 UTC] 00000017 ibm.ws.security.authentication.internal.jaas.JAASServiceImpl I CWWKS1123I: The collective authentication plugin with class name NullCollectiveAuthenticationPlugin has been activated.
[7/24/17 19:44:48:339 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyInfoManager A CWWKS4104A: LTPA keys created in 1.065 seconds. LTPA key file: /opt/ibm/wlp/output/defaultServer/resources/security/ltpa.keys
[7/24/17 19:44:48:365 UTC] 00000028 com.ibm.ws.security.token.ltpa.internal.LTPAKeyCreateTask I CWWKS4105I: LTPA configuration is ready after 1.107 seconds.
[7/24/17 19:44:57:514 UTC] 00000017 com.ibm.ws.app.manager.internal.monitor.DropinMonitor A CWWKZ0058I: Monitoring dropins for applications.
[7/24/17 19:44:57:651 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host * (IPv6) port 9080.
[7/24/17 19:44:57:675 UTC] 0000003f com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint-ssl has been started and is now listening for requests on host * (IPv6) port 9443.
[7/24/17 19:44:57:947 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302 has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7276.
[7/24/17 19:44:57:951 UTC] 00000017 com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel wasJmsEndpoint302-ssl has been started and is now listening for requests on host localhost (IPv4: 127.0.0.1) port 7286.
...

As you can see from the log messages above, the server is able to include the configuration from /run/secrets/app_enc_key.xml, and it also shows that defaultHttpEndpoint-ssl has been started and is listening on port 9443; meaning that it's able to successfully load and open the /run/secrets/keystore.jks and /run/secrets/truststore.jks files using the passwords encrypted with the encryption key defined in /run/secrets/app_enc_key.xml.

Now it's time to access the application. In my case, since I'm using HRM, I access it as: https://mydockertest.com:8443/wlpappctx
If you are not using HRM, you may access it using:
https://<docker-container-host>:9443/<application-context>
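
A quick way to test connectivity from the command line (sketch; '-k' skips certificate verification, which you may need if the certificate is self-signed):

$> curl -vk https://mydockertest.com:8443/wlpappctx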


Example using Load-Balancer


If you have a load balancer in front and want to set up pass-through SSL, you can use SNI-based routing (aka SSL routing). Below is a simple example using HAProxy. You can also refer to the HAProxy documentation for details.

Here is haproxy.cfg for our example PoC:
# /etc/haproxy/haproxy.cfg, version 1.7
global
   maxconn 4096

defaults
   timeout connect 5000ms
   timeout client 50000ms
   timeout server 50000ms

frontend frontend_ssl_tcp
   bind *:8443
   mode tcp
   tcp-request inspect-delay 5s
   tcp-request content accept if { req_ssl_hello_type 1 }
   default_backend bckend_ssl_default

backend bckend_ssl_default
   mode tcp
   balance roundrobin
   server worker1 192.168.56.103:8443 check
   server worker2 192.168.56.104:8443 check           

Here is a Dockerfile for custom image:
FROM haproxy:1.7
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Build the image:
Note: execute the 'docker build ...' command from the same directory where the Dockerfile is located.

$> docker build -t my_haproxy:1.7 .
Once you have built the image, start the ha-proxy container like below:
$> docker run -d --name ddchaproxy -p 8443:8443 my_haproxy:1.7

Note: In this case ha-proxy is listening on port 8443.

Access the application:

https://mydockertest.com:8443/wlpappctx

Note: Make sure mydockertest.com resolves to the IP address of ha-proxy.

