GIT: Maintain Clean Workspace, Stay Synced with Integration Branch & Publish Perfect Commits

Git is very powerful and offers many options for accomplishing any given task. On one hand, Git is a distributed version control system, since every Git working directory contains a full-fledged repository; on the other hand, it can be hosted centrally to facilitate sharing and collaboration. So it is easy to harness the power of Git and achieve wonders, but it is just as easy to make a mess and spend a good chunk of your day resolving conflicts. I've coached developers on how to stay clean and synced while helping them improve their productivity and fine-tune their commits. Ultimately, I have come up with this one-page visual diagram that outlines the practices I have been preaching.


Diagram 1.0

Since diagram 1.0 is self-explanatory, I am not going to elaborate on it in detail; I will just highlight a few important concepts below.

Maintain clean work space (working directory)

Specifically, when you are done for the day and heading home (or to a bar, if you feel so inclined) or starting fresh in the morning (with a fresh cup of java), it is important to ensure your working directory is clean. Block #4 (in diagram 1.0) and the associated green boundary explain how to deal with untracked, unstaged, or uncommitted files; a command-level sketch follows the list.
  • Discard: discard them (if you really don't need them).
    • The orange boundary contains steps to deal with those changes on a case-by-case basis.
    • The purple boundary discards everything that is not committed.

  • Commit: stage (if not already staged) and commit.
  • Stash: store them safely for later use - which is called stashing in Git terms.
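
Here is a minimal command-level sketch of those three options (file names and stash/commit messages are just examples):

# Discard: throw away everything that is not committed (unstaged changes and untracked files)
$> git reset --hard HEAD
$> git clean -fd

# Commit: stage and commit what represents a completed logical unit of work
$> git add <file>
$> git commit -m "Describe the logical unit of work"

# Stash: store the work-in-progress safely for later use (-u includes untracked files)
$> git stash push -u -m "WIP: end of day"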


Remain synced with the remote branch

Making it a regular practice to pull the latest from the remote branch and either merge or rebase (depending upon the merge strategy in place) not only helps to resolve merge conflicts while they are still small and manageable, but also boosts team collaboration. Block #5 with the pink boundary in diagram 1.0 explains exactly this. If you are working on a 'feature' branch (following the GitFlow strategy), you need to pull first not only from your remote 'feature' branch but also from the 'develop' branch (assuming 'develop' is the integration branch here; you may have 'dev' or 'main' as the integration branch instead) and merge locally on your feature branch before you push your code to the remote feature branch and later create a pull request to the integration branch. A sketch of this workflow is shown below.
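
A hedged sketch of that sync routine, assuming 'feature/xyz' is your feature branch and 'develop' is the integration branch:

# Bring in teammates' commits on the feature branch
$> git checkout feature/xyz
$> git pull origin feature/xyz

# Merge the latest integration branch locally and resolve any conflicts while they are small
$> git pull origin develop

# Only then publish your work
$> git push origin feature/xyz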

Commit early and often

Blocks #8, #9, and #10 show this. Whether you are developing a feature or working on a bug/defect fix, it is important to commit when you complete a logical unit of work. Please note, it is never too early and never too frequent to commit your code, as long as you review your commits and fine-tune them before pushing/publishing. Committing not only helps to maintain a clean working directory but also helps to protect your work from accidental loss. A small example is shown below.
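
A minimal example of committing one logical unit (the commit message is just a placeholder):

# 'git add -p' lets you stage only the hunks that belong to this logical unit
$> git add -p
$> git commit -m "Add input validation to signup form"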

Review and fine-tune your commits before publishing

If you follow the 'commit early and often' principle, it is important that you review and, if necessary, fine-tune your commits before pushing/sharing/publishing. Make sure each commit is small and represents a logical unit of work (related to a particular feature, bug fix, or defect fix). Fine-tuned commits are extremely useful when troubleshooting with git bisection (git bisect) to find the commit that introduced a particular bug, or when reverting a commit (git revert) with confidence. You can perfect your commits by squashing related commits into one, making them kind of transactional, by rearranging commits in the right order, by amending commit messages to make them contextual with the right reference, or by splitting a commit if it contains unrelated changes. Block 10.1 in diagram 1.0 reminds you to perfect (if necessary) your commits before sharing/pushing/publishing; see the sketch below.
Important: NEVER re-write any shared/published history.
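
For local, unpublished commits, an interactive rebase covers most of this fine-tuning; a minimal sketch (the number of commits is just an example):

# Squash, reorder, reword or split the last three local commits
$> git rebase -i HEAD~3
# In the editor that opens: keep 'pick' for commits to leave alone, change a line to
# 'squash'/'fixup' to combine it with the previous commit, 'reword' to amend its message,
# or 'edit' to stop there and split it into smaller commits.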

Pull before push


It is one of the most important rules to follow when you are working in a team environment. As described in Block #5.0 of diagram 1.0, you need to pull the latest commits from the remote branch, merge them (resolving any conflicts) locally, and only then push your code. Whether you merge or rebase depends upon the strategy you have in place. Most of the time, merging is safer than rebasing.

Regularly publish your code

Most of us get paid only after publishing, so it is important! It is generally how teams share and collaborate as well. Remote/central repositories are usually set up with high availability (HA) and disaster recovery (DR) in mind, so it is also important to regularly publish your commits to protect them from destructive events. Refer to block #12 of diagram 1.0.

Note: if you are interested in contributing to enhance the diagram further, you can do so. The source of this diagram (GitBestPracticesOnePagerDiagram.xml, draw.io format) is on GitHub: https://github.com/pppoudel/git-best-practices-one-page-diagram.git




How to Configure PostgreSQL with SSL/TLS support on Kubernetes

SSL is disabled in the default PostgreSQL configuration, and I had to struggle a little bit to make PostgreSQL on Kubernetes work with SSL/TLS support. After some research and trials, I was able to resolve the issues, and here I'm sharing what I did to make it work for me.


    
    

High Level Steps:
  1. Customize postgresql.conf (to add/edit the SSL/TLS configuration) and create a configMap object. This way, we don't need to rebuild the Postgres image to apply a custom postgresql.conf, because a ConfigMap allows us to decouple configuration artifacts from image content.
  2. Create secret type objects for server.key, server.crt, root.crt, ca.crt, and password file.
  3. Define and use NFS type PersistentVolume (PV) and PersistentVolumeClaim (PVC)
  4. Use securityContext to resolve permission issues.
  5. Use '-c config_file=<config-volume-location>/postgresql.conf' to override the default postgresql.conf
Note: all files used in this post can be cloned/downloaded from GitHub https://github.com/pppoudel/postgresql-with-ssl-on-kubernetes.git

Let's get started

In this example, I'm using a namespace called 'shared-services' and a service account called 'shared-svc-accnt'. You can create your own namespace and service account or use the 'default' ones. In any case, I have listed the necessary steps here, and the yaml files can be downloaded from GitHub.

Create namespace and service account


# Create namespace shared-services

   $> kubectl create -f shared-services-ns.yml

# Create Service Account shared-svc-accnt

   $> kubectl create -f shared-svc-accnt.yml

# Create a grant for service account shared-svc-accnt. Do this step as per your platform.


Create configMap object

I have put both postgresql.conf and pg_hba.conf under the config directory. I have updated postgresql.conf as follows:

ssl = on
ssl_cert_file = '/etc/postgresql-secrets-vol/server.crt'
ssl_key_file = '/etc/postgresql-secrets-vol/server.key'

Note: the location '/etc/postgresql-secrets-vol' (and the config location '/etc/postgresql-config-vol') needs to be mounted when defining 'volumeMounts', which we will discuss later in the post.

The three items listed above are the main configuration settings that need proper values in order for PostgreSQL to support SSL/TLS. If you are using a CA-signed certificate, you also need to provide a value for 'ssl_ca_file' and optionally 'ssl_crl_file'. Read Secure TCP/IP Connections with SSL for more details. A hypothetical example is shown below.
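
For instance, with a CA-signed certificate, the additional entries in postgresql.conf might look like this (the file names and the secrets-volume path are assumptions for illustration, not part of the setup above):

ssl_ca_file = '/etc/postgresql-secrets-vol/ca.crt'
# Optional certificate revocation list
#ssl_crl_file = '/etc/postgresql-secrets-vol/root.crl'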
You also need to update pg_hba.conf (HBA stands for host-based authentication) as necessary. pg_hba.conf is used to manage the connection type and to control access using a client IP address range, a database name, a user name, and the authentication method.

# Sample entries in pg_hba.conf
# Trust local connection - no password required.
local    all             all                                     trust
# Only secured remote connection from given IP-Range accepted and password are encoded using MD5
#hostssl  all             all             < Cluster IP-Range >/< Prefix length >         md5
hostssl  all             all             10.96.0.0/16         md5


$> ls -l config/

-rw-------. 1 osboxes osboxes  4535 Sep 22 17:33 pg_hba.conf
-rw-------. 1 osboxes osboxes 22781 Sep 23 03:03 postgresql.conf

# Create configMap object
$> kubectl create configmap postgresql-config --from-file=config/ -n shared-services
configmap "postgresql-config" created

# Review created object
$> kubectl describe configMap/postgresql-config -n shared-services
Name:         postgresql-config
Namespace:    shared-services
...

Create secrets

I've created server.key and a self-signed certificate using OpenSSL. You can either do the same or use CA-signed certificates. Here, we are not going to use a client certificate. Read section 18.9.3. Creating Certificates if you need help creating certificates. A minimal sketch is shown below.
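
As a rough sketch (the CN value is just an assumption; adjust it to match your service name), the key and self-signed certificate can be generated like this:

# Generate server.key and a self-signed server.crt valid for one year
$> openssl req -new -x509 -days 365 -nodes \
     -out secrets/server.crt -keyout secrets/server.key \
     -subj "/CN=sysg-postgres-svc.shared-services.svc"
$> chmod 0600 secrets/server.key

# For a self-signed setup, the server certificate can double as the root certificate
$> cp secrets/server.crt secrets/root.crt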

# Create MD5 hashed password to be used with postgresql
$> POSTGRES_USER=postgres
$> POSTGRES_PASSWORD=myp3qlpwD
$> echo "md5$(echo -n $POSTGRES_PASSWORD$POSTGRES_USER | md5sum | cut -d ' ' -f1)" > secrets/postgresql-pwd.txt

# Here are all files under secrets directory
$> ls -la secrets/
-rw-rw-r--. 1 osboxes osboxes  13 Sep 22 23:42 postgresql-pwd.txt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:51 root.crt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:49 server.crt
-r--------. 1 osboxes osboxes 887 Sep 22 16:43 server.key

# Create secret postgresql-secrets
$> kubectl create secret generic postgresql-secrets --from-file=secrets/ -n shared-services
secret "postgresql-secrets" created

# Verify

$> kubectl describe secrets/postgresql-secrets -n shared-services
Name:         postgresql-secrets
Namespace:    shared-services
Labels:       
Annotations:  

Type:  Opaque

Data
====
server.key:          887 bytes
postgresql-pwd.txt:  13 bytes
root.crt:            891 bytes
server.crt:          891 bytes

Note: As seen above, I have created the MD5 hash of the string "<password><userid>" and prefixed the result with "md5". The reason I added the "md5" prefix is that when Postgres sees "md5" at the beginning, it recognizes that the string is already hashed, does not try to hash it again, and stores it as is.

Create PersistentVolume (PV) and PersistentVolumeClaim (PVC) 

Let's go ahead and create the PV and PVC. We will use 'Retain' as the persistentVolumeReclaimPolicy so that data is retained even when the PostgreSQL pod is destroyed and recreated.
Sample PV yaml file:
## shared-nfs-pv-postgresql.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv-postgresql
  namespace: shared-services
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /var/postgresql/
    server: 192.168.56.101
  persistentVolumeReclaimPolicy: Retain

Sample PVC yaml file:
## shared-nfs-pvc-postgresql.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc-postgresql
  namespace: shared-services
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

PV and PVC creation and verification steps:
# Create persistentvolume
$> kubectl create -f yaml/shared-nfs-pv-postgresql.yml
persistentvolume "shared-nfs-pv-postgresql" created

# Create persistentvolumeclaim
$> kubectl create -f yaml/shared-nfs-pvc-postgresql.yml
persistentvolumeclaim "shared-nfs-pvc-postgresql" created

# Verify and make sure status of persistentvolumeclaim/shared-nfs-pvc-postgresql is Bound
$> kubectl get pv,pvc -n shared-services
NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                       STORAGECLASS   REASON    AGE
persistentvolume/shared-nfs-pv-postgresql   5Gi        RWX            Retain           Bound     shared-services/shared-nfs-pvc-postgresql                            32s

NAME                                              STATUS    VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/shared-nfs-pvc-postgresql   Bound     shared-nfs-pv-postgresql   5Gi        RWX                           20s

Create deployment manifest file

Here is the one I have put together. You can customize it further per your needs.

---
# Service definition
apiVersion: v1
kind: Service
metadata:
  name: sysg-postgres-svc
  namespace: shared-services
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: tcp-5432
  selector:
      app: sysg-postgres-app
---
# Deployment definition
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sysg-postgres-dpl
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: sysg-postgres-app
  replicas: 1
  template:
    metadata:
      labels:
        app: sysg-postgres-app
    spec:
      serviceAccountName: shared-svc-accnt
      securityContext:
        runAsUser: 70
        supplementalGroups: [999,1000]
        fsGroup: 70
      volumes:
        - name: shared-nfs-pv-postgresql
          persistentVolumeClaim:
            claimName: shared-nfs-pvc-postgresql
        - name: secret-vol
          secret:
            secretName: postgresql-secrets
            defaultMode: 0640
        - name: config-vol
          configMap:
            name: postgresql-config
      containers:
      - name: sysg-postgres-cnt
        image: postgres:10.5-alpine
        imagePullPolicy: IfNotPresent
        args:
          - -c
          - hba_file=/etc/postgresql-config-vol/pg_hba.conf
          - -c
          - config_file=/etc/postgresql-config-vol/postgresql.conf
        env:
          - name: POSTGRES_USER
            value: postgres
          - name: PGUSER
            value: postgres
          - name: POSTGRES_DB
            value: mmdb
          - name: PGDATA
            value: /var/lib/postgresql/data/pgdata
          - name: POSTGRES_PASSWORD_FILE
            value: /etc/postgresql-secrets-vol/postgresql-pwd.txt
        ports:
         - containerPort: 5432
        volumeMounts:
          - name: config-vol
            mountPath: /etc/postgresql-config-vol
          - mountPath: /var/lib/postgresql/data/pgdata
            name: shared-nfs-pv-postgresql
          - name: secret-vol
            mountPath: /etc/postgresql-secrets-vol
      nodeSelector:
        kubernetes.io/hostname: centosddcwrk01

Deploy the Postgresql

The steps below show the creation of the service and deployment, as well as how to verify that Postgres is running with SSL enabled.
# Deploy
$> kubectl apply -f yaml/postgres-deploy.yml
service "sysg-postgres-svc" created
deployment.apps "sysg-postgres-dpl" created

# Verify
$> kubectl get pods,svc -n shared-services
NAME                                     READY     STATUS    RESTARTS   AGE
pod/sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          1h

NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/sysg-postgres-svc   ClusterIP   10.96.90.30           5432/TCP   1h

# sh into the postgresql pod:
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services
/ $

# Launch psql
/ $ psql -U postgres
psql (10.5)
Type "help" for help.

# Verify SSL is enabled
postgres=# SHOW ssl;
 ssl
-----
 on
(1 row)

# Check the stored password. It should match the hashed value of "<password><user>" with "md5" prepended.
postgres=#  select usename,passwd from pg_catalog.pg_shadow;
 usename  |               passwd
----------+-------------------------------------
 postgres | md5db59316e90b1afb5334a331081618af6

# Connect remotely. You need to provide password.
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm -n shared-services -- psql "sslmode=require host=10.96.90.30 port=5432 dbname=mmdb" --username=postgres
Password for user postgres:
psql (10.5)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

A few key points

1) Using customized configuration file:
As you have seen above, I created the configMap object postgresql-config and applied it using the option '-c config_file=/etc/postgresql-config-vol/postgresql.conf'. The configMap object postgresql-config is mapped to the path '/etc/postgresql-config-vol' in the volumeMounts definition.

containers:
- name: sysg-postgres-cnt
  imagePullPolicy: IfNotPresent
  args:
    - -c
    - hba_file=/etc/postgresql-config-vol/pg_hba.conf
    - -c
    - config_file=/etc/postgresql-config-vol/postgresql.conf

volumeMounts:
  - name: config-vol
    mountPath: /etc/postgresql-config-vol


2) Creating an environment variable from secrets:

env:
  - name: POSTGRES_PASSWORD_FILE
    value: /etc/postgresql-secrets-vol/postgresql-pwd.txt

And the secret is mapped to path /etc/postgresql-secrets-vol
volumeMounts:
  - name: secret-vol
    mountPath: /etc/postgresql-secrets-vol

3) PGDATA environment variable: 
The default value is '/var/lib/postgresql/data'. However, the Postgres image documentation recommends: "... if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata) be created to contain the data." Refer to https://hub.docker.com/_/postgres/

Here we assign /var/lib/postgresql/data/pgdata:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata


Troubleshooting

1) Make sure server.key, server.crt, and root.crt all have appropriate permissions, that is, 0400 (if owned by the postgres process owner) or 0640 (if owned by root). If proper permissions are not applied, PostgreSQL will not start, and in the log you will see the following FATAL message:

2018-09-22 18:26:22.391 UTC [1] FATAL:  private key file "/etc/postgresql-secrets-vol/server.key" has group or world access
2018-09-22 18:26:22.391 UTC [1] DETAIL: File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
2018-09-22 18:26:22.391 UTC [1] LOG:  database system is shut down

In order to apply the proper permissions at the file level, you can use 'defaultMode'. I'm using defaultMode: 0640 as shown below (fragment from postgres-deploy.yml):

- name: secret-vol
  secret:
    secretName: postgresql-secrets
    defaultMode: 0640

2) Make sure to have the right ownership - whether the files/directories are related to the secret volume, the config volume, or the persistent storage volume. Below is the error related to the PV path:

initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted

In order to resolve the above issue, you need to use Kubernetes' securityContext options such as 'runAsUser', 'fsGroup', 'supplementalGroups', and/or capabilities. A securityContext can be defined at both the pod level and the container level. In my case, I've defined it at the pod level as shown below (fragment from postgres-deploy.yml):

securityContext:
  runAsUser: < specify your run as user >
  fsGroup: < specify group >
  supplementalGroups: [< comma delimited list of supplementalGroups >]

Read the Configure a Security Context for a Pod or Container chapter from the official Kubernetes site. I've also given some troubleshooting tips on using NFS type Persistent Volumes and Claims in my previous blog post, How to Create, Troubleshoot and Use NFS type Persistent Storage Volume in Kubernetes.

Below, I'm showing the file permissions per my configuration. Files are owned by root:postgres.

# Get running Kubernetes pod
$> kubectl get pods -n shared-services
NAME                                 READY     STATUS    RESTARTS   AGE
sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          12s

# sh to running Kubernetes pod
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services

# Explore files
/ $ cd /etc/postgresql-secrets-vol/..data/
/etc/postgresql-secrets-vol/..2018_09_22_19_24_52.289379695 $ ls -la

drwxr-sr-x    2 root     postgres       120 Sep 23 01:19 .
drwxrwsrwt    3 root     postgres       160 Sep 23 01:19 ..
-rw-r-----    1 root     postgres        13 Sep 23 01:19 postgresql-pwd.txt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 root.crt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 server.crt
-rw-r-----    1 root     postgres       887 Sep 23 01:19 server.key

3) psql: FATAL:  no pg_hba.conf entry for host "10.0.2.15", user "postgres" ... This FATAL message usually appears when you try to establish a connection to Postgres, but the way you are trying to authenticate is not defined in pg_hba.conf: either the source IP (from where the connection originates) is out of range, or the security option is not supported. Check your pg_hba.conf file and make sure the right entry has been added.

[Optional]  Creating custom Postgres Docker image with customized postgresql.conf

If you prefer to create a custom Docker image with a custom postgresql.conf rather than creating a configMap and using the '-c config_file' option, you can do so. Here is how.

Create Dockerfile:


FROM postgres:10.5-alpine
COPY config/postgresql.conf /tmp/postgresql.conf
COPY scripts/_updateConfig.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/_updateConfig.sh && chmod 644 /tmp/postgresql.conf

My custom postgresql.conf is located under the config directory locally. It is copied to /tmp when the Docker image is built. _updateConfig.sh is located under the scripts directory locally and copied to /docker-entrypoint-initdb.d/ at build time.

Create the script file _updateConfig.sh as shown below. It assumes that the default PGDATA value '/var/lib/postgresql/data' is being used.

#!/usr/bin/env bash
cat /tmp/postgresql.conf > /var/lib/postgresql/data/postgresql.conf

Important: we cannot directly copy the custom postgresql.conf into the $PGDATA directory at build time because that directory does not exist yet.

Build the image:


Directory and files shown below are local:
$> ls -la postgresql/

drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 13:01 config
-rwxr-xr--.  1 osboxes osboxes  227 Sep 22 19:14 Dockerfile
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 18:39 scripts
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 23:42 secrets
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 12:11 yaml
$>cd postgresql

# docker build -t <image tag> .
# In my case I am using osboxes/postgres:10.5-sysg as image name and tag.
$> docker build -t osboxes/postgres:10.5-sysg .

If you use a custom Docker image built this way, you don't need to define a configMap or pass the '-c config_file' option to use a custom postgresql.conf. A hypothetical fragment showing the change is below.
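
As a sketch (assuming the image name and tag from the build step above), the container section of the deployment might then be simplified to something like:

containers:
- name: sysg-postgres-cnt
  image: osboxes/postgres:10.5-sysg   # custom image with postgresql.conf baked in
  imagePullPolicy: IfNotPresent
  # the '-c config_file=/etc/postgresql-config-vol/postgresql.conf' argument is no longer needed;
  # keep the '-c hba_file=...' argument, or bake pg_hba.conf into the image in the same way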



Trip to Tobermory - a perfect getaway within driving distance of Toronto

My family's no-labouring plan for this year's Labour Day long weekend included some cool quality time at the warm Sauble Beach, an overnight stay in the traditionally Scottish town of Kincardine, and a trip to the majestic lakeside town called Tobermory. It was a short (two days only) plan for a long weekend. We intentionally made it two days only so that we could return home on Monday and finish any last-minute, left-behind preparation for the new school year starting on Tuesday.

    As seen from Ferry Terminal - Lake Huron and Islands nearby
                                         
Originally, our plan was to visit Tobermory for the full duration of the weekend and soak in the beauty of the place. However, because of an unsuccessful attempt to book a hotel (Expedia and hotels.com only showed listings that famously said - we are sold out), we shifted our attention to the next nearby town, Owen Sound. I found myself in a similar situation and had no luck finding a hotel in that city either. At one point I was even thinking about giving up on the long weekend idea, but one of my friends gave me the brilliant idea of booking a hotel in Kincardine, Ontario, and I did. Early Saturday morning, we were to drive about 2 hours and 50 minutes from Markham to Sauble Beach. However, thanks to my daughters' research on Kincardine, we changed the plan at the last minute. They found very good reviews of Station Beach in Kincardine and convinced us to head directly there rather than going to Sauble Beach. The fact that we had been to Sauble Beach before also helped us make that decision.
      On Saturday morning, we left home around 7:50 am. The first few hours of our morning were foggy, but the scenery began to look better as the fog slowly lifted. Sometimes even a few hours of driving (it is about two and a half hours from Toronto to Kincardine) can be boring. However, if you have cheerful kids in the car, it's sure to be a lot of fun. We enjoyed watching the countryside while driving. We even played "I spy with my little eye..." Even though my kids are teenagers now, they love playing this game on trips, which keeps us all engaged so we don't miss out on the little things.

Conquering Kincardine

We were at Station Beach by 11:30 am. I immediately fell in love with the beach because of its clean, pristine water and long sandy shoreline. It is conveniently located within walking distance of downtown, and parking is free. As soon as I touched the water, I felt a deep need to submerge myself in it immediately. It was amazing to swim there. The beach wasn't too crowded, which made it the perfect place to relax and take a breather. No matter what your idea of relaxing is, you can achieve it there: you can swim, sleep, read, or maybe play beach volleyball. We had brought food from home, so we enjoyed a little picnic there. I even had a good nap while sunbathing. My kids went for a walk along the boardwalk (a great place to walk, jog and enjoy the view) and took beautiful pictures and videos. The boardwalk became a walk-and-learn experience for them, as it had interpretive signs with information about local marine history and shipwrecks.

Station Beach - even birds are sunbathing here!!!


      Marina at Station beach
               Interpretive sign
Tips:

  • Station Beach is located at 151 Station Beach Road, Kincardine ON.
  • As per www.canoe.ca, this water is listed as one of the top 9 destinations for surfing.
  • For those with mobility issues, there are 'MOBI-Mats' at Station Beach. The mats stretch right to the water's edge.
  • If you enjoy playing beach volleyball, there is co-ed beach volleyball every Friday at 7 p.m.
  • The park has a bouncy castle for kids to enjoy.
  • The lighthouse and museum are just a 2-to-3-minute walk away, and lakeside downtown Kincardine is just nearby.

After spending almost three and a half hours at the beach, we drove to the hotel (Sutton Park Inn). The check-in process was smooth and the hotel was nice. After taking a shower and having a fresh cup of coffee, we went out for a tour of downtown Kincardine. We parked our car on Harbour Street, near the lighthouse, and went for a walk along Queen Street. Kincardine is a small town located on the shores of Lake Huron in Bruce County and has a strong Scottish heritage. I was told that during summer, every Saturday night, people in Kincardine celebrate and take part in Scottish pipe band parades.

Queen Street, downtown Kincardine

If you have a sweet tooth, you should visit the little chocolate shop (Mill Creek Chocolates) located at 813 Queen Street. They offer handmade chocolates in very personalized packaging, finished with colourful wraps and ribbons. You can enjoy the chocolates yourself or bring them as a gift. You'll love them.

Mill Creek Chocolates Shop at Queen street.
Mill Creek Chocolates showroom

We also had a chance to have a friendly conversation with the sales lady. She told us a little of the history of Kincardine and of Mill Creek Chocolates, as well as how she ended up living there. Originally from Brampton, Ontario, she once visited Kincardine, liked the town, and has been living there ever since.
     I had read that Station Beach had one of the most beautiful sunsets. Unfortunately, the evening became cloudy around 6:30 PM and it looked like it was going to rain. A little frustrated, we instead went to see the lighthouse, which we found very fitting to the aesthetic of the small town. Around 7:00 pm, as we were about to head back to the hotel, it started raining. It rained all night long, but we were prepared for this: we had brought the board game Monopoly with us. We played until about 11:00 pm and had tons of fun while snacking on the Mill Creek chocolates.
     It's no secret that when I initially booked my hotel in Kincardine, the purpose was to use it just as a transit point. However, now, I have no regrets whatsoever. We enjoyed it fully.


Trip to Tobermory


The next day (Sunday) we got up on time to have a complimentary breakfast at the hotel, and after packing our bags we started driving to Tobermory. It was around a two-hour drive, and we reached Tobermory at around 11:00 AM. We purchased our tickets for the cruise (Bruce Anchor company - 7468 HWY 6). Be prepared to spend time in the ticket queue: even if you have pre-purchased your ticket online, you still have to stand in line to check in and to get a parking permit for your car, though they do have a separate window to serve holders of pre-purchased tickets.
Notes:
  • The Bruce Anchor company offers a few options. See their web site for more details.
  • The ticket price also includes the parking fee, and they have a free shuttle from the parking spot to the ferry terminal and back.
  • There are other boat and cruise services as well.

We booked the Tobermory Explorer option and enjoyed the view (while staying aboard) of shipwrecks, lighthouses, and several beautiful islands in the Fathom Five National Marine Park.

Bruce Anchor boat cruise
Big Tub Lighthouse

It was around 75 minutes of fun-filled cruising. The cruise moves at a slow speed around Little Tub Harbour, where you can see the sunken ships (note: Tobermory is home to over 20 historic shipwrecks) through the glass windows or glass bottom. The speed picks up once the boat is out of Little Tub Harbour; water comes flying through the window and you'll get a taste of the fresh water of Lake Huron.

Sunken ship as seen from glass window of cruise.
One of the flower pot rock pillars in Flowerpot island

If you choose the drop-off option to Flowerpot island, you need to follow a few rules in order not to damage the natural environment (see the instructions in the picture below). It looks like the name Flowerpot comes from the two rock pillars on its shore, which look exactly like flower pots. The island itself is about two square kilometres in area and is a popular tourist destination. Activities like swimming, hiking and camping are allowed there.

Flowerpot island - visitor information
One of the flower pot rock pillars in Flowerpot island

After returning from the cruise, we went for a walk on a trail in the woods, which eventually led us to the lake. We stayed there for about an hour - swimming and watching the waves of blue water continuously hitting the big rocks on the shore. My kids didn't want to return and asked if we could stay for another day or so. But I had to refuse, as we didn't have a hotel booked for another night.
      So, we started driving back home. We drove a good three hours and made a stop along the way for some food and drinks. It was around 10:30 PM when we arrived home.
     For us, it turned out to be a perfect getaway within driving distance of Toronto. I would definitely recommend a trip to Tobermory to anyone! It is an amazing escape that fits both the budget and the time available.

How to Create, Troubleshoot and Use NFS type Persistent Storage Volume in Kubernetes

Whether you need to simply persist data or share data among pods, one of the options is to use Network File System (NFS) type Persistent Volumes (PV).
However, you may encounter multiple issues, and a lot of the time the error message(s) you see in the pod's log are not detailed enough or are even misleading. In this blog post, I'm going to show you the step-by-step process (with a real example) of creating a PV and a Persistent Volume Claim (PVC) and using them in a pod. We'll also discuss possible issues and how to resolve them.

Prerequisites for this exercise:

  1. Make sure you have a working Kubernetes cluster where you can create resources as needed.
  2. Make sure you have a working Network File System (NFS) server that is accessible from all nodes in the Kubernetes cluster.

Process steps:

1) Allow Kubernetes pod/container to use NFS

1.1) Check if SELinux is enabled on your Kubernetes cluster nodes/hosts (where the Kubernetes pod(s) will be created). If it is enabled, we need to make sure it lets containers/pods access the remote NFS share.

$> sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: enforcing
Mode from config file: enforcing
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 28

1.2) If it is enabled, find out the value of 'virt_use_nfs'. You can use either the 'getsebool' or 'semanage' utility as shown below:

$> getsebool virt_use_nfs
virt_use_nfs --> off
or
$> sudo semanage boolean -l | grep virt_use_nfs
virt_use_nfs (off , off) Allow virt to use nfs

1.3) If the value of 'virt_use_nfs' is 'off', make sure to enable it; otherwise, any attempt by a Kubernetes pod to access the NFS share may be denied and you may get a '403 Forbidden' error from your application. You can use the 'setsebool' tool to set the value to '1' or 'on':

$> sudo setsebool -P virt_use_nfs 1

$> sudo semanage boolean -l | grep virt_use_nfs
virt_use_nfs (on , on) Allow virt to use nfs

Note: the -P option sets the value permanently.

2) Create NFS share on NFS server

2.1) Create a directory on the NFS server. My NFS server's IP is 192.168.56.101. Here I'm creating the directory '/var/rabbitmq' on the NFS server as an NFS share and assigning the ownership to 'osboxes:osboxes'. We'll discuss the ownership of the share and its relationship to the pod/container security context a little later in the post.

# Create directory to be shared.
sudo mkdir -p /var/rabbitmq


# Change the ownership
$> sudo chown osboxes:osboxes /var/rabbitmq


Important: The right ownership of the NFS share is crucial.

2.2) Add the NFS share to the /etc/exports file. Below, I'm adding all of my Kubernetes nodes; pods running on 192.168.56.101-103 will be able to access the NFS share. The 'root_squash' option "squashes" the power of the remote root user to the lowest local user, preventing unauthorized alterations.

/var/rabbitmq/ 192.168.56.101(rw,sync,root_squash)
/var/rabbitmq/ 192.168.56.102(rw,sync,root_squash)
/var/rabbitmq/ 192.168.56.103(rw,sync,root_squash)


2.3) Export the NFS share.

sudo exportfs -a
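
As a quick sanity check (assuming the standard NFS utilities are installed), you can list what is actually being exported:

# Show the current export list of the NFS server
$> showmount -e 192.168.56.101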


3) Provisioning of PV and PVC

Let's create a PersistentVolume (PV) and a PersistentVolumeClaim (PVC) for RabbitMQ.
Note: it's important that the PVC and the pod that uses it are in the same namespace. You can create them all in the default namespace; however, here I'm going to create a dedicated namespace for this purpose.

3.1) Create a new namespace, or use an existing one or the default namespace.
The yaml file below (shared-services-ns.yml) defines a namespace object called 'shared-services':

apiVersion: v1
kind: Namespace
metadata:
   name: shared-services

To create the “shared-services” namespace, run the following command:

# Create a new namespace:
$> kubectl create -f shared-services-ns.yml
namespace "shared-services" created

# Verify namespace is created successfully
$> kubectl get namespaces shared-services
NAME              STATUS    AGE
shared-services   Active    36s

3.2) Create a new service account, or use an existing one or the default:
If a service account is not set in the pod definition, the pod uses the default service account for the namespace. Here we are defining a new service account called 'shared-svc-accnt'. File: svcAccnt.yml

apiVersion: v1
kind: ServiceAccount
metadata:
   name: shared-svc-accnt
   namespace: shared-services

To create a new service account 'shared-svc-accnt', run the following command:

# Create service account
$> kubectl create -f svcAccnt.yml
serviceaccount "shared-svc-accnt" created

# Verify service account
$> kubectl describe serviceaccount shared-svc-accnt -n shared-services
Name:                shared-svc-accnt
Namespace:           shared-services
Labels:              
Annotations:         
Image pull secrets:  
Mountable secrets:   shared-svc-accnt-token-mgk9w
Tokens:              shared-svc-accnt-token-mgk9w
Events:              

3.3) Assign role/permission to service account:
Once the service account is created, make sure to grant it the necessary access permissions in the given namespace. Based on your Kubernetes platform, you may do this differently. Since my Kubernetes cluster is part of Docker Enterprise Edition (EE), I do it through Docker Universal Control Plane (UCP) as described in https://docs.docker.com/ee/ucp/authorization/grant-permissions/#kubernetes-grants. I'll assign the 'restricted control' role to my service account 'shared-svc-accnt' in the namespace 'shared-services'. If you are using MiniKube or another platform, you may want to refer to the generic Kubernetes documentation for RBAC and service account permissions. Basically, you need to create the cluster role(s) and bind them to the service account; see the sketch below. Here are some links to the corresponding documentation: https://v1-7.docs.kubernetes.io/docs/admin/authorization/rbac/#service-account-permissions and https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole
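
On a generic RBAC-enabled cluster (not UCP), a minimal sketch of such a grant could be a RoleBinding that gives the service account the built-in 'edit' ClusterRole within the namespace (the binding name is arbitrary):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: shared-svc-accnt-edit
  namespace: shared-services
subjects:
- kind: ServiceAccount
  name: shared-svc-accnt
  namespace: shared-services
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io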

3.4) Define PV object in a yaml file (rabbitmq-nfs-pv.yml):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: rabbitmq-nfs-pv
  namespace: shared-services
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /var/rabbitmq/
    server: 192.168.56.101
  persistentVolumeReclaimPolicy: Retain

Note: currently a PV can have a “Retain”, “Recycle”, or “Delete” reclaim policy. For dynamically provisioned PVs, the default reclaim policy is “Delete”. Kubernetes supports the following access modes:

  • ReadWriteOnce – the volume can be mounted as read-write by a single node
  • ReadOnlyMany – the volume can be mounted read-only by many nodes
  • ReadWriteMany – the volume can be mounted as read-write by many nodes

To create a new PV 'rabbitmq-nfs-pv', run the following command:

# Create PV
$> kubectl create -f rabbitmq-nfs-pv.yml
persistentvolume "rabbitmq-nfs-pv" created

# Verify PV
$> kubectl describe pv rabbitmq-nfs-pv
Name:            rabbitmq-nfs-pv
Labels:          
Annotations:     
Finalizers:      []
StorageClass:
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWX
Capacity:        5Gi
Node Affinity:   
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.56.101
    Path:      /var/rabbitmq/
    ReadOnly:  false
Events:        

3.5) Define PVC object in a yaml file ( rabbitmq-nfs-pvc.yml):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rabbitmq-nfs-pvc
  namespace: shared-services
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

Note: make sure to create the PVC in the same namespace as the pod(s) that use it.

To create a new PVC 'rabbitmq-nfs-pvc', run the following command:

# Create PVC
$> kubectl create -f rabbitmq-nfs-pvc.yml
persistentvolumeclaim "rabbitmq-nfs-pvc" created

# Verify PVC
$> kubectl describe pvc rabbitmq-nfs-pvc -n shared-services
Name:          rabbitmq-nfs-pvc
Namespace:     shared-services
StorageClass:
Status:        Bound
Volume:        rabbitmq-nfs-pv
Labels:        
Annotations:   pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Finalizers:    []
Capacity:      5Gi
Access Modes:  RWX
Events:        

Important: see the status above. It is "Bound", and it is bound to the volume "rabbitmq-nfs-pv" that we created in the previous step. If your PVC is not able to bind to the PV, that is a problem; it could be an issue with how the PV and PVC are defined. Make sure your PV and PVC are of the same storage class (if you are using one; for details refer to https://kubernetes.io/docs/concepts/storage/storage-classes/) and that the PV can fully satisfy the specification defined in the PVC. A quick check is shown below.
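
A quick way to confirm the binding (both objects should show STATUS 'Bound'):

# Quick check of PV/PVC binding status
$> kubectl get pv rabbitmq-nfs-pv
$> kubectl get pvc rabbitmq-nfs-pvc -n shared-services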


3.7) Now let's put together a simple yaml file that defines service and deployment objects for RabbitMQ (rabbitmq-nfs-pv-poc-depl.yml):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-nfs-poc-svc
  namespace: shared-services
  labels:
    app: rabbitmq-nfs-poc-svc
spec:
  type: NodePort
  ports:
  - name: http
    port: 15672
    targetPort: 15672
  - name: amqp
    protocol: TCP
    port: 5672
    targetPort: 5672
  selector:
    app: rabbitmq-app
---
apiVersion: apps/v1beta2 # for versions prior to 1.9.0
kind: Deployment
metadata:
  name: rabbitmq-depl
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: rabbitmq-app
  replicas: 1
  template:
    metadata:
      labels:
        app: rabbitmq-app
    spec:
      serviceAccountName: shared-svc-accnt
      securityContext:
        runAsUser: 1000
        supplementalGroups: [1000,65534]
      containers:
      - name: rabbitmq-cnt
        image: rabbitmq
        imagePullPolicy: IfNotPresent
        #privileged: false
        #securityContext:
          #runAsUser: 1000
        ports:
        - containerPort: 15672
          name: http-port
          protocol: TCP
        - containerPort: 5672
          name: amqp
          protocol: TCP
        volumeMounts:
          # 'name' must match the volume name below.
          - name: rabbitmq-mnt
            # Where to mount the volume.
            mountPath: "/var/lib/rabbitmq/"
      volumes:
      - name: rabbitmq-mnt
        persistentVolumeClaim:
          claimName: rabbitmq-nfs-pvc
 

Note:
As seen in rabbitmq-nfs-pv-poc-depl.yml above, I'm defining the security context at the pod level as:

securityContext:
  runAsUser: 1000
  supplementalGroups: [1000,65534]

Here, the runAsUser value '1000' and the supplementalGroups value '1000' belong to user 'osboxes' and group 'osboxes'. The gid '65534' belongs to group 'nfsnobody'.

$> id osboxes
uid=1000(osboxes) gid=1000(osboxes) groups=1000(osboxes),10(wheel),983(docker)

$> id nfsnobody
uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)

My NFS share '/var/rabbitmq' is owned by 'osboxes:osboxes', so I'm specifying those values that belong to osboxes in the securityContext.

A security context can be defined at both the pod level and the container level. A security context defined at the pod level is applied to all containers in the pod. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ has details about configuring a security context for a pod or container.


Following command creates rabbitmq deployment and service:

# Create objects 
$> kubectl create -f rabbitmq-nfs-pv-poc-depl.yml
service "rabbitmq-nfs-poc-svc" created
deployment.apps "rabbitmq-depl" created

# Get pods
$> kubectl get pods -n shared-services
NAME                            READY     STATUS    RESTARTS   AGE
rabbitmq-depl-775496b9b-d85l7   1/1       Running   0          7s


Let's check the rabbitmq processes inside the container and the files under the '/var/rabbitmq' share on the NFS server.

# Check process inside the container
$> kubectl exec -it rabbitmq-depl-775496b9b-d85l7 /bin/bash -n shared-services
$> ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
1000         1     0  0 12:38 ?        00:00:00 /bin/sh /usr/lib/rabbitmq/bin/rabbitmq-server
1000       162     1  0 12:38 ?        00:00:00 /usr/lib/erlang/erts-9.3.3.2/bin/epmd -daemon
1000       321     1  5 12:38 ?        00:00:03 /usr/lib/erlang/erts-9.3.3.2/bin/beam.smp -W w -

# Connect to NFS server 


$> ssh osboxes@192.168.56.101
Last login: Sun Aug 26 14:48:19 2018 from centosddcclnt

# Make sure rabbitmq successfully created the files and review the file ownership
$> cd /var/rabbitmq
$> ls -la
total 28
drwxr-xr-x.  5 osboxes osboxes   4096 Aug 26 13:40 .
drwxr-xr-x. 25 root    root      4096 Aug 26 13:34 ..
-rw-------.  1 osboxes nfsnobody   40 Aug 26 13:40 .bash_history
drwxr-xr-x.  3 osboxes nfsnobody 4096 Aug 26 13:38 config
-r--------.  1 osboxes nfsnobody   20 Aug 26 01:00 .erlang.cookie
drwxr-xr-x.  4 osboxes nfsnobody 4096 Aug 26 13:38 mnesia
drwxr-xr-x.  2 osboxes nfsnobody 4096 Aug 26 13:38 schema



4) Possible issues & troubleshooting

4.1) The pod remains in a pending state, and the pod description shows 'mount failed: exit status 32' as shown below:

$> kubectl describe pod rabbitmq-shared-app -n shared-services
Name:         rabbitmq-shared-app
Namespace:    shared-services
Node:         centosddcwrk01/192.168.56.103
Start Time:   Thu, 16 Aug 2018 17:03:19 +0100
Labels:       name=rabbitmq-shared-app
Annotations:  
Status:       Pending
IP:
  ...
  ...
...
Events:
  Type     Reason                 Age   From                   Message
  ----     ------                 ----  ----                   -------
  ...
  Warning  FailedMount            50s   kubelet, centosddcucp  MountVolume.SetUp failed for volume .... : mount failed: exit status 32

If you try to run the mount manually from inside the container, you may see the following:

$> kubectl exec -it rabbitmq-depl-bd9689c8-7md48 /bin/bash -n shared-services
root@rabbitmq-depl-bd9689c8-7md48:/# pwd
/


root@rabbitmq-depl-bd9689c8-7md48:/# mount -t nfs 192.168.56.101:/var/rabbitmq /tmp/test
mount: wrong fs type, bad option, bad superblock on 192.168.56.101:/var/rabbitmq,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount. helper program)

       In some cases useful info is found in syslog - try
       dmesg | tail or so.

In this case, review the '/etc/exports' file on the NFS server. This file controls which file systems are exported to remote hosts and specifies options. If your Kubernetes host/node is not listed in this file with the appropriate option(s), a pod running on that node will not be able to mount. Make sure to run the command 'sudo exportfs -a' once you have updated /etc/exports. You can also try to mount manually from your host (instead of from within the container) in order to test whether that host/node is authorized to mount; see the example below. Refer to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-nfs-server-config-exports for details.
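
A minimal sketch of such a manual test from a Kubernetes node (the mount point /mnt/nfs-test is just an example):

# On the Kubernetes node, try mounting the share directly
$> sudo mkdir -p /mnt/nfs-test
$> sudo mount -t nfs 192.168.56.101:/var/rabbitmq /mnt/nfs-test
# If the mount succeeds, the node is authorized; clean up afterwards
$> sudo umount /mnt/nfs-test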


4.2) The pod fails to start and you see the 'chown: changing ownership of '/var/lib/rabbitmq': Operation not permitted' error in the log as shown below:

$> kubectl create -f rabbitmq-nfs-pv-poc-depl.yml
service "rabbitmq-nfs-poc-svc" created
deployment.apps "rabbitmq-depl" created

$> kubectl get pods -n shared-services
NAME                             READY     STATUS             RESTARTS   AGE
rabbitmq-depl-5fff645d95-429vd   0/1       CrashLoopBackOff   1          14s

$> kubectl logs rabbitmq-depl-5fff645d95-429vd -n shared-services
chown: changing ownership of '/var/lib/rabbitmq': Operation not permitted

This means that the pod is able to mount successfully; however, it's not able to change the ownership of the file/directory. The easiest way to resolve this issue is to use a common user as both the owner of the NFS share on the NFS server and the runAsUser of the Kubernetes pod. For example, for this demo, I have used the 'osboxes' user, which owns the NFS share, and also used this user's uid '1000' in the pod-level security context.

$> ls -lZ /var/rabbitmq
drwxr-xr-x. osboxes nfsnobody system_u:object_r:var_t:s0       ...

$> id osboxes
uid=1000(osboxes) gid=1000(osboxes) groups=1000(osboxes),10(wheel),983(docker)

In reality, it may not be that easy. You may not have access to the remote NFS server, or the system administrator of the NFS server may not be willing to change the ownership of the NFS share. In this case (as a work-around), you can use 'root' as the runAsUser at the container level, like below:

securityContext:
  runAsUser: 0

However, for this to work properly, the /etc/exports file on the NFS server should not squash root (i.e., it should use 'no_root_squash'). It should look something like this:

/var/rabbitmq/ 192.168.56.103(rw,sync,no_root_squash)

'no_root_squash' has its own security consequences. See the details here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/5/html/deployment_guide/s1-nfs-server-config-exports

In summary, in order to grant pods access to PVs, you need to take into consideration:

  • the group ID and/or user ID assigned to the actual storage (on the NFS server),
  • SELinux settings,
  • and making sure that the IDs allowed to access the physical storage match the requirements of the particular pod.

The group IDs, the user ID, and the SELinux values can be defined in the pod's securityContext section; user IDs can also be defined per container. So, in short, you can use the following user, group, and SELinux options to control access and find the right combination:
  • supplementalGroups
  • fsGroup
  • runAsUser
  • seLinuxOptions

Hope it helps you a little bit!

Note: yaml files used in this post can be downloaded from Github location: https://github.com/pppoudel/kube-pv-pvc-demo