How to Retrieve Info Using Jenkins Remote Access API



We can use Jenkins' REST-like Remote Access API to post or retrieve information in XML or JSON format. The general format of the URL is <Jenkins-URL>/<data-context>/api/xml|json.
Below are a few examples:

1) Retrieving the build result.
The example below shows the build result in JSON format for job 'web-apps-build-pkgvalidator' and build number '6'. Credentials are passed using '--user <user-name>:<password>', and the result is piped to 'jq' to pretty-print it.

$> curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator/6/api/json --user ppoudel:<password> | jq

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  100  1765  100  1765    0     0  14120   0 --:--:-- --:--:-- --:--:-- 16192


{
  "_class": "hudson.maven.MavenModuleSetBuild",
  "actions": [
    {},
    {
      "_class": "hudson.model.CauseAction",
      "causes": [
        {
          "_class": "hudson.model.Cause$UserIdCause",
          "shortDescription": "Started by user Purna Poudel",
          "userId": "ppoudel",
          "userName": "Purna Poudel"
        }
      ]
    },
    {
      "_class": "hudson.plugins.git.util.BuildData",
      "buildsByBranchName": {
        "refs/remotes/origin/release/release-2.0.3": {
          "_class": "hudson.plugins.git.util.Build",
          "buildNumber": 6,
          "buildResult": null,
          "marked": {
            "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
            "branch": [
              {
                "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
                "name": "refs/remotes/origin/release/release-2.0.3"
              }
            ]
          },
          "revision": {
            "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
            "branch": [
              {
                "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
                "name": "refs/remotes/origin/release/release-2.0.3"
              }
            ]
          }
        }
      },
      "lastBuiltRevision": {
        "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
        "branch": [
          {
            "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
            "name": "refs/remotes/origin/release/release-2.0.3"
          }
        ]
      },
      "remoteUrls": [
        "https://pppoudel@bitbucket.org/pppoudel/pkgvalidator.git"
      ],
      "scmName": ""
    },
    {
      "_class": "hudson.plugins.git.GitTagAction"
    },
    {},
    {
      "_class": "hudson.maven.reporters.MavenAggregatedArtifactRecord"
    },
    {},
    {},
    {}
  ],
  "artifacts": [],
  "building": false,
  "description": null,
  "displayName": "#6",
  "duration": 37633,
  "estimatedDuration": 39130,
  "executor": null,
  "fullDisplayName": "web-apps-build-pkgvalidator #6",
  "id": "6",
  "keepLog": false,
  "number": 6,
  "queueId": 150,
  "result": "SUCCESS",
  "timestamp": 1546274196419,
  "url": "http://localhost:8080/job/web-apps-build-pkgvalidator/6/",
  "builtOn": "",
  "changeSet": {
    "_class": "hudson.plugins.git.GitChangeSetList",
    "items": [],
    "kind": "git"
  },
  "culprits": [
    {
      "absoluteUrl": "http://localhost:8080/user/ppoudel",
      "fullName": "Purna Poudel"
    }
  ],
  "mavenArtifacts": {},
  "mavenVersionUsed": "3.5.4"
}

2) The example below shows the use of /api/xml with an 'xpath' query to get just the build status from the build report.

$> curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator/6/api/xml?xpath=/*/result --user ppoudel:<password>

% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 24 0 24 0 0 102 0 --:--:-- --:--:-- --:--:-- 110


<result>SUCCESS</result>

3) Getting the build status using /api/json. The following example retrieves the job name, build number, build status, and timestamp.

$> curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator/6/api/json?tree=fullDisplayName,number,result,timestamp --user ppoudel:<password>

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   152  100   152    0     0   1216      0 --:--:-- --:--:-- --:--:--  1394


{"_class":"hudson.maven.MavenModuleSetBuild","fullDisplayName":"web-apps-build-pkgvalidator #6","number":6,"result":"SUCCESS","timestamp":1546274196419}
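When scripting against this endpoint, it helps to keep the 'tree'-filtered URL in one place. A minimal sketch, assuming the same server, job, and build as in the examples above (the function name is mine; the actual curl call is commented out because it needs a running Jenkins):

```shell
#!/bin/sh
# Compose the tree-filtered JSON API URL for a given job and build number.
JENKINS_URL="http://localhost:9090"   # Jenkins server URL from the examples above

build_status_url() {
  # $1 = job name, $2 = build number
  echo "${JENKINS_URL}/job/$1/$2/api/json?tree=fullDisplayName,number,result,timestamp"
}

# With a running Jenkins you would fetch it like this (credentials required):
#   curl -s --user "ppoudel:<password>" "$(build_status_url web-apps-build-pkgvalidator 6)"
build_status_url web-apps-build-pkgvalidator 6
```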

4) Retrieving all jobs under a certain view:
Note: here I'm piping the result through 'jq' and 'grep', which is optional.

$> curl -X GET http://localhost:9090/job/Web/job/mobile-apps/view/mobile-apps/api/json --user ppoudel:<password> | jq | grep name

"name": "mobile-apps-xyzmportal",
"name": "mobile-apps-holportal",
"name": "mobile-apps-tpl",
...
...
"name": "mobile-apps-bcs_jpj",


5) Retrieving the JUnit test summary report:

$> curl http://localhost:9090/job/web-apps-build-pkgvalidator/6/testReport/api/json?tree=failCount,skipCount,totalCount,urlName --user ppoudel:<password>

% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   127  100   127    0     0    451      0 --:--:-- --:--:-- - 579


{"_class":"hudson.maven.reporters.SurefireAggregatedReport","failCount":0,"skipCount":0,"totalCount":20,"urlName":"testReport"}
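From those three counters you can derive the number of passing tests; with the sample figures above:

```shell
#!/bin/sh
# Derive the passed-test count from the JUnit summary fields returned above.
failCount=0; skipCount=0; totalCount=20   # values from the sample response
passCount=$(( totalCount - failCount - skipCount ))
echo "$passCount tests passed"   # prints: 20 tests passed
```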

6) The steps below retrieve a SonarQube analysis result using Jenkins' REST-like remote API together with SonarQube's Web API.
6.1) Get the ceTaskId from the build, providing the build number.
Note: http://localhost:9090 is the Jenkins server URL.

$> curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator_sonarqube/6/api/json --user ppoudel:<password> | jq | grep ceTaskId

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5182  100  5182    0     0  18507      0 --:--:-- --:--:-- --:--:-- 20811
      "ceTaskId": "AWfXjftAinfFqLzOhqqe",

6.2) Get the analysisId using the ceTaskId:
Note: http://localhost:8000 is the SonarQube server URL.

$> curl -X GET http://localhost:8000/api/ce/task?id=AWfXjftAinfFqLzOhqqe --user ppoudel:<password> | jq | grep analysisId

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   518  100   518    0     0   2770      0 --:--:-- --:--:-- --:--:--  3029
    "analysisId": "AWfXjgNNVNzkngEjXoPD",

6.3) Get the quality gate report using the analysisId:

$> curl -X GET http://localhost:8000/api/qualitygates/project_status?analysisId=AWfXjgNNVNzkngEjXoPD --user ppoudel:<password> | jq

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed
100  1438  100  1438    0     0   4608      0 --:--:-- --:--:-- --:--:--  4841

{
  "projectStatus": {
    "status": "ERROR",
    "conditions": [
      {
        "status": "OK",
        "metricKey": "new_maintainability_rating",
        "comparator": "GT",
        "periodIndex": 1,
        "warningThreshold": "1",
        "actualValue": "1"
      },
      {
        "status": "OK",
        "metricKey": "new_reliability_rating",
        "comparator": "GT",
        "periodIndex": 1,
        "warningThreshold": "1",
        "actualValue": "1"
      },
      {
        "status": "OK",
        "metricKey": "new_security_rating",
        "comparator": "GT",
        "periodIndex": 1,
        "errorThreshold": "1",
        "actualValue": "1"
      },
      {
        "status": "OK",
        "metricKey": "sqale_rating",
        "comparator": "GT",
        "warningThreshold": "3",
        "actualValue": "1"
      },
      {
        "status": "ERROR",
        "metricKey": "security_rating",
        "comparator": "GT",
        "errorThreshold": "1",
        "actualValue": "5"
      },
      {
        "status": "WARN",
        "metricKey": "reliability_rating",
        "comparator": "GT",
        "warningThreshold": "3",
        "actualValue": "5"
      },
      {
        "status": "ERROR",
        "metricKey": "blocker_violations",
        "comparator": "GT",
        "errorThreshold": "0",
        "actualValue": "180"
      },
      {
        "status": "WARN",
        "metricKey": "critical_violations",
        "comparator": "GT",
        "warningThreshold": "0",
        "actualValue": "3806"
      },
      {
        "status": "WARN",
        "metricKey": "major_violations",
        "comparator": "GT",
        "warningThreshold": "0",
        "actualValue": "2878"
      },
      {
        "status": "WARN",
        "metricKey": "coverage",
        "comparator": "LT",
        "warningThreshold": "80",
        "actualValue": "0.1"
      },
      {
        "status": "ERROR",
        "metricKey": "vulnerabilities",
        "comparator": "GT",
        "errorThreshold": "0",
        "actualValue": "107"
      }
    ],
    "periods": [
      {
        "index": 1,
        "mode": "previous_version",
        "date": "2018-12-15T15:11:09-0500",
        "parameter": "1.5.8-SNAPSHOT"
      }
    ],
    "ignoredConditions": false
  }
}
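Steps 6.1 to 6.3 can be chained in a small script. The sed-based field extractor below is my own simplification (it only handles flat "key": "value" pairs, so jq is not required); the curl calls are commented out because they need running Jenkins and SonarQube servers:

```shell
#!/bin/sh
# json_field: print the value of a simple "key": "value" pair read from stdin.
json_field() {
  sed -n "s/.*\"$1\"[[:space:]]*:[[:space:]]*\"\([^\"]*\)\".*/\1/p" | head -n 1
}

# The chain, using the URLs and credentials from the examples above:
# task_id=$(curl -s --user "ppoudel:<password>" \
#     "http://localhost:9090/job/web-apps-build-pkgvalidator_sonarqube/6/api/json" | json_field ceTaskId)
# analysis_id=$(curl -s --user "ppoudel:<password>" \
#     "http://localhost:8000/api/ce/task?id=$task_id" | json_field analysisId)
# curl -s --user "ppoudel:<password>" \
#     "http://localhost:8000/api/qualitygates/project_status?analysisId=$analysis_id"

# Offline demonstration against a fragment of the 6.1 output:
echo '"ceTaskId": "AWfXjftAinfFqLzOhqqe",' | json_field ceTaskId   # prints: AWfXjftAinfFqLzOhqqe
```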

7) If you need to get all the configured projects (paginated output) in SonarQube, you can use the following URL (quoted, because the query string contains '&'):
Note: http://localhost:8000 is the SonarQube URL.

# First page:
$> curl -X GET "http://localhost:8000/api/components/search?qualifiers=TRK&p=1"

8) Or get the details of a specific project:
Note: http://localhost:8000 is the SonarQube URL.

$> curl -X GET "http://localhost:8000/api/components/show?key=<project-key>"

For more information on the Remote Access API, visit https://wiki.jenkins.io/display/JENKINS/Remote+access+API

You may be interested in reading the following Jenkins-related blog posts:
Jenkins Job Chain and How to Resolve Issue with Parameter Passing
Jenkins Pipeline - Few Troubleshooting Tips

How I achieved my PSM I Certification

I'm writing this post not only to express my joy but also to share the techniques I employed to prepare myself for the Professional Scrum Master™ (PSM) I test. I'm extremely delighted to have this certification. But don't get me wrong: applying the knowledge gained in daily practice is more important than earning a certification!
Just to give you some background, I'm a DevOps specialist and work with different agile teams (using the Scrum framework) as part of my daily job. I attended an Agile boot camp just last week, which also gave me a lot of insight into how to use the Scrum framework in the real world.

In terms of preparation, I spent around 5 days, including 3 days of boot camp. Below is the list of resources I used to acquire mastery of PSM I.

  1. Scrum Guide - in my humble opinion, this is the most important source of information. It is short but written very concisely and to the point. I read it just once top to bottom, but spent about four hours reviewing it and taking notes (imagine: I spent 4 hours, and the 2017 version only has 19 pages). I spent another 15 minutes or so reviewing it again just before taking my real test. You can download this guide for free from www.scrumguides.org
  2. Took the free Open Assessments provided by scrum.org. Even though I was targeting Scrum Master, I took all the available Open Assessments there just to get a bigger picture from the Scrum Master, the Product Owner, the Development Team, and the Scaled Scrum perspectives. I took some of these assessments twice to get close to a 100% passing mark, and I also took note of the individual quizzes I failed and reviewed them all again just before taking my real test.
  3. Big thanks go to Mikhail Lapshin for maintaining such a wonderful site (https://mlapshin.com/index.php/scrum-quizzes/) dedicated to Scrum quizzes. In my opinion, you will find sufficient quizzes there, both quantitatively and qualitatively. Mikhail also maintains a very good Scrum Questions page where he provides detailed insight.
  4. I also reviewed materials from www.volkerdon.com and did their quizzes. I must say that the site has high-quality materials and quizzes. I repeated some of the quizzes here twice.
  5. I also downloaded and reviewed The Scrum Master Training Manual (The-Scrum-Master-Training-Manual-Vr1.61.pdf) produced by Management Plaza (https://mplaza.pm). As the name suggests, it is written as a manual, with nice details. I also noticed that the manual puts too much emphasis on the word "project" (as a noun). My understanding is that Scrum is more product oriented than project oriented. The Scrum Guide (2017 version) touches on projects only briefly, stating "... Each Sprint may be considered a project with no more than a one-month horizon. Like projects, Sprints are used to accomplish something...". So, make sure you are not confused when reviewing the manual.
  6. If you are an audio-visual learner, the Scrum Training Series modules from http://www.scrumtrainingseries.com/ are very good. These modules not only explain Scrum in real-world-like scenarios but also have a good set of quizzes. I went through the top five modules and did the quizzes.
  7. Along with the Scrum Guide, I also reviewed the Nexus Guide and learned about the Nexus framework. Nexus is one of the Scrum scaling frameworks.
  8. I also studied some material (from the internet) to get a better understanding of how to do Product Backlog Item (PBI) ordering/prioritization, PBI work estimation, etc. The Evidence-Based Management Guide is a good read. I also gained some knowledge of the consensus-based estimating technique using Planning Poker®, estimating using story points, etc.
In terms of taking the actual test, I chose the morning time (I heard that the site becomes slower during the day). I woke up Sunday morning around 6:00 AM, prepared myself a cup of coffee (I actually consumed two cups in two hours), and did around 40 minutes of review before starting the actual test. Once you start the test, time management is very important. I would have missed a few questions had I not rushed in the last 30 minutes. I did mark a few questions for later review (you have to manually make a note in a separate notepad, as the system does not allow you to mark a question for review), but in the end I did not get enough time to go back and review them again. It took me around 58 minutes to finish the test. So, instead of going back and reviewing, I just hit the finish button. I felt confident in my answers; however, I was still nervous about my results. Yes, I got 96.3%.
Hope this post is helpful to all of you, whether you are considering certification or just trying to learn and use Scrum in your daily practice!

Jenkins Pipeline - Few Troubleshooting Tips



1) ERROR: Error cloning remote repo ... fatal: I don't handle protocol 'ssh'. Detailed stack trace below:

ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress ssh://<gituser>@<githost>:<port>/<GIT_REPO_NAME>.git
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: fatal: I don't handle protocol '
ssh'
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1996)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1715)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:72)

Root cause and possible solution: The above issue can be caused by many things, but one you would never suspect from the error message itself is extra invisible character(s) in the Git URL, especially if you copied the URL from another web page. Just delete the Git URL in your pipeline script and retype it manually (instead of pasting); that may solve the issue.

2) Error: java.lang.NoSuchMethodError: No such DSL method 'findFiles' found among steps...

Root cause and possible solution: The DSL methods are related to the Jenkins DSL execution engine or one of the plugins. In this particular case, make sure the pipeline-utility-steps (pipeline-utility-steps.jpi) plugin is installed. For more info, visit https://plugins.jenkins.io/pipeline-utility-steps

3) Error: java.lang.NoSuchMethodError: No such DSL method 'httpRequest' found among steps...

Root cause and possible solution: In this particular case, make sure the HTTP Request (http_request.hpi) plugin is installed. For more info, visit https://jenkins.io/doc/pipeline/steps/http_request/

4) Error: java.lang.NoSuchMethodError: No such DSL method 'sshagent' found among steps...

Root cause and possible solution: In this particular case, make sure the SSH Agent (ssh-agent.hpi) plugin is installed. For more info, visit https://wiki.jenkins.io/display/JENKINS/SSH+Agent+Plugin

5) Error: java.lang.RuntimeException: [ssh-agent] Could not find a suitable ssh-agent provider...


java.lang.RuntimeException: [ssh-agent] Could not find a suitable ssh-agent provider.
    at com.cloudbees.jenkins.plugins.sshagent.SSHAgentStepExecution.initRemoteAgent(SSHAgentStepExecution.java:175)
    at com.cloudbees.jenkins.plugins.sshagent.SSHAgentStepExecution.start(SSHAgentStepExecution.java:63)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:270)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:178)
    at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
    at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)

Root cause and possible solution: This error means that the SSH Agent plugin is not able to locate a suitable ssh-agent provider on the path. You will mostly encounter this issue if you are running Jenkins on Windows. To resolve it on Windows, I downloaded Portable Git (https://git-scm.com/download/win) and put the %PORTABLE_GIT_HOME%\usr\bin directory on my system PATH. This directory contains ssh-agent.exe. Make sure to launch a new command window and restart Jenkins.
You may be interested in reading the following Jenkins-related blog posts:
Jenkins Job Chain and How to Resolve Issue with Parameter Passing
How to Retrieve Info Using Jenkins Remote Access API

6) You are using the Maven Pipeline plugin and you get [ERROR] Java heap space

[ERROR] Java heap space -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError

Root cause and possible solution: The build is running out of memory because of insufficient Java heap space. The easiest solution is to use Maven JVM options to specify the maximum (or both minimum and maximum) heap size your process needs. You can use 'mavenOpts' within the 'withMaven' step. Below is an example:

stage('Build') {
  withMaven(
    mavenSettingsConfig: '9d2a7048-91b1-47a8-8788-be4b89b71128',
    jdk: jdk.toString(),
    maven: 'Maven 3.3.9',
    mavenOpts: '-Xmx2048m') {
    bat 'mvn clean package'
  }
}

Important: before increasing the heap size, make sure sufficient physical memory (RAM) is available. If your Java process cannot reserve the specified heap space, you may get an error saying "Error occurred during initialization of VM: Could not reserve enough space for 2097152KB object heap". This means either there is not enough physical memory available on the server for Java to reserve the specified heap space (2 GB in this case), or other server/OS constraints (for example, a 32-bit JVM on Windows has a heap limit of around 1.6 GB) prevent Java from reserving the specified heap size.
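As a quick sanity check of the figure in that error message, 2097152 KB is exactly 2 GB:

```shell
#!/bin/sh
# The VM error above reports it could not reserve 2097152KB; confirm that is 2 GB.
kb=2097152
echo "$(( kb / 1024 / 1024 )) GB"   # prints: 2 GB

# On Linux you could compare that against what is actually free, e.g.:
#   awk '/MemAvailable/ {printf "%d MB available\n", $2/1024}' /proc/meminfo
```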

Welcoming the New Year - the Polar Bear Way

Looking back, 2018 was an amazing year, full of ups and downs and sweet and sour experiences. Today, I just wanted to say HAPPY NEW YEAR to everyone and wish you all the best in 2019! How was your first day of 2019? I started my New Year by taking a dip into the freezing water of Lake Ontario, organized by the Toronto Polar Bear Club in support of Boost Child & Youth Advocacy Centre (CYAC). I thought there was no better way to start the new year than participating in an event for charity.


People rushing into the chilly water of Lake Ontario


Boost CYAC is a Toronto-based organization that answers the call when a child or youth under the age of 18 becomes a victim of sexual abuse, physical abuse, emotional abuse, or neglect, and helps them with its services.
Boost CYAC has a noble mission which states, "Boost CYAC is committed to eliminating abuse and violence in the lives of children, youth, and their families...We are dedicated to the prevention of child abuse and violence through education and awareness, and to collaborating with our community partners to provide services to children, youth, and their families." For more information, visit their website https://boostforkids.org

Thank you to everyone who supported me and donated to Boost CYAC. Good news! As per the latest figures from @TOPolarBearClub, today's participants raised over $33,000.


Today's event was amazing and definitely something I can talk about for a long time. Participating in this event had been a goal of mine for a long time. Two years ago, I had the chance to read Scott Carney's famous book "What Doesn't Kill Us," where he explains techniques based on cold therapy, breathing, and commitment, developed by Wim Hof. I became an instant fan and started practicing a lighter version of it. So today was my chance to experience how my body reacts in naturally freezing water, and it did just great!

The day overall was chilly, with a temperature of around -3°C that felt like -6°C with the wind chill. I went to this event with my family and a friend's family. The event was a buzz of excitement, with people cheering and taking pictures. I was one of the first few people to touch the freezing cold water and experience the thrilling chill.

I'm hoping to make the polar bear dip my New Year's Day tradition. If I'm able to inspire you enough through this writing, I'll be seeing you all on January 1st, 2020 at Sunnyside Beach, Toronto. Stay warm until then.

Here are some pictures and a short YouTube video that my family captured during the event.

After the dip



CityNews coverage

GIT: Maintain Clean Workspace, Stay Synced with Integration Branch & Publish Perfect Commits

Git is very powerful and provides many options to choose from while doing one task or another. Git is, on one hand, a distributed version control system, as every Git working directory contains a full-fledged repository; on the other hand, it can be hosted centrally to facilitate sharing and collaboration. So it is easy to use the power of Git and achieve wonders, but also easy to mess up and spend a good chunk of your daily hours resolving conflicts. I've coached developers on how to stay clean and synced while improving their productivity and fine-tuning their commits. Ultimately, I have come up with this one-page visual diagram that outlines the practices I have been preaching.


Diagram 1.0

Since diagram 1.0 is self-explanatory, I am not going to elaborate on it in detail, but will just highlight a few important concepts below.

Maintain a clean workspace (working directory)

Specifically, when you are done for the day and heading home (or to a bar if you feel so), or starting fresh (with a fresh cup of java) in the morning, it is important to ensure your working directory is clean. Block #4 (in diagram 1.0) and the associated green boundary explain how to deal with un-tracked, un-staged, or un-committed files.
  • Discard: discard them (if you really don't need them).
    • The orange boundary contains steps to deal with those changes on a case-by-case basis.
    • The purple boundary discards everything that is not committed.
  • Commit: stage (if not already staged) and commit.
  • Stash: store them safely for later use, which is called stashing in Git terms.

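The three options above map to everyday Git commands. A throwaway-repo sketch (the file name and messages are made up for illustration):

```shell
#!/bin/sh
# Demonstrate discard / commit / stash against a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # local identity for the scratch repo
git config user.name "Demo"

echo "v1" > app.txt
git add app.txt
git commit -q -m "commit: a logical unit of work"   # Commit: stage and commit

echo "wip" >> app.txt                # an un-staged change
git stash -q                         # Stash: store it safely for later use
git status --porcelain               # prints nothing: the workspace is clean
git stash pop -q                     # bring the stashed change back
git checkout -q -- app.txt           # Discard: drop it for good
```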

Remain synced with the remote branch:

Making it a regular practice to pull the latest from the remote branch and either merge or rebase (depending on the merge strategy in place) not only helps resolve merge conflicts while they are still small and manageable, but also boosts team collaboration. Block #5 with the pink boundary in diagram 1.0 explains exactly this. If you are working on a 'feature' branch (following the GitFlow strategy), you need to pull not only from your remote 'feature' branch but also from the 'develop' branch (assuming here that 'develop' is the integration branch; you may have 'dev' or 'main' instead), and merge locally on your feature branch before you push your code to the remote feature branch and later create a pull request to the integration branch.

Commit early and often

Blocks #8, #9, and #10 show this. Whether you are developing a feature or working on a bug/defect fix, it is important to commit when you complete a logical unit of work. Please note, it is never too early and never too frequent to commit your code, as long as you review your commits and fine-tune them before pushing/publishing. Committing not only helps maintain a clean working directory but also helps protect your work from accidental loss.

Review and fine-tune your commits before publishing

If you follow the 'commit early and often' principle, it is important that you review and, if necessary, fine-tune your commits before pushing/sharing/publishing. Make sure that each commit is small enough and represents a logical unit of work (related to a particular feature, bug fix, or defect fix). Fine-tuned commits are extremely useful when troubleshooting with git bisection (git bisect) to find the code that introduced a particular bug, or when reverting a commit (git revert) with confidence. You can perfect your commits by squashing related commits into one, making it kind of transactional; by rearranging commits in the right order; by amending commit messages to make them contextual with the right reference; or by splitting a commit if it contains unrelated changes. Block 10.1 in diagram 1.0 reminds you to perfect (if necessary) your commits before sharing/pushing/publishing.
Important: NEVER re-write any shared/published history.
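As one concrete way to squash local commits before publishing, here is a sketch in a scratch repository ('git rebase -i' achieves the same interactively):

```shell
#!/bin/sh
# Squash the last two local commits into one before pushing (scratch repo only;
# never do this to history that has already been published).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "a" > f.txt; git add f.txt
git commit -q -m "feature: part 1"
echo "b" >> f.txt
git commit -q -am "feature: part 2 (WIP)"

git reset -q --soft HEAD~1           # keep the changes, drop the last commit
git commit -q --amend -m "feature: one logical, transactional commit"
git rev-list --count HEAD            # prints: 1
```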

Pull before push


It is one of the most important rules to follow when you are working in a team environment. As described in Block #5.0 of diagram 1.0, you need to pull the latest commits from the remote branch, merge locally (resolving any conflicts), and only after that push your code. Whether you merge or rebase depends on the strategy you have in place. Most of the time, merge is safer than rebase.

Regularly publish your code

Most of us get paid only after publishing, so it is important! It is also generally how teams share and collaborate. Remote/central repositories are usually set up with high availability (HA) and disaster recovery (DR) in mind, so it is also important to publish your commits regularly to protect them from destructive events. Refer to block #12 of diagram 1.0.

Note: if you are interested in contributing to enhance the diagram further, you can do so. The source of this diagram (GitBestPracticesOnePagerDiagram.xml, in draw.io format) is on GitHub: https://github.com/pppoudel/git-best-practices-one-page-diagram.git


References of Git commands (used in diagram 1.0):


How to Configure PostgreSQL with SSL/TLS support on Kubernetes

SSL is disabled in the default PostgreSQL configuration, and I had to struggle a little to make PostgreSQL on Kubernetes work with SSL/TLS support. After some research and trials, I was able to resolve the issues, and here I'm sharing what I did to make it work.



High Level Steps:
  1. Customize postgresql.conf (to add/edit the SSL/TLS configuration) and create a ConfigMap object. This way, we don't need to rebuild the PostgreSQL image to apply a custom postgresql.conf, because a ConfigMap allows us to decouple configuration artifacts from image content.
  2. Create Secret objects for server.key, server.crt, root.crt, ca.crt, and the password file.
  3. Define and use an NFS-type PersistentVolume (PV) and PersistentVolumeClaim (PVC).
  4. Use securityContext to resolve permission issues.
  5. Use '-c config_file=<config-volume-location>/postgresql.conf' to override the default postgresql.conf.
Note: all files used in this post can be cloned/downloaded from GitHub https://github.com/pppoudel/postgresql-with-ssl-on-kubernetes.git

Let's get started

In this example, I'm using a namespace called 'shared-services' and a service account called 'shared-svc-accnt'. You can create your own namespace and service account or use the 'default' ones. In any case, I have listed the necessary steps here, and the YAML files can be downloaded from GitHub.

Create namespace and service account


# Create namespace shared-services

   $> kubectl create -f shared-services-ns.yml

# Create Service Account shared-svc-accnt

   $> kubectl create -f shared-svc-accnt.yml

# Create a grant for service account shared-svc-accnt. Do this step as per your platform.


Create configMap object

I have put both postgresql.conf and pg_hba.conf under the config directory, and I have updated postgresql.conf as follows:

ssl = on
ssl_cert_file = '/etc/postgresql-secrets-vol/server.crt'
ssl_key_file = '/etc/postgresql-secrets-vol/server.key'

Note: the location '/etc/postgresql-secrets-vol' (referenced above) needs to be mounted when defining 'volumeMounts', which we will discuss later in the post.

The three configuration items listed above are the main ones that need proper values in order for PostgreSQL to support SSL/TLS. If you are using a CA-signed certificate, you also need to provide a value for 'ssl_ca_file' and optionally 'ssl_crl_file'. Read Secure TCP/IP Connections with SSL for more details.
You also need to update pg_hba.conf (HBA stands for host-based authentication) as necessary. pg_hba.conf is used to manage connection types and to control access using a client IP address range, a database name, a user name, the authentication method, etc.

# Sample entries in pg_hba.conf
# Trust local connection - no password required.
local    all             all                                     trust
# Only secured remote connections from the given IP range are accepted; passwords are hashed using MD5
#hostssl  all             all             < Cluster IP-Range >/< Prefix length >         md5
hostssl  all             all             10.96.0.0/16         md5


$> ls -l config/

-rw-------. 1 osboxes osboxes  4535 Sep 22 17:33 pg_hba.conf
-rw-------. 1 osboxes osboxes 22781 Sep 23 03:03 postgresql.conf

# Create configMap object
$> kubectl create configmap postgresql-config --from-file=config/ -n shared-services
configmap "postgresql-config" created

# Review created object
$> kubectl describe configMap/postgresql-config -n shared-services
Name:         postgresql-config
Namespace:    shared-services
...

Create secrets

I've created server.key and a self-signed certificate using OpenSSL. You can either do the same or use CA-signed certificates. Here, we are not going to use a client certificate. Read section 18.9.3, Creating Certificates, of the PostgreSQL documentation if you need help creating certificates.
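For a quick test setup, a self-signed key and certificate can be generated in one OpenSSL call. This is my own minimal sketch (the CN is a placeholder; see the PostgreSQL docs for CA-signed setups):

```shell
#!/bin/sh
# Generate a self-signed server certificate and key for testing.
set -e
dir=$(mktemp -d)
cd "$dir"
openssl req -new -x509 -days 365 -nodes \
  -subj "/CN=postgres.example.local" \
  -keyout server.key -out server.crt 2>/dev/null
chmod 600 server.key   # PostgreSQL refuses keys readable by group/others
openssl x509 -in server.crt -noout -subject
```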

# Create MD5 hashed password to be used with postgresql
$> POSTGRES_USER=postgres
$> POSTGRES_PASSWORD=myp3qlpwD
$> echo "md5$(echo -n $POSTGRES_PASSWORD$POSTGRES_USER | md5sum | cut -d ' ' -f1)" > secrets/postgresql-pwd.txt

# Here are all files under secrets directory
$> ls -la secrets/
-rw-rw-r--. 1 osboxes osboxes  13 Sep 22 23:42 postgresql-pwd.txt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:51 root.crt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:49 server.crt
-r--------. 1 osboxes osboxes 887 Sep 22 16:43 server.key

# Create secret postgresql-secrets
$> kubectl create secret generic postgresql-secrets --from-file=secrets/ -n shared-services
secret "postgresql-secrets" created

# Verify

$> kubectl describe secrets/postgresql-secrets -n shared-services
Name:         postgresql-secrets
Namespace:    shared-services
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
server.key:          887 bytes
postgresql-pwd.txt:  13 bytes
root.crt:            891 bytes
server.crt:          891 bytes

Note: As seen above, I created the MD5 hash of "<password><username>" and prepended the string "md5" to it. When Postgres sees the "md5" prefix, it recognizes that the string is already hashed, so it stores the value as-is instead of hashing it again.
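That construction can be wrapped in a small helper for reuse. This is just a sketch; the user and password values below are illustrative, not real credentials:

```shell
# Build a PostgreSQL-style MD5 password string: 'md5' + md5(password || username).
pg_md5_password() {
  user="$1"
  pass="$2"
  echo "md5$(printf '%s%s' "$pass" "$user" | md5sum | cut -d ' ' -f1)"
}

# Illustrative invocation; substitute your own credentials.
pg_md5_password postgres changeme
```

The result is always a 35-character string: the literal "md5" followed by a 32-character hex digest, which is exactly the format you see later in pg_shadow's passwd column.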

Create PersistentVolume (PV) and PersistentVolumeClaim (PVC) 

Let's go ahead and create the PV and PVC. We will use 'Retain' as the persistentVolumeReclaimPolicy so that data is retained even when the PostgreSQL pod is destroyed and recreated.
Sample PV yaml file:
## shared-nfs-pv-postgresql.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv-postgresql
  namespace: shared-services
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /var/postgresql/
    server: 192.168.56.101
  persistentVolumeReclaimPolicy: Retain

Sample PVC yaml file:
## shared-nfs-pvc-postgresql.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc-postgresql
  namespace: shared-services
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

PV and PVC creation and verification steps:
# Create persistentvolume
$> kubectl create -f yaml/shared-nfs-pv-postgresql.yml
persistentvolume "shared-nfs-pv-postgresql" created

# Create persistentvolumeclaim
$> kubectl create -f yaml/shared-nfs-pvc-postgresql.yml
persistentvolumeclaim "shared-nfs-pvc-postgresql" created

# Verify and make sure status of persistentvolumeclaim/shared-nfs-pvc-postgresql is Bound
$> kubectl get pv,pvc -n shared-services
NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                       STORAGECLASS   REASON    AGE
persistentvolume/shared-nfs-pv-postgresql   5Gi        RWX            Retain           Bound     shared-services/shared-nfs-pvc-postgresql                            32s

NAME                                              STATUS    VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/shared-nfs-pvc-postgresql   Bound     shared-nfs-pv-postgresql   5Gi        RWX                           20s

Create deployment manifest file

Here is the one I have put together. You can customize it further per your needs.

---
# Service definition
apiVersion: v1
kind: Service
metadata:
  name: sysg-postgres-svc
  namespace: shared-services
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: tcp-5432
  selector:
      app: sysg-postgres-app
---
# Deployment definition
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sysg-postgres-dpl
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: sysg-postgres-app
  replicas: 1
  template:
    metadata:
      labels:
        app: sysg-postgres-app
    spec:
      serviceAccountName: shared-svc-accnt
      securityContext:
        runAsUser: 70
        supplementalGroups: [999,1000]
        fsGroup: 70
      volumes:
        - name: shared-nfs-pv-postgresql
          persistentVolumeClaim:
            claimName: shared-nfs-pvc-postgresql
        - name: secret-vol
          secret:
            secretName: postgresql-secrets
            defaultMode: 0640
        - name: config-vol
          configMap:
            name: postgresql-config
      containers:
      - name: sysg-postgres-cnt
        image: postgres:10.5-alpine
        imagePullPolicy: IfNotPresent
        args:
          - -c
          - hba_file=/etc/postgresql-config-vol/pg_hba.conf
          - -c
          - config_file=/etc/postgresql-config-vol/postgresql.conf
        env:
          - name: POSTGRES_USER
            value: postgres
          - name: PGUSER
            value: postgres
          - name: POSTGRES_DB
            value: mmdb
          - name: PGDATA
            value: /var/lib/postgresql/data/pgdata
          - name: POSTGRES_PASSWORD_FILE
            value: /etc/postgresql-secrets-vol/postgresql-pwd.txt
        ports:
         - containerPort: 5432
        volumeMounts:
          - name: config-vol
            mountPath: /etc/postgresql-config-vol
          - mountPath: /var/lib/postgresql/data/pgdata
            name: shared-nfs-pv-postgresql
          - name: secret-vol
            mountPath: /etc/postgresql-secrets-vol
      nodeSelector:
        kubernetes.io/hostname: centosddcwrk01

Deploy the Postgresql

The steps below show the creation of the service and deployment, as well as how to verify that Postgres is running in SSL-enabled mode.
# Deploy
$> kubectl apply -f yaml/postgres-deploy.yml
service "sysg-postgres-svc" created
deployment.apps "sysg-postgres-dpl" created

# Verify
$> kubectl get pods,svc -n shared-services
NAME                                     READY     STATUS    RESTARTS   AGE
pod/sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          1h

NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/sysg-postgres-svc   ClusterIP   10.96.90.30   <none>        5432/TCP   1h

# sh into the postgresql pod:
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services
/ $

# Launch psql
/ $ psql -U postgres
psql (10.5)
Type "help" for help.

# Verify SSL is enabled
postgres=# SHOW ssl;
 ssl
-----
 on
(1 row)

# Check the stored password. It should match the hashed value of "<password><user>" with "md5" prepended.
postgres=#  select usename,passwd from pg_catalog.pg_shadow;
 usename  |               passwd
----------+-------------------------------------
 postgres | md5db59316e90b1afb5334a331081618af6

# Connect remotely. You need to provide password.
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm -n shared-services -- psql "sslmode=require host=10.96.90.30 port=5432 dbname=mmdb" --username=postgres
Password for user postgres:
psql (10.5)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

A few key points

1) Using customized configuration file:
As you have seen above, I created the configMap object postgresql-config and used it via the option '-c config_file=/etc/postgresql-config-vol/postgresql.conf'. The configMap postgresql-config is mounted at path '/etc/postgresql-config-vol' in the volumeMounts definition.

containers:
- name: sysg-postgres-cnt
  imagePullPolicy: IfNotPresent
  args:
    - -c
    - hba_file=/etc/postgresql-config-vol/pg_hba.conf
    - -c
    - config_file=/etc/postgresql-config-vol/postgresql.conf

volumeMounts:
  - name: config-vol
    mountPath: /etc/postgresql-config-vol


2) Creating environment variable from secrets:

env:
  - name: POSTGRES_PASSWORD_FILE
    value: /etc/postgresql-secrets-vol/postgresql-pwd.txt

And the secret is mounted at path /etc/postgresql-secrets-vol:
volumeMounts:
  - name: secret-vol
    mountPath: /etc/postgresql-secrets-vol

3) PGDATA environment variable: 
The default value is '/var/lib/postgresql/data'. However, the official postgres image documentation recommends: "... if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata ) be created to contain the data." Refer to https://hub.docker.com/_/postgres/

Here we assign /var/lib/postgresql/data/pgdata:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata


Troubleshooting

1) Make sure server.key, server.crt, and root.crt all have appropriate permissions, that is, 0600 or less (if owned by the postgres process owner) or 0640 (if owned by root). If proper permissions are not applied, PostgreSQL will not start, and in the log you will see the following FATAL message.

2018-09-22 18:26:22.391 UTC [1] FATAL:  private key file "/etc/postgresql-secrets-vol/server.key" has group or world access
2018-09-22 18:26:22.391 UTC [1] DETAIL: File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
2018-09-22 18:26:22.391 UTC [1] LOG:  database system is shut down

To apply proper permissions at the file level, you can use 'defaultMode'. I'm using defaultMode: 0640 as shown below (fragment from postgres-deploy.yml):

- name: secret-vol
  secret:
    secretName: postgresql-secrets
    defaultMode: 0640
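You can also sanity-check a key file's mode locally before packaging it into the secret. This is a sketch mirroring the permission check Postgres performs at startup; the path is illustrative, and 'stat -c' assumes GNU coreutils:

```shell
# Succeed only if the file mode has no bits outside rw-r----- (0640),
# i.e. it would pass PostgreSQL's private-key permission check.
key_mode_ok() {
  perm=$(stat -c '%a' "$1")          # e.g. '640' (GNU stat)
  # 0137 is the octal complement of 0640 within 0777
  [ $(( 0$perm & 0137 )) -eq 0 ]
}

# Illustrative path; point it at your actual server.key.
key_mode_ok secrets/server.key \
  && echo "server.key permissions OK" \
  || echo "server.key has group/world access; fix with chmod 0640 or defaultMode"
```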

2) Make sure files and directories have the right ownership - whether they belong to the secret volume, the config volume, or the persistent storage volume. Below is an error related to the PV path:

initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted

To resolve the above issue, use Kubernetes' securityContext options such as 'runAsUser', 'fsGroup', 'supplementalGroups', and/or capabilities. A securityContext can be defined at both the pod level and the container level. In my case, I've defined it at the pod level, as shown below (fragment from postgres-deploy.yml):

securityContext:
  runAsUser: < specify your run as user >
  fsGroup: < specify group >
  supplementalGroups: [< comma delimited list of supplementalGroups >]

Read the "Configure a Security Context for a Pod or Container" chapter on the official Kubernetes site. I've also given some troubleshooting tips on using NFS-type Persistent Volumes and Claims in my previous blog post, How to Create, Troubleshoot and Use NFS type Persistent Storage Volume in Kubernetes.

Below, I'm showing the file permissions per my configuration. Files are owned by root:postgres.

# Get running Kubernetes pod
$> kubectl get pods -n shared-services
NAME                                 READY     STATUS    RESTARTS   AGE
sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          12s

# sh to running Kubernetes pod
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services

# Explore files
/ $ cd /etc/postgresql-secrets-vol/..data/
/etc/postgresql-secrets-vol/..2018_09_22_19_24_52.289379695 $ ls -la

drwxr-sr-x    2 root     postgres       120 Sep 23 01:19 .
drwxrwsrwt    3 root     postgres       160 Sep 23 01:19 ..
-rw-r-----    1 root     postgres        13 Sep 23 01:19 postgresql-pwd.txt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 root.crt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 server.crt
-rw-r-----    1 root     postgres       887 Sep 23 01:19 server.key

3) psql: FATAL:  no pg_hba.conf entry for host "10.0.2.15", user "postgres" ... This FATAL message usually appears when you try to establish a connection to Postgres but the way you are authenticating is not defined in pg_hba.conf: either the source IP (from where the connection originates) is out of range, or the security option is not supported. Check your pg_hba.conf file and make sure the right entry has been added.
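A quick way to see which rules are actually in effect is to filter out the comments and blank lines. A small sketch; inside the pod the file sits at the configMap mount path used earlier:

```shell
# Print only the active rules (comments and blank lines removed). Every client
# connection must match one of these lines, or Postgres raises the
# 'no pg_hba.conf entry' FATAL error.
active_hba_rules() {
  grep -Ev '^[[:space:]]*(#|$)' "$1"
}

# Inside the pod, run it against the configMap mount:
# active_hba_rules /etc/postgresql-config-vol/pg_hba.conf
```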

[Optional]  Creating custom Postgres Docker image with customized postgresql.conf

If you prefer to create a custom Docker image with a custom postgresql.conf, rather than creating a configMap and using the '-c config_file' option, you can do so. Here is how:

Create Dockerfile:


FROM postgres:10.5-alpine
COPY config/postgresql.conf /tmp/postgresql.conf
COPY scripts/_updateConfig.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/_updateConfig.sh && chmod 644 /tmp/postgresql.conf

My custom postgresql.conf is located under the config directory locally. It is copied to /tmp when the Docker image is built. _updateConfig.sh is located under the scripts directory locally and is copied to /docker-entrypoint-initdb.d/ at build time.

Create the script file _updateConfig.sh as shown below. It assumes that the default PGDATA value '/var/lib/postgresql/data' is being used.

#!/usr/bin/env bash
cat /tmp/postgresql.conf > /var/lib/postgresql/data/postgresql.conf

Important: we cannot directly copy the custom postgresql.conf into the $PGDATA directory at build time because that directory does not exist yet.

Build the image:


Directory and files shown below are local:
$> ls -la postgresql/

drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 13:01 config
-rwxr-xr--.  1 osboxes osboxes  227 Sep 22 19:14 Dockerfile
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 18:39 scripts
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 23:42 secrets
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 12:11 yaml
$>cd postgresql

# docker build -t <image tag> .
# In my case I am using osboxes/postgres:10.5-sysg as image name and tag.
$> docker build -t osboxes/postgres:10.5-sysg .

If you use a custom Docker image built this way, you don't need to define a configMap to use a custom postgresql.conf.
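With the custom image, the container spec shrinks to something like the fragment below. This is a sketch based on the manifest above; the '-c config_file' args are no longer needed because _updateConfig.sh installs the file into PGDATA during first initialization:

```yaml
containers:
- name: sysg-postgres-cnt
  image: osboxes/postgres:10.5-sysg    # custom image built above
  imagePullPolicy: IfNotPresent
  # no '-c config_file=...' args: _updateConfig.sh copies the custom
  # postgresql.conf into PGDATA when the database is initialized
```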



Trip to Tobermory - a perfect getaway within driving distance of Toronto

My family's no-labouring plan for this year's Labour Day long weekend included some cool & quality time at the warm Sauble Beach, an overnight stay in the traditionally Scottish town of Kincardine, and a trip to the majestic lakeside town called Tobermory. It was a short (two days only) plan for a long weekend. We intentionally made it two days so that we could return home on Monday and finish any last-minute preparation for the new school year starting on Tuesday.

    As seen from Ferry Terminal - Lake Huron and Islands nearby
                                         
Originally, our plan was to visit Tobermory for the full duration of the weekend and soak in the beauty of the place. However, after an unsuccessful attempt to book a hotel (Expedia and hotels.com only showed listings that famously said - we are sold out), we shifted our attention to the next nearby town, Owen Sound. I found myself in a similar situation there and had no luck finding a hotel in that city either. At one point I was even thinking about giving up on the long-weekend idea, but one of my friends gave me the brilliant idea of booking a hotel in Kincardine, Ontario, and I did. Early Saturday morning, we were to drive about 2 hours and 50 minutes from Markham to Sauble Beach. However, thanks to my daughters' research on Kincardine, we changed the plan at the last minute. They found very good reviews of Station Beach in Kincardine and convinced us to head directly there rather than going to Sauble Beach. The fact that we had been to Sauble Beach before also helped us make that decision.
      On Saturday morning, we left home around 7:50 am. The first few hours of the morning were foggy, but the scenery began to look better as the fog slowly lifted. Sometimes even a few hours of driving (it is about a two-and-a-half-hour drive from Toronto to Kincardine) can be boring. However, if you have cheerful kids in the car, it's sure to be a lot of fun. We enjoyed watching the countryside while driving. We even played "I spy with my little eye..." Even though my kids are teenagers now, they love playing this game on trips, which keeps us all engaged so we don't miss out on the little things.

Conquering Kincardine

We were at Station Beach by 11:30 am. I immediately fell in love with the beach because of its clean, pristine water and long sandy shoreline. It is conveniently located within walking distance of downtown, and parking is free. As soon as I touched the water, I felt a deep need to submerge myself in it immediately. It was amazing to swim there. The beach wasn't too crowded, which made it the perfect place to relax and take a breather. No matter what your idea of relaxing is, you can achieve it there. You can swim, sleep, read, or maybe play beach volleyball. We had brought food from home, so we enjoyed a little picnic there. I even had a good nap while sunbathing. My kids went for a walk along the boardwalk (a great place to walk, jog, and enjoy the view) and took beautiful pictures and videos. The boardwalk became a walk-and-learn experience for them, as it had interpretive signs with information about local marine history and shipwrecks.

Station Beach - even birds are sunbathing here!!!


      Marina at Station beach
               Interpretive sign
Tips:

  • Station Beach is located at 151 Station Beach Road, Kincardine, ON.
  • As per www.canoe.ca, this water is listed as one of the top 9 destinations for surfing.
  • For those with mobility issues, there are 'MOBI-Mats' at Station Beach. The mats stretch right to the water's edge.
  • If you enjoy playing beach volleyball, there is co-ed beach volleyball every Friday at 7 p.m.
  • The park has a bouncy castle for kids to enjoy.
  • The lighthouse and museum are just a 2-to-3-minute walk away, and lakeside downtown Kincardine is just nearby.

After spending almost three and a half hours on the beach, we drove to the hotel (Sutton Park Inn). The check-in process was smooth and the hotel was nice. After taking a shower and having a fresh cup of coffee, we went out for a tour of downtown Kincardine. We parked our car on Harbour Street, near the lighthouse, and went for a walk along Queen Street. Kincardine is a small town located on the shores of Lake Huron in Bruce County and has a strong Scottish heritage. I was told that during summer, every Saturday night, people in Kincardine celebrate and take part in Scottish pipe band parades.

Queen Street, downtown Kincardine

If you have a sweet tooth, you should visit this little chocolate shop (Mill Creek Chocolates) located at 813 Queen Street. They offer handmade chocolates in very personalized packaging, finished with colourful wraps and ribbons. You can enjoy the chocolates yourself or bring them as a gift. You'll love them.

Mill Creek Chocolates Shop at Queen street.
Mill Creek Chocolates showroom

We also had a chance to have a friendly conversation with the saleslady. She told us a little history of Kincardine and the Mill Creek Chocolates, as well as how she ended up living in Kincardine. Originally from Brampton, Ontario, she once visited Kincardine, liked the town, and has been living there ever since.
     I had read that Station Beach had one of the most beautiful sunsets. But unfortunately, the evening became cloudy around 6:30 pm and it looked like it was going to rain. A little frustrated, we instead went to see the lighthouse, which we found very fitting to the aesthetic of the small town. Around 7:00 pm, as we were about to head back to the hotel, it started raining. It rained all night long, but we were prepared for this. We had brought the board game Monopoly with us. We played until about 11:00 pm and had tons of fun while snacking on the Mill Creek chocolates.
     It's no secret that when I initially booked my hotel in Kincardine, the purpose was to use it just as a transit point. However, now I have no regrets whatsoever. We enjoyed it fully.


Trip to Tobermory


The next day (Sunday) we got up on time to have the complimentary breakfast at the hotel, and after packing our bags we started driving to Tobermory. It was around a two-hour drive. We reached Tobermory at around 11:00 am and purchased our tickets for the cruise (Bruce Anchor company - 7468 HWY 6). Be prepared to spend time in the ticket queue: even if you have pre-purchased your ticket online, you still have to stand in line to check in and get a parking permit for your car. They do have a separate window to serve holders of pre-purchased tickets.
Notes:
  • The Bruce Anchor company offers a few options. See their web site for more details.
  • The ticket price also includes the parking fee, and they have a free shuttle from the parking spot to the Ferry Terminal and back.
  • There are other boat and cruise services as well. Check the following:
We booked the Tobermory Explorer option and enjoyed the view (while staying aboard) of shipwrecks, lighthouses, and several beautiful islands in Fathom Five National Marine Park.

Bruce Anchor boat cruise
Big Tub Lighthouse

It was around 75 minutes of fun-filled cruising. The cruise moves at a slow speed around Little Tub Harbour, where you can see the sunken ships (note: Tobermory is home to over 20 historic shipwrecks) through the glass windows or glass bottom. The speed bumps up once it is out of Little Tub Harbour; water comes flying through the window, and you'll get a taste of the fresh water of Lake Huron.

Sunken ship as seen from glass window of cruise.
One of the flower pot rock pillars in Flowerpot island

If you choose the drop-off option to Flowerpot Island, you need to follow a few rules in order not to damage the natural environment (see the instructions in the picture below). It looks like the name Flowerpot comes from two rock pillars on its shore, which look exactly like flower pots. The island itself is about two square kilometres in area and is a popular tourist destination. Activities like swimming, hiking, and camping are allowed there.

Flowerpot island - visitor information
One of the flower pot rock pillars in Flowerpot island

After returning from the cruise, we went for a walk on a trail in the woods, which eventually led us to the lake. We stayed there for about an hour, swimming and watching the waves of blue water continuously hitting the big rocks on the shore. My kids didn't want to return and asked if we could stay for another day or so. But I had to refuse, as we didn't have a hotel booked for another night.
      So we started driving back home. We drove a good three hours and made a stop along the way for some food and drinks. It was around 10:30 pm when we arrived home.
     For us, it turned out to be a perfect getaway within driving distance of Toronto. I would definitely recommend a trip to Tobermory to anyone! It is an amazing escape that fits the budget and the time.