How to Configure PostgreSQL with SSL/TLS support on Kubernetes

SSL is disabled in the default PostgreSQL configuration, and I had to struggle a bit to get PostgreSQL running on Kubernetes with SSL/TLS support. After some research and a few trials, I was able to resolve the issues, and here I'm sharing what I did to make it work.


    
    

High Level Steps:
  1. Customize postgresql.conf (to add/edit the SSL/TLS configuration) and create a configMap object. This way, we don't need to rebuild the Postgres image to apply a custom postgresql.conf, because a ConfigMap allows us to decouple configuration artifacts from image content.
  2. Create secret type objects for server.key, server.crt, root.crt, ca.crt, and password file.
  3. Define and use NFS type PersistentVolume (PV) and PersistentVolumeClaim (PVC)
  4. Use securityContext to resolve permission issues.
  5. Use '-c config_file=<config-volume-location>/postgresql.conf' to override the default postgresql.conf
Note: all files used in this post can be cloned/downloaded from GitHub https://github.com/pppoudel/postgresql-with-ssl-on-kubernetes.git

Let's get started

In this example, I'm using a namespace called 'shared-services' and a service account called 'shared-svc-accnt'. You can create your own namespace and service account or use the 'default' ones. In any case, I have listed the necessary steps here, and the yaml files can be downloaded from GitHub.

Create namespace and service account


# Create namespace shared-services

   $> kubectl create -f shared-services-ns.yml

# Create Service Account shared-svc-accnt

   $> kubectl create -f shared-svc-accnt.yml
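The two manifest files referenced above are in the GitHub repo; minimal versions might look like the following (the repo copies may carry extra labels):

```yaml
# shared-services-ns.yml
apiVersion: v1
kind: Namespace
metadata:
  name: shared-services
---
# shared-svc-accnt.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: shared-svc-accnt
  namespace: shared-services
```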

# Create a grant for service account shared-svc-accnt. Do this step as per your platform.


Create configMap object

I have put both postgresql.conf and pg_hba.conf under the config directory. I have updated postgresql.conf as follows:

ssl = on
ssl_cert_file = '/etc/postgresql-secrets-vol/server.crt'
ssl_key_file = '/etc/postgresql-secrets-vol/server.key'

Note: the certificate and key files above live under '/etc/postgresql-secrets-vol' (populated from a Secret), while postgresql.conf and pg_hba.conf are served from '/etc/postgresql-config-vol'; both locations need to be mounted via 'volumeMounts', which we will discuss later in the post.

The three configuration items listed above are the main ones that need proper values in order for PostgreSQL to support SSL/TLS. If you are using a CA-signed certificate, you also need to provide a value for 'ssl_ca_file' and optionally 'ssl_crl_file'. Read Secure TCP/IP Connections with SSL for more details.
You also need to update pg_hba.conf (HBA stands for host-based authentication) as necessary. pg_hba.conf is used to manage connection types and to control access by client IP address range, database name, user name, authentication method, etc.

# Sample entries in pg_hba.conf
# Trust local connection - no password required.
local    all             all                                     trust
# Only secured remote connection from given IP-Range accepted and password are encoded using MD5
#hostssl  all             all             < Cluster IP-Range >/< Prefix length >         md5
hostssl  all             all             10.96.0.0/16         md5


$> ls -l config/

-rw-------. 1 osboxes osboxes  4535 Sep 22 17:33 pg_hba.conf
-rw-------. 1 osboxes osboxes 22781 Sep 23 03:03 postgresql.conf

# Create configMap object
$> kubectl create configmap postgresql-config --from-file=config/ -n shared-services
configmap "postgresql-config" created

# Review created object
$> kubectl describe configMap/postgresql-config -n shared-services
Name:         postgresql-config
Namespace:    shared-services
...

Create secrets

I've created server.key and a self-signed certificate using OpenSSL. You can either do the same or use CA-signed certificates. Here, we are not going to use client certificates. Read section 18.9.3. Creating Certificates in the PostgreSQL documentation if you need help creating certificates.
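For reference, a self-signed key and certificate similar to mine can be generated along these lines (the CN value is an assumption; use your own service hostname). Since we skip client certificates, root.crt here is just a copy of the server certificate:

```shell
# Files land in ./secrets/, matching the directory used below
mkdir -p secrets

# Generate a self-signed certificate and private key (valid for 365 days)
openssl req -new -x509 -days 365 -nodes -text \
  -out secrets/server.crt -keyout secrets/server.key \
  -subj "/CN=sysg-postgres-svc.shared-services.svc"

# PostgreSQL refuses to start if the key is group/world readable
chmod 0400 secrets/server.key

# No client certificates are used here, so root.crt is just a copy of the server cert
cp secrets/server.crt secrets/root.crt
```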

# Create MD5 hashed password to be used with postgresql
$> POSTGRES_USER=postgres
$> POSTGRES_PASSWORD=myp3qlpwD
$> echo "md5$(echo -n $POSTGRES_PASSWORD$POSTGRES_USER | md5sum | cut -d ' ' -f1)" > secrets/postgresql-pwd.txt

# Here are all files under secrets directory
$> ls -la secrets/
-rw-rw-r--. 1 osboxes osboxes  13 Sep 22 23:42 postgresql-pwd.txt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:51 root.crt
-rw-rw-r--. 1 osboxes osboxes 891 Sep 22 16:49 server.crt
-r--------. 1 osboxes osboxes 887 Sep 22 16:43 server.key

# Create secret postgresql-secrets
$> kubectl create secret generic postgresql-secrets --from-file=secrets/ -n shared-services
secret "postgresql-secrets" created

# Verify

$> kubectl describe secrets/postgresql-secrets -n shared-services
Name:         postgresql-secrets
Namespace:    shared-services
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
server.key:          887 bytes
postgresql-pwd.txt:  13 bytes
root.crt:            891 bytes
server.crt:          891 bytes

Note: as seen above, I have created an MD5 hash of the string "<password><userid>" and prefixed it with "md5". The reason I added "md5" in front of the hashed string is that when PostgreSQL sees the "md5" prefix, it recognizes that the string is already hashed, stores it as is, and does not try to hash it again.

Create PersistentVolume (PV) and PersistentVolumeClaim (PVC) 

Let's go ahead and create the PV and PVC. We will use 'Retain' as the persistentVolumeReclaimPolicy, so that data is retained even when the PostgreSQL pod is destroyed and recreated.
Sample PV yaml file:
## shared-nfs-pv-postgresql.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv-postgresql
  namespace: shared-services
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /var/postgresql/
    server: 192.168.56.101
  persistentVolumeReclaimPolicy: Retain

Sample PVC yaml file:
## shared-nfs-pvc-postgresql.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc-postgresql
  namespace: shared-services
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

PV and PVC creation and verification steps:
# Create persistentvolume
$> kubectl create -f yaml/shared-nfs-pv-postgresql.yml
persistentvolume "shared-nfs-pv-postgresql" created

# Create persistentvolumeclaim
$> kubectl create -f yaml/shared-nfs-pvc-postgresql.yml
persistentvolumeclaim "shared-nfs-pvc-postgresql" created

# Verify and make sure status of persistentvolumeclaim/shared-nfs-pvc-postgresql is Bound
$> kubectl get pv,pvc -n shared-services
NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                                       STORAGECLASS   REASON    AGE
persistentvolume/shared-nfs-pv-postgresql   5Gi        RWX            Retain           Bound     shared-services/shared-nfs-pvc-postgresql                            32s

NAME                                              STATUS    VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/shared-nfs-pvc-postgresql   Bound     shared-nfs-pv-postgresql   5Gi        RWX                           20s
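One reminder on the NFS side: the PV above assumes the server at 192.168.56.101 actually exports /var/postgresql. A minimal /etc/exports entry might look like the following (the client range and options are assumptions; tune them for your network):

```
/var/postgresql  192.168.56.0/24(rw,sync,no_root_squash)
```

After editing /etc/exports, re-export with 'exportfs -ra' on the NFS server.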

Create deployment manifest file

Here is the one I have put together. You can customize it further per your needs.

---
# Service definition
apiVersion: v1
kind: Service
metadata:
  name: sysg-postgres-svc
  namespace: shared-services
spec:
  type: ClusterIP
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP
      name: tcp-5432
  selector:
      app: sysg-postgres-app
---
# Deployment definition
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: sysg-postgres-dpl
  namespace: shared-services
spec:
  selector:
    matchLabels:
      app: sysg-postgres-app
  replicas: 1
  template:
    metadata:
      labels:
        app: sysg-postgres-app
    spec:
      serviceAccountName: shared-svc-accnt
      securityContext:
        runAsUser: 70
        supplementalGroups: [999,1000]
        fsGroup: 70
      volumes:
        - name: shared-nfs-pv-postgresql
          persistentVolumeClaim:
            claimName: shared-nfs-pvc-postgresql
        - name: secret-vol
          secret:
            secretName: postgresql-secrets
            defaultMode: 0640
        - name: config-vol
          configMap:
            name: postgresql-config
      containers:
      - name: sysg-postgres-cnt
        image: postgres:10.5-alpine
        imagePullPolicy: IfNotPresent
        args:
          - -c
          - hba_file=/etc/postgresql-config-vol/pg_hba.conf
          - -c
          - config_file=/etc/postgresql-config-vol/postgresql.conf
        env:
          - name: POSTGRES_USER
            value: postgres
          - name: PGUSER
            value: postgres
          - name: POSTGRES_DB
            value: mmdb
          - name: PGDATA
            value: /var/lib/postgresql/data/pgdata
          - name: POSTGRES_PASSWORD_FILE
            value: /etc/postgresql-secrets-vol/postgresql-pwd.txt
        ports:
         - containerPort: 5432
        volumeMounts:
          - name: config-vol
            mountPath: /etc/postgresql-config-vol
          - mountPath: /var/lib/postgresql/data/pgdata
            name: shared-nfs-pv-postgresql
          - name: secret-vol
            mountPath: /etc/postgresql-secrets-vol
      nodeSelector:
        kubernetes.io/hostname: centosddcwrk01

Deploy the Postgresql

The steps below show the creation of the service and deployment, as well as how to make sure that PostgreSQL is running with SSL enabled.
# Deploy
$> kubectl apply -f yaml/postgres-deploy.yml
service "sysg-postgres-svc" created
deployment.apps "sysg-postgres-dpl" created

# Verify
$> kubectl get pods,svc -n shared-services
NAME                                     READY     STATUS    RESTARTS   AGE
pod/sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          1h

NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/sysg-postgres-svc   ClusterIP   10.96.90.30   <none>        5432/TCP   1h

# sh into the postgresql pod:
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services
/ $

# Launch psql
/ $ psql -U postgres
psql (10.5)
Type "help" for help.

# Verify SSL is enabled
postgres=# SHOW ssl;
 ssl
-----
 on
(1 row)
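Beyond SHOW ssl, you can confirm that individual connections are actually encrypted via the pg_stat_ssl view (available since PostgreSQL 9.5):

```sql
-- One row per backend; ssl = t means that connection is TLS-encrypted
SELECT pid, ssl, version, cipher, bits
FROM pg_stat_ssl;
```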

# Check the stored password. It should match the hashed value of "<password><user>" with "md5" prepended.
postgres=#  select usename,passwd from pg_catalog.pg_shadow;
 usename  |               passwd
----------+-------------------------------------
 postgres | md5db59316e90b1afb5334a331081618af6

# Connect remotely. You need to provide password.
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm -n shared-services -- psql "sslmode=require host=10.96.90.30 port=5432 dbname=mmdb" --username=postgres
Password for user postgres:
psql (10.5)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
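To double-check the stored value from pg_shadow, you can recompute the hash locally with the same command used when creating the secret; the output should match the passwd column exactly (credentials here are the sample ones from above):

```shell
# Recompute the hash PostgreSQL stores: "md5" + md5(<password><username>)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=myp3qlpwD
HASH="md5$(echo -n "${POSTGRES_PASSWORD}${POSTGRES_USER}" | md5sum | cut -d ' ' -f1)"
echo "$HASH"
```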

A few key points

1) Using customized configuration file:
As you have seen above, I created the configMap object postgresql-config and applied it with the option '-c config_file=/etc/postgresql-config-vol/postgresql.conf'. The configMap object postgresql-config is mapped to the path '/etc/postgresql-config-vol' in the volumeMounts definition.

containers:
- name: sysg-postgres-cnt
  imagePullPolicy: IfNotPresent
  args:
    - -c
    - hba_file=/etc/postgresql-config-vol/pg_hba.conf
    - -c
    - config_file=/etc/postgresql-config-vol/postgresql.conf

volumeMounts:
  - name: config-vol
    mountPath: /etc/postgresql-config-vol


2) Referencing a secret file via an environment variable:

env:
  - name: POSTGRES_PASSWORD_FILE
    value: /etc/postgresql-secrets-vol/postgresql-pwd.txt

And the secret is mapped to the path /etc/postgresql-secrets-vol:
volumeMounts:
  - name: secret-vol
    mountPath: /etc/postgresql-secrets-vol

3) PGDATA environment variable: 
The default value is '/var/lib/postgresql/data'. However, the postgres Docker image documentation recommends: "... if the data volume you're using is a fs mountpoint (like with GCE persistent disks), Postgres initdb recommends a subdirectory (for example /var/lib/postgresql/data/pgdata ) be created to contain the data.". Refer to https://hub.docker.com/_/postgres/

Here we assign /var/lib/postgresql/data/pgdata:
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata


Troubleshooting

1) Make sure server.key, server.crt, and root.crt all have appropriate permissions, that is, 0400 (if owned by the postgres process owner) or 0640 (if owned by root). If proper permissions are not applied, PostgreSQL will not start, and in the log you will see the following FATAL message.

2018-09-22 18:26:22.391 UTC [1] FATAL:  private key file "/etc/postgresql-secrets-vol/server.key" has group or world access
2018-09-22 18:26:22.391 UTC [1] DETAIL: File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
2018-09-22 18:26:22.391 UTC [1] LOG:  database system is shut down

In order to apply proper permissions at the file level, you can use 'defaultMode'. I'm using defaultMode: 0640, as shown below (fragment from postgres-deploy.yml):

- name: secret-vol
  secret:
    secretName: postgresql-secrets
    defaultMode: 0640

2) Make sure to have the right ownership, whether the files/directories are related to the secret volume, the config volume, or the persistent storage volume. Below is an error related to the PV path:

initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted

In order to resolve the above issue, you need to use Kubernetes' securityContext options like 'runAsUser', 'fsGroup', 'supplementalGroups', and/or capabilities. A securityContext can be defined at both the pod level and the container level. In my case, I've defined it at the pod level, as shown below (fragment from postgres-deploy.yml):

securityContext:
  runAsUser: < specify your run as user >
  fsGroup: < specify group >
  supplementalGroups: [< comma delimited list of supplementalGroups >]

Read the Configure a Security Context for a Pod or Container chapter on the official Kubernetes site. I've also given some troubleshooting tips on using NFS type Persistent Volumes and Claims in my previous blog, How to Create, Troubleshoot and Use NFS type Persistent Storage Volume in Kubernetes.

Below, I'm showing the file permissions per my configuration. Files are owned by root:postgres.

# Get running Kubernetes pod
$> kubectl get pods -n shared-services
NAME                                 READY     STATUS    RESTARTS   AGE
sysg-postgres-dpl-596754d5d4-mc8fm   1/1       Running   0          12s

# sh to running Kubernetes pod
$> kubectl exec -it sysg-postgres-dpl-596754d5d4-mc8fm /bin/sh -n shared-services

# Explore files
/ $ cd /etc/postgresql-secrets-vol/..data/
/etc/postgresql-secrets-vol/..2018_09_22_19_24_52.289379695 $ ls -la

drwxr-sr-x    2 root     postgres       120 Sep 23 01:19 .
drwxrwsrwt    3 root     postgres       160 Sep 23 01:19 ..
-rw-r-----    1 root     postgres        13 Sep 23 01:19 postgresql-pwd.txt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 root.crt
-rw-r-----    1 root     postgres       891 Sep 23 01:19 server.crt
-rw-r-----    1 root     postgres       887 Sep 23 01:19 server.key

3) psql: FATAL:  no pg_hba.conf entry for host "10.0.2.15", user "postgres" ... This FATAL message usually appears when you are trying to establish a connection to PostgreSQL, but the way you are trying to authenticate is not defined in pg_hba.conf: either the source IP (from where the connection originates) is out of range, or the security option is not supported. Check your pg_hba.conf file and make sure the right entry has been added.

[Optional]  Creating custom Postgres Docker image with customized postgresql.conf

If you prefer to create a custom Docker image with a custom postgresql.conf rather than creating a configMap and using the '-c config_file' option, you can do so. Here is how:

Create Dockerfile:


FROM postgres:10.5-alpine
COPY config/postgresql.conf /tmp/postgresql.conf
COPY scripts/_updateConfig.sh /docker-entrypoint-initdb.d/
RUN chmod 755 /docker-entrypoint-initdb.d/_updateConfig.sh && chmod 644 /tmp/postgresql.conf

My custom postgresql.conf is located under the config directory locally. It is copied to /tmp when the Docker image is built. _updateConfig.sh is located under the scripts directory locally and copied to /docker-entrypoint-initdb.d/ at build time.

Create the script file _updateConfig.sh as shown below. It assumes that the default PGDATA value '/var/lib/postgresql/data' is being used.

#!/usr/bin/env bash
# Overwrite the generated postgresql.conf with our customized copy;
# scripts in /docker-entrypoint-initdb.d/ run after initdb, before the
# server starts accepting external connections.
cat /tmp/postgresql.conf > /var/lib/postgresql/data/postgresql.conf

Important: we cannot copy the custom postgresql.conf directly into the $PGDATA directory at build time, because that directory does not exist yet; it is created by initdb when the container first starts.

Build the image:


Directory and files shown below are local:
$> ls -la postgresql/

drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 13:01 config
-rwxr-xr--.  1 osboxes osboxes  227 Sep 22 19:14 Dockerfile
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 18:39 scripts
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 22 23:42 secrets
drwxrwxr-x.  2 osboxes osboxes 4096 Sep 23 12:11 yaml
$>cd postgresql

# docker build -t <image tag> .
# In my case I am using osboxes/postgres:10.5-sysg as image name and tag.
$> docker build -t osboxes/postgres:10.5-sysg .

If you use a custom Docker image built this way, you don't need to define a configMap to use a custom postgresql.conf.



Trip to Tobermory - a perfect getaway within driving distance of Toronto

My family's no-labouring plan for this year's Labour Day long weekend included some cool, quality time at the warm Sauble Beach, an overnight stay in the traditionally Scottish town of Kincardine, and a trip to the majestic lakeside town called Tobermory. It was a short (two days only) plan for a long weekend. We intentionally kept it to two days so that we could return home on Monday and finish any last-minute preparation for the new school year starting on Tuesday.

    As seen from Ferry Terminal - Lake Huron and Islands nearby
                                         
Originally, our plan was to visit Tobermory for the full duration of the weekend and soak in the beauty of the place. However, after an unsuccessful attempt to book a hotel (Expedia and hotels.com only showed listings that famously said: we are sold out), we shifted our attention to the next nearby town, Owen Sound. I found myself in a similar situation there and had no luck finding a hotel in that city either. At one point I was even thinking about giving up on the long weekend idea, but one of my friends gave me the brilliant idea of booking a hotel in Kincardine, Ontario, and I did. Early Saturday morning, we were to drive about 2 hours and 50 minutes from Markham to Sauble Beach. However, thanks to my daughters' research on Kincardine, we changed the plan at the last minute. They found very good reviews of Station Beach in Kincardine and convinced us to head directly there rather than going to Sauble Beach. The fact that we had been to Sauble Beach before also helped to make that decision.
      On Saturday morning, we left home around 7:50 am. The first few hours of our morning were foggy, but the scene began to look better as the fog slowly lifted. Sometimes even a few hours of driving (it is about a two-and-a-half-hour drive from Toronto to Kincardine) can be boring. However, if you have cheerful kids in the car, it's sure to be a lot of fun. We enjoyed watching the countryside while driving. We even played "I spy with my little eye...". Even though my kids are teenagers now, they love playing this game on trips; it keeps us all engaged so we don't miss out on the little things.

Conquering Kincardine

We were at Station Beach by 11:30 am. I immediately fell in love with the beach because of its clean, pristine water and long sandy shoreline. It is conveniently located within walking distance of downtown, and parking is free. As soon as I touched the water, I felt a deep need to submerge myself in it immediately. It was amazing to swim there. The beach wasn't too crowded, which made it the perfect place to relax and take a breather. No matter what your idea of relaxing is, you can achieve it there. You can swim, sleep, read, or maybe play beach volleyball. We had brought food from home, so we enjoyed a little picnic there. I even had a good nap while sunbathing. My kids went for a walk along the boardwalk (a great place to walk, jog, and enjoy the view) and took beautiful pictures and videos. The boardwalk became a walk-and-learn experience for them, as it had interpretive signs with information about local marine history and shipwrecks.

Station Beach - even birds are sunbathing here!!!


      Marina at Station beach
               Interpretive sign
Tips:

  • Station beach is located at 151 Station Beach Road, Kincardine ON
  • As per www.canoe.ca, this water is listed as one of the top 9 destinations for surfing.
  • For those with mobility issues, there are 'MOBI-Mats' at Station Beach. The mats stretch right to the water's edge.
  • If you enjoy playing beach volleyball, there is co-ed beach volleyball every Friday at 7 p.m.
  • The park has a bouncy castle for kids to enjoy
  • The lighthouse and museum are just a 2-to-3-minute walk away, and lakeside downtown Kincardine is just nearby.

After spending almost three and a half hours on the beach, we drove to the hotel (Sutton Park Inn). The check-in process was smooth and the hotel was nice. After taking a shower and having a fresh cup of coffee, we went out for a tour of downtown Kincardine. We parked our car on Harbour Street, near the lighthouse, and went for a walk along Queen Street. Kincardine is a small town located on the shores of Lake Huron in Bruce County and has a strong Scottish heritage. I was told that during the summer, every Saturday night, people in Kincardine celebrate and take part in Scottish pipe band parades.

Queen Street, downtown Kincardine

If you have a sweet tooth, you should visit the little chocolate shop (Mill Creek Chocolates) located at 813 Queen Street. They offer handmade chocolates in very personalized packaging, finished with colourful wraps and ribbons. You can enjoy the chocolates yourself or bring them as a gift. You'll love them.

Mill Creek Chocolates Shop at Queen street.
Mill Creek Chocolates showroom

We also had the chance to have a friendly conversation with the sales lady. She told us a little history of Kincardine and the Mill Creek Chocolates, as well as how she ended up living in Kincardine. Originally from Brampton, Ontario, she once visited Kincardine, liked the town, and has lived there ever since.
     I had read that Station Beach had one of the most beautiful sunsets. But unfortunately, the evening became cloudy around 6:30 pm and it looked like it was going to rain. A little frustrated, we instead went to see the lighthouse, which we found very fitting to the aesthetic of the small town. Around 7:00 pm, as we were about to head back to the hotel, it started raining. It rained all night long, but we were prepared for this. We had brought the board game Monopoly with us. We played until about 11:00 pm and had tons of fun while snacking on the Mill Creek chocolates.
     It's no secret that when I initially booked the hotel in Kincardine, the purpose was to use it just as a transit point. However, now I have no regrets whatsoever. We enjoyed it fully.


Trip to Tobermory


The next day (Sunday), we got up on time to have the complimentary breakfast at the hotel, and after packing our bags we started driving to Tobermory. It was around a two-hour drive. We reached Tobermory at around 11:00 am and purchased our tickets for the cruise (Bruce Anchor company - 7468 HWY 6). Be prepared to spend time in the ticket queue: even if you have pre-purchased your ticket online, you still have to stay in line to check in and get a parking permit for your car, although they do have a separate window to serve holders of pre-purchased tickets.
Notes:
  • The Bruce Anchor company offers a few options. See their web site for more details.
  • Ticket price also includes the parking fee and they have free shuttle from the parking spot to the Ferry Terminal and back.
  • There are other boat and cruise services as well.
We booked the Tobermory Explorer option and enjoyed the view (while staying aboard) of shipwrecks, lighthouses, and several beautiful islands in Fathom Five National Marine Park.

Bruce Anchor boat cruise
Big Tub Lighthouse

It was around 75 minutes of fun-filled cruising. The cruise moves at a slow speed around Little Tub Harbour, where you can see the sunken ships (note: Tobermory is home to over 20 historic shipwrecks) through the glass windows or glass bottom. The speed picks up once the boat is out of Little Tub Harbour; water comes flying through the window, and you'll get a taste of the fresh water of Lake Huron.

Sunken ship as seen from glass window of cruise.

If you choose the drop-off option to Flowerpot Island, you need to follow a few rules in order not to damage the natural environment (see the instructions in the picture below). The name Flowerpot apparently comes from two rock pillars on its shore, which look exactly like flower pots. The island itself is about two square kilometres in area and is a popular tourist destination. Activities like swimming, hiking, and camping are allowed there.

Flowerpot island - visitor information
One of the flower pot rock pillars in Flowerpot island

After returning from the cruise, we went for a walk on a trail in the woods, which eventually led us to the lake. We stayed there for about an hour, swimming and watching the waves of blue water continuously hitting the big rocks on the shore. My kids didn't want to return and asked if we could stay for another day or so. But I had to refuse, as we didn't have a hotel booked for another night.
      So, we started driving back home. We drove a good three hours and made a stop along the way for some food and drinks. It was around 10:30 pm when we arrived home.
     For us, it turned out to be a perfect getaway within driving distance of Toronto. I would definitely recommend a trip to Tobermory to anyone! It is an amazing escape that fits both the budget and the time.