
How to Use OpenSSL to Create Keys, a CSR, and a Cert Bundle, and to Review, Verify, and Install Them

There are a number of tools available to create an SSL/TLS key pair and a CSR. Here I'm going to use OpenSSL.

1. Let's first create a key pair

In this example, we are creating an RSA key with a 2048-bit key length. It is recommended that you create a password-protected private key.

# Create a plain text key pair (private and public keys)
openssl genrsa -out myserver.key 2048
# if you need, extract the public key from the one generated above
openssl rsa -in myserver.key -pubout > myserver.pub
# Create password protected (encrypted with aes128/aes256)
openssl genrsa -aes128 -passout pass:<password> -out enc-myserver.key 2048
# Encrypt existing plain text private key
openssl rsa -aes128 -in myserver.key -passout pass:<password> -out enc-myserver.key
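
If you want a quick sanity check of the key you just created, openssl can verify its consistency (shown here for the encrypted key created above):

# Optional: verify that the private key is well-formed and consistent
openssl rsa -check -noout -in enc-myserver.key -passin pass:<password>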


2. Let's create a CSR. 

OpenSSL allows you to provide the input information using an OpenSSL configuration file while creating a CSR. The good thing about the configuration file is that it can be stored in a version control system like Git and re-used. Look at the config file example below. Let's call it mycsr.cnf

[ req ]
default_bits = 2048
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
[ req_distinguished_name ]
countryName = CA
stateOrProvinceName = Ontario
localityName = Toronto
organizationName = IT
organizationalUnitName = ITWork
commonName = myexampleserver.ca
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = my1stdns.com
DNS.2 = my2nddns.com
DNS.3 = my3rddns.com

Here we are using mycsr.cnf to feed the information required to create the CSR. Since we are using an encrypted key, we pass the password using the option -passin pass:<password>. If you don't use the -passin option, it will prompt you for the password. The command below generates myserver.csr:

openssl req -new -key enc-myserver.key -passin pass:<password> -out myserver.csr -config mycsr.cnf

Note: you can also generate a CSR from an existing private key and an existing certificate. See the command below. OpenSSL releases prior to 3.x may not support '-copy_extensions copyall'.

openssl x509 -x509toreq [-copy_extensions copyall] -in <existing certificate>.crt -signkey <existing private key> -out myserver.csr

Review the generated CSR. In the example below, we are verifying the myserver.csr created above.

openssl req -noout -text -in myserver.csr
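
As an additional check, you can confirm that the CSR really belongs to your private key by comparing their modulus hashes; the two digests below should be identical (filenames as used above):

# Compare the key's and the CSR's modulus hashes; they must match
openssl rsa -noout -modulus -in enc-myserver.key -passin pass:<password> | openssl md5
openssl req -noout -modulus -in myserver.csr | openssl md5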


3. Send your CSR to CA and Get the Signed Certs

Once your Certificate Authority (CA) receives the CSR, they process it and may send you a link from which the signed certificate(s) can be downloaded. The link may offer download options for the root CA cert, one or more intermediate cert(s), and the server/domain cert. Depending upon how and for which server/application you are installing the certificate, you may want to create a single PEM file from all the provided certs. Here is how you can do it:

cat server.crt intermediate.crt rootca.crt >> cert-bundle.pem

Notes:
  1. make sure the certificate files are in PEM format. To check, just open the file in a text editor like Notepad++ and see whether it starts with -----BEGIN and the content is ASCII. Certs can be converted from other formats to PEM using openssl commands as follows:


    # Convert DER to PEM
    openssl x509 -inform der -in mycert.der -out mycert.pem
    # Convert CER to PEM (use -inform der if the .cer is DER-encoded; drop it if the file is already PEM)
    openssl x509 -inform der -in mycert.cer -out mycert.pem
    # Convert CRT to PEM (use -inform der if the .crt is DER-encoded; drop it if the file is already PEM)
    openssl x509 -inform der -in mycert.crt -out mycert.pem


  2. Open the merged file cert-bundle.pem in a text editor and make sure that each -----BEGIN is on a new line.
  3. If you are not able to install the password-protected key, remove the password as follows:

    openssl rsa -in enc-myserver.key -passin pass:<password> -out myserver.key
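
Before installing, you can also verify that the server certificate chains up correctly to the provided CA certs; a quick check, assuming the individual cert filenames used above:

    # Verify the server certificate against the root and intermediate certs
    openssl verify -CAfile rootca.crt -untrusted intermediate.crt server.crt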


4. Install and Verify your Certificate

Installation really depends on what your target server/application is. Here I'm showing a quick example for nginx, with a configuration snippet that enables SSL/TLS:


     server {
         listen       443 ssl;
         server_name  myexampleserver.ca;

         ssl_certificate      <cert-location>/cert-bundle.pem;
         ssl_certificate_key  <cert-location>/enc-myserver.key;
         ssl_password_file    <path-to-password-file>/key.pass;

         ssl_session_cache    shared:SSL:1m;
         ssl_session_timeout  5m;

         ssl_ciphers  HIGH:!aNULL:!MD5;
         ssl_prefer_server_ciphers  on;

         location / {
             root   html;
             index  index.html index.htm;
         }
     }

Once the configuration is updated, start nginx and access the default page in a browser, e.g. 'https://myexampleserver.ca'.
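
You can then verify the certificate and chain the server actually presents; a quick check from the command line (replace the host and port as appropriate):

# Inspect the certificate chain presented by the running server
openssl s_client -connect myexampleserver.ca:443 -servername myexampleserver.ca -showcerts </dev/null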

5. [Optional] Create .p12 key store for your Keys and Certs 


PKCS #12 is an industry standard for storing many cryptography objects in a single file. Here is how you can create a PKCS #12 archive.


# openssl pkcs12 -export -in CertPath.cer [-certfile cert-bundle.pem] -inkey privateKeyPath.key [-passin pass:<private key password>] -passout pass:<.p12 file password> -out key.p12

openssl pkcs12 -export -in cert-bundle.pem -inkey enc-myserver.key -passin pass:<private key password> -passout pass:<p12 certstore password> -out mycertarchive.p12

Notes:
  1. If the file passed using the -in option contains both the certs and the private key, then the -inkey option is not required.
  2. If the file passed using the -in option contains all the certs (server, intermediate, and root CA), then the -certfile option is not required. The usual practice is to pass the server cert file using the -in option, the private key using the -inkey option, and the root CA and intermediate certs using the -certfile option.
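
To double-check what ended up in the archive, you can list its contents; a quick sanity check, assuming the archive name used above:

# Show the cert and key bags in the .p12 archive without printing the keys
openssl pkcs12 -info -noout -in mycertarchive.p12 -passin pass:<p12 certstore password>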


6. [Optional] Use .p12 with Java Keytool or KeyStore Explorer (KSE) 

You can open the .p12 file directly in KSE and use KSE's functionality. You can use the Java keytool as well. Here is an example of listing certs using the Java keytool:

  1. List certs using keytool

    keytool -v -list -storetype pkcs12 -keystore mycertarchive.p12


  2. Convert to JKS if necessary. You'll be prompted for passwords

    #keytool -importkeystore -srckeystore <.p12 file> -srcstoretype pkcs12 -destkeystore <.jks file> -deststoretype JKS

    keytool -importkeystore -srckeystore mycertarchive.p12 -srcstoretype pkcs12 -destkeystore mycertarchive.jks -deststoretype JKS

How to Write a Custom Ansible Callback Plugin to Post to MS Teams using Jinja2 Template as a Message Card

 

I spent several hours last weekend doing some research and putting together an Ansible callback plugin that posts messages to Microsoft Teams when specific events occur in an Ansible playbook. I could not find really good documentation or examples to follow. Don't get me wrong: yes, there are documentation and blog posts for Slack, and some even related to sending messages to Teams, but not the way I wanted. I wanted to send custom messages using an Office 365 connector card written as a Jinja2 template, which could be customized using the value(s) of extra-vars and group_vars/host_vars for both success and failure events.

Finally, I've put together a fully functional callback plugin and wanted to share it with the community, so that people will not have to pull their hair out over the same problem. The plugin source code can be found on GitHub (see the links below), but here I'm explaining the details.

Why a callback plugin? You don't really need a callback plugin to post a message to an endpoint from Ansible. Ansible even has an Office 365 connector card module. But the problem starts when you want to capture a failure event and post a message accordingly. From an Ansible playbook it's not easy to capture failure events, unless you put your entire playbook, or each task, inside a 'block/rescue' construct (Ansible's try/catch equivalent), because you can't possibly know which task will fail in the next run. With a callback plugin it becomes easy, as the corresponding method is automatically invoked when a particular event occurs during the playbook run.

Why a Jinja2 template? It adds flexibility in creating custom messages. It also helps make your callback plugin more universal, as message customization is externalized to the Jinja2 template. To demonstrate all this, I've included a simple playbook that fakes an application deployment and posts a success or failure deployment message accordingly.

Now, let's dive a little into the details, starting with callback plugins. The Ansible documentation describes callback plugins as "Callback plugins enable adding new behaviors to Ansible when responding to events...". Refer to the Callback Plugins page for general information. In this blog post, I'm not going to explain how to develop your own plugin from scratch, but only provide specific information on how this msteam plugin has been developed. If you have not previously written an Ansible plugin, I'd suggest looking into the Developing plugins section of the Ansible documentation for general guidelines.

Class Import section:

from __future__ import (absolute_import, division, print_function)
from ansible.plugins.callback import CallbackBase
from jinja2 import Template
...

The 1st line above is required for any plugin, the 2nd line is required for callback plugins, and the 3rd line is needed to work with Jinja2 templates.

Class body section:

As you can see in the lines below, I'm creating the msteam plugin's CallbackModule with CallbackBase as the parent, which means the methods defined in the parent class are available to override. Refer to __init__.py to see what methods are available. For the msteam plugin, I've overridden only specific version 2.0 methods, as the intention is to use it with Ansible version 2.0 or later. Note: the CallbackBase class defines regular as well as corresponding 'v2_*' methods.

class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'notification'
    CALLBACK_NAME = 'msteam'
    CALLBACK_NEEDS_WHITELIST = True
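
For the plugin to fire, Ansible must be able to find it and, since CALLBACK_NEEDS_WHITELIST is True, it must also be whitelisted. Here's a minimal shell sketch of how that might look, assuming the plugin file is saved as callback_plugins/msteam.py next to a hypothetical deploy.yml playbook (the same settings can also go into ansible.cfg):

# Point Ansible at the plugin directory and whitelist the plugin
# (Ansible 2.x naming; newer releases use ANSIBLE_CALLBACKS_ENABLED instead)
export ANSIBLE_CALLBACK_PLUGINS=./callback_plugins
export ANSIBLE_CALLBACK_WHITELIST=msteam
ansible-playbook deploy.yml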

__init__ section:

See the comment in the code for details.

self.playbook_name = None

# Record the playbook start time. In this case I'm using Canada/Eastern
self.tz = timezone('Canada/Eastern')
self.dt_format = "%Y-%m-%d %H:%M:%S"
self.start_time = datetime.datetime.now(self.tz)

# Placeholder for extra-vars variable
self.extra_vars = None

# If you are executing your playbook from AWX/Tower
# Replace with your Ansible Tower/AWX base url
# self.v_at_base_url = "https://<Ansible tower host>:<port>"

# To record whether the playbook variables are retrieved, so that we retrieve them just once.
self.pb_vars_retrieved = False

# Here you can assign your default MS Teams webhook url
self.v_msteam_channel_url = "<replace with your own MS Team webhook URL>"

# default MS Teams message card template. Here I'm assigning the one included in the example playbook
self.v_message_template = "templates/msteam_default_msg.json.j2"

# default job status in the beginning
self.job_status = "successful"

# If you need to post through proxies, uncomment the following and replace with your proxy URLs.
# self.proxies = {
# "http": "<http-proxy-url>",
# "https": "<https-proxy-url>",
# }

v2_playbook_on_start:

def v2_playbook_on_start(self, playbook):
    display.vvv(u"v2_playbook_on_start method is being called")
    self.playbook = playbook
    self.playbook_name = playbook._file_name

v2_playbook_on_play_start:

def v2_playbook_on_play_start(self, play):
     display.vvv(u"v2_playbook_on_play_start method is being called")
     self.play = play
     # get variable manager and retrieve extra-vars
     vm = play.get_variable_manager()
     self.extra_vars = vm.extra_vars
     self.play_vars = vm.get_vars(self.play)
     # The following is used to retrieve variables defined under group_vars or host_vars.
     # If the same variable is defined under both with the same scope,
     # the one defined under host_vars takes precedence.
     self.host_vars = vm.get_vars()['hostvars']
     if not self.pb_vars_retrieved:
          self.get_pb_vars()

As you may have noticed above, you have to obtain the variable manager from the play to get the extra-vars object, then use 'get_vars' to get the general playbook variables and get_vars()['hostvars'] to get the group_vars/host_vars. Refer to the get_pb_vars() method to see how I have obtained the extra-vars and playbook variables.


v2_playbook_on_stats:

def v2_playbook_on_stats(self, stats):
     display.vvv(u"v2_playbook_on_stats method is being called")
     if not self.pb_vars_retrieved:
          self.get_pb_vars()
     hosts = sorted(stats.processed.keys())
     self.hosts = hosts
     self.summary = {}
     self.end_time = datetime.datetime.now(self.tz)
     self.duration_time = int((self.end_time - self.start_time).total_seconds())
     # Iterate through all hosts to check for failures
     for host in hosts:
          summary = stats.summarize(host)
          self.summary = summary
          if summary['failures'] > 0:
              self.job_status = "failed"
    
          if summary['unreachable'] > 0:
              self.job_status = "failed"
    
          display.vvv(u"summary for host %s :" % host)
          display.vvv(str(summary))
    
     # Add code here if you want to post to MS Teams per host
    
     # Just send a single notification whether it is a failure or success
     # Post message to MS Teams
     if(not self.disable_msteam_post):
          self.notify_msteam()
     else:
          display.vvv(u"Posting to MS Team has been disabled.")

As you may have noticed above, I'm calling the notify_msteam() method to post to MS Teams, sending a single message at the end of the playbook execution. However, if you'd like to post per host, see how to do that in the code (you have to call notify_msteam() within the 'for' loop).

notify_msteam: 

I'm not going to post the entire code here; you can see it in the GitHub repository. Here are a few important lines. The basic idea is first to load the Jinja2 template from the given file, then render the template with values retrieved from extra-vars, playbook variables and group_vars/host_vars, and finally post the message (see the commented section if you are using a proxy).

 
try:
     with open(self.v_message_template) as j2_file:
          template_obj = Template(j2_file.read())
except Exception as e:
     print("ERROR: Exception occurred while reading MS Teams message template %s. Exiting... %s" % (
          self.v_message_template, str(e)))
     sys.exit(1)
    
rendered_template = template_obj.render(
     v_ansible_job_status=self.job_status,
     v_ansible_job_id=self.tower_job_id,
     v_ansible_scm_revision=self.scm_revision,
     v_ansible_job_name=self.tower_job_template_name,
     v_ansible_job_started=self.start_time.strftime(self.dt_format),
     v_ansible_job_finished=self.end_time.strftime(self.dt_format),
     v_ansible_job_elapsed_time=self.duration_time,
     v_ansible_host_list=self.hosts,
     v_ansible_web_url=web_url,
     v_ansible_app_file=self.v_app_file,
     v_ansible_deployment_action=self.v_deployment_action,
     v_ansible_environment=self.v_environment,
     v_ansible_instance_name=self.v_instance_name,
     v_ansible_executed_from_tower=self.executed_from_tower
)

try:
     with SpooledTemporaryFile(max_size=0, mode='w+') as tmpfile:
          tmpfile.write(rendered_template)
          tmpfile.seek(0)
          json_payload = json.load(tmpfile)
          display.vvv(json.dumps(json_payload))
except Exception as e:
     print("ERROR: Exception occurred while reading or writing the rendered MS Teams message template. Exiting... %s" % str(e))
     sys.exit(1)
    
try:
     # using proxy
     # response = requests.post(url=self.v_msteam_channel_url,
     #      data=json.dumps(json_payload), headers={'Content-Type': 'application/json'},
     #      timeout=10, proxies=self.proxies)

     # without proxy
     response = requests.post(url=self.v_msteam_channel_url,
          data=json.dumps(json_payload), headers={'Content-Type': 'application/json'}, timeout=10)

     if response.status_code != 200:
          raise ValueError('Request to msteam returned an error %s, the response is:\n%s' % (
               response.status_code, response.text))
except Exception as e:
     print("WARN: Exception occurred while sending notification to MS Teams. %s" % str(e))

Message card as Jinja2 template:

{
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "themeColor": "{{ '#008000' if(v_ansible_job_status != 'failed') else '#FF0000' }}",
    "title": "Deployment of {{v_ansible_app_file}} on {{ v_ansible_environment }} environment {{'completed successfully' if(v_ansible_job_status == 'successful') else 'failed.' }}",
    "summary": "Ansible Job Summary",
    "sections": [{
        "activityTitle": "Job {{ v_ansible_job_id }} summary: ",
        "facts": [
        {% if v_ansible_executed_from_tower is sameas true %}
        {
            "name": "Playbook revision",
            "value": "{{ v_ansible_scm_revision }}"
        }, {
            "name": "Job name",
            "value": "{{ v_ansible_job_name }}"
        },
        {% endif %}
        {
            "name": "Job status",
            "value": "{{ v_ansible_job_status }}"
        }, {
            "name": "Job started at",
            "value": "{{ v_ansible_job_started }}"
        }, {
            "name": "Job finished at",
            "value": "{{ v_ansible_job_finished }}"
        }, {
            "name": "Job elapsed time (sec)",
            "value": "{{ v_ansible_job_elapsed_time }}"
        }, {
            "name": "Application (v_app_file)",
            "value": "{{ v_ansible_app_file }}"
        }, {
            "name": "Action (v_deployment_action)",
            "value": "{{ v_ansible_deployment_action }}"
        }, {
            "name": "Environment (v_environment)",
            "value": "{{ v_ansible_environment }}"
        }, {
            "name": "Hosts",
            "value": "{{ v_ansible_host_list | join(',') }}"
        },{
            "name": "Instance name(v_instance_name)",
            "value": "{{ v_ansible_instance_name | default('na') }}"
        }],
        "markdown": false
    }]
    {% if v_ansible_executed_from_tower is sameas true %}
    ,"potentialAction": [{
        "@context": "http://schema.org",
        "@type": "ViewAction",
        "name": "View on Ansible Tower",
        "target": [
            "{{ v_ansible_web_url }}"
        ]        
    }]
  {% endif %}
}

Note: In the example above, the card adds the "View on Ansible Tower" button if the playbook is executed from Ansible Tower/AWX, as shown below.



Success message posted by playbook executed on Ansible Tower




Failure message posted by playbook executed from command line

That's it. Hope it helps. Here are the GitHub links:

1. Callback plugin: https://github.com/pppoudel/callback_plugins

2. Example playbook: https://github.com/pppoudel/ansible_msteam_callback_plugin_using_jinja2_template_example

SonarQube issue - Could not find branches



SonarQube allows branch-level analysis (for example, if you are using Git, you may want to scan your feature branch and fix identified issues before creating a Pull/Merge Request to the develop branch). In order to do branch-level analysis, you need to specify your branch using the SonarQube parameter 'sonar.branch.name' and optionally 'sonar.branch.target' (the name of the branch into which the temporary branch specified with 'sonar.branch.name' will be merged).
However, if a SonarQube project does not yet exist for the given source code repository and you are scanning it for the first time with your branch specified via 'sonar.branch.name', you'll get the following error:
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.6.0.1398:sonar (default-cli) on project web-mywebapps: Could not find branches. A regular analysis is required before creating branches. -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:

As you can see, the error message says "...Could not find branches. A regular analysis is required...". So, for the first scan, whatever branch you are scanning, do not use the parameter 'sonar.branch.name'. One way to tackle this issue is to write some conditional logic in your SonarQube job. Below is a kind of pseudo-code example. Here the -Dsonar.branch.name property and value are passed to the Maven goal only if the variable 'FIRST_BASELINE_BUILD' is false.
Note: 'FIRST_BASELINE_BUILD' is not a SonarQube built-in variable; I've used it as an example to express the condition.

<goals>clean install sonar:sonar -Dsonar.host.url=\$SONAR_HOST_URL <% if(!FIRST_BASELINE_BUILD) {%> -Dsonar.branch.name=<%= sonarBranchName%> <%}%> -Dorg.xml.sax.driver=com.sun.org.apache.xerces.internal.parsers.SAXParser -U</goals>
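
If you'd rather not maintain such a flag by hand, the condition can be derived automatically by asking SonarQube whether the project already exists. Below is a minimal shell sketch under that assumption ($SONAR_HOST_URL, $SONAR_TOKEN, the project key, and $GIT_BRANCH are placeholders; the components API is also shown later in this blog):

# Pass sonar.branch.name only when the project already has a baseline analysis
PROJECT_KEY="com.example:web-mywebapps"
BRANCH_OPT=""
if curl -sf -u "$SONAR_TOKEN:" "$SONAR_HOST_URL/api/components/show?key=$PROJECT_KEY" > /dev/null; then
  BRANCH_OPT="-Dsonar.branch.name=$GIT_BRANCH"
fi
mvn clean install sonar:sonar -Dsonar.host.url="$SONAR_HOST_URL" $BRANCH_OPT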

Hope it helps. For details on SonarQube branch analysis, visit https://docs.sonarqube.org/latest/branches/overview/

Jenkins Job Chain and How to Resolve Issue with Parameter Passing

You can trigger a dependent (or any other) job in Jenkins automatically from your current job. This way you can have multi-step builds or a job chain. One of the use-case scenarios: you have a parent build that creates an EAR file and relies on the successful completion of child build(s) that create some jar files.
In some situations, you also need to pass parameters and their values while triggering the other build. But what do you do if you pass a parameter when triggering a job, and that parameter is not available in the triggered job?

Here we will examine these situations and possible resolutions:

1) Triggering the other job without passing the parameter(s):

If you don't need to pass parameters, then it's easy. You can just use either <<Build Other Projects>> as a Post Build Action of your current job or <<Build after other projects are built>> as a Build Trigger of the other job.

Let's say I have a build job called <<build_jars_no_param>>; it compiles source code and creates a bunch of jar files, which will be used by an EAR file created by the <<build_ear>> job. So, as soon as the <<build_jars_no_param>> job is completed, I want the <<build_ear>> job to be kicked off.
In this case, I just need to define the following in the Post Build Action of the <<build_jars_no_param>> job:
Post Build Action - Build Other Projects

This triggers the <<build_ear>> job once the <<build_jars_no_param>> job has successfully completed. Multiple projects can be specified, delimited by commas, in the "Projects to build" input field.
Besides building other projects that depend on the current project, this can also be used to split a long build process into multiple jobs.

2) Triggering the other job with parameter(s) and their value(s)

If you need to pass parameters along with triggering the other job, you can use the Parameterized Trigger Plugin (https://wiki.jenkins.io/display/JENKINS/Parameterized+Trigger+Plugin), which can be used both in Pre Steps and in Post Steps. This plugin also provides an option to block the current job until the completion of the triggered builds.

However, there is a security-related catch with the use of the Parameterized Trigger Plugin. It is one of the plugins affected by the fix for SECURITY-170/CVE-2016-3721. Since this fix, Jenkins only allows build parameters that have been explicitly defined in a job's configuration; any other arbitrary parameters added to a build by plugins are not available by default.
So, your triggered job (with parameter passing), which might have worked in a prior version (the fix was first included in Jenkins 1.651.2 and Jenkins 2.3), is not working in the newer version of Jenkins.
Here are three ways to resolve:
  1. Work-around #1: restore the previous behavior by setting the system property:
    -Dhudson.model.ParametersAction.keepUndefinedParameters=true
    Example: java -Dhudson.model.ParametersAction.keepUndefinedParameters=true -jar jenkins.war
    This could be a security risk, so use it just as a short-term workaround only.

  2. Work-around #2: white-list parameters by setting system property
    -Dhudson.model.ParametersAction.safeParameters=<comma-separated list of safe parameter names>
    Example: java -Dhudson.model.ParametersAction.safeParameters=FOO,BAR,ref_release_number -jar jenkins.war

  3. Convert/define the other job (the job to be triggered from your current job) with parameters using option "This project is parameterized" in the General section of the job definition. 
From the security point of view, the 3rd option is preferred. However, if you have legacy jobs and are looking for a short-term work-around before (re)defining the triggered jobs as parameterized projects, you can use option #1 or #2. Option #2 is better than #1, because option #1 just blindly restores the previous behavior.

Let's walk through an example:
1) I have a project called <<build_ear>> which calls the <<build_jars_with_params>> job. The diagram below shows the Pre Steps setting of the <<build_ear>> project; as you can see, it defines the parameter ref_release_number=2.1.0.${BUILD_NUMBER} to be passed to the triggered job.

Pre Steps - defining a job to be triggered from this project.

On the other hand, here is how I'm testing the value of the passed parameter in the <<build_jars_with_params>> downstream/triggered project.
Triggered job - testing the passed parameter. 


I'm using Jenkins ver. 2.150.1, which includes the fix for SECURITY-170, and I don't see the value of the passed parameter.

Passed parameter is not available in downstream job

Now, let me use the above mentioned resolution options:

For the 1st option (work-around), I'll have to start Jenkins with the '-Dhudson.model.ParametersAction.keepUndefinedParameters=true' option:


And here the triggered job shows the value of ${ref_release_number}:

Passed parameter (from upstream job) is available in triggered job.

Using the 2nd option, I'm starting Jenkins with the '-Dhudson.model.ParametersAction.safeParameters=ref_release_number' system property so that 'ref_release_number' is considered a safe parameter.
Define all parameters to be passed to triggered job as safe parameters using system property.

And here the triggered job shows the value of ${ref_release_number}:
Passed parameter (from upstream job) is available in triggered job.


Now, here is how to use the 3rd (preferred) option. For this, I have to redefine the <<build_jars_with_params>> as a parameterized project and add 'ref_release_number' as an input parameter.

Define triggered job as a parameterized project.

When my triggered (downstream) job is defined as a parameterized project and declares the parameter 'ref_release_number', my upstream job can safely pass this parameter when triggering the job, and Jenkins will allow it; no more work-arounds.
So, here I'm starting Jenkins without any system properties:


And my triggered job still correctly displays the value of the parameter passed from the upstream job:


You may be interested in reading the following Jenkins-related blogs:
How to Retrieve Info Using Jenkins Remote Access API
Jenkins Pipeline - Few Troubleshooting Tips

How to Retrieve Info Using Jenkins Remote Access API



We can use the REST-like Remote Access API in Jenkins to post or retrieve information in XML or JSON format. The general format of the URL is <Jenkins-URL>/<data-context>/api/xml|json.
Below are a few examples:

1) Retrieve a build result.
The example below shows the build result in JSON format for the job 'web-apps-build-pkgvalidator' and build number '6'. Credentials are passed using '--user <user-name>:<password>', and the result is piped to 'jq' for pretty-printing.


$> curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator/6/api/json --user ppoudel:<password> | jq

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  100  1765  100  1765    0     0  14120   0 --:--:-- --:--:-- --:--:-- 16192


{
  "_class": "hudson.maven.MavenModuleSetBuild",
  "actions": [
    {},
    {
      "_class": "hudson.model.CauseAction",
      "causes": [
        {
          "_class": "hudson.model.Cause$UserIdCause",
          "shortDescription": "Started by user Purna Poudel",
          "userId": "ppoudel",
          "userName": "Purna Poudel"
        }
      ]
    },
    {
      "_class": "hudson.plugins.git.util.BuildData",
      "buildsByBranchName": {
        "refs/remotes/origin/release/release-2.0.3": {
          "_class": "hudson.plugins.git.util.Build",
          "buildNumber": 6,
          "buildResult": null,
          "marked": {
            "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
            "branch": [
              {
                "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
                "name": "refs/remotes/origin/release/release-2.0.3"
              }
            ]
          },
          "revision": {
            "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
            "branch": [
              {
                "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
                "name": "refs/remotes/origin/release/release-2.0.3"
              }
            ]
          }
        }
      },
      "lastBuiltRevision": {
        "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
        "branch": [
          {
            "SHA1": "2c60eff4fef13e9346ae7c6b848efdc2fbf31026",
            "name": "refs/remotes/origin/release/release-2.0.3"
          }
        ]
      },
      "remoteUrls": [
        "https://pppoudel@bitbucket.org/pppoudel/pkgvalidator.git"
      ],
      "scmName": ""
    },
    {
      "_class": "hudson.plugins.git.GitTagAction"
    },
    {},
    {
      "_class": "hudson.maven.reporters.MavenAggregatedArtifactRecord"
    },
    {},
    {},
    {}
  ],
  "artifacts": [],
  "building": false,
  "description": null,
  "displayName": "#6",
  "duration": 37633,
  "estimatedDuration": 39130,
  "executor": null,
  "fullDisplayName": "web-apps-build-pkgvalidator #6",
  "id": "6",
  "keepLog": false,
  "number": 6,
  "queueId": 150,
  "result": "SUCCESS",
  "timestamp": 1546274196419,
  "url": "http://localhost:8080/job/web-apps-build-pkgvalidator/6/",
  "builtOn": "",
  "changeSet": {
    "_class": "hudson.plugins.git.GitChangeSetList",
    "items": [],
    "kind": "git"
  },
  "culprits": [
    {
      "absoluteUrl": "http://localhost:8080/user/ppoudel",
      "fullName": "Purna Poudel"
    }
  ],
  "mavenArtifacts": {},
  "mavenVersionUsed": "3.5.4"
}

2) The example below shows the use of /api/xml with 'xpath' to get just the build status from the build report.

$> curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator/6/api/xml?xpath=/*/result --user ppoudel:<password>

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    24    0    24    0     0    102      0 --:--:-- --:--:-- --:--:--   110


<result>SUCCESS</result>

3) Getting build status using /api/json. The following example shows retrieving job name, build number, build status and timestamp.

$> curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator/6/api/json?tree=fullDisplayName,number,result,timestamp --user ppoudel:<password>

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   152  100   152    0     0   1216      0 --:--:-- --:--:-- --:--:--  1394


{"_class":"hudson.maven.MavenModuleSetBuild","fullDisplayName":"web-apps-build-pkgvalidator #6","number":6,"result":"SUCCESS","timestamp":1546274196419}

4) Retrieving all jobs under a certain view:
Note: here I'm piping the result through 'jq' and 'grep', which is optional.

curl -X GET http://localhost:9090/job/Web/job/mobile-apps/view/mobile-apps/api/json --user ppoudel:<password> | jq | grep name

"name": "mobile-apps-xyzmportal",
"name": "mobile-apps-holportal",
"name": "mobile-apps-tpl",
...
...
"name": "mobile-apps-bcs_jpj",


 5) Retrieving JUnit test summary report:

$> curl http://localhost:9090/job/web-apps-build-pkgvalidator/6/testReport/api/json?tree=failCount,skipCount,totalCount,urlName --user ppoudel:<password>

% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   127  100   127    0     0    451      0 --:--:-- --:--:-- --:--:--   579


{"_class":"hudson.maven.reporters.SurefireAggregatedReport","failCount":0,"skipCount":0,"totalCount":20,"urlName":"testReport

6) The steps below can be used to retrieve the SonarQube analysis result using Jenkins' REST-like remote API.
6.1) Get the taskId from the build by providing the build number.
Note: http://localhost:9090 is the Jenkins server URL.

curl -X GET http://localhost:9090/job/web-apps-build-pkgvalidator_sonarqube/6/api/json --user ppoudel:<password> | jq | grep ceTaskId

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5182  100  5182    0     0  18507      0 --:--:-- --:--:-- --:--:-- 20811
      "ceTaskId": "AWfXjftAinfFqLzOhqqe",

6.2) Get the analysisId using the taskId:
Note: http://localhost:8000 is the SonarQube URL.

$> curl -X GET http://localhost:8000/api/ce/task?id=AWfXjftAinfFqLzOhqqe --user ppoudel:<password> | jq | grep analysisId

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   518  100   518    0     0   2770      0 --:--:-- --:--:-- --:--:--  3029
    "analysisId": "AWfXjgNNVNzkngEjXoPD",

6.3) Get the analysis report using analysisId:

$> curl -X GET http://localhost:8000/api/qualitygates/project_status?analysisId=AWfXjgNNVNzkngEjXoPD --user ppoudel:<password> | jq

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed
100  1438  100  1438    0     0   4608      0 --:--:-- --:--:-- --:--:--  4841

{
  "projectStatus": {
    "status": "ERROR",
    "conditions": [
      {
        "status": "OK",
        "metricKey": "new_maintainability_rating",
        "comparator": "GT",
        "periodIndex": 1,
        "warningThreshold": "1",
        "actualValue": "1"
      },
      {
        "status": "OK",
        "metricKey": "new_reliability_rating",
        "comparator": "GT",
        "periodIndex": 1,
        "warningThreshold": "1",
        "actualValue": "1"
      },
      {
        "status": "OK",
        "metricKey": "new_security_rating",
        "comparator": "GT",
        "periodIndex": 1,
        "errorThreshold": "1",
        "actualValue": "1"
      },
      {
        "status": "OK",
        "metricKey": "sqale_rating",
        "comparator": "GT",
        "warningThreshold": "3",
        "actualValue": "1"
      },
      {
        "status": "ERROR",
        "metricKey": "security_rating",
        "comparator": "GT",
        "errorThreshold": "1",
        "actualValue": "5"
      },
      {
        "status": "WARN",
        "metricKey": "reliability_rating",
        "comparator": "GT",
        "warningThreshold": "3",
        "actualValue": "5"
      },
      {
        "status": "ERROR",
        "metricKey": "blocker_violations",
        "comparator": "GT",
        "errorThreshold": "0",
        "actualValue": "180"
      },
      {
        "status": "WARN",
        "metricKey": "critical_violations",
        "comparator": "GT",
        "warningThreshold": "0",
        "actualValue": "3806"
      },
      {
        "status": "WARN",
        "metricKey": "major_violations",
        "comparator": "GT",
        "warningThreshold": "0",
        "actualValue": "2878"
      },
      {
        "status": "WARN",
        "metricKey": "coverage",
        "comparator": "LT",
        "warningThreshold": "80",
        "actualValue": "0.1"
      },
      {
        "status": "ERROR",
        "metricKey": "vulnerabilities",
        "comparator": "GT",
        "errorThreshold": "0",
        "actualValue": "107"
      }
    ],
    "periods": [
      {
        "index": 1,
        "mode": "previous_version",
        "date": "2018-12-15T15:11:09-0500",
        "parameter": "1.5.8-SNAPSHOT"
      }
    ],
    "ignoredConditions": false
  }
}
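
To avoid the manual copy/paste between steps 6.1-6.3, the three calls can also be chained in a small shell script; a sketch, assuming 'jq' is installed and the same Jenkins/SonarQube URLs and credentials as above:

# Chain: Jenkins build -> ceTaskId -> analysisId -> quality gate status
JOB_URL="http://localhost:9090/job/web-apps-build-pkgvalidator_sonarqube/6"
SONAR_URL="http://localhost:8000"
CE_TASK_ID=$(curl -s "$JOB_URL/api/json" --user ppoudel:<password> | grep -o '"ceTaskId": *"[^"]*"' | cut -d'"' -f4)
ANALYSIS_ID=$(curl -s "$SONAR_URL/api/ce/task?id=$CE_TASK_ID" --user ppoudel:<password> | jq -r '.task.analysisId')
curl -s "$SONAR_URL/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" --user ppoudel:<password> | jq '.projectStatus.status'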

7) If you need to get all the configured projects (paginated output) in SonarQube you can use the following URL:
Note: http://localhost:8000 is SonarQube URL.


# First page:
$> curl -X GET "http://localhost:8000/api/components/search?qualifiers=TRK&p=1"

8) Or get the detail of a specific project:
Note: http://localhost:8000 is SonarQube URL.

$> curl -X GET "http://localhost:8000/api/components/show?key=<project-key>"

For more information on Remote access API visit https://wiki.jenkins.io/display/JENKINS/Remote+access+API

You may be interested in reading the following Jenkins-related blogs:
Jenkins Job Chain and How to Resolve Issue with Parameter Passing
Jenkins Pipeline - Few Troubleshooting Tips

Jenkins Pipeline - Few Troubleshooting Tips



1) ERROR: Error cloning remote repo ... fatal: I don't handle protocol 'ssh'. Detailed stack trace below:

ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress ssh://<gituser>@<githost>:<port>/<GIT_REPO_NAME>.git
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: fatal: I don't handle protocol 'ssh'
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1996)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1715)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:72)

Root cause and possible solution: The above issue can be caused by many things, but one you would never suspect from the error message itself is extra invisible character(s) in the Git URL, typically introduced when the URL is copied from another web page. Just delete the Git URL in your pipeline script and retype it manually (instead of pasting); that may solve the issue.

2) Error: java.lang.NoSuchMethodError: No such DSL method 'findFiles' found among steps...

Root cause and possible solution: The DSL methods are related to the Jenkins DSL execution engine or one of the Plugins. In this particular case, make sure pipeline-utility-steps (pipeline-utility-steps.jpi) plugin is installed. For more info visit https://plugins.jenkins.io/pipeline-utility-steps

3) Error: java.lang.NoSuchMethodError: No such DSL method 'httpRequest' found among steps...

Root cause and possible solution: In this particular case, make sure httpRequest (http_request.hpi) plugin is installed. For more info visit https://jenkins.io/doc/pipeline/steps/http_request/

4) Error: java.lang.NoSuchMethodError: No such DSL method 'sshagent' found among steps...

Root cause and possible solution: In this particular case, make sure the ssh-agent plugin (ssh-agent.hpi) is installed. For more info visit https://wiki.jenkins.io/display/JENKINS/SSH+Agent+Plugin

5) Error: java.lang.RuntimeException: [ssh-agent] Could not find a suitable ssh-agent provider...


java.lang.RuntimeException: [ssh-agent] Could not find a suitable ssh-agent provider.
    at com.cloudbees.jenkins.plugins.sshagent.SSHAgentStepExecution.initRemoteAgent(SSHAgentStepExecution.java:175)
    at com.cloudbees.jenkins.plugins.sshagent.SSHAgentStepExecution.start(SSHAgentStepExecution.java:63)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeStep(DSL.java:270)
    at org.jenkinsci.plugins.workflow.cps.DSL.invokeMethod(DSL.java:178)
    at org.jenkinsci.plugins.workflow.cps.CpsScript.invokeMethod(CpsScript.java:122)
    at sun.reflect.GeneratedMethodAccessor171.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)

Root cause and possible solution: This error means that the ssh-agent plugin is not able to locate an ssh-agent executable on the path. You will mostly encounter this issue when running Jenkins on Windows. To resolve it on Windows, I downloaded Portable Git (https://git-scm.com/download/win) and put the %PORTABLE_GIT_HOME%\usr\bin directory in my System path; this directory contains ssh-agent.exe. Make sure to launch a new command window and restart Jenkins.
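
A quick way to confirm the fix took effect is to check that ssh-agent is resolvable from a fresh shell before restarting Jenkins:

# On Windows (cmd):
where ssh-agent
# On Linux/macOS agents the equivalent is:
which ssh-agent
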
You may be interested in reading the following Jenkins-related blogs:
Jenkins Job Chain and How to Resolve Issue with Parameter Passing
How to Retrieve Info Using Jenkins Remote Access API

6) You are using the Maven Pipeline Plugin and you get [ERROR] Java heap space

[ERROR] Java heap space -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/OutOfMemoryError

Root cause and possible solution: The build is running out of memory because of insufficient Java heap space. The easiest solution is to use Maven JVM options to specify the maximum (or both min and max) heap size that your process needs. You can use 'mavenOpts' within the 'withMaven' step. Below is an example:

stage('Build') {
    withMaven(
        mavenSettingsConfig: '9d2a7048-91b1-47a8-8788-be4b89b71128',
        jdk: jdk.toString(), maven: 'Maven 3.3.9', mavenOpts: '-Xmx2048m') {
        bat 'mvn clean package'
    }
}

Important: before increasing the heap size, make sure you have sufficient physical memory (RAM) available. If your Java process cannot reserve the specified heap space, you may get an error saying "Error occurred during initialization of VM: Could not reserve enough space for 2097152KB object heap". This means either there is not enough physical memory available on the server for Java to reserve the specified heap space (2 GB in this case), or other server/OS settings (e.g., a 32-bit JVM on Windows has a heap limitation of around 1.6 GB) prevent Java from reserving the specified heap size.

GIT: Maintain Clean Workspace, Stay Synced with Integration Branch & Publish Perfect Commits

GIT is very powerful and provides many options to choose from while doing one task or another. On one hand GIT is a distributed version control system, as every git working directory contains a full-fledged repository; on the other hand it can be hosted centrally to facilitate sharing and collaboration. So, it is easy to use the power of GIT and achieve wonders, but also easy to mess up and spend a good chunk of your daily hour(s) resolving conflicts. I've worked with developers, coaching them on how to stay clean and synced while helping them improve their productivity and fine-tune their commits. Ultimately, I have come up with this one-page visual diagram that outlines the practices I have been preaching.


Diagram 1.0

Since diagram 1.0 is self-explanatory, I am not going to elaborate on it in detail, but will just highlight a few important concepts below.

Maintain clean work space (working directory)

Specifically when you are done for the day and heading home (or to a bar if you feel so) or starting fresh (with a fresh cup of java) in the morning, it is important to ensure your working directory is clean. Block #4 (in diagram 1.0) and the associated green boundary explain how to deal with un-tracked, un-staged, or un-committed files; see the command sketch after this list.
  • Discard: discard them (if you really don't need them).
    • The orange boundary contains steps to deal with those changes on a case-by-case basis.
    • The purple boundary discards everything that is not committed.

  • Commit: stage (if not already staged) and commit.
  • Stash: store them safely for later use, which is called stashing in GIT terms.

Remain synced with remote branch:

Making it a regular practice to pull the latest from the remote branch and either merge or rebase (depending upon the merge strategy you have in place) not only helps resolve merge conflicts while they are still small and manageable, but also helps boost team collaboration. Block #5 with the pink boundary in diagram 1.0 explains exactly this. If you are working on a 'feature' branch (following the GitFlow strategy), you need to pull not only from your remote 'feature' branch but also from the 'develop' branch (assuming 'develop' is the integration branch; you may have 'dev' or 'main' instead) and merge locally on your feature branch before you push your code to the remote feature branch and later create a <<pull request>> to the integration branch. A command sketch follows below.
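
A minimal sketch of that routine, assuming a GitFlow-style feature branch with 'develop' as the integration branch:

git checkout feature/my-feature
git pull origin feature/my-feature    # latest from your own remote branch
git pull origin develop               # merge the latest integration changes locally
# resolve any conflicts and re-test, then publish
git push origin feature/my-feature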

Commit early and often

Blocks #8, #9 and #10 show this. Whether you are developing a feature or working on a bug/defect fix, it is important to commit when you complete a logical unit of work. Please note, it is never too early nor too frequent to commit your code, as long as you review your commits and fine-tune them before pushing/publishing. Committing not only helps maintain a clean working directory but also helps protect your work from accidental loss.

Review and fine-tune your commits before publishing

If you follow the 'commit early and often' principle, it is important that you review and, if necessary, fine-tune your commits before pushing/sharing/publishing. Make sure each commit is small enough and represents a logical unit of work (related to a particular feature, bug fix or defect fix). Fine-tuned commits are extremely useful when troubleshooting with git bisection (git bisect) to find the change that introduced a particular bug, or when reverting a commit (git revert) with confidence. You can perfect your commits by squashing related commits into one, making them kind of transactional; by rearranging commits in the right order; by amending commit comments to make them contextual with the right references; or by splitting a commit that contains unrelated changes. Block #10.1 in diagram 1.0 reminds you to perfect (if necessary) your commits before sharing/pushing/publishing; a sketch follows below.
Important: NEVER re-write any shared/published history.
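
A minimal sketch of such fine-tuning on local, not-yet-pushed commits (the commit count is illustrative):

# Review and rework the last three local commits interactively
git rebase -i HEAD~3
# In the editor: 'squash' combines commits, 'reword' fixes a message,
# 'edit' lets you split a commit, and reordering lines reorders commits.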

Pull before push


It is one of the most important rules to follow when you are working in a team environment. As described in Block #5.0 of diagram 1.0, you need to pull the latest commits from the remote branch, merge (resolving any conflicts) locally, and only after that push your code. Whether you merge or rebase depends upon the strategy you have in place. Most of the time, merge is safer than rebase.

Regularly publish your code

Most of us get paid only after publishing, so it is important! It is generally how teams share and collaborate as well. Remote/central repositories are usually set up with high availability (HA) and disaster recovery (DR) in mind, so it is also important to regularly publish your commits to protect them from destructive events. Refer to block #12 of diagram 1.0.

Note: if you are interested in contributing to enhance the diagram further, you can do so. The source (GitBestPracticesOnePagerDiagram.xml) of this diagram (draw.io format) is on GitHub: https://github.com/pppoudel/git-best-practices-one-page-diagram.git


References of Git commands (used in diagram 1.0):