Part 3 - Hands-on: Jenkins

The last step is to set up a build and deployment pipeline which automatically builds the Docker image and deploys it to the ECS cluster.

Installing the Plugins

Jenkins → Manage Jenkins → Manage Plugins

 

AWS Pipeline Plugin

GitHub: https://github.com/jenkinsci/pipeline-aws-plugin

 

AWS SQS Plugin (2.0.1)

GitHub: https://github.com/jenkinsci/aws-sqs-plugin

 

Utility Pipeline Plugin

GitHub: https://github.com/jenkinsci/pipeline-utility-steps-plugin

 

Setting up Credentials

Unfortunately, we have to create multiple credentials because the plugins do not use the same credentials format.

 

Jenkins → Credentials → System → Global Credentials → Add new credentials (http://localhost:8081/credentials/store/system/domain/_/newCredentials)

 

Configuring CodeCommit Access

Create "Username with Password" credentials in Jenkins using the CodeCommit HTTPS access credentials.

 

 

 

Configuring Simple Queue Service (SQS) Access

Create "Secret text" credentials in Jenkins using the Access Key as the ID and the associated Secret Key as the Secret.

 

 

Jenkins → Manage Jenkins → Configure System → Configuration of Amazon SQS queues: http://localhost:8081/configure (bottom)

 


 

The URL of the queue can be found in the AWS SQS console: click on the queue and look at the Details tab.

 


 

Important: Don't forget to save your changes!

 

Configuring AWS Access

Create "Username with Password" credentials in Jenkins using the Access Key as the Username and the Secret Key as the Password.

 

 


 

Creating a Pipeline

 

Jenkins → New Item

 


 

Build Parameter (Git Tags)

 

Create a String parameter named BUILD_TAG.
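If you prefer to keep the parameter in the Jenkinsfile instead of the job configuration, a declarative parameters block achieves the same. This is only a sketch; the stage shown here is illustrative and not part of the tutorial pipeline:

```groovy
pipeline {
    agent any
    parameters {
        //empty default: an automated (SQS-triggered) build leaves the tag blank
        string(name: 'BUILD_TAG', defaultValue: '', description: 'Git tag to build (leave empty to build the latest commit)')
    }
    stages {
        //illustrative stage that just prints the parameter
        stage('Print parameter') {
            steps {
                echo "BUILD_TAG is: ${params.BUILD_TAG}"
            }
        }
    }
}
```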

 

 

Build Trigger (SQS)

 

Having configured SQS access, we can use the queue to trigger the build.

 


 

Writing the Pipeline

Now that we have set up Jenkins and the Amazon account, we can start writing the pipeline.

The basic structure of a Declarative Pipeline in Jenkins looks like this:

Structure

 


 
pipeline {
    agent any
    environment {

    }
    stages {
        stage("stage one") {
            steps {

            }
        }
    }
}
 

Important!

In the following examples we will use the S3 bucket qstutorialbucket, the CodeCommit repository tutorial_repo, the region eu-central-1 and the CloudFormation stack MyWebServiceStack.

Change these to your own bucket, repository, region and stack.

 

Environment Variables

To make the pipeline more maintainable we are going to have 5 environment variables:

  • ECR - the URI of the ECR
  • IMAGE - the image name
  • VERSION - version stored in pom.xml
  • STACK - the stack name in CloudFormation
  • COMMIT - hash of commit that is being built

 

Groovy implementation

environment {
    //ECR URI
    //example: ECR = "222222222222.dkr.ecr.eu-central-1.amazonaws.com/springio/gs-spring-boot-docker"
    ECR = "<your uri here>"

    //image name
    IMAGE = "springio/gs-spring-boot-docker"

    //pom.xml version
    VERSION = "0.0.0"

    //stack name
    STACK = "MyWebServiceStack"

    //commit hash
    COMMIT = ""
}

 

That way, we do not have to make the same change in multiple places in the pipeline.

 

Using variables in strings

Jenkins Pipeline uses rules identical to Groovy for string interpolation. Groovy’s String interpolation support can be confusing to many newcomers to the language. While Groovy supports declaring a string with either single quotes, or double quotes, for example:

 

String declaration

def singlyQuoted = 'Hello'
def doublyQuoted = "World"

Only the latter string will support the dollar-sign ($) based string interpolation, for example:

String and GString

def username = 'Jenkins'
echo 'Hello Mr. ${username}'  //single quote
echo "I said, Hello Mr. ${username}" //double quote

Would result in:

Output

Hello Mr. ${username}
I said, Hello Mr. Jenkins

Source: https://jenkins.io/doc/book/pipeline/jenkinsfile/#string-interpolation

 

Stages
Checkout Stage

 

In this stage we will clone the Git repository from CodeCommit and check out a tag, if one is specified.

 

Determining what to checkout

If a tag is defined (manual build) and it exists in the repository, we are going to check out that tag. Otherwise, we will build the latest commit.

If no tag is defined (automated build), we are going to build the latest commit.

After performing the checkout we set the environment variables COMMIT and VERSION.

 

Example Stage


 

stage('Checkout') {
    steps {
        git url: 'https://git-codecommit.eu-central-1.amazonaws.com/v1/repos/tutorial_repo',
            credentialsId: 'CodeCommitCredentials' // ID of the HTTPS CodeCommit credentials

        script {
            if("${BUILD_TAG}" != "") {
                echo "Searching for tag: ${BUILD_TAG}"

                //output returns:
                //      the tag if the tag exists
                //      "" if the tag does not exist
                def output = bat(script: "@ git tag -l \"${BUILD_TAG}\"", returnStdout: true).trim();
                echo output

                if("${output}" == "${BUILD_TAG}") {
                    echo "Tag found"
                    echo "Building tag \"${BUILD_TAG}\""
                    bat "git checkout ${BUILD_TAG}"
                } else {
                    echo "Tag \"${BUILD_TAG}\" not found"
                    BUILD_TAG = "";
                    echo "Building latest commit"
                }
            } else {
                echo "Building latest commit"
            }

            //escape % with a second % -> %%
            COMMIT = bat(script: "@ git log -n 1 --pretty=format:%%h", returnStdout: true)

            //read version from pom.xml
            VERSION = readMavenPom().getVersion()
        }
    }
}

 

Maven Build Stage

This stage will build the .jar and test the application.

Building with Maven

To make the .jar files traceable we will append the tag or commit hash to the artifact name:

  • gs-spring-boot-docker-1.0.1.jar → gs-spring-boot-docker-1.0.1-v1.0.1-RC.jar
  • gs-spring-boot-docker-1.0.0.jar → gs-spring-boot-docker-1.0.0-b127d6b.jar

To achieve this we will use the Maven Versions Plugin:

 

script {
    // VERSION is read from pom.xml
    bat "mvn versions:set -DnewVersion=${VERSION}-${BUILD_TAG}"
}

 

Displaying test results

https://jenkins.io/doc/pipeline/steps/junit/

https://jenkins.io/doc/book/pipeline/syntax/#post

Displaying the test results should be done even if the build fails. That is why we will put this step inside the always post-condition block.
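As a sketch, the relevant part of the stage would look like this (the report path assumes Maven's default Surefire output directory):

```groovy
post {
    //the always block runs regardless of the build result
    always {
        //publish the JUnit test results produced by the Surefire plugin
        junit 'target/surefire-reports/*.xml'
    }
}
```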

 

Archiving built jar

If the build succeeds, we are going to archive the built .jar in the S3 bucket so that it is accessible from anywhere, unlike artifacts stored with Jenkins' archiveArtifacts step.

 

Example Stage

 

 


 

stage('Build jar with Maven') {
    steps {
        script {
            // VERSION is read from pom.xml
            if("${BUILD_TAG}" != "") {
                bat "mvn versions:set -DnewVersion=${VERSION}-${BUILD_TAG}"
            } else {
                bat "mvn versions:set -DnewVersion=${VERSION}-${COMMIT}"
            }
        }
        bat 'mvn clean install'
    }
    post {
        always {
            //show JUnit test results
            junit 'target/surefire-reports/*.xml'
        }
        success {
            //push artifacts (.jar) to the s3 bucket
            script {
                withAWS(region: 'eu-central-1', credentials: 'AWSCredentials') { // ID of the AWS credentials
                    //upload the built jar to the s3 bucket
                    s3Upload(bucket: 'qstutorialbucket', includePathPattern: '**/target/*.jar', path: 'builds/')
                }
            }
        }
    }
}

Docker Build Stage

This stage will build the Docker image.

To build the Docker image with a specific tag instead of the default tag latest, we have to pass the option dockerfile.tag in our build command:

 

Example Stage

 

stage('Build docker image') {
    steps {
        script {
            //always tag the docker image with the commit hash
            bat "mvn dockerfile:build -Ddockerfile.tag=${COMMIT}"

            //if a build tag exists, tag the image with it as well
            if("${BUILD_TAG}" != "") {
                bat "mvn dockerfile:tag -Ddockerfile.tag=${BUILD_TAG}"
            }
        }
    }
}
 
Docker Tag/Push Stage

This stage will tag the Docker image so that we can push it to the ECR.

 

Example Stage

 

stage('Tag/Push docker image') {
    steps {
        script {
            //log in to AWS with the credentials
            withAWS(region: 'eu-central-1', credentials: 'AWSCredentials') { // ID of the AWS credentials

                //get the login command for the ECR (returns a command to execute in the terminal)
                def login = ecrLogin()

                //execute the command
                bat login
            }

            //tag the image with the commit hash for the ECR and push it
            bat "docker tag ${IMAGE}:${COMMIT} ${ECR}:${COMMIT}"
            bat "docker push ${ECR}:${COMMIT}"

            //if a build tag exists, tag and push that image as well
            if("${BUILD_TAG}" != "") {
                bat "docker tag ${IMAGE}:${BUILD_TAG} ${ECR}:${BUILD_TAG}"
                bat "docker push ${ECR}:${BUILD_TAG}"
            }
        }
    }
}

 

Deploy Stage

This stage will deploy the CloudFormation stack.

If the stack does not exist yet, the pipeline will deploy a stable version of the CloudFormation templates (defined in the repository) and then try to update the stack with the newly built Docker image.

Checking if Stack exists

This check determines whether the stack defined in the environment variable STACK already exists.
returnStatus returns the exit code of the script:

  • 0: successful
  • any other code: failed

Therefore we only have to check whether output is 0 to see if the stack exists.

 

Groovy implementation

 

script {
    def output = bat(script: "aws cloudformation describe-stacks --stack-name ${STACK}", returnStatus: true)

    if(output == 0) {
        //stack exists
    } else {
        //stack does not exist
    }
}

 

Deploying a stable version

If the stack does not exist yet, we will deploy a stable version first. If it does already exist, we can skip this step.

The stable docker image should be defined in the .yaml files in the repository.

The pipeline will then take the stable configuration and push it to the S3 Bucket. That way, CloudFormation can access the configuration.

 

 

Groovy implementation

 

script {
    withAWS(region: 'eu-central-1', credentials: 'AWSCredentials') {
        //upload the cloudformation folder to the s3 bucket
        s3Upload(file: 'cloudformation/', bucket: 'qstutorialbucket', path: 'cloudformation/')

        def output = bat(script: "aws cloudformation describe-stacks --stack-name ${STACK}", returnStatus: true)

        //if the stack doesn't exist, create it
        if(output != 0) {
            //start cloudformation; deletes the stack if it failed
            cfnUpdate(stack: "${STACK}", url: 'https://s3.eu-central-1.amazonaws.com/qstutorialbucket/cloudformation/master.yaml', onFailure: 'DELETE')
        }
    }
}

Swapping the image

In order to swap out the current Docker image for the newly built one, we will have to modify the .yaml file that contains a reference to the Docker image being used.

The .yaml file we have to edit is service.yaml.
To make a change to the CloudFormation Stack we will have to do the following:

  • Read the current configuration
  • Write new configuration with updated docker image
  • Push the new configuration so that CloudFormation can access it
  • Update the stack with the new service configuration

 

Groovy implementation

 

script {
    //read the current service.yaml
    def yamlAsText = readFile file: 'cloudformation/services/website-service/service.yaml'

    //replace the current image with the new one
    def modifiedYamlAsText = ''

    //if a build tag exists, replace the image with the newly built image tagged with the build tag
    //else, replace the image with the newly built image tagged with the commit hash
    if("${BUILD_TAG}" != "") {
        modifiedYamlAsText = yamlAsText.replaceAll(/Image:\s.*/, "Image: ${ECR}:${BUILD_TAG}")
    } else {
        modifiedYamlAsText = yamlAsText.replaceAll(/Image:\s.*/, "Image: ${ECR}:${COMMIT}")
    }
    println modifiedYamlAsText

    //write the changes to a file
    writeFile file: 'tmpChange.yaml', text: "${modifiedYamlAsText}"

    //upload the new service.yaml to s3
    s3Upload(file: 'tmpChange.yaml', bucket: 'qstutorialbucket', path: 'cloudformation/services/website-service/service.yaml')

    //update the stack
    cfnUpdate(stack: "${STACK}", url: 'https://s3.eu-central-1.amazonaws.com/qstutorialbucket/cloudformation/master.yaml', onFailure: 'ROLLBACK')
}

 

Example Stage

In the end, this stage should look similar to this:

 


 

stage('Upload S3/Deploy CloudFormation') {
    steps {
        script {
            withAWS(region: 'eu-central-1', credentials: 'AWSCredentials') {

                //upload the cloudformation folder to the s3 bucket
                s3Upload(file: 'cloudformation/', bucket: 'qstutorialbucket', path: 'cloudformation/')

                def output = bat(script: "aws cloudformation describe-stacks --stack-name ${STACK}", returnStatus: true)

                //if the stack doesn't exist, create it
                if(output != 0) {
                    //start cloudformation; deletes the stack if it failed
                    cfnUpdate(stack: "${STACK}", url: 'https://s3.eu-central-1.amazonaws.com/qstutorialbucket/cloudformation/master.yaml', onFailure: 'DELETE')
                }

                output = bat(script: "aws cloudformation describe-stacks --stack-name ${STACK}", returnStatus: true)

                //if the stack exists
                if(output == 0) {
                    //make the change in service.yaml
                    //read the current service.yaml
                    def yamlAsText = readFile file: 'cloudformation/services/website-service/service.yaml'

                    //replace the current image with the new one
                    def modifiedYamlAsText = ''

                    //if a build tag exists, replace the image with the newly built image tagged with the build tag
                    //else, replace the image with the newly built image tagged with the commit hash
                    if("${BUILD_TAG}" != "") {
                        modifiedYamlAsText = yamlAsText.replaceAll(/Image:\s.*/, "Image: ${ECR}:${BUILD_TAG}")
                    } else {
                        modifiedYamlAsText = yamlAsText.replaceAll(/Image:\s.*/, "Image: ${ECR}:${COMMIT}")
                    }
                    println modifiedYamlAsText

                    //write the changes to a file
                    writeFile file: 'tmpChange.yaml', text: "${modifiedYamlAsText}"

                    //upload the new service.yaml to s3
                    s3Upload(file: 'tmpChange.yaml', bucket: 'qstutorialbucket', path: 'cloudformation/services/website-service/service.yaml')

                    //update the stack
                    cfnUpdate(stack: "${STACK}", url: 'https://s3.eu-central-1.amazonaws.com/qstutorialbucket/cloudformation/master.yaml', onFailure: 'ROLLBACK')
                }
            }
        }
    }
}

 

Now that we have configured everything, a commit followed by a push to our repository will trigger our Jenkins Pipeline.