Our continuous deployment pipeline


This post aims to give an in-depth description of our continuous deployment pipeline. We are going to use the term Continuous Deployment as Martin Fowler defined it in this post, so it's a good idea to start by reading that post if you haven't done so far.

Technologies in place

  • Jenkins
  • Gitlab
  • Sonar
  • Nexus
  • Jira
  • Kubernetes
  • Docker
  • Mockserver

From a developer's point of view, the process is kind of magical: it starts when a developer is ready to deliver a new feature. Usually, they will place a merge request that will trigger the continuous deployment pipeline, and if everything goes OK, the feature will be shipped to live.

I will try to explain everything in a detailed technical way; nonetheless, there are some details that I'm hiding to maintain clarity. Therefore, I'm just highlighting what I think are the most relevant technical aspects.

Triggering the continuous deployment process

The deployment process is triggered every time a team member places a new merge request.

On Gitlab we've placed a constraint: anyone can create a branch from master and push code to that branch, but only the Jenkins user can push code to the master branch. This way, we can ensure that only tested code is merged into master.

So, how does Jenkins know when it has to merge a branch to master? Well, Gitlab will tell it. By simply placing a Webhook, Gitlab can tell Jenkins that a merge request has been placed and therefore trigger the continuous deployment pipeline.

Every project has its own pipeline, so a Webhook only has to tell Jenkins which pipeline to execute.

There is an additional sweet advantage with this approach. Using a pipeline per project allows us to report the pipeline status to Gitlab, so we can report on the merge request if the pipeline fails. Also, if the pipeline fails it can be triggered again by simply posting a comment with the text "Jenkins please retry a build".
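To make this concrete, here is a small sketch of the decision the webhook receiver has to make. The payload fields follow Gitlab's merge request webhook event format; the mapping from project name to pipeline name is our own illustrative convention, not something Gitlab or Jenkins prescribes:

```python
def pipeline_for(payload):
    """Given a Gitlab merge request webhook payload, decide which
    project pipeline Jenkins should run (illustrative mapping)."""
    if payload.get("object_kind") != "merge_request":
        return None  # ignore pushes, tags, comments, etc.
    project = payload["project"]["name"]
    target = payload["object_attributes"]["target_branch"]
    # We only run the deployment pipeline for merges into master
    if target != "master":
        return None
    return f"{project}-pipeline"

sample = {
    "object_kind": "merge_request",
    "project": {"name": "build-dsl-ui"},
    "object_attributes": {"source_branch": "feature/login", "target_branch": "master"},
}
print(pipeline_for(sample))  # build-dsl-ui-pipeline
```

In practice the Gitlab plugin for Jenkins does this routing for us; the sketch only shows the information the webhook carries.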

The stages of the pipeline

Even though every project has its own pipeline, all project pipelines have to have the same stages. This way, the continuous deployment process is consistent among all projects, and we don't tie ourselves to a single technology stack. To illustrate this point, I will provide some examples using different technologies, specifically Angular and Scala.

Under the hood, all stages run in a docker container that has everything the stage needs to complete. This means that the pipeline will require at least one kubernetes pod; if there is no such pod, it will deploy a new one with the corresponding docker image.

We use the jenkins kubernetes plugin; this way, with a simple statement we can instruct jenkins to spin up a kubernetes pod with a specific docker image. The pods that are managed by jenkins are called "slaves".

All of our docker images inherit from the image jenkinsci/jnlp-slave, which already has git, java and the jenkins user configured.

These are the stages that every project must have, and they are executed in the following order:

Merge Master Into The Requested Branch

The title says it all: inside the docker container we check out the project and then merge the master branch into the requested branch. If it happens that the requested branch is behind master, Jenkins will push the requested branch.

This way, if the pipeline fails on a later stage, it saves the developer some time by not having to merge the master branch themselves.

Any merge conflict at this stage will make the pipeline fail.

This step is critical: it guarantees that the code under test already contains the latest master.

Merge The Requested Branch Into Master

This step should rarely fail, since after the previous stage master and the requested branch should have the same content. At this point we don't push the master branch yet.

Build Stage

This stage simply tries to compile the project; if the compilation goes OK, the pipeline continues to the next stage. On most technologies this process is simple, it just executes a few commands. What can be tricky is what the docker image needs to have installed to successfully run the compilation.


Before building any project for the first time, it is important to check that the docker image we plan to use has everything installed so we can actually build that project inside the container.

Angular

A simple Angular project just requires Node.js, npm (node package manager) and angular-cli. We just need to add the following lines to the image's Dockerfile and it should be fine:

RUN curl -sL https://deb.nodesource.com/setup_10.x | bash -
RUN apt-get install -y nodejs
RUN apt-get install -y npm 
RUN npm install -g @angular/cli

If we got everything that we need in place, our build should work with the following commands:

npm install
ng build --prod


Scala

In order to run a Scala build we need to have sbt in place. We can install it in the docker image with the following lines:

RUN echo "deb https://dl.bintray.com/sbt/debian /" | tee -a /etc/apt/sources.list.d/sbt.list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823
RUN apt-get update -y
RUN apt-get install sbt -y

With sbt in place, jenkins should be able to run the command:

sbt clean build

Unit and Integration Tests Stage

As the name says, this stage simply runs the unit and integration tests. If they all pass, the pipeline continues to the next stage. Nonetheless, this stage can vary vastly between technologies.


Angular

With node, npm and angular-cli installed, it is pretty simple to run the unit tests. Running e2e tests, however, is not that easy.

Typically, you want to use e2e tests for integration or functional tests. In our case we want to use them as integration tests. There is a challenge with this: we can't create mock objects, so our AJAX calls must actually happen. At the same time, we don't want to call an actual back-end, because we want to leave that to the functional tests stage.

To solve this, we have used mockserver. We've deployed a single mockserver service inside our kubernetes beta cluster, so the jenkins slaves can reach it and our Angular apps can simply record the server expectations into mockserver before running the tests.
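As a sketch of what "recording an expectation" means: mockserver accepts JSON expectations over its REST API (a PUT to its /mockserver/expectation endpoint). The helper below only builds the payload; the endpoint path and the actual HTTP call are left out, and the example request/response data is made up:

```python
import json

def expectation(method, path, status, body):
    """Build a mockserver expectation payload: when the app under test
    makes this AJAX call, mockserver replies with the canned response."""
    return {
        "httpRequest": {"method": method, "path": path},
        "httpResponse": {"statusCode": status, "body": json.dumps(body)},
    }

# Recorded before the e2e run, so the app's AJAX call gets a canned answer
exp = expectation("GET", "/api/users", 200, [{"id": 1, "name": "Alice"}])
print(json.dumps(exp, indent=2))
```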

There is also another challenge: in order to run e2e tests we need a browser in place. A browser normally needs a UI, and most docker images come without a GUI. Luckily, modern browsers have a headless mode, which means they don't need a UI.
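Assuming Karma as the test runner, a headless Chrome launcher can be declared with a fragment like this (illustrative; the exact flag set varies per Chrome version):

```javascript
// karma.conf.js (fragment)
browsers: ['ChromeHeadlessNoSandbox'],
customLaunchers: {
  ChromeHeadlessNoSandbox: {
    base: 'ChromeHeadless',
    // --no-sandbox is usually needed when Chrome runs as root inside docker
    flags: ['--no-sandbox', '--disable-gpu']
  }
},
```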


Scala

This step is quite simple: most Scala frameworks provide the ability to mock HTTP calls. Therefore, at this point, we just need the same setup that we use to run the unit tests.

Quality Gate

This stage analyzes code coverage and best practices with Sonar. If the test coverage is not good enough, or there are too many code smells, the pipeline will fail.

No matter which technology we use, we just need two things to perform the quality measurement:

  • Configure the sonar-project.properties file
  • Run the sonar scanner.
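For reference, a minimal sonar-project.properties for an Angular project could look like this (the keys are standard Sonar scanner properties; the values are made-up placeholders):

```properties
sonar.projectKey=my-org:build-dsl-ui
sonar.projectName=build-dsl-ui
sonar.sources=src
sonar.tests=src
sonar.test.inclusions=**/*.spec.ts
sonar.typescript.lcov.reportPaths=coverage/lcov.info
sonar.host.url=https://<your-sonar-host>
```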

Push Master Branch

At this stage, the unit and integration tests have passed, as well as the quality gate. Therefore, we're ready to push the master branch.

If the push was successful, we tell Gitlab that the merge was OK and that the merge request can therefore be closed.

Release Beta Version

This stage tags the version on git, makes a production build and then pushes the binaries to Nexus. It also increases the project version, which requires committing and pushing the code again.

Beyond this point, we will not generate any new binaries for this version.
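Increasing the project version amounts to a semver patch bump; this is roughly what `npm version patch` does to the version string (a sketch that only handles plain MAJOR.MINOR.PATCH versions, without pre-release or build suffixes):

```python
def bump_patch(version):
    """Increase the patch component of a MAJOR.MINOR.PATCH version,
    mirroring what `npm version patch` does for plain semver strings."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

print(bump_patch("1.4.9"))  # 1.4.10
```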


Angular

We can use npm to increase the project version and to create and push tags on git. Since we use Angular to create web apps, our binary will not be a single file; instead it will be a group of files: the static assets.

Of course, we can't just upload a bunch of loose files to nexus, but we can grab all the generated files and put them inside a single archive. Therefore, we have to build the app, locate the project dist folder and compress its content.

Lastly, we upload the archive to a raw repository in Nexus.

// Create git tag
sshagent (credentials: ['<your gitlab credentials>']) {
    sh 'npm version patch'

    // Get artifactId, version, and group
    def artifactId = sh (
        script: 'jq .name --raw-output package.json | tr -d "\n\r"',
        returnStdout: true
    )
    def version = sh (
        script: 'jq .version --raw-output package.json | tr -d "\n\r"',
        returnStdout: true
    )
    def groupId = sh (
        script: 'jq .group --raw-output package.json | tr -d "\n\r"',
        returnStdout: true
    )
    sh "git push --set-upstream origin v${version}"
    sh "git push --set-upstream origin master"
}

// Build project
sh 'npm install'
sh 'ng build --prod'

// Create the archive
sh "(cd dist/; tar -zcvf ${projectName}-${version}.tar.gz ${projectName})"

withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: '<nexus_credentials_id>', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {
    echo "Uploading file ${projectName}-${version}.tar.gz"
    echo "To https://nexus-delivery-platform.fexcofts.com/repository/web-releases/${groupId}/${artifactId}/${version}/build-dsl-ui-${version}.tar.gz"
    sh "curl -v -u ${USERNAME}:${PASSWORD} --upload-file dist/${projectName}-${version}.tar.gz https://nexus-delivery-platform.fexcofts.com/repository/web-releases/${groupId}/${artifactId}/${version}/build-dsl-ui-${version}.tar.gz"
}


Scala

Scala (and Java) have a totally different release process. First, Scala generates a single jar file, which can also be packaged in a Docker image. Second, most of the release configuration lives in the build.sbt file, which makes the whole process, at least from the jenkins perspective, a lot simpler.

// Execute the release
sh 'sbt clean "release with-defaults"'

Our sbt configuration makes the command above do the following:

  • Test the application
  • Build the jar file
  • Create a new tag on gitlab
  • Publish the binary to artifactory
  • Prepare the files to generate the docker image

The last step is a little bit tricky. We have used the sbt plugin sbt-docker to generate the Dockerfile. By default, the plugin will try to build and push the docker images, which poses an issue because all of our pods are docker containers. So, are we going to build docker images inside other docker images? Dockerception?

To solve this, we've placed two containers in the pod: one to build the application, and another one, with docker installed, to build our docker images. The latter image is provided by the Docker organization.

We've configured our release task not to build the docker image itself. It simply generates the Dockerfile and puts the generated jar in a place reachable by the Dockerfile. It also generates a script to build, tag and push the docker images. To accomplish this, we've created a custom sbt task that does this for us, and we invoke it from the release task.

val dockerFileTask = taskKey[Unit]("Prepare the dockerfile and needed files")

dockerFileTask := {
  val dockerDir = target.value / "docker" //A docker folder will be created inside the target folder
  val artifact: File = assembly.value
  val artifactTargetPath = s"/app/${artifact.name}"

  val dockerFile = new Dockerfile { //The Dockerfile content
    from("openjdk:8-jre") //base image (any image with java works)
    maintainer("FTS API Computing Platform")
    add(artifact, artifactTargetPath)
    entryPointShell("java", "$JAVA_OPTS", "<Your java options>", "-jar", artifactTargetPath)
  }

  //Writes the Dockerfile and copies the generated jar into the docker folder.
  val stagedDockerfile = sbtdocker.staging.DefaultDockerfileProcessor(dockerFile, dockerDir)
  IO.write(dockerDir / "Dockerfile", stagedDockerfile.instructionsString)
  stagedDockerfile.stageFiles.foreach {
    case (source, destination) => source.stage(destination) //copy each staged file next to the Dockerfile
  }

  // our publish script
  val publishInstructions =
    s"""|docker build -t ${name.value}:${version.value} .
        |docker tag ${name.value}:${version.value} registry-nexus-delivery-platform.fexcofts.com/${organization.value}/${name.value}:${version.value}
        |docker push registry-nexus-delivery-platform.fexcofts.com/${organization.value}/${name.value}:${version.value}
        |docker tag ${name.value}:${version.value} registry-nexus-delivery-platform.fexcofts.com/${organization.value}/${name.value}:latest
        |docker push registry-nexus-delivery-platform.fexcofts.com/${organization.value}/${name.value}:latest
        |""".stripMargin
  IO.write(dockerDir / "publish.sh", publishInstructions)
}

Our release configuration:

releaseProcess := Seq[ReleaseStep](
  // ...the usual release steps (tests, tagging, publishing)...
  releaseStepCommand("dockerFileTask") //Invoke our custom task
)

Also, to avoid building dockers inside docker, we have mounted a volume that gives access to the docker daemon of the pod. This way, our container builds our docker images using the pod's docker daemon and not a docker daemon running inside the container.

To do what I described before, we've built our Jenkins slave with a YAML like this:

apiVersion: v1
kind: Pod
spec:
  imagePullSecrets:
    - name: regcred
  volumes:
    - name: release-folder #It will contain files that we need to build our images
      emptyDir: {}
    - name: dockersock #our access to the docker daemon in the pod
      hostPath:
        path: /var/run/docker.sock
        type: File
  containers:
    - name: docker
      image: registry-nexus-delivery-platform.fexcofts.com/docker-volume
      securityContext:
        privileged: true
      imagePullPolicy: Always
      command:
        - cat
      tty: true
      volumeMounts:
        - name: dockersock
          mountPath: /var/run/docker.sock
        - name: release-folder
          mountPath: /tmp/release
    - name: scala
      image: registry-nexus-delivery-platform.fexcofts.com/fts-jenkins-slave-java8-sbt115
      workingDir: /home/jenkins
      securityContext:
        privileged: false
        runAsUser: 10000 #Jenkins user in docker group
      command:
        - cat
      tty: true
      volumeMounts:
        - name: release-folder
          mountPath: /tmp/release #The jenkins pipeline should move the docker files to this folder
Beta environment deployment

This stage deploys the project to the beta environment and, if required, any other component that it depends on.


Angular

This is relatively simple: we download our archive from the nexus repository, uncompress it, set the configuration file that we want, and then push the changes to an App Service.


Scala

No big fuss here. We just run a helm chart that points to the docker image we want to deploy in the Beta cluster.

Functional Tests

This stage runs the functional tests; if any test fails here, it creates a new Jira card describing which test has failed.
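Creating the Jira card boils down to a POST against Jira's create-issue REST API; the sketch below only builds the request body (the field names follow Jira's create-issue format, but the project key, issue type and message contents are made-up placeholders):

```python
import json

def failed_test_issue(project_key, test_name, log_excerpt):
    """Build the body for a POST to Jira's /rest/api/2/issue endpoint,
    describing a failed functional test."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": f"Functional test failed: {test_name}",
            "description": log_excerpt,
            "issuetype": {"name": "Bug"},
        }
    }

issue = failed_test_issue("CDP", "login redirects to dashboard", "Expected 302, got 500")
print(json.dumps(issue, indent=2))
```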

It is important here not to confuse integration/e2e tests with functional tests. Functional tests can have different constraints and handle different errors. They may also need to run some extra tasks before and after the tests, like preparing or cleaning data.


Angular

Usually, we create a separate project to write the functional tests; typically we use protractor, which connects to a Selenium pod. That project also triggers any data preparation/cleanup that we may need.


Scala

We run a JMeter test suite that targets the deployed endpoint.

Candidate Version

If the functional tests go OK, Jenkins tags the version as a candidate. It also deploys the version to the candidate cluster.

Stress Testing

The stress tests run at this point. Any failure here creates a new Jira card describing which stress test has failed. Most of the time, we use JMeter to run the stress tests against the candidate cluster.

Live Deployment

Deploy the binaries to the live environment.
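Putting all of the stages together, the overall shape of each project's pipeline is roughly this scripted Jenkinsfile skeleton (stage bodies elided; the pod label and slave details depend on the kubernetes plugin configuration shown earlier):

```groovy
podTemplate(label: 'project-slave') {
    node('project-slave') {
        stage('Merge master into branch')   { /* checkout + git merge master */ }
        stage('Merge branch into master')   { /* local merge, no push yet */ }
        stage('Build')                      { /* npm/sbt build */ }
        stage('Unit and integration tests') { /* test runners, mockserver */ }
        stage('Quality gate')               { /* sonar scanner */ }
        stage('Push master')                { /* git push origin master */ }
        stage('Release beta version')       { /* tag, bump version, publish to Nexus */ }
        stage('Beta deployment')            { /* helm chart / App Service */ }
        stage('Functional tests')           { /* protractor / JMeter */ }
        stage('Candidate version')          { /* tag + deploy to candidate cluster */ }
        stage('Stress testing')             { /* JMeter */ }
        stage('Live deployment')            { /* promote to live */ }
    }
}
```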


At this point you may feel that this is utterly complicated. It certainly can be difficult to achieve, but there are a lot of benefits.

Since there is no human intervention, the whole team can work on something else while the pipeline is running.

The system is fully tested all the time, which is almost impossible if you rely on manual testing.
