Build Software Continuous Deployment Pipelines

Hi!

This article describes an approach to dealing with complex build, test, and deploy procedures for microservices.

It is just one part of a bigger series covering the Cloud IT Platform, workflows, and our experiences with these technologies.

Just to be clear from the very beginning: the benefits of microservices are evident and have been demonstrated for years in enterprise systems. At the same time, shorter development time, small and independent releases, and decentralized governance introduce a set of challenges, like versioning for compatibility, testing, deployment, configuration management, and orchestration.

But the main and most evident advantage is: Scaling

We can scale our systems much more easily, with fewer implications and without involving or compromising the whole system.
Anyway, that is a matter for a future post. We need to talk about pipelines for microservices right now.

The Standard Lifecycle

The question right now is: what is our standard approach for building apps?
Look at the so-called Maven lifecycle, and that is the answer. It is also valid for projects not managed by Maven (e.g. SBT, Gradle, etc.), and we can adapt our build procedures to the same paradigm.

Anyway, the Maven approach is based on a well-defined lifecycle and well-defined types of entities. These entities are very well known and are probably the core of the so-called Maven lifecycle: Snapshots and Releases.

SNAPSHOT
Stability? No!
Version type? Obviously, SNAPSHOT
Promotable? No
Domain? Developers. They are the owners
Stages? Continuous Integration

RELEASE
Stability? Yes
Version? Regular version numbers (no SNAPSHOT suffix anymore), taggable as RC (Release Candidate) and RF (Release Final)
Promotable? Yes, to RC and RF
Domain? Tests, customers
Stages? Test and Productive
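As a rough sketch, the snapshot/release rules above could be expressed like this (the function names are ours, for illustration only, and not part of any Maven API):

```python
# Illustrative sketch of the snapshot/release rules above.
# Function names are ours, not any Maven API.

def is_snapshot(version):
    """SNAPSHOT versions carry the -SNAPSHOT suffix and are never promotable."""
    return version.endswith("-SNAPSHOT")

def can_promote(version, tag):
    """Only release versions may be tagged RC (Release Candidate)
    or RF (Release Final)."""
    return not is_snapshot(version) and tag in ("RC", "RF")
```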

The SCM Workflow

To achieve this and harmonize with your SCM (Source Code Management), you need a workflow. Below you can find the one we currently use (pretty standard); we'll explain this topic in detail in the future. Basically, we have an integration branch for features (develop) that is the source for the automated snapshot pipeline, which continuously deploys to the CI (Continuous Integration) stage.
Once we are happy, the code is merged into the stable branch (master), which is the source for the generation of stable versions. These are automatically tagged (in a double way, artifacts and images) as RCs (Release Candidates) and RFs (Release Finals) once the automated and exploratory tests have passed.

I can never insist enough on how necessary it is to use an artifact repository (Artifactory, Nexus) in an SCM workflow. It makes possible the main, main principle in pipelines…

BINARY INTEGRITY!

Please, never build your code several times. Make sure you are deploying the same binary across your environments or stages. If you are not doing this and you want to do the right thing, we’ll have a beer together. It’s on me!

But, returning to our current topic: how do we build, test, deploy, and promote microservices in a way that is as easy and automated as possible?
The build, test, and deploy procedures are anything but straightforward for big projects involving a number of microservices or regular services. Very often, such a solution has several images with services and one or several UIs, so it is not easy to perform the integration and testing and conclude that the services are compatible with each other and with the UI. That is why we use the “Compatibility Matrix”, a simple application to manage and control compatible versions in the system, ensuring that no incompatible version of a component can be deployed alongside the rest of them. By using this tool, even the deployment to a staging or production environment is done automatically for each microservice, using an automatic or manual trigger (the latter usually preferred in production).
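To make the idea concrete, here is a minimal sketch of the kind of check such a tool performs; the data structure, versions, and names are hypothetical, not the actual Compatibility Matrix application:

```python
# Hypothetical sketch of a compatibility-matrix check (not the real tool).
# The matrix maps a (service, version) pair to the versions of the other
# components it has been verified to work with.

COMPATIBILITY_MATRIX = {
    ("orders", "2.1.0"): {"payments": {"1.4.0", "1.5.0"}, "ui": {"3.0.0"}},
    ("payments", "1.5.0"): {"orders": {"2.1.0"}, "ui": {"3.0.0"}},
}

def deployment_allowed(service, version, deployed):
    """Allow deployment only if every already-deployed component version
    appears in the matrix entry for the candidate (service, version)."""
    entry = COMPATIBILITY_MATRIX.get((service, version))
    if entry is None:
        return False  # unknown combination: block by default
    return all(v in entry.get(other, set()) for other, v in deployed.items())
```

An automatic trigger would call such a check before deploying each microservice version to the stage.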

Actors

  1. Source Code Management (preferably Git)
  2. The build system you use: Maven, NPM, SBT, Gradle, etc.
  3. The CI/CD engine. Jenkins in our case.
  4. Artifact repository (Artifactory, Nexus)
  5. Linux Image platform (Docker/LXC/OS)
  6. Container orchestration, High Availability and all the services we may need for our microservices (DC/OS, Kubernetes)
  7. Automated test tools and platforms (Protractor, Arachni, JMeter, etc)

 

The Pipelines

Each microservice has its own life and release cycle.
Different teams work on the whole project, delivering components at different speeds and using the Compatibility Matrix to watch for and detect possible compatibility issues.
Every component/microservice effectively uses the Cloud IT Platform, which provides common services for logging, metrics, security, etc.
The situation becomes more complicated when each microservice has its own deploy and runtime configuration per environment; for example, there is no need to install multiple instances of a microservice in a test environment, but it is required in production. As a result, we need to deal with at least a 3-n dimensional array of:

Containerized microservice images (Docker/LXC/OS) containing specific versions
Environment-specific configurations

The main questions here are: “Are the microservices compatible with each other? Are we breaking the UI compatibility when releasing a component?”

The answer, again, is “we use the Compatibility Matrix”. We’ll talk about it in a future post.

The Stages

CIT -> SNAPSHOTS
QA STAGE -> RELEASE
EXPLORATORY TEST STAGE -> PROMOTION OF RELEASE
LIVE -> PROMOTION OF RELEASE

We use Infrastructure as Code (IaC). So, we can build the stages as soon as we need them and shut them down when they are not in use, dramatically minimizing the cost of Cloud IT.

Also, some additional constraints and requirements can exist in the project. Here are some we need to take into account:

Immutable server paradigm, based on the repeatability that IaC provides.

Untested microservices must be identified in the repository/registry with no tag at all, or only with the tag RC.

Fully tested microservices are identified in the repository/registry with the tag RF. Only RFs are eligible to be deployed to Production.

So, each build that passes the testing procedures is potentially shippable. We control this flow with the RC/RF tagging.
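A tiny sketch of this tag-based gate (the stage names and the helper function are illustrative, not part of any registry API):

```python
# Illustrative sketch of the RC/RF deployment gate described above.

def deployable_stages(tag):
    """Map an image/artifact tag to the stages it may be deployed to."""
    if tag == "RF":                      # fully tested: eligible for Live
        return {"QA", "EXPLORATORY", "LIVE"}
    if tag == "RC":                      # release candidate: test stages only
        return {"QA", "EXPLORATORY"}
    return set()                         # untagged: stays in CI
```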

The deploy/rollback procedure for the entire solution shall be as simple as possible and avoid the complexity of dealing with different microservices that all have different versions, managed through the Compatibility Matrix.
Each microservice must continue to work when the rest of the world shuts down, so the consequence is clear: it must be built and tested separately first. The pipeline for one microservice would be similar to the one depicted below.

The build pipeline for Release versions looks something like this…

Steps:

  1. Merging code changes into the master branch in SCM triggers the build procedure.
  2. Unit and component tests are performed during the build procedure. Static Code Analysis is also performed to pass the source code quality gate.
  3. Publish into the artifact repository and container registry in case the integration testing is passed.
  4. Deploy to the QA Stage. Create the stage if it does not exist yet.
  5. Post-Deployment Test Phase
    During this phase, a set of microservices is integrated with the latest stable versions of other microservices. It involves functional (end-to-end), stress, and security automated tests.
  6. Scale. Well, this is not usually a build pipeline task, but the Cloud IT Platform gives us a lot of flexibility to produce smarter pipelines. If the stress tests do not meet the expected results, the pipeline scales the stage cluster (in our case using IaC against the Cloud Provider) or the number of containers running the microservice, then runs the stress tests again. This information is invaluable when deciding the scale of each microservice in Production.
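The scale-and-retry logic of step 6 can be sketched as follows; `run_stress_tests` and `scale_out` are placeholders for the real test runner and IaC/orchestrator calls, and the instance cap is an arbitrary safety limit of ours:

```python
# Sketch of step 6: scale the stage until the stress tests pass.
# run_stress_tests and scale_out are placeholders for the real
# test runner and IaC/orchestrator calls; the cap is arbitrary.

MAX_INSTANCES = 8

def find_required_scale(run_stress_tests, scale_out, instances=1):
    """Return the instance count at which the stress tests meet the
    expected results, or None if the safety cap is reached."""
    while instances <= MAX_INSTANCES:
        if run_stress_tests(instances):
            return instances  # a valuable sizing hint for Production
        instances = scale_out(instances)
    return None
```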

If the automated tests pass, the promotion is triggered. Here is how the promotion happens…

Steps:

  1. Get the version from the container registry.
  2. Deploy to the stage. Create the stage if it does not exist yet.
  3. Post-Deployment Test Phase
    During this phase, a set of microservices is integrated with the latest stable versions of other microservices. It involves functional (end-to-end), stress, and security automated tests.
  4. Scale. Well, this is not usually a build pipeline task, but the Cloud IT Platform gives us a lot of flexibility to produce smarter pipelines. If the stress tests do not meet the expected results, the pipeline scales the stage cluster (in our case using IaC against the Cloud Provider) or the number of containers running the microservice, then runs the stress tests again. This information is invaluable when deciding the scale of each microservice in Production.

 

Hints and Gotchas

  • It is not a good idea to deploy services using the “latest” anti-pattern; specify the exact version of the microservice instead, because not all services change and require redeployment.
  • It is an architectural decision, but direct microservice-instance-to-microservice-instance communication is not recommended. Avoid component-based discovery services and similar features from Spring Cloud. Be serious and professional and use a message broker with the pub/sub pattern.
  • It is almost impossible to test all possible configurations of microservice versions for each environment, and is actually not required in most cases.
  • It’s better to recreate test environments each time the compatibility changes.
  • Keep everything as part of the source code: Jenkins pipelines, IaC, databases, etc.
  • Invest in post-deployment automated tests (functional/e2e, stress, security). They are repeatable and very profitable! Besides, you can reuse them across different projects, minimizing the overall cost of their development.

Configuration

It is a priority to avoid the Spring Cloud approach, which is highly toxic and strictly attached to the framework. The use of externalized runtime configuration like the Spring Cloud Config Server is forbidden. It is much better to provide configuration values with environment variables/deployment descriptors.
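As a minimal sketch of that approach, configuration comes in through environment variables set by the deployment descriptor (the variable names here are examples, not a prescribed convention):

```python
# Sketch of reading runtime configuration from environment variables;
# the variable names are examples only.
import os

def load_config():
    """Read runtime configuration injected by the deployment descriptor,
    falling back to defaults suitable for local development."""
    return {
        "broker_url": os.environ.get("BROKER_URL", "amqp://localhost:5672"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "instances": int(os.environ.get("INSTANCES", "1")),
    }
```

This keeps the microservice free of any framework-specific config client: the same image runs in every stage, and only the deployment descriptor changes.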

Conclusion

There is a list of pending topics (the Compatibility Matrix, the SCM workflow features, the Cloud IT Platform) that we’ll try to describe in future posts. But in conclusion, the pipelines mark a well-defined road: test the components as soon as possible, keep the modularity (per microservice), preserve inter-compatibility, and dynamically manage the IT platform.

We are able to deliver well-tested components to the exploratory testing stage with high confidence in the quality of the delivered work, noticeably reducing the cost of manual testing and live issues.

 


Author: Jesus de Diego

Software Architect and Team Lead
