Delivery Platform and Automatic Creation of API Endpoints

This post details how the Delivery Platform provides the required cloud infrastructure to enable the key parts of the architecture; specifically, it focuses on the creation of API Endpoints.

Delivery Platform & API Endpoints

We will follow this structure:

  • Background – API Endpoints
  • Challenges
  • Objectives
  • Description
  • Conclusions
  • References

Background – API Endpoints

We have already explained in previous posts why continuous delivery (CD), automation and repeatability are critical and mandatory in software development. This becomes even more important when building a complex architecture with multiple components that must act as one.

The FTS Delivery Platform is the core of the automation and infrastructure maintenance for our software architecture, so it is where we will focus to create API Endpoints for the Computing Platform.

Challenges

The challenges we face for API Endpoints creation in the Delivery Platform are related to:

Technology

We have to combine multiple technologies to make this work. API Endpoints are microservices deployed as Docker containers in a cluster. We decided to go with Kubernetes clusters in Azure AKS as the platform to deploy the endpoints. There are more technologies involved, such as Eventhubs, Servicebus, Cosmosdb and many others, as can be seen in the post: API Endpoint Prototypes

As described in previous posts, for all this work we will be doing Infrastructure as Code (IaC) with a combination of Terraform (Azure Provider), Azure CLI, Helm and shell scripts where required. This will run inside Jenkins slaves (properly set up for the required technology).

Orchestration

Building and deploying an Endpoint requires proper orchestration to ensure all the required infrastructure is there when the endpoint is deployed in the Kubernetes cluster (that includes all required resources such as the Azure Eventhubs, Azure Servicebus, Azure Cosmosdb, and the AKS cluster itself).

There are some orchestration challenges to be addressed, such as keeping track of resources that are already created or still being deployed, and finding the ideal sequence for creating the infrastructure elements so that the needed information is available at the proper time.

Quality Control

One of the most important challenges is quality: we need to ensure nothing is deployed or promoted unless it passes the defined Quality Gates at the static, functional and non-functional levels. This will be controlled in each of the stages of an Endpoint, as illustrated by the sketch below.
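
As an illustration only, a static-level gate in a Jenkins pipeline could be sketched like this (the SonarQube server name, the analysis command and the timeout are assumptions, not our actual setup):

stage('QUALITY GATE - STATIC') {
    steps {
        script {
            // Run the static analysis against the configured SonarQube server
            withSonarQubeEnv('sonarqube') {
                sh 'sbt sonarScan'   // hypothetical analysis command
            }
            // Block the promotion until SonarQube reports the gate status
            timeout(time: 10, unit: 'MINUTES') {
                def qualityGate = waitForQualityGate()
                if (qualityGate.status != 'OK') {
                    error "Quality Gate failed with status: ${qualityGate.status}"
                }
            }
        }
    }
}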

Templates and Naming Conventions

Finally, the last challenge is to make all of this reusable, so that just by changing some parameters it operates across all the different architecture stages.

This is where naming conventions become critical, because the whole system needs to inherit its resource naming from patterns defined by the Delivery Platform, avoiding dangling resources or excessive manual intervention from the engineers.

Objectives

  1. Create a set of Jenkins pipelines to deploy all the required infrastructure for an Endpoint to be built, deployed and tested.
  2. Create a set of Jenkins templates to reuse all the infrastructure creation features for other elements in the future.
  3. Overcome the technical challenge of combining Terraform, Helm and Azure CLI over Jenkins and Git to successfully create the Endpoints' required infrastructure.
  4. Successfully configure the API Endpoints, modifying environment variables to adapt to the corresponding stage (CI, BETA, CANDIDATE, PRODUCTION).
  5. Simplify API Endpoint developers' work by minimizing the number of variables to configure across the infrastructure. This way we decrease the possibility of failures caused by human intervention.

Description

Let’s start at the beginning; there are two main topics here:

  1. Setup the required infrastructure for endpoints to work
  2. Execute the Continuous Delivery flow for the endpoint

Setting Up The Endpoint’s Required Infrastructure

We need to go to the Endpoints Prototype description to see what we need to create to satisfy the requirements. Here comes the first difficulty: not all endpoints are equal. We have two types:

  • Fast Lane Endpoints
  • Slow Lane Endpoints

Fast Lane Endpoint

This endpoint type requires the following infrastructure elements:

  • Kubernetes Cluster
  • Cosmosdb
  • Eventhub Namespace
  • 2 Eventhubs
  • Azure Datalake

Slow Lane Endpoint

On the other hand, we have these requirements:

  • Kubernetes Cluster
  • Cosmosdb
  • Servicebus Namespace
  • 2 Servicebus Queues
  • Azure Datalake

Generalization

After reviewing the requirements of both endpoint types, we can see that we have the following common elements:

  • Kubernetes Cluster
  • Cosmosdb
  • Azure Datalake

And the following differences:

  • Servicebus Namespace
  • 2 Servicebus Queues
  • Eventhub Namespace
  • 2 Eventhubs

So this means that we will create the common elements in the stage creation pipeline and the differing ones in the endpoint pipeline. Additionally, both the Servicebus and Eventhub Namespaces act as “containers” for the Servicebus Queues/Topics and the Eventhubs, so we are going to define the naming convention in a way that guarantees there will be only one Eventhub Namespace per stage. The same applies to the Servicebus Namespace.
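
For illustration, the names can be derived from the business tag and the stage only, so every endpoint pipeline in a given stage resolves to the same namespace (a hedged sketch; the exact suffixes and the endpointName parameter are assumptions):

// Hedged sketch: deriving resource names from the naming convention.
// Namespace names depend only on the business tag and the stage, which
// guarantees a single Eventhub/Servicebus Namespace per stage.
def businessTag = 'BUSINESS'
def stageName   = params.stageName                 // e.g. BETA

def resourceGroupName       = businessTag + '_' + stageName + '_STAGE'
def eventHubNamespaceName   = (businessTag + '-' + stageName + '-ehns').toLowerCase()
def serviceBusNamespaceName = (businessTag + '-' + stageName + '-sbns').toLowerCase()

// Endpoint-level resources also include the endpoint name, so they stay unique
def incomingEventHubName = (businessTag + '-' + stageName + '-' + params.endpointName + '-in').toLowerCase()
def outgoingEventHubName = (businessTag + '-' + stageName + '-' + params.endpointName + '-out').toLowerCase()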

The stage creation pipeline is as follows:

pipeline {

    agent any

    parameters {
        choice(name: 'stageName', description: 'The stage name you want to launch', choices: 'BETA\nCANDIDATE\nPRD')
    }

    stages {
        stage ("STAGE CREATION") {
            steps {
                script {
                    build job: 'AKS_CREATE',
                        parameters: [
                            string(name: 'stageName', value: "${params.stageName}"),
                            string(name: 'azureRegion', value: 'westeurope'),
                        ]
                    build job: 'COSMOSDB_CREATE',
                        parameters: [
                            string(name: 'stageName', value: "${params.stageName}"),
                            string(name: 'azureRegion', value: 'westeurope'),
                        ]
                }
            }
        }
    }
}

As we can see, the main pipeline will trigger the creation of the AKS Cluster and the Cosmosdb. Now please observe the AKS creation pipeline:

properties([
  parameters([
    choice(name: 'stageName', description: 'The stage type you want to launch', choices: 'BETA\nCANDIDATE\nPRD'),
    string(name: 'azureRegion', defaultValue: 'westeurope', description: 'The Azure region you want to deploy to')
  ])
])

def resourceGroupName = 'BUSINESS_'+params.stageName+'_STAGE'
def azureRegion = params.azureRegion
def stageFullName = 'BUSINESS_'+params.stageName
def businessTag = 'BUSINESS'
def aksClusterName = ('BUSINESS-'+ params.stageName + '-AKS').toLowerCase()
def aksNodeCount = 1
def aksVmSize = 'Standard_D2_v3'

tfResourceGroup(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'resource-group',
    providerAction: 'create',
    stageName: params.stageName,
    businessTag: businessTag,
    resourceGroupName: resourceGroupName,
    resourceGroupLocation: azureRegion
)

tfAzureAKS(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'aks',
    providerAction: 'create',
    resourceGroup: resourceGroupName,
    azureRegion: azureRegion,
    businessTag: businessTag,
    stageName: params.stageName,
    aksClusterName: aksClusterName,
    aksNodeCount: aksNodeCount,
    aksVmSize: aksVmSize,
    aksDnsPrefix: aksClusterName
)

Here is where one of the most important parts of the repeatability of this work is done. You will see in this and the rest of the pipelines that there is a naming convention in place that builds the Azure Resource Group name from the Business label prefix, the stage name and the STAGE keyword at the end. This will be a pattern all across the infrastructure.

This pipeline makes use of our Jenkins pipeline templates to create the resource group using Terraform. We set up the Terraform backend to ensure the infrastructure is built in the proper order, and locks are used to avoid parallel modification of cloud resources: Terraform Backends

def call(Map templateParams) {

    def jenkinsAgentSelector = new com.fexco.jenkins.JenkinsAgentSelector(templateParams.technology, templateParams.technologyVersion)
    def terraformProvider = new com.fexco.infrastructure.terraform.TerraformProvider(this)
    def componentBuilder = new com.fexco.build.ComponentBuilder(templateParams.component, templateParams.gitGroup, templateParams.component, templateParams.technology, this)
    def infrastructureElement = new com.fts.azure.rg.ResourceGroup(
        this, templateParams.stageName, templateParams.businessTag,
        templateParams.resourceGroupName, templateParams.resourceGroupLocation)

    podTemplate(cloud: jenkinsAgentSelector.podTemplateCloud, name: jenkinsAgentSelector.podTemplateName, label: jenkinsAgentSelector.podTemplateLabel, nodeUsageMode: 'EXCLUSIVE', idleMinutes: 10, containers: [
        containerTemplate(name: jenkinsAgentSelector.podTemplateContainerName, image: jenkinsAgentSelector.podTemplateContainerImage, workingDirectory: '/home/jenkins', alwaysPullImage: true, ttyEnabled: true, command: 'cat')
    ]) {

        node(jenkinsAgentSelector.podTemplateLabel) {
            stage('RESOURCE GROUP - TERRAFORM PREPARE') {
                container(name: jenkinsAgentSelector.podTemplateContainerName, shell: '/bin/bash') {
                    deleteDir()
                    withCredentials([azureServicePrincipal(templateParams.cloudProviderCredentialsId)]) {
                        componentBuilder.checkoutByBranch(templateParams.gitFeatureBranch)
                        dir(templateParams.terraformDataFolder) {
                            infrastructureElement.prepareTerraform()
                            for (int i = 0; i < infrastructureElement.terraformVarsKeyList.size(); i++) {
                                terraformProvider.replaceTerraformVariable(infrastructureElement.terraformVarsKeyList[i], infrastructureElement.terraformVarsValueList[i])
                            }
                            terraformProvider.initTerraform("$AZURE_CLIENT_ID", "$AZURE_CLIENT_SECRET", "$AZURE_TENANT_ID", "$AZURE_SUBSCRIPTION_ID")
                        }
                    }
                }
            }
            stage('RESOURCE GROUP - TERRAFORM EXECUTE') {
                container(name: jenkinsAgentSelector.podTemplateContainerName, shell: '/bin/bash') {
                    dir(templateParams.terraformDataFolder) {
                        if (templateParams.providerAction == 'create') {
                            terraformProvider.planTerraform()
                            terraformProvider.applyTerraform()
                        } else if (templateParams.providerAction == 'remove') {
                            terraformProvider.destroyTerraform()
                        } else {
                            error('Wrong provider action. Try with create or remove')
                        }
                    }
                }
            }
        }
    }
}

As a simple explanation, we can see that creating the resource group requires checking out the Terraform infrastructure definitions, updating the proper values in the templates, and then executing the Terraform plan and apply.

package com.fexco.infrastructure.terraform

class TerraformProvider implements Serializable {

    def steps

    def TerraformProvider(steps){
        this.steps = steps
    }

    // Injects the Azure Service Principal credentials into the provider
    // definition and initializes the Terraform backend
    def initTerraform(azureClientId, azureClientSecret, azureTenantId, azureSubscriptionId) {

        steps.sh "sed -i -- 's/XXX_AZURE_CLIENT_ID_XXX/$azureClientId/g' azure-provider.tf"
        steps.sh "sed -i -- 's/XXX_AZURE_CLIENT_SECRET_XXX/$azureClientSecret/g' azure-provider.tf"
        steps.sh "sed -i -- 's/XXX_AZURE_TENANT_ID_XXX/$azureTenantId/g' azure-provider.tf"
        steps.sh "sed -i -- 's/XXX_AZURE_SUBSCRIPTION_ID_XXX/$azureSubscriptionId/g' azure-provider.tf"

        steps.sh "terraform init -input=false"

    }

    // Replaces a XXX_..._XXX placeholder with its final value in every file
    // of the current folder (no @NonCPS here: pipeline steps such as sh
    // cannot be called from @NonCPS methods)
    def replaceTerraformVariable(key, value) {

        def sedValues = key + '/' + value
        steps.sh "sed -i -- 's/$sedValues/g' *"

    }

    def planTerraform() {

        steps.sh "terraform plan -out=tfplan -input=false"

    }

    def applyTerraform() {

        steps.sh "terraform apply -lock=false -input=false tfplan"

    }

    def destroyTerraform() {

        steps.sh "terraform destroy -auto-approve"

    }

    @NonCPS
    def mapToList(depmap) {
        def dlist = []
        for (def entry2 in depmap) {
            dlist.add(new java.util.AbstractMap.SimpleImmutableEntry(entry2.key, entry2.value))
        }
        dlist
    }

}

Our IaC repository for Terraform elements includes multiple files, such as the provider definition that holds the authentication information to work with Azure:


provider "azurerm" {
  version           = "1.4"
  client_id         = "XXX_AZURE_CLIENT_ID_XXX"
  client_secret     = "XXX_AZURE_CLIENT_SECRET_XXX"
  tenant_id         = "XXX_AZURE_TENANT_ID_XXX"
  subscription_id   = "XXX_AZURE_SUBSCRIPTION_ID_XXX"
}

Resource Group

The Terraform backend configuration, together with the naming convention, allows the system to repeat operations over the same infrastructure element. Each time we want to operate on this resource group, we will work with the Azure blob storage container entry with key: BUSINESS.RESOURCE_GROUP_NAME.STAGE. This pattern applies to every element created by the Delivery Platform.


terraform {
  backend "azurerm" {
    storage_account_name = "tfbackend"
    container_name       = "tfbackend"
    key                  = "XXX_BUSINESS_XXX.XXX_BACKEND_COMPONENT_XXX.XXX_STAGE_XXX"
    access_key           = "XXX_ACCESS_KEY_XXX"
  }
}
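
In the pipeline, these placeholders can be filled from the same variables used for naming, reusing the replaceTerraformVariable step shown above (a sketch under the conventions above; the exact component value is an assumption):

// Hedged sketch: injecting the backend key parts so the rendered key
// follows the BUSINESS.RESOURCE_GROUP_NAME.STAGE convention,
// e.g. "BUSINESS.BUSINESS_BETA_STAGE.BETA" for the BETA stage.
terraformProvider.replaceTerraformVariable('XXX_BUSINESS_XXX', businessTag)
terraformProvider.replaceTerraformVariable('XXX_BACKEND_COMPONENT_XXX', resourceGroupName)
terraformProvider.replaceTerraformVariable('XXX_STAGE_XXX', params.stageName)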

Here is where the element configuration information is defined. In this particular case, the Resource Group name points to a variable (and the same happens with the Azure location and the tagging for cost management purposes).


resource "azurerm_resource_group" "xxx-resource-group" {
  name     = "${var.rg_name}"
  location = "${var.rg_location}"
  
  tags {
    business = "${var.business}"
    stage    = "${var.stage}"
  }
}

Additionally, two files are needed: the file that includes the variable type definitions and the file that actually maps the final values into the template:


variable "rg_name" {
  type = "string"
}

variable "rg_location" {
  type = "string"
}

variable "business" {
  type = "string"
}

variable "stage" {
  type = "string"
}



The XXX_MYVAR_XXX placeholders will be replaced during Jenkins pipeline execution with the values provided in the pipeline trigger.


## Resource Group 
rg_name="XXX_RESOURCE_GROUP_NAME_XXX"
rg_location="XXX_RESOURCE_GROUP_LOCATION_XXX"

## Tags
business="XXX_BUSINESS_TAG_XXX"
stage="XXX_STAGE_XXX"

Azure AKS

After executing the vars/tfResourceGroup.groovy pipeline template, we will have a successfully created resource group waiting for the elements to be deployed into it. Following the pipeline, we can see that the tfAzureAKS creation is the next step. The pipeline template will be the same; the only differences are the values used to fill the template and the retrieval of the proper Terraform template for the AKS cluster.

resource "azurerm_kubernetes_cluster" "xxx-kubernetes-cluster" {

  name     = "${var.kube_cluster_name}"
  location = "${var.cluster_location}"

  resource_group_name    = "${var.rg_name}"
  kubernetes_version     = "1.10.3"
  dns_prefix = "${var.custom_dns_prefix}"

  linux_profile {
    admin_username = "${var.cluster_management_user}"

    ssh_key {
      key_data = "${var.ssh_key_data}"
    }
  }

  agent_pool_profile {
    name       = "default"
    count      = "${var.agent_nodes}"
    vm_size    = "${var.agent_vm_size}"
    os_type    = "Linux"
  }

  service_principal {
    client_id     = "${var.service_principal_id}"
    client_secret = "${var.service_principal_password}"
  }

  tags {
    business = "${var.business}"
    stage    = "${var.stage}"
  }

}

At the end of the pipeline we will have a properly deployed AKS Cluster inside our fancy Resource Group.

Azure AKS created by the Jenkins pipeline using Terraform

We can see there is a naming convention implemented in the system and proper tagging, so we can keep track of our deployed cloud resources. The next step is to continue with the Cosmosdb.

The pipeline that creates the Cosmosdb does not make any assumptions about the RG. If the resource group already exists, the Terraform backend will tell us and its creation will be skipped; if there is no RG yet, the pipeline will create one. The justification for this is that we are preparing the Cosmosdb pipeline to work with the current platform description, but in the future we may need to deploy the Cosmosdb under different conditions (maybe a different type of architecture), so we want to keep this as decoupled as possible.

There is a new pipeline template executed at the end of the Cosmosdb creation. The k8SecretDeployment template is used to deploy opaque secrets in a Kubernetes cluster. This way, we can create the Cosmosdb and install the generated credentials in the cluster without human intervention, allowing all the pods inside to connect to the Cosmosdb (if they are allowed to).
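
As a rough illustration of what such a template does under the hood, assuming the Azure CLI and kubectl are available on the Jenkins slave (the function and secret names here are ours for the example, not the real template):

// Hedged sketch of the k8SecretDeployment internals: point kubectl at
// the target AKS cluster, then (re)create an opaque secret so the pods
// can reference it. Names and keys are illustrative.
def deployOpaqueSecret(resourceGroup, aksClusterName, secretName, user, password) {
    // Fetch the cluster credentials for kubectl
    sh "az aks get-credentials --resource-group ${resourceGroup} --name ${aksClusterName}"
    // Drop any previous version of the secret, then install the new one
    sh "kubectl delete secret ${secretName} --ignore-not-found"
    sh "kubectl create secret generic ${secretName} " +
       "--from-literal=username=${user} --from-literal=password=${password}"
}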


tfResourceGroup(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'resource-group',
    providerAction: 'create',
    stageName: stageName,
    businessTag: businessTag,
    resourceGroupName: resourceGroupName,
    resourceGroupLocation: azureRegion
)

tfCosmosDB(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'cosmosdb',
    providerAction: 'create',
    stageName: stageName,
    businessTag: businessTag,
    resourceGroup: resourceGroupName,
    cosmosdbPrimaryRegion: azureRegion,
    cosmosdbName: cosmosdbName,
    cosmosdbOfferType: 'Standard',
    cosmosdbKind: 'MongoDB',
    cosmosdbConsistencyLevel: 'BoundedStaleness',
    cosmosdbConsistencyMaxInterval: 10,
    cosmosdbConsistencyMaxStaleness: 200,
    cosmosdbEnableFailover: 'true',
    cosmosdbFailoverLocation: 'northeurope'
)

k8SecretDeployment(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-deployments',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    resourceGroup: resourceGroupName,
    aksClusterName: aksClusterName,
    secretType: 'opaque',
    secretPath: 'xxxxxx',
    secretKeyUserEquivalent: cosmosdbUserCredentialsId,
    secretKeyPasswordEquivalent: cosmosdbPasswordCredentialsId,
    providerAction: 'deploy-kube'
)

Cosmosdb

Cosmosdb Terraform template is as follows:

resource "azurerm_cosmosdb_account" "xxx-cosmosdb" {

  name     = "${var.cosmosdb_name}"
  location = "${var.cosmosdb_location}"
  resource_group_name = "${var.rg_name}"
  offer_type = "${var.offer_type}"
  kind = "${var.kind}"

  enable_automatic_failover = "${var.enable_failover}"

  consistency_policy {

    consistency_level       = "${var.consistency_level}"
    max_interval_in_seconds = "${var.consistency_max_interval}"
    max_staleness_prefix    = "${var.consistency_max_staleness}"

  }

  // Primary Geolocation
  geo_location {

    location          = "${var.cosmosdb_location}"
    failover_priority = 0

  }

  // Secondary Geolocation
  geo_location {

    location = "${var.failover_location}"
    failover_priority = 1

  }

  tags {
    business = "${var.business}"
    stage    = "${var.stage}"
  }
}

Then, we gather the Cosmosdb credentials and store them in a Key Vault (making use of the Cosmosdb and Azure KeyVault Jenkins pipeline templates):

class CosmosDB implements Serializable {

    def steps

    def stageName
    def businessTag

    def resourceGroup

    def name
    def location

    def offerType
    def kind
    
    def consistencyLevel
    def consistencyMaxInterval
    def consistencyMaxStaleness

    def enableFailover
    def failoverLocation

    def terraformVarsKeyList
    def terraformVarsValueList

    def credentialsUserEquivalentKey
    def credentialsUserEquivalentValue
    def credentialsPasswordEquivalentKey
    def credentialsPasswordEquivalentValue

    def CosmosDB(steps, stageName, resourceGroup, businessTag, name, location, offerType, kind,
                consistencyLevel, consistencyMaxInterval, consistencyMaxStaleness, enableFailover, failoverLocation){

        this.steps = steps
        this.stageName = stageName
        this.resourceGroup = resourceGroup
        this.businessTag = businessTag
        this.name = name
        this.location = location
        this.offerType = offerType
        this.kind = kind
        this.consistencyLevel = consistencyLevel
        this.consistencyMaxInterval = consistencyMaxInterval
        this.consistencyMaxStaleness = consistencyMaxStaleness
        this.enableFailover = enableFailover
        this.failoverLocation = failoverLocation
        
    }

    def prepareTerraform(){

        terraformVarsKeyList = []
        terraformVarsValueList = []

        terraformVarsKeyList.add('XXX_VAR1_XXX')
        terraformVarsValueList.add('sbnamespace.'+name)

        terraformVarsKeyList.add('XXX_VAR2_XXX')
        terraformVarsValueList.add(stageName)

        // More...

    }

    def retrieveCredentials(){
        steps.sh("az cosmosdb list-keys --name $name --resource-group $resourceGroup | jq .primaryMasterKey | tr -d '\"' > output.txt")
        credentialsUserEquivalentKey = name+'-user'
        credentialsUserEquivalentValue = name
        credentialsPasswordEquivalentKey = name+'-pass'
        credentialsPasswordEquivalentValue = steps.readFile 'output.txt'
        steps.sh("rm output.txt")
    }
}
class KeyVault implements Serializable {

    def steps

    def name
    def value

    def KeyVault(steps, name){

        this.steps = steps
        this.name = name

    }

    def setSecret(secretKey, secretValue, override){
        steps.sh("az keyvault secret set --vault-name $name --name $secretKey --value $secretValue")
    }
}
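
Putting both classes together in a pipeline step might look like this (a hedged sketch; the vault name is an assumption):

// Hedged sketch: retrieve the Cosmosdb keys and push them to a Key Vault.
// The vault name is an assumption for illustration purposes.
def cosmosdb = new CosmosDB(this, stageName, resourceGroupName, businessTag,
        cosmosdbName, azureRegion, 'Standard', 'MongoDB',
        'BoundedStaleness', 10, 200, 'true', 'northeurope')
cosmosdb.retrieveCredentials()

def keyVault = new KeyVault(this, 'business-delivery-kv')
keyVault.setSecret(cosmosdb.credentialsUserEquivalentKey, cosmosdb.credentialsUserEquivalentValue, true)
keyVault.setSecret(cosmosdb.credentialsPasswordEquivalentKey, cosmosdb.credentialsPasswordEquivalentValue, true)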

So, after all this, we will have the Cosmosdb created, with its credentials installed in the cluster and ready to use.

Azure RG with AKS and Cosmosdb

And the credentials in the cluster.

Cosmosdb Credentials in AKS

Eventhubs and Servicebus Queues

After these sequential steps to get the basis of the environment ready, we will now focus on the Endpoint specifics. Depending on the type (SLOW/FAST LANE), we will create different resources.

This ends up in two pipelines: one for Slow Lane endpoints and one for Fast Lane ones.

Fast Lane Pipeline

As we can see in the pipeline, the RG creation is there (the same as with the Cosmosdb, and for the same justification). We have the Eventhub Namespace (as required by the Fast Lane endpoints), and there will be one and only one per stage. Terraform (thanks to the backend and the naming convention) will create it if it does not exist, or leave the existing one untouched, to minimize the pipeline execution time and the operations performed over working resources.

The namespace is a “container” for the Eventhubs, so it needs to be created before them and this step cannot be parallelized. After that, as per the architecture definition, there will be two Eventhubs per endpoint (one for incoming events and one for outgoing ones). Their creation can run in parallel, as stated in the pipeline.

Finally, a Datalake is created and, just as with the Cosmosdb credentials, we upload the Eventhub Namespace credentials to the cluster so they are ready for any Fast Lane endpoint to use.

tfResourceGroup(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'resource-group',
    providerAction: 'create',
    stageName: params.stageName,
    businessTag: businessTag,
    resourceGroupName: resourceGroupName,
    resourceGroupLocation: azureRegion
)

tfEventHubNamespace(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'event-hub-namespace',
    providerAction: 'create',
    stageName: params.stageName,
    businessTag: businessTag,
    resourceGroup: resourceGroupName,
    azureRegion: azureRegion,
    eventHubNamespaceName: eventHubNamespaceName,
    eventHubNamespaceSku: 'Standard',
    eventHubNamespaceCapacity: 1
)

def eventHubs = [:]

eventHubs["INCOMING EVENTHUB"] = {
    tfEventHub(
        technology: 'iac',
        technologyVersion: 'any',
        component: 'cp-infrastructure-definitions',
        gitGroup: 'delivery-platform',
        gitFeatureBranch: 'master',
        cloudProvider: 'azure',
        cloudProviderCredentialsId: 'jenkins-generic-azure',
        terraformDataFolder: 'event-hub',
        providerAction: 'create',
        stageName: params.stageName,
        businessTag: businessTag,
        resourceGroup: resourceGroupName,
        eventHubNamespaceName: eventHubNamespaceName,
        eventHubName: incomingEventHubName,
        eventHubPartitionCount: 2,
        eventHubMessageRetention: 1
    )
}


eventHubs["OUTGOING EVENTHUB"] = {
    tfEventHub(
        technology: 'iac',
        technologyVersion: 'any',
        component: 'cp-infrastructure-definitions',
        gitGroup: 'delivery-platform',
        gitFeatureBranch: 'master',
        cloudProvider: 'azure',
        cloudProviderCredentialsId: 'jenkins-generic-azure',
        terraformDataFolder: 'event-hub',
        providerAction: 'create',
        stageName: params.stageName,
        businessTag: businessTag,
        resourceGroup: resourceGroupName,
        eventHubNamespaceName: eventHubNamespaceName,
        eventHubName: outgoingEventHubName,
        eventHubPartitionCount: 2,
        eventHubMessageRetention: 1
    )
}

parallel eventHubs

tfDataLake(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'data-lake',
    providerAction: 'create',
    stageName: params.stageName,
    businessTag: businessTag,
    resourceGroup: resourceGroupName,
    datalakeName: datalakeName,
    datalakeLocation: azureRegion

)

k8SecretDeployment(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-deployments',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    resourceGroup: resourceGroupName,
    aksClusterName: aksClusterName,
    secretType: 'opaque',
    secretPath: 'xxxx',
    secretKeyUserEquivalent: eventHubNamespaceUserCredentialsId,
    secretKeyPasswordEquivalent: eventHubNamespacePasswordCredentialsId,
    providerAction: 'deploy-kube'
)

To avoid repeating similar scripts (Terraform, Jenkins templates…), we show only the results in Azure:

Azure EventHubs in the RG

Eventhubs

and in the AKS cluster:

Eventhubs Namespace Credentials in AKS

Slow Lane Pipeline

Compared to the Fast Lane pipeline, here we don’t have Eventhubs; instead we have a Servicebus Namespace and Queues:

tfResourceGroup(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'resource-group',
    providerAction: 'create',
    stageName: stageName,
    businessTag: businessTag,
    resourceGroupName: resourceGroupName,
    resourceGroupLocation: azureRegion
)

def namespaces = [:]

namespaces["SERVICEBUS NAMESPACE"] = {

    tfServiceBusNamespace(
        technology: 'iac',
        technologyVersion: 'any',
        component: 'cp-infrastructure-definitions',
        gitGroup: 'delivery-platform',
        gitFeatureBranch: 'master',
        cloudProvider: 'azure',
        cloudProviderCredentialsId: 'jenkins-generic-azure',
        terraformDataFolder: 'servicebus-namespace',
        providerAction: 'create',
        stageName: stageName,
        businessTag: businessTag,
        resourceGroup: resourceGroupName,
        azureRegion: azureRegion,
        serviceBusNamespaceName: serviceBusNamespaceName,
        serviceBusNamespaceSku: 'Standard'
    )

}

parallel namespaces


def hubsAndQueues = [:]



hubsAndQueues["SEND SERVICEBUS QUEUE"] = {
    tfServiceBusQueue(
        technology: 'iac',
        technologyVersion: 'any',
        component: 'cp-infrastructure-definitions',
        gitGroup: 'delivery-platform',
        gitFeatureBranch: 'master',
        cloudProvider: 'azure',
        cloudProviderCredentialsId: 'jenkins-generic-azure',
        terraformDataFolder: 'servicebus-queue',
        providerAction: 'create',
        stageName: stageName,
        businessTag: businessTag,
        resourceGroup: resourceGroupName,
        serviceBusQueueNamespaceName: serviceBusNamespaceName,
        serviceBusQueueName: sendServiceBusQueueName,
        serviceBusQueueEnablePartitioning: true        
    )
}

hubsAndQueues["RECEIVE SERVICEBUS QUEUE"] = {
    tfServiceBusQueue(
        technology: 'iac',
        technologyVersion: 'any',
        component: 'cp-infrastructure-definitions',
        gitGroup: 'delivery-platform',
        gitFeatureBranch: 'master',
        cloudProvider: 'azure',
        cloudProviderCredentialsId: 'jenkins-generic-azure',
        terraformDataFolder: 'servicebus-queue',
        providerAction: 'create',
        stageName: stageName,
        businessTag: businessTag,
        resourceGroup: resourceGroupName,
        serviceBusQueueNamespaceName: serviceBusNamespaceName,
        serviceBusQueueName: receiveServiceBusQueueName,
        serviceBusQueueEnablePartitioning: true        
    )
}

parallel hubsAndQueues

tfDataLake(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-definitions',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    terraformDataFolder: 'data-lake',
    providerAction: 'create',
    stageName: stageName,
    businessTag: businessTag,
    resourceGroup: resourceGroupName,
    datalakeName: datalakeName,
    datalakeLocation: azureRegion

)

k8SecretDeployment(
    technology: 'iac',
    technologyVersion: 'any',
    component: 'cp-infrastructure-deployments',
    gitGroup: 'delivery-platform',
    gitFeatureBranch: 'master',
    cloudProvider: 'azure',
    cloudProviderCredentialsId: 'jenkins-generic-azure',
    resourceGroup: resourceGroupName,
    aksClusterName: aksClusterName,
    secretType: 'opaque',
    secretPath: 'xxxx',
    secretKeyUserEquivalent: serviceBusNamespaceUserCredentialsId,
    secretKeyPasswordEquivalent: serviceBusNamespacePasswordCredentialsId,
    providerAction: 'deploy-kube'
)

As we can see, the created Servicebus queues are there.

Servicebus Queues

API Endpoint Prototype Deployment

Finally, the Endpoint Prototype has to be deployed in the AKS Cluster. This requires Helm to take part as the deployment tool, and we will have to create a Helm chart for the API Endpoint Prototype.

The endpoint code is written in Scala; using SBT, we generate a Docker image (when the Quality Gates are OK) and push it to the private Docker repository. The code includes lots of environment variables that have to be injected to override the “templated functionality”.
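
The build stage boils down to something like the following sketch (assuming sbt-native-packager’s Docker support; the registry URL is a placeholder):

// Hedged sketch: build and publish the endpoint image with SBT.
// Assumes sbt-native-packager's Docker plugin; the registry is a placeholder.
stage('BUILD AND PUBLISH ENDPOINT IMAGE') {
    // Compile, run the tests, and produce the Docker image locally
    sh 'sbt clean test docker:publishLocal'
    // Push to the private registry once the Quality Gates have passed
    withCredentials([usernamePassword(credentialsId: 'nexus-registry-credentials',
            usernameVariable: 'REG_USER', passwordVariable: 'REG_PASS')]) {
        sh 'docker login -u $REG_USER -p $REG_PASS my-private-registry.example.com'
        sh 'sbt docker:publish'
    }
}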

So, going into the Helm template detail, the most important file is values.yaml, where we state the actual running values for the endpoint (the ports for the container, the healthcheck URL, the Cosmosdb configuration, the Eventhubs/Servicebus parameters…).


# Default values for mychart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicaCount: 1
image:
  repository: myrepo/myimage
  tag: ###_ENDPOINT_DOCKER_TAG
  pullPolicy: IfNotPresent
service:
  name: api-endpoint-prototype
  type: LoadBalancer
  externalPort: 80
  internalPort: 8080
  healthcheck: ###_ENDPOINT_VERSION/###_ENDPOINT_SECTION/###_ENDPOINT_NAME/###_ENDPOINT_HC_NAME
ingress:
  enabled: false
  # Used to create an Ingress record.
  hosts:
    - chart-example.local
  annotations:
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  tls:
    # Secrets must be manually created in the namespace.
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local
resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #  cpu: 100m
  #  memory: 128Mi
  # requests:
  #  cpu: 100m
  #  memory: 128Mi
deployment:
  akka:
    loglevel: ###_ENDPOINT_LOG_LEVEL                  # Options: OFF, ERROR, WARNING, INFO, DEBUG
    logdeadletters: ###_ENDPOINT_LOG_DEADLETTER
    logdeadlettersduringshutdown: ###_ENDPOINT_LOG_DEADLETTER_DURINGSHUTDOWN
    genericTimeout: ###_ENDPOINT_GENERICTIMEOUT
    circuitbreaking:                                      # Time Values: callTimeout (sec), resetTimeout (sec)      
      maxFailures: ###_ENDPOINT_CB_MAXFAILS
      callTimeout: ###_ENDPOINT_CB_TO
      resetTimeout: ###_ENDPOINT_CB_RESETTO
  endpoint:
    APIID: ###_ENDPOINT_APIID
    scope: ###_ENDPOINT_SCOPE                         # EXT | INT
    priority: ###_ENDPOINT_PRIORITY                   # LOW | MEDIUM | HIGH
    consistencyLevel: ###_ENDPOINT_CONSISTENCYLEVEL   # LOW | MEDIUM | HIGH
    version: ###_ENDPOINT_VERSION
    section: ###_ENDPOINT_SECTION
    name: ###_ENDPOINT_NAME
    healthcheck: ###_ENDPOINT_HC_NAME
    batchModeEnabled: ###_ENDPOINT_BATCH_ENABLED
    batchName: ###_ENDPOINT_BATCH_NAME
  memory:
    internalEnabled: ###_ENDPOINT_MEM_ENABLED
    prefered: ###_ENDPOINT_STORAGE_PREFERED
    storage:
      protocol: ###_ENDPOINT_STORAGE_PROTOCOL
      accountname: ###_ENDPOINT_STORAGE_ACCOUNTNAME
      accountkey: ###_ENDPOINT_STORAGE_ACCOUNTKEY
      containerid: ###_ENDPOINT_STORAGE_CONTAINERID
    mongodb:
      host: ###_ENDPOINT_DB_HOST
      port: ###_ENDPOINT_DB_PORT
      dbname: ###_ENDPOINT_DB_NAME
      collection: ###_ENDPOINT_DB_COLLECTION
      ssl: ###_ENDPOINT_DB_SSL
      credentials:
        provided: true
        secretName: xxx                         # Only used if provided = true
        username: ###_ENDPOINT_DB_USERNAME                # Only used if provided = false
        password: ###_ENDPOINT_DB_PASSWORD                # Only used if provided = false
  communications:
    mode: ###_ENDPOINT_MESSAGEMODE                        # QUEUE | STREAM
  servicebus:
    namespace: ###_ENDPOINT_SB_NAMESPACE
    serviceBusRootUri: ###_ENDPOINT_SB_URISUFIX
    credentials:
        provided: true
        secretName: xxx             # Only used if provided = true
        saskeyname: ###_ENDPOINT_SB_SASKEYNAME            # Only used if provided = false
        saskey: ###_ENDPOINT_SB_SASKEY                    # Only used if provided = false
    queues:
      send: ###_ENDPOINT_SB_QSEND
      receive: ###_ENDPOINT_SB_QRECEIVE
  eventhub:
    poolInstances: ###_ENDPOINT_EH_POOLINSTANCES
    connection:
      eventHubEndpoint: ###_ENDPOINT_EH_ENDPOINT
      eventHubNamein: ###_ENDPOINT_EH_NAME_IN
      eventHubNameout: ###_ENDPOINT_EH_OUT
      eventHubNamenotifications: ###_ENDPOINT_EH_NOTIFICATIONS
      eventHubPartitions: ###_ENDPOINT_EH_PARTITIONS
      credentials:
        provided: true
        secretName: xxxxx               # Only used if provided = true
        accesspolicy: ###_ENDPOINT_EH_ACCESSPOLICY        # Only used if provided = false
        accesskey: ###_ENDPOINT_EH_ACCESSKEY                 # Only used if provided = false
    consumer:
      maxEventCount: ###_ENDPOINT_CONSUMER_MAXEVENTCOUNT
      consumerGroupName: ###_ENDPOINT_CONSUMER_GROUPNAME
      partitionId: ###_ENDPOINT_CONSUMER_PARTITIONID
      eventPosition: ###_ENDPOINT_CONSUMER_EVENTPOSITION
      receiverOptions:
        identifier: ###_ENDPOINT_CONSUMER_RECEIVER_ID
        runtimeMetricEnabled: ###_ENDPOINT_CONSUMER_RECEIVER_RTMETRIC_ENABLED

You can see commented placeholders in the file. This is because we need the placeholders to be overwritten when Jenkins executes the deployment pipeline. The other justification is that if a value is left commented, the Helm chart will ignore it and the Endpoint will use the default value given by the developer in the prototype code.
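
To make this concrete, the substitution can use the same sed-based approach as the Terraform templates; replacing the whole ###_… token also removes the leading # characters, which effectively uncomments the entry (a hedged sketch; the chart path and the parameters are assumptions):

// Hedged sketch: fill the values.yaml placeholders before helm install.
// A placeholder that is never replaced keeps its leading '#' and is
// treated as a YAML comment, so the endpoint default applies.
def setChartValue(placeholder, value) {
    sh "sed -i -- 's|${placeholder}|${value}|g' helm/api-endpoint-prototype/values.yaml"
}

setChartValue('###_ENDPOINT_DOCKER_TAG', params.dockerTag)
setChartValue('###_ENDPOINT_LOG_LEVEL', 'INFO')
setChartValue('###_ENDPOINT_MESSAGEMODE', params.laneType == 'FAST' ? 'STREAM' : 'QUEUE')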

So, this file will be read by Helm when installing the chart in the Kubernetes cluster and, as a result, the chart templates will be rendered with the proper values.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "fullname" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}
        release: {{ .Release.Name }}
    spec:
      imagePullSecrets:
      - name: nexus-registry-credentials
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{default "latest" .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
          {{- if .Values.deployment.akka }}
          {{- if .Values.deployment.akka.loglevel }}
          - name: "ENDPOINT_LOG_LEVEL"
            value: {{ .Values.deployment.akka.loglevel }}
          {{- end }}
          {{- if .Values.deployment.akka.logdeadletters }}
          - name: "ENDPOINT_LOG_DEADLETTER"
            value: {{ .Values.deployment.akka.logdeadletters }}
          {{- end }}
          {{- if .Values.deployment.akka.logdeadlettersduringshutdown }}
          - name: "ENDPOINT_LOG_DEADLETTER_DURINGSHUTDOWN"
            value: {{ .Values.deployment.akka.logdeadlettersduringshutdown }}
          {{- end }}
          {{- if .Values.deployment.akka.genericTimeout }}
          - name: "ENDPOINT_GENERICTIMEOUT"
            value: {{ .Values.deployment.akka.genericTimeout }}
          {{- end }}
          {{- if .Values.deployment.akka.circuitbreaking }}
          {{- if .Values.deployment.akka.circuitbreaking.maxFailures }}
          - name: "ENDPOINT_CB_MAXFAILS"
            value: {{ .Values.deployment.akka.circuitbreaking.maxFailures }}
          {{- end }}
          {{- if .Values.deployment.akka.circuitbreaking.callTimeout }}
          - name: "ENDPOINT_CB_TO"
            value: {{ .Values.deployment.akka.circuitbreaking.callTimeout }}
          {{- end }}
          {{- if .Values.deployment.akka.circuitbreaking.resetTimeout }}
          - name: "ENDPOINT_CB_RESETTO"
            value: {{ .Values.deployment.akka.circuitbreaking.resetTimeout }}
          {{- end }}
          {{- end }} # End of .akka.circuitbreaking
          {{- end }} # End of .akka
          {{- if .Values.deployment.endpoint }}
          {{- if .Values.deployment.endpoint.APIID }}
          - name: "ENDPOINT_APIID"
            value: {{ .Values.deployment.endpoint.APIID }}
          {{- end }}
          {{- if .Values.deployment.endpoint.scope }}
          - name: "ENDPOINT_SCOPE"
            value: {{ .Values.deployment.endpoint.scope }}
          {{- end }}
          {{- if .Values.deployment.endpoint.priority }}
          - name: "ENDPOINT_PRIORITY"
            value: {{ .Values.deployment.endpoint.priority }}
          {{- end }}
          {{- if .Values.deployment.endpoint.consistencyLevel }}
          - name: "ENDPOINT_CONSISTENCYLEVEL"
            value: {{ .Values.deployment.endpoint.consistencyLevel }}
          {{- end }}
          {{- if .Values.deployment.endpoint.version }}
          - name: "ENDPOINT_VERSION"
            value: {{ .Values.deployment.endpoint.version }}
          {{- end }}
          {{- if .Values.deployment.endpoint.section }}
          - name: "ENDPOINT_SECTION"
            value: {{ .Values.deployment.endpoint.section }}
          {{- end }}
          {{- if .Values.deployment.endpoint.name }}            
          - name: "ENDPOINT_NAME"
            value: {{ .Values.deployment.endpoint.name }}
          {{- end }}
          {{- if .Values.deployment.endpoint.healthcheck }}            
          - name: "ENDPOINT_HC_NAME"
            value: {{ .Values.deployment.endpoint.healthcheck }}
          {{- end }}
          {{- if .Values.deployment.endpoint.batchModeEnabled }}            
          - name: "ENDPOINT_BATCH_ENABLED"
            value: {{ .Values.deployment.endpoint.batchModeEnabled }}
          {{- end }}
          {{- if .Values.deployment.endpoint.batchName }}
          - name: "ENDPOINT_BATCH_NAME"
            value: {{ .Values.deployment.endpoint.batchName }}
          {{- end }}
          {{- end }} # End of .endpoint
          {{- if .Values.deployment.memory }}              
          {{- if .Values.deployment.memory.internalEnabled }}
          - name: "ENDPOINT_MEM_ENABLED"
            value: {{ .Values.deployment.memory.internalEnabled }}
          {{- end }}
          {{- if .Values.deployment.memory.prefered }}
          - name: "ENDPOINT_STORAGE_PREFERED"
            value: {{ .Values.deployment.memory.prefered }}
          {{- end }}
          {{- if .Values.deployment.memory.storage }}
          {{- if .Values.deployment.memory.storage.protocol }}
          - name: "ENDPOINT_STORAGE_PROTOCOL"
            value: {{ .Values.deployment.memory.storage.protocol }}
          {{- end }}
          {{- if .Values.deployment.memory.storage.accountname }}
          - name: "ENDPOINT_STORAGE_ACCOUNTNAME"
            value: {{ .Values.deployment.memory.storage.accountname }}
          {{- end }}
          {{- if .Values.deployment.memory.storage.accountkey }}
          - name: "ENDPOINT_STORAGE_ACCOUNTKEY"
            value: {{ .Values.deployment.memory.storage.accountkey }}
          {{- end }}
          {{- if .Values.deployment.memory.storage.containerid }}
          - name: "ENDPOINT_STORAGE_CONTAINERID"
            value: {{ .Values.deployment.memory.storage.containerid }}
          {{- end }}
          {{- end }} # End of .memory.storage
          {{- if .Values.deployment.memory.mongodb }}
          {{- if .Values.deployment.memory.mongodb.host }}      
          - name: "ENDPOINT_DB_HOST"
            value: {{ .Values.deployment.memory.mongodb.host }}
          {{- end }}
          {{- if .Values.deployment.memory.mongodb.port }}
          - name: "ENDPOINT_DB_PORT"
            value: {{ .Values.deployment.memory.mongodb.port }}
          {{- end }}
          {{- if .Values.deployment.memory.mongodb.dbname }}
          - name: "ENDPOINT_DB_NAME"
            value: {{ .Values.deployment.memory.mongodb.dbname }}
          {{- end }}
          {{- if eq .Values.deployment.memory.mongodb.credentials.provided true }}
          - name: "ENDPOINT_DB_USERNAME"
            valueFrom:
              secretKeyRef:
                key:  username
                name: {{ .Values.deployment.memory.mongodb.credentials.secretName }}
          - name: "ENDPOINT_DB_PASSWORD"
            valueFrom:
              secretKeyRef:
                key:  password
                name: {{ .Values.deployment.memory.mongodb.credentials.secretName }}
          {{ else if eq .Values.deployment.memory.mongodb.credentials.provided false }}
          - name: "ENDPOINT_DB_USERNAME"
            value: {{ .Values.deployment.memory.mongodb.credentials.username }}
          - name: "ENDPOINT_DB_PASSWORD"
            value: {{ .Values.deployment.memory.mongodb.credentials.password }}
          {{- end }} # End if .memory.mongodb.credentials.provided
          {{- if .Values.deployment.memory.mongodb.collection }}
          - name: "ENDPOINT_DB_COLLECTION"
            value: {{ .Values.deployment.memory.mongodb.collection }}
          {{- end }}
          {{- if .Values.deployment.memory.mongodb.ssl }}
          - name: "ENDPOINT_DB_SSL"
            value: {{ .Values.deployment.memory.mongodb.ssl }}
          {{- end }}
          {{- end }} # End of .memory.mongodb
          {{- end }} # End of .memory
          {{- if .Values.deployment.communications }}
          {{- if .Values.deployment.communications.mode }}
          - name: "ENDPOINT_MESSAGEMODE"
            value: {{ .Values.deployment.communications.mode }}
          {{- end }}
          {{- end }} # End of .communications
          {{- if .Values.deployment.servicebus }}
          {{- if .Values.deployment.servicebus.namespace }}
          - name: "ENDPOINT_SB_NAMESPACE"
            value: {{ .Values.deployment.servicebus.namespace }}
          {{- end }}
          {{- if eq .Values.deployment.servicebus.credentials.provided true}}
          - name: "ENDPOINT_SB_SASKEYNAME"
            valueFrom:
              secretKeyRef:
                key:  saskeyname
                name: servicebus-namespace-auth
          - name: "ENDPOINT_SB_SASKEY"
            valueFrom:
              secretKeyRef:
                key:  saskey
                name: servicebus-namespace-auth
          {{ else if eq .Values.deployment.servicebus.credentials.provided false }}
          - name: "ENDPOINT_SB_SASKEYNAME"
            value: {{ .Values.deployment.servicebus.credentials.saskeyname }}
          - name: "ENDPOINT_SB_SASKEY"
            value: {{ .Values.deployment.servicebus.credentials.saskey }}
          {{- end }} # End if .servicebus.credentials.provided
          {{- if .Values.deployment.servicebus.serviceBusRootUri }}
          - name: "ENDPOINT_SB_URISUFIX"
            value: {{ .Values.deployment.servicebus.serviceBusRootUri }}
          {{- end }}
          {{- if .Values.deployment.servicebus.queues }}
          {{- if .Values.deployment.servicebus.queues.send }}
          - name: "ENDPOINT_SB_QSEND"
            value: {{ .Values.deployment.servicebus.queues.send }}
          {{- end }}
          {{- if .Values.deployment.servicebus.queues.receive }}
          - name: "ENDPOINT_SB_QRECEIVE"
            value: {{ .Values.deployment.servicebus.queues.receive }}
          {{- end }}
          {{- end }} # End of .servicebus.queues
          {{- end }} # End of .servicebus
          {{- if .Values.deployment.eventhub }}
          {{- if .Values.deployment.eventhub.poolInstances }}
          - name: "ENDPOINT_EH_POOLINSTANCES"
            value: {{ .Values.deployment.eventhub.poolInstances }}
          {{- end }}
          {{- if .Values.deployment.eventhub.connection }}
          {{- if .Values.deployment.eventhub.connection.eventHubEndpoint }}
          - name: "ENDPOINT_EH_ENDPOINT"
            value: {{ .Values.deployment.eventhub.connection.eventHubEndpoint }}
          {{- end }}
          {{- if .Values.deployment.eventhub.connection.eventHubNamein }}
          - name: "ENDPOINT_EH_NAME_IN"
            value: {{ .Values.deployment.eventhub.connection.eventHubNamein }}
          {{- end }}
          {{- if .Values.deployment.eventhub.connection.eventHubNameout }}
          - name: "ENDPOINT_EH_OUT"
            value: {{ .Values.deployment.eventhub.connection.eventHubNameout }}
          {{- end }}
          {{- if .Values.deployment.eventhub.connection.eventHubNamenotifications }}
          - name: "ENDPOINT_EH_NOTIFICATIONS"
            value: {{ .Values.deployment.eventhub.connection.eventHubNamenotifications }}
          {{- end }}
          {{- if .Values.deployment.eventhub.connection.eventHubPartitions }}
          - name: "ENDPOINT_EH_PARTITIONS"
            value: {{ .Values.deployment.eventhub.connection.eventHubPartitions }}
          {{- end }}
          {{- if eq .Values.deployment.eventhub.connection.credentials.provided true}}
          - name: "ENDPOINT_EH_ACCESSPOLICY"
            valueFrom:
              secretKeyRef:
                key:  accesspolicy
                name: eventhub-namespace-auth
          - name: "ENDPOINT_EH_ACCESSKEY"
            valueFrom:
              secretKeyRef:
                key:  accesskey
                name: eventhub-namespace-auth
          {{ else if eq .Values.deployment.eventhub.connection.credentials.provided false }}
          - name: "ENDPOINT_EH_ACCESSPOLICY"
            value: {{ .Values.deployment.eventhub.connection.credentials.accesspolicy }}
          - name: "ENDPOINT_EH_ACCESSKEY"
            value: {{ .Values.deployment.eventhub.connection.credentials.accesskey }}
          {{- end }} # End if .eventhub.connection.credentials.provided
          {{- end }} # End of .eventhub.connection
          {{- if .Values.deployment.eventhub.consumer }}
          {{- if .Values.deployment.eventhub.consumer.maxEventCount }}
          - name: "ENDPOINT_CONSUMER_MAXEVENTCOUNT"
            value: {{ .Values.deployment.eventhub.consumer.maxEventCount }}
          {{- end }}
          {{- if .Values.deployment.eventhub.consumer.consumerGroupName }}
          - name: "ENDPOINT_CONSUMER_GROUPNAME"
            value: {{ .Values.deployment.eventhub.consumer.consumerGroupName }}
          {{- end }}
          {{- if .Values.deployment.eventhub.consumer.partitionId }}
          - name: "ENDPOINT_CONSUMER_PARTITIONID"
            value: {{ .Values.deployment.eventhub.consumer.partitionId }}
          {{- end }}
          {{- if .Values.deployment.eventhub.consumer.eventPosition }}
          - name: "ENDPOINT_CONSUMER_EVENTPOSITION"
            value: {{ .Values.deployment.eventhub.consumer.eventPosition }}
          {{- end }}
          {{- if .Values.deployment.eventhub.consumer.receiverOptions }}
          {{- if .Values.deployment.eventhub.consumer.receiverOptions.identifier }}
          - name: "ENDPOINT_CONSUMER_RECEIVER_ID"
            value: {{ .Values.deployment.eventhub.consumer.receiverOptions.identifier }}
          {{- end }}
          {{- if .Values.deployment.eventhub.consumer.receiverOptions.runtimeMetricEnabled }}
          - name: "ENDPOINT_CONSUMER_RECEIVER_RTMETRIC_ENABLED"
            value: {{ .Values.deployment.eventhub.consumer.receiverOptions.runtimeMetricEnabled }}
          {{- end }}
          {{- end }} # End of .eventhub.consumer.receiverOptions
          {{- end }} # End of .eventhub.consumer
          {{- end }} # End of .eventhub
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: {{default "/v1/api/health-check" .Values.service.healthcheck }}
              port: {{ .Values.service.internalPort }}
          readinessProbe:
            httpGet:
              path: {{default "/v1/api/health-check" .Values.service.healthcheck }}
              port: {{ .Values.service.internalPort }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}

The deployment file has multiple conditional statements that decide whether or not to inject each value into the template, so that the deployment is done with the desired configuration.

So, aligned with the Continuous Delivery flow, Jenkins will build the Endpoint and run the static code analysis with SonarQube; after that, the new code is merged to master. Once the master branch is updated with the new code, the pipeline releases the Endpoint artifact to the Nexus repository and the Docker image to the private registry.

The next step is running the functional tests in the BETA stage, and this requires the Endpoint to be deployed in the AKS Cluster. Jenkins, making use of the Helm deployment pipeline template, will then deploy the API Endpoint Prototype with the desired configuration values, to be tested and certified to progress through the CD flow.

class HelmDeployer implements Serializable {

    def steps

    def HelmDeployer(steps){
        this.steps = steps
    }

    // Installs the chart under the given release name (Helm 2 syntax)
    def deploy(chartName, chartFolder){
        steps.sh("helm install --name $chartName $chartFolder")
    }

    // Deletes the release and purges its history so the name can be reused
    def remove(chartName){
        steps.sh("helm delete --purge $chartName")
    }

}
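
A pipeline step using this template might look like the following (a sketch; the release name and chart folder are illustrative):

// Hedged sketch: (re)deploy the endpoint chart for the current stage.
def helmDeployer = new HelmDeployer(this)
def releaseName = ('api-endpoint-prototype-' + params.stageName).toLowerCase()

try {
    // Purge any previous release so the name can be reused
    helmDeployer.remove(releaseName)
} catch (ignored) {
    // No previous release to remove
}
helmDeployer.deploy(releaseName, 'helm/api-endpoint-prototype')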

And there is the Endpoint in the AKS cluster:

Endpoint in AKS

Conclusions

In this post, we have described the process to successfully implement the CD flow for API Endpoint Prototypes, including the required cloud infrastructure creation for each specific case. We had the following objectives:

  1. Create a set of Jenkins pipelines to deploy all the required infrastructure for an Endpoint to be built, deployed and tested. An orchestrated set of Jenkins pipeline templates and Terraform infrastructure templates, following a strict naming convention that allows repeatability of actions. [DONE]
  2. Create a set of Jenkins templates to reuse all the infrastructure creation features for other elements in the future. The templates are there, working as much as possible as generic operations with placeholders and inherited variables, so the pipelines can take advantage of them for current and future architecture evolutions. [DONE]
  3. Overcome the technical challenge of combining Terraform, Helm and Azure CLI over Jenkins and Git to successfully create the Endpoints' required infrastructure. This objective required tons of investigation and trial and error to enable Jenkins to work with all these technologies together. Multiple Jenkins slaves with different technology stacks have been generated. [DONE]
  4. Successfully configure the API Endpoints, modifying environment variables to adapt to the corresponding stage (CI, BETA, CANDIDATE, PRODUCTION). Helm plays an important role here: just by passing the proper arguments in the Jenkins pipeline, the environment variables are easily set to keep things working across stages without code changes. [DONE]
  5. Simplify API Endpoint developers' work by minimizing the number of variables to configure across the infrastructure. This way we decrease the possibility of failures caused by human intervention. Naming conventions, together with the Delivery Platform creating the cloud resources and providing the secrets and configurations that link elements, minimize what the developer has to care about when creating an endpoint based on the prototype. This is a great advantage as the number of endpoints grows (and it will). [DONE]

References

  1. Azure AKS Documentation
  2. Kubernetes Documentation
  3. Terraform (Azure Provider) Documentation
  4. Azure CLI Documentation
  5. Helm Documentation
  6. Azure Eventhubs Documentation
  7. Azure Servicebus Documentation
  8. Azure Cosmosdb Documentation
  9. Azure Datalake Documentation
  10. Jenkins Pipelines Documentation
Author: Fernando Munoz

Software Team Leader