Building the Builder – How to create a Continuous Delivery platform in Azure (II) – Deploying a Kubernetes cluster

This post focuses on the creation of a Kubernetes cluster in Azure AKS with Terraform for our Continuous Delivery platform.

This is the continuation of the post Building the Builder Chapter I

Introduction

As we stated in the previous post, we want a Continuous Delivery platform that is highly available, fault tolerant, replicable, and scalable.

Meeting the Requirements for our Continuous Delivery Platform

So, in order to cope with these requirements, we considered the following possibilities:

  • Virtual Machines Deployment
  • Azure ACS
  • Azure AKS

Virtual Machines Deployment

Deploying everything on standalone virtual machines gives you more fine-grained control over the deployment, without the resource overhead of cluster orchestration technologies like Azure ACS or AKS. The problem is that you then have to build on top all the service health checks, resiliency, automated deployment, networking, scaling…

So that makes this an expensive solution in terms of time and knowledge.

Azure ACS

Azure Container Service (ACS) is (at the time of writing this post) the most widespread option for deploying container clusters in Azure. It includes customisations for DC/OS, Kubernetes…

There is more documentation about ACS than about Azure AKS (which is the new version of Azure ACS for Kubernetes), but it is not where Azure will put its effort in terms of Kubernetes. So using it right now might force you to change everything within months (if you plan to use Kubernetes as your cluster orchestrator).

Azure AKS

Azure AKS (also known as the Azure Managed Kubernetes Solution) is where Azure is focusing the evolution of its Kubernetes clusters. This is the option we are choosing because it is simple, fast, and seems to be the one receiving the most updates from Azure.

Deploying in AKS

Deploying an AKS cluster for our Continuous Delivery platform is as simple as running the following with the Azure CLI:

az aks create -n myCluster -g myResourceGroup

Obviously, this only creates a basic AKS cluster. That is not enough for our purposes, because we need to define things like the number of nodes in the cluster, their cores and memory, the way to access the cluster (RSA keys for SSH) and other elements that quickly turn into a long list of scripts and commands to run.
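For illustration, a slightly fuller version of that command might look like the sketch below; every value is a placeholder, the $SP_* variables are assumed to hold an existing service principal, and only a subset of the available az aks create options is shown:

# Hypothetical example: node count, VM size and SSH access made explicit
az aks create \
  --name myCluster \
  --resource-group myResourceGroup \
  --node-count 3 \
  --node-vm-size Standard_DS2_v2 \
  --admin-username clusteradmin \
  --ssh-key-value ~/.ssh/id_rsa.pub \
  --service-principal "$SP_ID" \
  --client-secret "$SP_PASSWORD"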

We want our deployments to be Infrastructure as Code (IaC), so we will store everything as source code in a Git repository. The problem with plain scripts, however, is that we would also like something like a status tracking tool: a way to know whether our infrastructure is deployed, what state its resources are in, and a way for team members to collaborate on it. We would also like to be able to change the cloud provider, if needed, with the smallest possible impact.

This is where Terraform comes in. The whole idea behind Terraform is to describe our infrastructure as code and keep track of the modifications we make to it.
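As a quick illustration of that tracking, once a configuration has been applied Terraform can report what it believes is deployed. A minimal sketch (run from the folder containing the Terraform files we define below):

# List every resource Terraform is tracking in its state
terraform state list

# Show the recorded state, including the attributes of each resource
terraform show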

Let’s deploy an AKS cluster with Terraform

Terraform Files

To deploy something with Terraform we need the following files:

  • my-deployment.tf (the file where we define the elements we will create/deploy)
  • variables.tf (the file with the definitions of the variables used in my-deployment.tf)
  • terraform.tfvars (the file where we set the values for the variables declared in variables.tf)
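
// my-deployment.tf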
resource "azurerm_resource_group" "test" {
  name     = "${var.rg_name}"
  location = "${var.cluster_location}"

  // prevent_destroy makes Terraform refuse to delete this resource group (e.g. via "terraform destroy")
  lifecycle {
    prevent_destroy = true
  }

  tags {
    myTag = "${var.myTag}"
  }  
}

resource "azurerm_kubernetes_cluster" "test" {
  name                   = "${var.kube_cluster_name}"
  location               = "${azurerm_resource_group.test.location}"
  resource_group_name    = "${azurerm_resource_group.test.name}"
  kubernetes_version     = "1.8.7"
  dns_prefix             = "${var.custom_dns_prefix_agent}"

  linux_profile {
    admin_username = "${var.cluster_management_user}"

    ssh_key {
      key_data = "${var.ssh_key_data}"
    }
  }

  agent_pool_profile {
    name       = "default"
    count      = "${var.agent_nodes}"
    vm_size    = "${var.agent_vm_size}"
    os_type    = "Linux"
  }

  service_principal {
    client_id     = "${var.service_principal_id}"
    client_secret = "${var.service_principal_password}"
  }

  tags {
    myTag = "${var.myTag}"
  }

}
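
// variables.tf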
// Resource Group Variables
variable "rg_name" {
  type = "string"
}

variable "cluster_location" {
  type = "string"
}

// Kubernetes Cluster Variables
variable "kube_cluster_name" {
  type = "string"
}

variable "ssh_key_data" {
  type = "string"
}

variable "custom_dns_prefix_agent" {
  type = "string"
}


// Agents variables
variable "agent_nodes" {
  type = "string"
}

variable "agent_vm_size" {
  type = "string"
}

// Azure Service Principal
variable "service_principal_id" {
  type = "string"
}

variable "service_principal_password" {
  type = "string"
}

// Tags Variables
variable "myTag" {
  type = "string"
}
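
# terraform.tfvars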
# Delivery Platform Global Information
## Resource Group 
rg_name="FTS_Delivery_Platform"
cluster_location="westeurope"

## Kubernetes Cluster Variables
kube_cluster_name="my-cluster"
ssh_key_data="ssh-rsa XXXXXXXX"
custom_dns_prefix_agent="fts-delivery-platform-agent"

## Agents Variables
agent_nodes=3
agent_vm_size="Standard_DS2_v2"

## Azure Service Principal
service_principal_id="XXXX"
service_principal_password="XXXX"

## Tags
myTag="My Tag"

Additionally, we want a centralized location where Terraform will store the state of the infrastructure. This is very helpful in collaborative environments because it locks the state while changes are being applied, preventing concurrent users from stepping on each other. This is known as a Terraform backend, and we will use an Azure Storage account for it. We define all this information in:

  • backend.tf
terraform {
  backend "azurerm" {
    storage_account_name = "myterraformbackendsa"
    container_name       = "myterraformbackend"
    key                  = "backend.terraform.testcluster"
    access_key           = "XXXX"
  }
}
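The storage account and container must exist before terraform init is run. A possible way to create them with the Azure CLI is sketched below; the storage account and container names match the backend.tf above, while the resource group name is just an assumption:

# Resource group for the Terraform backend (name is an example)
az group create --name terraform-backend-rg --location westeurope

# Storage account and blob container referenced in backend.tf
az storage account create \
  --name myterraformbackendsa \
  --resource-group terraform-backend-rg \
  --location westeurope \
  --sku Standard_LRS

# Retrieve one of the account keys; this is the value for access_key in backend.tf
az storage account keys list \
  --account-name myterraformbackendsa \
  --resource-group terraform-backend-rg \
  --query "[0].value" --output tsv

az storage container create \
  --name myterraformbackend \
  --account-name myterraformbackendsa \
  --account-key "<key from the previous command>"

To keep the access key out of source control, the azurerm backend can also read it from the ARM_ACCESS_KEY environment variable instead of the access_key attribute.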

So, with all the files in place, we can create the cluster by following these steps (a terminal sketch of the same flow comes right after the list):

  1. Install Terraform on your computer
  2. Create a folder that contains the Terraform files
  3. Log into Azure with az login
  4. Go to the Terraform files folder and run:
    1. terraform init
    2. terraform plan
    3. terraform apply
      1. Answer yes when asked to confirm
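Put together, the terminal session looks roughly like this (a sketch: the folder name is an assumption, and the cluster name and resource group come from the terraform.tfvars above):

# Authenticate against Azure
az login

# Initialise the working directory: downloads the azurerm provider
# and configures the azurerm backend defined in backend.tf
cd my-terraform-files
terraform init

# Review the execution plan, then apply it and confirm with "yes"
terraform plan
terraform apply

# Optional: fetch the kubeconfig for the new cluster and check the nodes
az aks get-credentials --name my-cluster --resource-group FTS_Delivery_Platform
kubectl get nodes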

Then, go to the Azure portal and you will see the AKS cluster deployed for our Continuous Delivery platform.

In the following posts we will deploy elements into our Kubernetes cluster.

References

  1. Azure AKS
  2. Terraform
  3. Kubernetes

 

 

Author: Fernando Munoz

Software Team Leader
