Tutorial - Using Kubernetes with Terraform


1. Objective

In the first tutorial, you deployed two containers (redis and gb-frontend) with Docker. In this tutorial, we will deploy the same application on a Kubernetes (GKE) cluster, this time through Terraform.

2. Deploy the Application on GKE

We will first configure the provider to connect to the cluster, then apply Kubernetes manifests through Terraform.

2.1. Configure the provider to access the Kubernetes cluster

  • Download the kubeconfig file and place it in a secure location on your machine.

  • Create a provider.tf file and paste the following. Adapt FIXME to point to the downloaded kubeconfig.

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.23.0"
    }
  }
}

provider "kubernetes" {
  config_path = "FIXME"
}
  • In a shell, export the environment variable that sets the kubeconfig path, so that the kubectl command sends its requests to the cluster API server.

export KUBECONFIG="FIXME"
  • Then run kubectl to check that the connection works.

kubectl cluster-info
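The export step above can be sketched as a small shell snippet (the kubeconfig path below is an assumption; adapt it to wherever you saved the file):

```shell
# Point kubectl (and later the Terraform provider) at the kubeconfig.
# The path below is an assumption: adapt it to your own download location.
KUBECONFIG="$HOME/.kube/gke-tutorial.yaml"
export KUBECONFIG

# Fail early with a clear message if the variable ended up empty.
: "${KUBECONFIG:?KUBECONFIG must be set}"
echo "kubeconfig in use: $KUBECONFIG"
```

With the variable exported, kubectl cluster-info should print the address of the cluster's API server.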

2.2. Create K8s workloads for Redis

We will create the Kubernetes Deployment and Service resources for both our Redis leader and its follower. In a new file redis-leader.tf, copy the following (deliberately verbose) code.

resource "kubernetes_deployment_v1" "redis-leader" {
  metadata {
    name = "redis-leader"
    labels = {
      App = "redis"
      Tier = "backend"
    }
    namespace = "FIXME"
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        App = "redis"
      }
    }
    template {
      metadata {
        labels = {
          App = "redis"
          Tier = "backend"
        }
      }
      spec {
        container {
          name  = "leader"
          image = "docker.io/redis:6.0.5"
          resources {
            requests = {
              cpu = "100m"
              memory = "100Mi"
            }
          }
          port {
            container_port = 6379
          }
        }
      }
    }
  }
}

resource "kubernetes_service_v1" "redis-leader" {
  metadata {
    name = "redis-leader"
    labels = {
      App = "redis"
      Tier = "backend"
    }
    namespace = "FIXME"
  }
  spec {
    selector = {
      App = "redis"
      Tier = "backend"
    }
    port {
      port = 6379
      target_port = 6379
    }
  }
}

And in redis-follower.tf, the following.

resource "kubernetes_deployment_v1" "redis-follower" {
  metadata {
    name = "redis-follower"
    labels = {
      App = "redis"
      Tier = "backend"
    }
    namespace = "FIXME"
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        App = "redis"
      }
    }
    template {
      metadata {
        labels = {
          App = "redis"
          Tier = "backend"
        }
      }
      spec {
        container {
          name  = "follower"
          image = "us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2"
          resources {
            requests = {
              cpu = "100m"
              memory = "100Mi"
            }
          }
          port {
            container_port = 6379
          }
        }
      }
    }
  }
}

resource "kubernetes_service_v1" "redis-follower" {
  metadata {
    name = "redis-follower"
    labels = {
      App = "redis"
      Tier = "backend"
    }
    namespace = "FIXME"
  }
  spec {
    selector = {
      App = "redis"
      Tier = "backend"
    }
    port {
      port = 6379
    }
  }
}

Apply and check that it was deployed: on the dashboard, go to Kubernetes Engine > Workloads.

2.3. Create the Frontend Workload

In a new file frontend.tf, add the two following resources.

resource "kubernetes_deployment_v1" "gb-frontend" {
  metadata {
    name = "frontend"
    namespace = "FIXME"
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        App = "guestbook"
        Tier = "frontend"
      }
    }
    template {
      metadata {
        labels = {
          App = "guestbook"
          Tier = "frontend"
        }
      }
      spec {
        container {
          name  = "gb-frontend"
          image = "us-docker.pkg.dev/google-samples/containers/gke/gb-frontend:v5"
          env {
            name = "GET_HOSTS_FROM"
            value = "dns"
          }
          resources {
            requests = {
              cpu = "100m"
              memory = "100Mi"
            }
          }
          port {
            container_port = 80
          }
        }
      }
    }
  }
}

resource "kubernetes_service_v1" "frontend" {
  metadata {
    name = "frontend"
    labels = {
      App = "guestbook"
      Tier = "frontend"
    }
    namespace = "FIXME"
  }
  spec {
    selector = {
      App = "guestbook"
      Tier = "frontend"
    }
    port {
      port = 8080
      target_port = 80
    }
    type = "LoadBalancer"
  }
}

Apply the configuration.

Then, on the dashboard, go to Kubernetes Engine > Gateways, Services and Ingress (Passerelles, services et entrées), then Services. As in the Kubernetes tutorial from a previous session, we set the LoadBalancer type on the frontend Service to allow public access to the application. The public IP may take a few minutes to be assigned; once it appears, click on it and make sure the application works.
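As an alternative to looking up the IP in the dashboard, the address can be exposed as a Terraform output (a sketch: the frontend_ip name is ours, and the attribute path assumes the kubernetes provider's status schema for kubernetes_service_v1):

```hcl
output "frontend_ip" {
  description = "Public IP assigned to the frontend LoadBalancer"
  value       = kubernetes_service_v1.frontend.status[0].load_balancer[0].ingress[0].ip
}
```

After terraform apply, the value is printed in the summary and can be retrieved again at any time with terraform output frontend_ip.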

3. Modules

You may have noticed that the Kubernetes Deployment and Service resources repeat many values. We could use locals, but we will go one abstraction step further and create child modules. More information on modules at https://developer.hashicorp.com/terraform/language/modules.
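For reference, the locals alternative would look like this (a sketch; the local name is ours):

```hcl
# Shared label values declared once...
locals {
  redis_labels = {
    App  = "redis"
    Tier = "backend"
  }
}

# ...and referenced in each resource block:
#   labels = local.redis_labels
```

This removes the duplicated values, but the whole Deployment/Service structure would still be repeated for each component, which is what modules address.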

Note
Remember, in Terraform, one directory equals one module. And within a module/directory all .tf files are merged regardless of their names.

3.1. Module authoring

We will create a module to abstract Deployment resources.

Create a directory deployment/ and in this directory, create variables.tf and copy the following.

variable "metadata_name" {
  type = string
}

variable "label_app" {
  type = string
}

variable "label_tier" {
  type = string
}

variable "container_name" {
  type = string
}

variable "container_image" {
  type = string
}

variable "container_port" {
  type = number
}

variable "resources_requests" {
  type = map(any)
  default = {
    cpu = "100m"
    memory = "100Mi"
  }
}

In deployment/ directory, also create a file deployment.tf with the following content.

resource "kubernetes_deployment_v1" "deplt" {
  metadata {
    name = var.metadata_name
    labels = {
      App = var.label_app
      Tier = var.label_tier
    }
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        App = var.label_app
      }
    }
    template {
      metadata {
        labels = {
          App = var.label_app
          Tier = var.label_tier
        }
      }
      spec {
        container {
          name  = var.container_name
          image = var.container_image
          resources {
            requests = var.resources_requests
          }
          port {
            container_port = var.container_port
          }
        }
      }
    }
  }
}

Modules usually also have output variables; as a good practice, they should be declared in a separate file (e.g. outputs.tf). But here we are only refactoring plain resources.
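For illustration, if callers needed the name of the managed Deployment, an outputs.tf inside deployment/ could declare (a sketch; the output name is ours):

```hcl
output "deployment_name" {
  description = "Name of the Deployment managed by this module"
  value       = kubernetes_deployment_v1.deplt.metadata[0].name
}
```

A caller would then read it as module.redis-leader-deployment.deployment_name.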

Important
Input variables of a module must be chosen carefully. Every value that is not an input is hardcoded, meaning that callers of the module cannot change it. Here, for example, the number of replicas is fixed!
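If the replica count ever needed to vary, it could be promoted to an input with a default, so existing callers keep working unchanged (a sketch):

```hcl
# In deployment/variables.tf:
variable "replicas" {
  type    = number
  default = 1
}

# ...and in deployment/deployment.tf, replace the hardcoded value:
#   replicas = var.replicas
```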

3.2. Module usage

In redis-leader.tf and redis-follower.tf, replace the deployment resources with the following module calls.

module "redis-leader-deployment" {
  source = "./deployment/"

  metadata_name   = "redis-leader"
  label_app       = "redis"
  label_tier      = "backend"
  container_name  = "leader"
  container_image = "docker.io/redis:6.0.5"
  container_port  = 6379
}

and

module "redis-follower-deployment" {
  source = "./deployment/"

  metadata_name   = "redis-follower"
  label_app       = "redis"
  label_tier      = "backend"
  container_name  = "follower"
  container_image = "us-docker.pkg.dev/google-samples/containers/gke/gb-redis-follower:v2"
  container_port  = 6379
}

Before applying, we must run terraform init again so that Terraform installs the newly added modules. Indeed, modules are meant to be reusable configuration: as such, they can be versioned, uploaded to a registry, etc.

Apply, and notice that both Deployment resources must be completely replaced: they now live at a new address, so their attributes are accessed as module.redis-leader-deployment.kubernetes_deployment_v1.deplt (and likewise for the follower).
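As a side note, the destroy-and-recreate could be avoided by moving the existing resources to their new addresses in the state before applying (a sketch; repeat for the follower):

```shell
# Tell Terraform the existing Deployment now lives under the module,
# so it is not destroyed and recreated on the next apply.
terraform state mv \
  kubernetes_deployment_v1.redis-leader \
  module.redis-leader-deployment.kubernetes_deployment_v1.deplt
```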