Tutorial - Kustomize
Eloi Perdereau
Objective
Understand the basics of Kustomize, a tool that customizes default YAML manifests with custom values.
Installation and first use
Install Kustomize via the GCloud CLI:
gcloud components install kustomize
Make sure it is properly installed:
kustomize version
Preamble
A nice feature of Kustomize, besides patching values, is to inspect Kubernetes resources in a directory.
In a directory containing some YAML manifests, issue the following command to list resources in a tree-like structure:
kustomize cfg tree .
Another subcommand of cfg is count, which summarizes how many resources of each kind are present in the directory.
kustomize cfg count .
Add default labels
Create a directory named base and add the following Service and Deployment:
kubia-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
kubia-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      name: kubia
      labels:
        app: kubia
    spec:
      containers:
      - image: luksa/kubia:v1
        name: nodejs
Then create a file kustomization.yaml with the following content:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- kubia-deploy.yaml
- kubia-svc.yaml
commonLabels:
  pkg: fila2
  stage: tuto
This tells Kustomize to manage our two resources and to add the two labels pkg: fila2 and stage: tuto to each of them.
To apply the customization, issue
kustomize build .
Notice that the labels are added not only under the "/metadata/labels" field, but also under "/spec/selector/matchLabels" and "/spec/template/metadata/labels" of the Deployment, and under "/spec/selector" of the Service.
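For the Service, for instance, the build output should look roughly like this (an illustrative excerpt, assuming the base manifests above):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    pkg: fila2
    stage: tuto
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: kubia
    pkg: fila2
    stage: tuto
```

The labels land in the selector as well, so the Service keeps matching the relabeled Pods.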
Remember that Kustomize does not modify the original YAML manifests; this is why the build command outputs the result but does not write anything to disk. To apply the resulting configuration to the cluster, we can pipe the result into kubectl as such
kustomize build . | kubectl apply -f -
But Kustomize is built into kubectl, so we can more simply use the -k flag to achieve the same result. In fact, we do not even need the kustomize CLI tool for this. Issue the following command:
kubectl apply -k .
and show Services and Deployments with their labels to confirm:
kubectl get svc,deploy --show-labels
Customize "dev" and "prod" environments
We have applied a basic customization to a base directory. We will now create two sibling directories, dev and prod, that will apply more specific variants on top of the base. The file structure will look like this:
├── base
│   ├── kubia-deploy.yaml
│   ├── kubia-svc.yaml
│   └── kustomization.yaml
├── dev
│   └── kustomization.yaml
└── prod
    ├── kubia-deploy-patch.yaml
    ├── kubia-svc-patch.yaml
    └── kustomization.yaml
Each directory contains a kustomization.yaml file. The dev and prod variants will reference the base and add some patch overlays.
Create dev/kustomization.yaml with the following:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- target:
    kind: Deployment
    name: kubia
  patch: |-
    - op: replace
      path: /spec/replicas
      value: 5
    - op: add
      path: /spec/template/spec/containers/0/imagePullPolicy
      value: Never
It instructs Kustomize to use the resources from the base directory and to apply a patch to the deployment/kubia resource. The patch operations are written in the JSON Patch format (RFC 6902).
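Applied to the base Deployment, these two operations should yield roughly the following spec (an illustrative excerpt, not the full build output):

```yaml
spec:
  replicas: 5
  template:
    spec:
      containers:
      - image: luksa/kubia:v1
        imagePullPolicy: Never
        name: nodejs
```

The replace operation overwrites an existing field, while the add operation introduces a field that was absent from the base.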
Build the customization with kustomize build dev from the parent directory. If everything seems ok, apply the patch to the cluster with kubectl apply -k dev and verify the replica count with
kubectl get deploy/kubia
and the imagePullPolicy value with
kubectl get deploy/kubia -oyaml | grep imagePullPolicy
Note that both the dev/kustomization.yaml patches and the base/kustomization.yaml labels are applied.
Now create prod/kustomization.yaml with the following:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- path: kubia-deploy-patch.yaml
- path: kubia-svc-patch.yaml
Here, the patches are plain files consisting of a subset of the original manifests, with updated values. Kustomize detects the different patch format and uses the strategic merge operation.
In prod/kubia-deploy-patch.yaml, paste the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  template:
    spec:
      containers:
      - name: nodejs
        resources:
          requests:
            cpu: 100m
Notice that applying this file as is to the cluster would not work, because it is incomplete (e.g. replicas is missing). Only the fields we want to overlay, plus the fields needed to identify their position, are present. Namely, in a strategic merge we need to provide the apiVersion, kind and metadata/name fields to select the corresponding resource. Also, the containers field is an array, so we need to tell Kustomize which element to update; here we use the name field.
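After the strategic merge, the container entry in the built Deployment should roughly read (an illustrative excerpt):

```yaml
containers:
- image: luksa/kubia:v1
  name: nodejs
  resources:
    requests:
      cpu: 100m
```

The image field from the base is preserved; only the resources field is merged in.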
In prod/kubia-svc-patch.yaml, paste the following:
apiVersion: v1
kind: Service
metadata:
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 5050
Here, in addition to the resource selection fields (apiVersion, kind and metadata/name), we select an element of the ports array by using the port field. All this to update the targetPort.
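The built Service should then look roughly like this (an illustrative excerpt, with the labels inherited from the base kustomization):

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    pkg: fila2
    stage: tuto
  name: kubia
spec:
  ports:
  - port: 80
    targetPort: 5050
  selector:
    app: kubia
    pkg: fila2
    stage: tuto
```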
Build the customization and apply it to the cluster if everything seems good.
Clean the resources
Delete the resources by using a label selector on the CLI with the following command
kubectl delete svc,deploy -lstage=tuto
which tells Kubernetes to delete all Services and Deployments that have the label stage=tuto.