Tutorial - Understanding ReplicaSets and controllers
1. Objective
The goal of this tutorial is to understand the concepts of ReplicaSets and controllers.
This tutorial directly uses a subset of the examples from the book “Kubernetes in Action” by Marko Lukša. All examples from the book can be found here.
2. Liveness probe
First we are going to add a liveness probe to our pod. Create a file 4-kubia-liveness-probe.yaml with the following content:
apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: luksa/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
In this example an httpGet probe is specified: the kubelet periodically sends an HTTP GET request to the container in the pod, on the path / and the port 8080. Default values are used for initialDelaySeconds, terminationGracePeriodSeconds, periodSeconds, etc. You can see the documentation for details.
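These defaults can also be overridden explicitly in the probe definition. The following excerpt is illustrative only; the values shown are example settings, not the cluster defaults:

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 15   # wait 15s after container start before the first probe
  periodSeconds: 10         # probe every 10s
  failureThreshold: 3       # restart the container after 3 consecutive failures
```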
Note that we use the container image luksa/kubia-unhealthy, which deliberately returns an error code after the fifth request on /.
> kubectl create -f 4-kubia-liveness-probe.yaml
After a bit more than one minute, the container is restarted by the kubelet because the liveness probe fails.
> kubectl get po kubia-liveness
NAME READY STATUS RESTARTS AGE
kubia-liveness 1/1 Running 2 (12s ago) 4m3s
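To see why the container was restarted, you can inspect the pod's events (the exact messages depend on your cluster):

```shell
# Shows the pod's events, including liveness probe failures
# and the resulting container restarts.
kubectl describe pod kubia-liveness
```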
3. ReplicaSet
Create a file 5-kubia-replicaset.yaml
with the following content:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: luksa/kubia
> kubectl create -f 5-kubia-replicaset.yaml
> kubectl get rs
NAME DESIRED CURRENT READY AGE
kubia 3 3 3 73s
> kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-5jnbz 1/1 Running 0 3m51s
kubia-csqr5 1/1 Running 0 3m51s
kubia-gjnkj 1/1 Running 0 3m51s
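Since the ReplicaSet manages its pods through the label selector, you can list exactly the pods it owns by filtering on that label:

```shell
# List only the pods matching the ReplicaSet's label selector
kubectl get pods -l app=kubia
```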
3.1. Pod failure
We will delete a pod to see how the ReplicaSet reacts.
> kubectl delete pod kubia-5jnbz
The ReplicaSet should almost immediately create a new pod to replace the deleted one.
> kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-5jnbz 1/1 Terminating 0 6m52s
kubia-csqr5 1/1 Running 0 6m52s
kubia-gjnkj 1/1 Running 0 6m52s
kubia-mkrv9 1/1 Running 0 21s
You can see details of the ReplicaSet with the following command:
> kubectl describe rs kubia
3.2. Node failure
We can also simulate a node failure.
First, connect over SSH to one of the nodes:
> kubectl get nodes
> gcloud compute ssh gke-zeus-default-pool-8d102f6f-0w7m
We are going to shut down the network interface of the node.
gke-zeus-default-pool-8d102f6f-0w7m> sudo ifconfig eth0 down
In another terminal:
> kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-zeus-default-pool-8d102f6f-0w7m NotReady <none> 5d1h v1.27.8-gke.1067004
gke-zeus-default-pool-8d102f6f-gbfl Ready <none> 5d1h v1.27.8-gke.1067004
gke-zeus-default-pool-8d102f6f-h721 Ready <none> 5d1h v1.27.8-gke.1067004
For a while, the pods on that node will still be reported as Ready:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
kubia-csqr5 1/1 Running 0 18m
kubia-gjnkj 1/1 Running 0 18m
kubia-manual 1/1 Running 0 89m
kubia-mkrv9 1/1 Running 0 11m
But after a longer period (the pod eviction timeout, about five minutes by default), a new pod is started and the unreachable pod is marked for termination.
NAME READY STATUS RESTARTS AGE
kubia-csqr5 1/1 Terminating 0 21m
kubia-gjnkj 1/1 Running 0 21m
kubia-m4426 1/1 Running 0 15s
kubia-mkrv9 1/1 Running 0 14m
You can restore the node from another terminal:
> gcloud compute instances reset gke-zeus-default-pool-8d102f6f-0w7m
3.3. Changing labels of pods
You will change the label of one of the three pods managed by the ReplicaSet as follows:
> kubectl label pod kubia-gjnkj app=kubiatest --overwrite
By changing the label, the pod moves out of the scope of the ReplicaSet, which matches pods through the label selector app: kubia. As a result, the ReplicaSet detects that only two matching pods are running and creates a new one:
> kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
kubia-gjnkj 1/1 Running 0 124m app=kubiatest
kubia-m4426 1/1 Running 0 103m app=kubia
kubia-mkrv9 1/1 Running 0 117m app=kubia
kubia-vs4gx 0/1 ContainerCreating 0 15s app=kubia
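If you set the label back to app: kubia, the pod matches the selector again; the ReplicaSet then sees four pods for a desired count of three and terminates one of them:

```shell
# Bring the pod back under the ReplicaSet's selector
kubectl label pod kubia-gjnkj app=kubia --overwrite
```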
3.4. Scaling pods
You can easily change the number of replicas:
> kubectl scale rs kubia --replicas=10
You can also do it by editing the resource through kubectl:
> kubectl edit rs kubia
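In the editor opened by kubectl edit, the field to change is spec.replicas. The excerpt below is illustrative; for example, to go back to three replicas:

```yaml
# Excerpt of the ReplicaSet manifest as shown by `kubectl edit rs kubia`
spec:
  replicas: 3   # change this value and save to rescale
```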
3.5. Deleting a ReplicaSet
When deleting a ReplicaSet, the associated pods are deleted as well. You can keep the pods running with:
> kubectl delete rs kubia --cascade=orphan
(On older versions of kubectl, the equivalent flag was --cascade=false, which is now deprecated.)