How to upgrade a Kubernetes cluster


In this tutorial, we upgrade a Kubernetes cluster from 1.17.x to 1.18.20. Below are the steps that need to be taken care of during the upgrade.

It is always recommended to take a backup of the Kubernetes cluster before starting the upgrade; a backup lets you revert the cluster to its previous state in case of any failure.
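For example, on a kubeadm cluster with stacked etcd, a snapshot can be taken with etcdctl. This is a minimal sketch; the endpoint and certificate paths assume the default kubeadm layout, and the target directory /opt/backup is an arbitrary choice:

ETCDCTL_API=3 etcdctl snapshot save /opt/backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

It is also worth copying the /etc/kubernetes/pki directory alongside the snapshot, since the cluster certificates are needed for a restore.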

 

Control Plane Upgrade

If you have multiple Kubernetes control plane nodes (deployed in HA), choose any one of them and perform the kubeadm upgrade there first.

Upgrade the first control plane

Repository check
Make sure that the latest Kubernetes yum repository is configured on the server.

 

vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
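After saving the repo file, refreshing the yum metadata makes sure the newly published packages are visible (a standard yum step, not specific to Kubernetes):

yum makecache --disableexcludes=kubernetes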

Determine the stable version to upgrade to

 

yum list --showduplicates kubeadm --disableexcludes=kubernetes
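The command lists every kubeadm build available in the repository; the line for the target version looks roughly like this (output abbreviated):

kubeadm.x86_64    1.18.20-0    kubernetes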

In our case, we are upgrading the Kubernetes cluster to version 1.18.20.

Upgrade the kubeadm package

 

yum install -y kubeadm-1.18.20-0 --disableexcludes=kubernetes

Verify the version

 

kubeadm version
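The output should report v1.18.20. For a quick check, the short output format prints just the version string:

kubeadm version -o short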

 

 

Check the upgrade plan:

sudo kubeadm upgrade plan

Expected Output

 

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':

COMPONENT   CURRENT        AVAILABLE
Kubelet     1 x v1.17.11   v1.18.20

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.17.3   v1.18.20
Controller Manager   v1.17.3   v1.18.20
Scheduler            v1.17.3   v1.18.20
Kube Proxy           v1.17.3   v1.18.20
CoreDNS              1.6.5     1.6.7
Etcd                 3.4.3     3.4.3-0

You can now apply the upgrade by executing the following command:

kubeadm upgrade apply v1.18.20
Apply the upgrade

 

sudo kubeadm upgrade apply v1.18.20
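If the upgrade succeeds, kubeadm finishes with a message along these lines (abbreviated):

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.20". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.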

 

The first control plane is now upgraded to version 1.18.20. Next, we need to upgrade the remaining nodes; the procedure is slightly different.

Upgrade the additional control plane nodes

Make sure the Kubernetes repository is configured on the server, then upgrade the kubeadm package exactly as on the first control plane:
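yum install -y kubeadm-1.18.20-0 --disableexcludes=kubernetes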

Upgrade the node

On an additional control plane, the following command upgrades the local control plane components and refreshes the kubelet configuration:

sudo kubeadm upgrade node

Repeat the same steps on every additional control plane node.

Drain the control plane node

Get the node name using the command below, then drain the node to safely evict its workloads:

kubectl get nodes

# replace <cp-node-name> with the name of your control plane node
kubectl drain <cp-node-name> --ignore-daemonsets

Upgrade kubelet and kubectl

yum install -y kubelet-1.18.20-0 kubectl-1.18.20-0 --disableexcludes=kubernetes

Restart the service so that the new version takes effect


sudo systemctl daemon-reload
sudo systemctl restart kubelet

Execute the above commands on all the control plane nodes, one node at a time.

Uncordon the control plane

Bring the node back online by marking it schedulable:


kubectl get nodes
# replace <cp-node-name> with the name of your control plane node
kubectl uncordon <cp-node-name>

Now check the status

 

kubectl get nodes
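At this point the control plane nodes should report the new version while the workers are still on the old one; a sample of what the output might look like (node names and ages will differ):

NAME       STATUS   ROLES    AGE    VERSION
master-1   Ready    master   300d   v1.18.20
master-2   Ready    master   300d   v1.18.20
worker-1   Ready    <none>   300d   v1.17.11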

Now the version of all the control plane nodes has been upgraded to the desired version.

Upgrade the worker nodes

Upgrade the worker nodes one at a time, or a few at a time, without compromising the minimum capacity required to run your workloads.
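Before draining a node it helps to see what is scheduled on it, so you know which workloads will be evicted. A quick way to list them (the node name is a placeholder):

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-to-drain>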

Repository check
Make sure that the latest Kubernetes yum repository is configured on the worker node.

 

vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl

Determine the stable version to upgrade to

 

yum list --showduplicates kubeadm --disableexcludes=kubernetes

In our case, we are upgrading the Kubernetes cluster to version 1.18.20.

Upgrade the kubeadm package

 

yum install -y kubeadm-1.18.20-0 --disableexcludes=kubernetes

Upgrade the kubelet configuration

sudo kubeadm upgrade node

Drain the worker node

# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
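If the drain is blocked by pods that use emptyDir volumes, the kubectl of this era accepts a --delete-local-data flag to evict them anyway; note that the data in those volumes is lost:

kubectl drain <node-to-drain> --ignore-daemonsets --delete-local-data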

Upgrade kubelet and kubectl packages

yum install -y kubelet-1.18.20-0 kubectl-1.18.20-0 --disableexcludes=kubernetes
Restart the service

 

sudo systemctl daemon-reload
sudo systemctl restart kubelet

Uncordon the node

 

# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>

Verification

The final step is to verify the cluster upgrade. Run the command below to check that all the nodes (worker and control plane) have been upgraded to the same version:

kubectl get nodes
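To check just the kubelet version reported by every node, a jsonpath query also works (a sketch using the standard node status fields):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.kubeletVersion}{"\n"}{end}'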
