How to Upgrade a Kubernetes Cluster with `kubeadm` in 2024


TL;DR: I recently upgraded my Garage Kubernetes Lab cluster from 1.28 to 1.29. Here’s how I did it.
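One thing worth checking before touching anything: kubeadm only supports moving up one minor version at a time (1.28 → 1.29 is fine, 1.28 → 1.30 is not). A minimal sketch of that check in bash — the function name and the naive version parsing are my own, not anything kubeadm ships:

```shell
#!/usr/bin/env bash
# Hypothetical pre-flight helper: succeeds only when the target version
# stays on the same minor as the current one, or steps up exactly one.
minor_skew_ok() {
  local cur_minor tgt_minor
  cur_minor=$(echo "$1" | cut -d. -f2)
  tgt_minor=$(echo "$2" | cut -d. -f2)
  [ "$tgt_minor" -eq "$cur_minor" ] || [ "$tgt_minor" -eq $((cur_minor + 1)) ]
}

# example: minor_skew_ok 1.28.10 1.29.5  # succeeds
# example: minor_skew_ok 1.28.10 1.30.0  # fails
```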

Upgrading the Control Plane

First, the Linux package repository needs to be updated to include kubeadm 1.29. This can be done as follows (my cluster is built on Ubuntu 22.04; for other Linux distributions, please refer to the official docs):

# in a root shell of the master node
# make sure the keyrings directory exists, then download the release key
$ mkdir -p /etc/apt/keyrings
$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# add the repository
$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
# install the v1.29 version of kubeadm
# (if you previously pinned the packages with apt-mark hold, unhold them first)
$ apt-get update
$ apt-get install -y kubeadm

Then the control plane can be upgraded with the following commands:

# verify the version just in case
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"29" ...
# plan the upgrade to 1.29 first
$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
...
# if no errors show up, the plan can be applied
$ kubeadm upgrade apply v1.29.5
...
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.29.5". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
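If you only want a bare version string out of that Go-struct output, `kubeadm version -o short` prints one directly. Alternatively, a sed one-liner can pull the major.minor pair from the line shown above — a sketch, tested only against the output format printed here:

```shell
# extract "major.minor" from kubeadm's &version.Info output
echo 'kubeadm version: &version.Info{Major:"1", Minor:"29", ...}' \
  | sed -n 's/.*Major:"\([0-9]*\)", *Minor:"\([0-9]*\)".*/\1.\2/p'
# prints: 1.29
```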

As the output suggests, it’s now time to upgrade the other two components: kubelet and kubectl.

$ apt-get install -y kubelet kubectl
# and restart the kubelet service just in case
$ systemctl restart kubelet

That’s all for the control plane!

Upgrading the Worker Nodes

The steps to upgrade a worker node are not very different from the ones above. The apt repository needs to be configured in exactly the same way; after that, the process diverges:

# in a root shell of a worker node
# install kubeadm
$ apt-get install -y kubeadm
# verify the version
$ kubeadm version
# no need to do the plan or apply, just upgrade
$ kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1230437851/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

# in a shell where you have kubectl access to the cluster, i.e. your laptop
$ kubectl drain --ignore-daemonsets --delete-emptydir-data worker-node-name
...
# this might take a while depending on the size of the node and any PodDisruptionBudgets
# wait until it finishes before moving on

# back in the root shell of the node
$ apt-get install -y kubelet kubectl
$ systemctl restart kubelet

# back to your laptop
$ kubectl uncordon worker-node-name
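Repeating this upgrade / drain / uncordon dance per node gets old quickly. Here’s how I’d sketch a wrapper for it — everything in it (the helper names, passwordless root SSH to the nodes, the DRY_RUN switch) is my own assumption, not something kubeadm provides:

```shell
#!/usr/bin/env bash
set -euo pipefail

# run: echoes the command instead of executing it when DRY_RUN=1,
# so the sketch can be sanity-checked without touching the cluster
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }

# upgrade_worker: the per-node routine from above, in the same order
upgrade_worker() {
  local node="$1"
  # on the node: bring kubeadm to the new version and update the node config
  run ssh "root@$node" "apt-get update && apt-get install -y kubeadm && kubeadm upgrade node"
  # from the workstation: move workloads off the node
  run kubectl drain --ignore-daemonsets --delete-emptydir-data "$node"
  # on the node: upgrade kubelet/kubectl and restart the service
  run ssh "root@$node" "apt-get install -y kubelet kubectl && systemctl restart kubelet"
  # let workloads schedule onto the node again
  run kubectl uncordon "$node"
}

# example (prints the commands only):
# DRY_RUN=1 upgrade_worker knode3
```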

After repeating the above routine on all your worker nodes, the result can be verified with satisfaction:

# on laptop (k is an alias for kubectl)
$ k get nodes
NAME       STATUS   ROLES           AGE    VERSION
kmaster2   Ready    control-plane   314d   v1.29.5
knode3     Ready    <none>          313d   v1.29.5
knode4     Ready    <none>          313d   v1.29.5
...
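As a final sanity check, every row of that output should show the same VERSION. A small awk filter does it — the helper is my own, and it assumes the default `kubectl get nodes` column layout where VERSION is the fifth field:

```shell
# prints each distinct version found in `kubectl get nodes` output;
# a fully upgraded cluster yields exactly one line
distinct_versions() { awk 'NR > 1 { seen[$5] = 1 } END { for (v in seen) print v }'; }

# usage: kubectl get nodes | distinct_versions
```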

Done 🙂