Since I just added 4 extra CPU cores and 32GB of memory to my garage Kubernetes lab cluster, there's enough capacity to let me juggle some upgrades. My cluster had been running Kubernetes version 1.22 for almost a year, a version that is already reaching end-of-life in many commercial managed Kubernetes offerings. After some reading I found that there's a major change in version 1.24: support for Docker as a container runtime has been dropped, which is rather huge.
It's been a busy year and I didn't get much time to work on my Kubernetes lab cluster, so there's a list of things pending upgrades: Calico, Istio, ArgoCD… Instead of upgrading each of them and chasing compatibility errors, I decided to build a fresh-ish (1.25 is out!) cluster and transfer all my workloads from the existing v1.22 cluster to v1.24. The plan looks like this:
Overall Plan
- create a new master node with all the shiny stuff
- drop some inessential workloads from the old cluster, such as the Minecraft server I deployed
- drain and delete a node from the old cluster
- clean-up the node and join it to the new cluster
- deploy some workload to the new cluster
- rinse and repeat from step 2
Step 1:
To prepare a server for kubeadm, I wrote an Ansible playbook for that repetitive task. It also includes steps to install and configure containerd (the Docker replacement) on Ubuntu Server.
```
# from the ansible-kubeadm repo on my laptop
# this will process the modprobe, sysctl, apt-get, etc. stuff
ansible-playbook -i inventory/cluster -l master2 kubeadm.yaml

# from master2, as root
kubeadm init --pod-network-cidr 10.246.0.0/16
```
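Under the hood, the containerd part of the playbook boils down to roughly the following. This is a hand-written sketch of the standard containerd-for-Kubernetes setup, not a dump of the actual playbook:

```
# load the kernel modules Kubernetes networking needs
modprobe overlay
modprobe br_netfilter

# sysctl settings required by kubeadm's preflight checks
cat <<EOF >/etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system

# install containerd and generate a default config
apt-get update && apt-get install -y containerd
mkdir -p /etc/containerd
containerd config default >/etc/containerd/config.toml

# kubelet expects the systemd cgroup driver on Ubuntu
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
```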
Then I followed the official instructions to install Calico and Istio.
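I won't duplicate those docs here, but the gist was something like this (the Calico version and URL are whatever the docs point at; treat this as a sketch):

```
# Calico via the hosted manifest (version per the docs of the day)
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
# note: the manifest defaults to 192.168.0.0/16; with my 10.246.0.0/16 pod CIDR,
# CALICO_IPV4POOL_CIDR in the manifest needs to be adjusted to match

# Istio via istioctl
istioctl install --set profile=default -y
```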
Steps 2 and 5 are basically deleting/deploying apps with ArgoCD, so I won't go into detail on those.
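For a flavor of it, the argocd CLI equivalent looks roughly like this (the context and app names here are made up for illustration):

```
# register the new cluster with ArgoCD (kubectl context name is hypothetical)
argocd cluster add kubernetes-admin@new-cluster

# drop an inessential app from the old cluster
argocd app delete minecraft

# retarget an app at the new cluster and sync it (app name is hypothetical)
argocd app set guestbook --dest-server https://<new master IP>:6443
argocd app sync guestbook
```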
Step 3:
```
# from where I can access the Kubernetes API of the old cluster
# (k is my alias for kubectl)
k drain node6 --ignore-daemonsets --delete-emptydir-data
k delete node node6
```
Step 4:
```
# ssh me@node6 and sudo of course

# reset kubeadm
kubeadm reset

# purge networking stuff
rm -rf /etc/cni/net.d/*
iptables -t nat -F && iptables -t nat -X \
  && iptables -t mangle -F && iptables -t mangle -X \
  && iptables -F && iptables -X

# bye bye docker
apt-get remove --purge docker-ce*
ip link delete docker0

# purge old kubeadm, etc.
apt-get remove --purge kubeadm kubelet kubectl
```
Then I ran the Ansible playbook on this node:
```
# from the ansible-kubeadm repo on my laptop
ansible-playbook -i inventory/cluster -l node6 kubeadm.yaml
```
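Before joining, a quick sanity check that containerd actually came up doesn't hurt (the ctr tool ships with containerd):

```
# on node6, as root: confirm containerd is running and responsive
systemctl is-active containerd
ctr version
```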
Once the node checks out, it's ready to join the new cluster:
```
# from the new master node, as root
kubeadm token create --print-join-command
# this will give out a kubeadm join command which can be used on the prepared node
# it looks like:
# kubeadm join <master IP>:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

# then from the node, as root
kubeadm join <master IP>:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

# from my laptop, to verify the new node
k get nodes
```
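Since the whole point was dropping Docker, it's also worth confirming the node registered with containerd as its runtime:

```
# the CONTAINER-RUNTIME column should read containerd://..., not docker://...
k get nodes -o wide
```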
I repeated these steps, migrating one node at a time, until all the old nodes had joined the new cluster.
🙂