There are many TLS certificates used by the core of a Kubernetes cluster; a well-known one is the client-server pair that kubectl uses
to authenticate to the cluster control plane.
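If you only want to know when that client certificate is going to expire, you can decode it straight out of the kubeconfig. A minimal sketch, assuming a Linux-style base64 -d and a kubeconfig with a single user entry (with several users it only shows the first match):

# decode the client certificate embedded in the kubeconfig and print its expiry date
grep client-certificate-data ~/.kube/config | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -enddate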
In my previous notes on how to renew certificates in a Kubernetes cluster built with kubeadm, I found the steps quite manual. Since Kubernetes v1.15 the whole process can be done much more simply, according to the official documentation. Here are the steps I used to renew my certificates this time.
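Before renewing anything, it is worth checking how long the current certificates have left. The same kubeadm binary that performs the renewal can report this (run on the master node; output omitted here):

# list every control-plane certificate with its expiry and residual time
sudo kubeadm certs check-expiration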
# on the master node, run as root
kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[renew] Error reading configuration from the Cluster. Falling back to default configuration

certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed

Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.

# it's easier to restart the kubelet service, which runs those static pods
systemctl restart kubelet.service
...
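As an optional sanity check after the kubelet restart, the on-disk certificates should now show a fresh expiry roughly one year out. A quick sketch, assuming the default kubeadm PKI layout under /etc/kubernetes/pki:

# run as root on the master node; repeat for any other certificate you care about
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
openssl x509 -noout -enddate -in /etc/kubernetes/pki/etcd/server.crt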
To update the ~/.kube/config on my laptop, I could simply replace the existing file with the newly generated one if I only managed one cluster. Since I manage more than one cluster, here is how to update the configuration for just this cluster with the new certs without damaging any other cluster's configuration (I use k as a shorthand for kubectl below).
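Before touching the local kubeconfig, it doesn't hurt to keep a copy around in case the merge goes wrong. A one-line sketch; the backup filename is arbitrary:

# back up the current kubeconfig before deleting and merging anything
cp ~/.kube/config ~/.kube/config.bak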
# copy the updated kubeconfig from the master node
ssh ubuntu@kmaster -- sudo cat /etc/kubernetes/admin.conf > /tmp/admin.conf

# delete current cluster, user, context from local kubeconfig
# by default, the kubeadm cluster is named kubernetes
k config delete-context kubernetes
k config delete-cluster kubernetes
k config delete-user kubernetes-admin

# merge the new admin.conf with current kubeconfig
KUBECONFIG=~/.kube/config:/tmp/admin.conf k config view --flatten > /tmp/config

# test the new kubeconfig
k --kubeconfig=/tmp/config get nodes
NAME       STATUS   ROLES                  AGE    VERSION
kmaster1   Ready    control-plane,master   376d   v1.21.0
...

# replace the current kubeconfig
mv /tmp/config ~/.kube/config
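To be sure the merge didn't disturb the other clusters' entries, a quick listing of what is left in the kubeconfig helps; a small sketch using the same k alias:

# the other clusters and contexts should all still be listed
k config get-contexts
k config get-clusters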
🙂