Renew Certificates Used in Kubeadm Kubernetes Cluster

It’s been more than a year since I built my Kubernetes cluster with some Raspberry Pis. There were a few times when I needed to power down everything to let electricians do their work, and the cluster came back online and seemed to be OK afterwards, given that I didn’t shut down the Pis properly at all.

Recently I found that I had lost contact with the cluster; it looked like this:

$ kubectl get node
The connection to the server 192.168.x.x:6443 was refused - did you specify the right host or port?

My first thought was that the cluster must have been hacked, since it had been on auto-pilot for months. But I could still SSH into the master node, so it wasn’t that bad. I saw this error in the logs of kubelet.service:

Sep 23 15:58:05 kmaster kubelet[1233]: E0923 15:58:05.341773    1233 bootstrap.go:263] Part of the existing bootstrap client certificate is expired: 2020-09-15 10:40:36 +0000 UTC

That makes perfect sense! The anniversary was just a few days ago, and the certificates generated by kubeadm only last a year by default. Here’s the StackOverflow answer which I found very helpful for this issue.
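
Before regenerating anything, it’s worth confirming which certificates have actually expired. A quick check with openssl, or with kubeadm’s own subcommand (still under alpha on this 1.15 cluster, if the version has it), looks roughly like this:

# inspect a single certificate's expiry date
$ openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt
# or let kubeadm list all of them
$ kubeadm alpha certs check-expiration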

I ran the following commands on the master node and the API server came back to life:

$ mkdir -p /tmp/backup   # keep the old certs and configs somewhere safe
$ cd /etc/kubernetes/pki/
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} /tmp/backup
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
$ cd /etc/kubernetes/
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} /tmp/backup
$ kubeadm init phase kubeconfig all
$ systemctl restart kubelet.service

I’m not sure whether all the new certs will be distributed to the nodes automatically, but at least the API server didn’t complain anymore. I might do a kubeadm upgrade soon.

$ kubectl get node
NAME      STATUS     ROLES    AGE    VERSION
kmaster   NotReady   master   372d   v1.15.3
knode1    NotReady   <none>   372d   v1.15.3
knode2    NotReady   <none>   372d   v1.15.3

EDIT: After the certs were renewed, the kubelet service couldn’t authenticate anymore and the nodes appeared NotReady. This can be fixed by deleting the obsolete kubelet client certificate:

$ ls /var/lib/kubelet/pki -lht
total 28K
-rw------- 1 root root 1.1K Sep 23 19:12 kubelet-client-2020-09-23-19-12-52.pem
lrwxrwxrwx 1 root root   59 Sep 23 19:12 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2020-09-23-19-12-52.pem
-rw------- 1 root root 2.7K Sep 23 19:12 kubelet-client-2020-09-23-19-12-51.pem
-rw------- 1 root root 1.1K Jun 17 00:56 kubelet-client-2020-06-17-00-56-59.pem
-rw------- 1 root root 1.1K Sep 16  2019 kubelet-client-2019-09-16-20-41-53.pem
-rw------- 1 root root 2.7K Sep 16  2019 kubelet-client-2019-09-16-20-40-40.pem
-rw-r--r-- 1 root root 2.2K Sep 16  2019 kubelet.crt
-rw------- 1 root root 1.7K Sep 16  2019 kubelet.key
$ rm /var/lib/kubelet/pki/kubelet-client-current.pem
$ systemctl restart kubelet.service
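
After the restart, the kubelet should bootstrap a fresh client certificate and recreate the kubelet-client-current.pem symlink. As a quick sanity check (not part of the original fix), the new certificate’s expiry can be inspected:

$ openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem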

🙂

Use Variables in Kustomize

Variables in Kustomize are handy helpers from time to time; with them I can link settings together that should always share the same value. Without variables I would probably need a template engine like Jinja2 to do the same trick.

Some examples here.

In my case, there’s a bug in Kustomize as of now (3.6.1) where ConfigMap object names don’t get properly suffixed in a patch file. The issue is here. I can, however, use a variable to work around this bug. Imagine a scenario where I have a ConfigMap in a base template and it is referenced in a patch file:

# common/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
  - name: common
    literals:
      - TEST=YES

# test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test
bases:
  - ../base
  - ../common
nameSuffix: -raynix
patchesStrategicMerge:
  - patch.yaml

# test/patch.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  template:
    spec:
      volumes:
        - name: common
          configMap:
            name: common
            # this should be linked to the configMap in common/kustomization.yaml but it won't be updated with a hash and suffix.

Using a variable can get around this bug. Please see the following example:

# common/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configurations:
  - configuration.yaml
configMapGenerator:
  - name: common
    literals:
      - TEST=YES
vars:
  - name: COMMON
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: common
    fieldref:
      # this can be omitted as metadata.name is the default fieldPath 
      fieldPath: metadata.name
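
The configuration.yaml referenced above tells Kustomize where the variable may be expanded, since by default $(VAR) substitution only happens in certain well-known fields such as container commands, args and env values. A minimal sketch, assuming the variable is only needed for the volume’s ConfigMap name, could look like this:

# common/configuration.yaml
varReference:
  - path: spec/template/spec/volumes/configMap/name
    kind: Deployment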

# test/kustomization.yaml unchanged

# test/patch.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  template:
    spec:
      volumes:
        - name: common
          configMap:
            name: $(COMMON)
            # now $(COMMON) will be updated with whatever the real configmap name is
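
To verify the workaround, the overlay can be rendered locally; the volume’s ConfigMap reference in the Deployment should now match the actual generated ConfigMap name, hash and -raynix suffix included:

$ kustomize build test/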

Problem solved 🙂

Using Sealed Secrets in a Raspberry Pi Kubernetes Cluster

Sealed Secrets is a Bitnami Kubernetes operator that one-way encrypts secrets into SealedSecrets so that they can be safely checked into GitHub or another VCS. It’s rather easy to install and use Sealed Secrets in a Kubernetes cluster on the AMD64 architecture, but not so on my Raspberry Pi cluster.

First, the container image for the sealed-secrets-controller wasn’t built for the ARM architecture. I managed to build it on my Raspberry Pi 2 with the following commands:

git clone https://github.com/bitnami-labs/sealed-secrets.git
cd sealed-secrets
# golang build tools are needed here
make controller.image
# you can tag it to your docker registry instead of mine
docker tag quay.io/bitnami/sealed-secrets-controller:latest raynix/sealed-secrets-controller-arm:latest
docker push raynix/sealed-secrets-controller-arm
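
To make sure the image really was built for ARM before pointing the cluster at it, the architecture recorded in the image metadata can be checked (an optional sanity check, not part of the original steps):

# should print an ARM variant rather than amd64
docker image inspect raynix/sealed-secrets-controller-arm:latest --format '{{.Architecture}}'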

The next step is to use Kustomize to override the default sealed-secrets deployment so that it uses my newly built container image that runs on ARM:

# kustomization.yaml
# controller.yaml is from https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.7/controller.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: sealed-secrets
images:
  - name: quay.io/bitnami/sealed-secrets-controller
    newName: raynix/sealed-secrets-controller-arm
    newTag: latest
patchesStrategicMerge:
  - patch.yaml

resources:
  - controller.yaml
  - ns.yaml

# ns.yaml
# I'd like to install the controller into its own namespace
apiVersion: v1
kind: Namespace
metadata:
  name: sealed-secrets

# patch.yaml
# apparently the controller running on Raspberry Pi 4 needs more time to initialize
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sealed-secrets-controller
spec:
  template:
    spec:
      containers:
        - name: sealed-secrets-controller
          readinessProbe:
            initialDelaySeconds: 100

Then the controller can be deployed with the command kubectl apply -k .

The CLI installation is much easier on a Linux laptop. After kubeseal is installed, the public key used to encrypt secrets can be obtained from the controller deployed above. Since I installed the controller in its own namespace, sealed-secrets, instead of the default kube-system, the command to encrypt secrets is a bit different:

kubectl create secret generic test-secret --from-literal=username=admin --from-literal=password=password --dry-run -o yaml | \
  kubeseal --controller-namespace=sealed-secrets -o yaml > sealed-secrets.yaml
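
The generated file is a SealedSecret manifest in which each value has been replaced by ciphertext that only the controller can decrypt. It looks roughly like this (encrypted values shortened to placeholders, and the namespace depends on the input secret or current kubectl context):

# sealed-secrets.yaml (abridged)
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: test-secret
  namespace: default
spec:
  encryptedData:
    password: AgB3...   # long base64-encoded ciphertext
    username: AgA5...   # long base64-encoded ciphertext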

Then the generated file sealed-secrets.yaml can be deployed with kubectl apply -f sealed-secrets.yaml, and a secret called test-secret will be created. Now feel free to check sealed-secrets.yaml into a public GitHub repository!

🙂

Customize the Kustomize for Kubernetes CRDs

I’ve introduced Kustomize in this earlier post; now I feel even happier because Kustomize can be customized even further for CRDs (Custom Resource Definitions). For instance, Kustomize doesn’t know how to handle Istio’s VirtualService objects, but with some simple YAML configuration it ‘learns’ to handle them easily.

# k8s service, ie. service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-svc
spec:
  selector:
    app: wordpress
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: NodePort

# istio virtual service. ie. virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wordpress-vs
spec:
  hosts: 
    - wordpress
  http:
    - match:
      - uri:
          prefix: /
      route:
        - destination:
            host: wordpress-svc
            port:
              number: 80

# kustomize name reference, ie. name-reference.yaml
nameReference:
  - kind: Service
    version: v1
    fieldSpecs:
      - path: spec/http/route/destination/host
        kind: VirtualService

# main kustomize entry, ie. kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - service.yaml
  - virtual-service.yaml
configurations:
  - name-reference.yaml
namespace: wordpress
nameSuffix: -raynix

So with name-reference.yaml, Kustomize learns that the host property in a VirtualService is linked to the metadata.name of a Service. When the name suffix -raynix is applied to the Service, it will also be applied to the host reference in the VirtualService, e.g.

kind: Service
metadata:
  name: wordpress-svc-raynix
...

kind: VirtualService
spec:
  http:
    - route:
      - destination:
          host: wordpress-svc-raynix
...
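
To see the whole transformation end to end, the overlay can be rendered locally or applied directly:

# render locally and inspect the suffixed names
$ kustomize build .
# or apply straight to the cluster
$ kubectl apply -k .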

For more information: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md

🙂