A Kubernetes ClusterSecret

No, at the moment ClusterSecret, unlike ClusterRole, doesn’t officially exist in any version of Kubernetes. I’ve seen some discussion like this, so it looks like it will be a while before we get a ClusterSecret.

But why do I need a ClusterSecret in the first place? The reason is very simple: to be DRY. Imagine I have a few apps deployed into several different namespaces, and they all need to pull from my private Docker registry. This looks like:

├── namespace-1
│   ├── image-pull-secret
│   └── deployment-app-1
├── namespace-2
│   ├── image-pull-secret
│   └── deployment-app-2
...
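
Each deployment references its namespace-local copy of the secret via imagePullSecrets; here’s a minimal sketch (the app name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
  namespace: namespace-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      imagePullSecrets:
        - name: image-pull-secret   # the secret that has to exist in every namespace
      containers:
        - name: app-1
          image: registry.example.com/app-1:latest   # placeholder image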

It’s plain to see that all the image-pull-secret secrets are identical, but since there’s no ClusterSecret they have to be duplicated all over the place. And to make things worse, if the private registry rotates its token, all of these secrets need to be updated at once.

Of course I’m not the first one to be frustrated by this, and the community has already built tools for it. The ClusterSecret operator is one of them. But when I looked at kubernetes-reflector I immediately liked its simple approach: it can reflect one source secret or configmap to many mirrored ones across namespaces! It’s also easy to integrate with an existing SealedSecret operator.

Here’s how to install kubernetes-reflector quickly with all default settings (copied from its README). I chose to save this file and let my FluxCD install it for me.

kubectl apply -f https://github.com/emberstack/kubernetes-reflector/releases/latest/download/reflector.yaml
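
The FluxCD route is simply a matter of committing that manifest into the git repo FluxCD watches instead of applying it directly; a sketch of that (the file name and commit message are just examples):

# download the manifest and let FluxCD apply it on its next sync
curl -sSL -o reflector.yaml \
  https://github.com/emberstack/kubernetes-reflector/releases/latest/download/reflector.yaml
git add reflector.yaml && git commit -m 'Add kubernetes-reflector' && git push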

Now I can create an image pull secret for my private Docker registry in the kube-system namespace, and the reflector will copy it to the namespaces that match the namespace whitelist regex.

The command to create an image pull secret is

kubectl create secret docker-registry image-pull-secret -n kube-system --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

The full command to create the sealed secret is (--dry-run=client -o yaml makes kubectl print the secret manifest for kubeseal to read, instead of creating the secret right away):

kubectl create secret docker-registry image-pull-secret -n kube-system --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email> --dry-run=client -o yaml | \
  kubeseal --controller-namespace=sealed-secrets --controller-name=sealed-secrets -o yaml > image-pull-secret.yaml

Then I’ll add a few magic annotations to let the reflector pick up the job:

# this is image-pull-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: image-pull-secret
  namespace: kube-system
spec:
  encryptedData:
    .dockerconfigjson: AgA4E6mcpri...
  template:
    metadata:
      creationTimestamp: null
      name: image-pull-secret
      namespace: kube-system
      annotations:
        reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "wordpress-.*"
status: {}

So when I deploy this file, the SealedSecret operator will first decrypt it into a normal secret carrying those annotations (note: adding annotations won’t break the encryption, but changing the name or namespace could). Then the reflector will create the image-pull-secret secret in every namespace whose name matches the wordpress-.* pattern.
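
To double-check that the reflection worked, the mirrored copy can be inspected in any matching namespace (wordpress-prod below is a hypothetical namespace name):

# the mirrored secret should show up in every namespace matching wordpress-.*
kubectl get secret image-pull-secret -n wordpress-prod -o yaml
# deployments in those namespaces can then reference it via imagePullSecrets as usual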

Mission accomplished 🙂

Hello World, Grafana Tanka

I liked YAML a lot, until it got longer and longer, and even longer.

There are tools to keep YAML DRY; the popular ones are Helm and Kustomize. But neither of them really gets the job done.

To be honest, I didn’t like Helm much from the start. Helm uses a templating syntax similar to Jinja2, or PHP, or ASP from 20 years ago. It tries to solve a modern problem with an ancient approach: generate text and substitute placeholders with values from variables. Obviously it works and a lot of people use it, but it bothers me because there are issues Helm can hardly resolve due to this design:

  • It’s a mixture of YAML and template scripts. When working on a chart, the developer also has to maintain YAML indentation inside the template fragments, to ensure valid YAML is generated.
  • The values of a Helm chart are the only parts that can be customized when reusing the chart. This hides the complexity of the full chart from end users, but if what a user wants isn’t exposed in the values, they have to fork the whole chart.

I liked Kustomize a bit more because it doesn’t break the elegance of YAML, i.e. the base and overlay Kustomize templates are both valid YAML files. But this is also its Achilles’ heel, because YAML is data, not code. It’s OK to have a few variables, but loops, functions, modules? Those are too much to ask.

I was introduced to Jsonnet a while ago, but why would I go back to JSON when I’m better with YAML? Then I happened to come across the Grafana Tanka project, and after a few minutes’ reading I felt Tanka had solved most of the problems that Helm and Kustomize did not. After I watched this presentation (to my surprise, the view count on that video is shockingly low) I decided to give it a go!

The first step is to install the tanka binary. I personally installed it as a Go module. The go get commands look like:

$ GO111MODULE=on go get github.com/grafana/tanka/cmd/tk
...
# install JsonnetBundler as it's recommended
$ GO111MODULE=on go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
...
$ tk --version
tk version dev

Then we can initialize a hello-world project for tanka:

$ mkdir tanka-hello
$ cd tanka-hello
$ tk init
GET https://github.com/ksonnet/ksonnet-lib/archive/0d2f82676817bbf9e4acf6495b2090205f323b9f.tar.gz 200
GET https://github.com/grafana/jsonnet-libs/archive/4d4b5b1ce01003547a110f93cc86b8b7afb282a6.tar.gz 200
Directory structure set up! Remember to configure the API endpoint:
`tk env set environments/default --server=https://127.0.0.1:6443`

The initial directory structure looks like this:

$ tree
.
├── environments
│   └── default
│       ├── main.jsonnet # <-- this is where we will do the hello-world
│       └── spec.json
├── jsonnetfile.json
├── jsonnetfile.lock.json
├── lib
│   └── k.libsonnet
└── vendor
    ├── github.com
    │   ├── grafana
    │   │   └── jsonnet-libs
    │   │       └── ksonnet-util
    │   │           ├── grafana.libsonnet
    │   │           ├── kausal.libsonnet
    │   │           ├── k-compat.libsonnet
    │   │           ├── legacy-custom.libsonnet
    │   │           ├── legacy-noname.libsonnet
    │   │           ├── legacy-subtypes.libsonnet
    │   │           ├── legacy-types.libsonnet
    │   │           └── util.libsonnet
    │   └── ksonnet
    │       └── ksonnet-lib
    │           └── ksonnet.beta.4
    │               ├── k8s.libsonnet
    │               └── k.libsonnet
    ├── ksonnet.beta.4 -> github.com/ksonnet/ksonnet-lib/ksonnet.beta.4
    └── ksonnet-util -> github.com/grafana/jsonnet-libs/ksonnet-util

To make this demo more intuitive, I ran an inotifywait command in the left pane and vim environments/default/main.jsonnet in the right pane.

# the inotifywait command explained:
# - every time the file is saved in vim, a moved_to event is triggered
# - the filename and event name are read into variables, but we don't actually need them
# - tk show then prints out the generated YAML
$ inotifywait -q -m -e moved_to environments/default/ | \
  while read -r filename event; do
    echo -e '\n\n'; tk show environments/default/
  done

As shown in the screenshot, after understanding a minimal set of Tanka syntax, I could get a full Deployment YAML out of a few lines of Jsonnet code.
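
Since the screenshot isn’t reproduced here, below is a minimal sketch of what environments/default/main.jsonnet could look like, loosely following the Tanka getting-started tutorial; the grafana/grafana image and the names are placeholders, and the import path assumes the ksonnet-util library vendored by tk init above.

// environments/default/main.jsonnet -- a minimal sketch, not the exact code in the screenshot
local k = import 'ksonnet-util/kausal.libsonnet';

{
  local deployment = k.apps.v1.deployment,
  local container = k.core.v1.container,

  grafana: {
    deployment: deployment.new(
      name='grafana',
      replicas=1,
      containers=[container.new('grafana', 'grafana/grafana')],
    ),
  },
}

Running tk show environments/default/ (as in the inotifywait loop above) renders this into a full Deployment manifest.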

🙂

Rebuild a Kubernetes Node Without Downtime

When I built the in-house Kubernetes cluster with Raspberry Pis, I followed the kubeadm instructions and installed Raspberry Pi OS on the Pis. It was all good except that Raspberry Pi OS is 32-bit. Now I want to install Ubuntu 20.04 Server ARM64 on this Pi, so below are the steps with which I rebuilt the node with Ubuntu, without disrupting the workloads running in my cluster.

First, I didn’t need to shut down the running node, because I’ve got a spare MicroSD card to prepare the Ubuntu image on. The instructions for writing the image to the MicroSD card are here. Once the card was prepared by the Imager, I kept it in the card reader because I wanted to set the IP address myself instead of taking the automatic IP assigned by default. A fixed IP makes more sense if I want to connect to it, right?

To set a static IP on the Ubuntu MicroSD card, open the system-boot/network-config file with a text editor and put in something like this:

version: 2
ethernets:
  eth0:
    # Rename the built-in ethernet device to "eth0"
    match:
      driver: bcmgenet smsc95xx lan78xx
    set-name: eth0
    addresses: [192.168.1.82/24]
    gateway4: 192.168.1.1
    nameservers:
      addresses: [192.168.1.1]
    optional: true

Now the new OS is ready. To take the node out of the cluster gracefully, drain it with

kubectl drain node-name
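# note: if the drain is blocked by daemonset-managed pods or pods with emptyDir volumes,
# it may also need --ignore-daemonsets and --delete-local-data (--delete-emptydir-data on newer kubectl)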
# wait until it finishes
# the pods on this node will be evicted and re-deployed into other nodes
kubectl delete node node-name

Then I powered down the Pi, replaced the MicroSD card with the one I had just prepared, and powered it back on. After a minute or two, I was able to ssh into the node with

# wipe the previous trusted server signature
ssh-keygen -R 192.168.1.82
# login, default password is ubuntu and will be changed upon first login
ssh ubuntu@192.168.1.82
# install ssh key, login with updated password
ssh-copy-id ubuntu@192.168.1.82

The node needs to be prepared for kubeadm; I used my good old ansible playbook for this task. The ansible-playbook command looks like

ansible-playbook -i inventory/cluster -l node-name kubeadm.yaml

At the moment I have to install less recent versions of docker and kubeadm to keep the new node compatible with the existing cluster.
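
For reference, a sketch of how the version pinning could look with apt on Ubuntu; the 1.19.7-00 version below is only a placeholder and should match whatever the existing control plane runs:

# pin kubelet/kubeadm/kubectl to the cluster's minor version (placeholder version shown)
sudo apt-get install -y kubelet=1.19.7-00 kubeadm=1.19.7-00 kubectl=1.19.7-00
sudo apt-mark hold kubelet kubeadm kubectl
# likewise, list the available docker versions and install a compatible one
apt-cache madison docker-ce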

When running the kubeadm join command, I encountered an error message saying CGROUPS_MEMORY: missing. This can be fixed with this.
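
As an aside, on a Raspberry Pi running Ubuntu the fix usually boils down to enabling the memory cgroup in the kernel command line and rebooting; I assume this is what the linked fix amounts to, and the file path below is for Ubuntu 20.04:

# /boot/firmware/cmdline.txt is a single line; append the cgroup options to it
sudo sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' /boot/firmware/cmdline.txt
sudo reboot

And one more thing is to create a new join token from the master node with the command: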

kubeadm token create

At last, the new node can be joined to the cluster with the command:

kubeadm join 192.168.1.80:6443 --token xxx     --discovery-token-ca-cert-hash sha256:xxx

The node will then be bootstrapped in a few minutes. I can tell it’s now ARM64:

k get node node-name -o yaml |rg arch
    beta.kubernetes.io/arch: arm64
    kubernetes.io/arch: arm64
...

🙂

OpenSSL Commands to Verify TLS Certs in Kubernetes Secrets

Sometimes a TLS cert deployed into a Kubernetes cluster in a Secret doesn’t work as expected. Here are some handy commands to verify such certs. The sample commands work with an Istio ingress gateway, but they should be adaptable to other ingress controllers without huge effort.

Commands to verify the cert served by your web-app

# Use openssl to retrieve cert and decrypt and print it out
# This can be used to verify that the correct cert is in use in a gateway
# use ctrl-c to end it
openssl s_client  -connect my.example.com:443 -showcerts -servername my.example.com |openssl x509 -noout -text

# Print out dates of the cert
openssl s_client  -connect my.example.com:443 -showcerts -servername my.example.com |openssl x509 -noout -dates

# Print out the subject/CN of the cert
openssl s_client  -connect my.example.com:443 -showcerts -servername my.example.com |openssl x509 -noout -subject

# Print out the subjectAltName/SAN of the cert
openssl s_client  -connect my.example.com:443 -showcerts -servername my.example.com |openssl x509 -noout -text |grep 'Subject Alternative Name' -A1

Commands to verify the cert installed in a secret

# This needs access to secrets so the cert secret can be downloaded and verified
kubectl get secret -n istio-system my-namespace-cert-secret -o yaml

# one-liner to print md5 hash of the X509 modulus from the cert
kubectl get secret -n istio-system my-namespace-cert-secret -o jsonpath={'.data.cert'} |base64 -d | openssl x509 -noout -modulus |openssl md5
# example output
c17642...

# one-liner to print md5 hash of the RSA modulus from the key
# this output has to match the previous one.
kubectl get secret -n istio-system my-namespace-cert-secret -o jsonpath={'.data.key'} |base64 -d | openssl rsa -noout -modulus |openssl md5
# example output
c17642...
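
And to tie the two halves together, the cert served by the gateway can be compared against the one stored in the secret (same placeholders as above); the two hashes should match:

# modulus of the cert currently served by the gateway
openssl s_client -connect my.example.com:443 -servername my.example.com </dev/null 2>/dev/null \
  |openssl x509 -noout -modulus |openssl md5
# modulus of the cert stored in the secret -- should print the same hash as above
kubectl get secret -n istio-system my-namespace-cert-secret -o jsonpath={'.data.cert'} \
  |base64 -d |openssl x509 -noout -modulus |openssl md5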

🙂