A Kubernetes ClusterSecret

No, at this moment ClusterSecret, unlike ClusterRole, doesn't officially exist in any version of Kubernetes. I've seen some discussion like this, so it looks like it will be a while before we get a ClusterSecret.

But why do I need a ClusterSecret in the first place? The reason is very simple: to be DRY. Imagine I have a few apps deployed into several different namespaces, and they all need to pull from my private Docker registry. That looks like:

├── namespace-1
│   ├── image-pull-secret
│   └── deployment-app-1
├── namespace-2
│   ├── image-pull-secret
│   └── deployment-app-2
...

It's fairly obvious that all the image-pull-secret secrets are the same, but as there's no ClusterSecret they have to be duplicated all over the place. To make things worse, if the private registry rotates its token, all of these secrets need to be updated at once.

Of course I'm not the first one to be frustrated by this, and the community has already built tools for it. The ClusterSecret operator is one of them. But when I looked at kubernetes-reflector, I immediately liked its simple approach: it can reflect one source secret or configmap to many mirrored ones across namespaces! It's also easy to integrate with an existing SealedSecret operator.

Here's how to install kubernetes-reflector quickly with all default settings (copied from its README). I chose to save this file and let my FluxCD install it for me.

kubectl apply -f https://github.com/emberstack/kubernetes-reflector/releases/latest/download/reflector.yaml

Now I can create an image pull secret for my private Docker registry in the kube-system namespace, and the reflector will copy it to the namespaces matching the whitelist regex.

The command to create an image pull secret is:

kubectl create secret docker-registry image-pull-secret -n kube-system \
  --docker-server=<your-registry-server> --docker-username=<your-name> \
  --docker-password=<your-pword> --docker-email=<your-email>

The full command to create a sealed secret is:

# --dry-run=client -o yaml prints the secret manifest instead of creating it,
# so it can be piped into kubeseal
kubectl create secret docker-registry image-pull-secret -n kube-system \
  --docker-server=<your-registry-server> --docker-username=<your-name> \
  --docker-password=<your-pword> --docker-email=<your-email> \
  --dry-run=client -o yaml | \
  kubeseal --controller-namespace=sealed-secrets --controller-name=sealed-secrets -o yaml > image-pull-secret.yaml

Then I'll add a few magic annotations to let the reflector pick up the job:

# this is image-pull-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: image-pull-secret
  namespace: kube-system
spec:
  encryptedData:
    .dockerconfigjson: AgA4E6mcpri...
  template:
    metadata:
      creationTimestamp: null
      name: image-pull-secret
      namespace: kube-system
      annotations:
        reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "wordpress-.*"
status: {}

So when I deploy this file, the SealedSecret operator will first decrypt it into a normal secret carrying those annotations (note: adding annotations won't break the decryption, but changing the name or namespace could). Then the reflector will create image-pull-secret secrets in all namespaces whose names start with the wordpress- prefix.
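
To close the loop, here's a minimal sketch of how a workload in one of the mirrored namespaces would consume the secret; the wordpress-blog namespace, the app name and the image are all hypothetical:

# hypothetical deployment consuming the mirrored secret
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
  namespace: wordpress-blog
spec:
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      imagePullSecrets:
        - name: image-pull-secret   # the mirror created by the reflector
      containers:
        - name: app-1
          image: my-registry.example.com/app-1:latest

Once the reflector has done its job, the mirrors can be listed with kubectl get secret -A --field-selector metadata.name=image-pull-secret.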

Mission accomplished 🙂

Hello World, Grafana Tanka

I liked YAML a lot, until it got longer and longer, and even longer.

There are tools to make YAML 'DRY'; the popular ones are Helm and Kustomize. But neither of them can claim to have got the job done.

To be honest, I didn't like Helm much from the start. Helm uses a templating syntax similar to Jinja2, or PHP, or ASP from 20 years ago. It tries to solve a recent problem with an ancient approach: generate text and substitute placeholders with values from variables. Obviously it works, and a lot of people are using it, but it bothers me that there are issues Helm can hardly resolve due to this design:

  • It's a mixture of YAML and scripts. When working on a chart, the developer has to maintain YAML indentation inside the code fragments too, to ensure the generated output is valid YAML.
  • The values of a Helm chart are the only parts that can be customized when reusing the chart. This hides the complexity of the full chart from end users, but if what a user wants isn't exposed in the values, they have to fork the whole chart.

I liked Kustomize a bit more because it doesn't break the elegance of YAML, i.e. the base and overlay Kustomize templates are both valid YAML files. But this is also its Achilles' heel: YAML is data, not code. It's OK to have a few variables, but loops, functions, modules? Those are too much to ask.

I was introduced to Jsonnet a while ago, but why should I go back to JSON when I'm better with YAML? However, I happened to come across the Grafana Tanka project, and after a few minutes' reading I thought Tanka had solved most of the problems that Helm and Kustomize did not. After I watched this presentation (to my surprise, the view count on this video is shockingly low) I decided to give it a go!

The first step is to install the tanka binary. I personally installed it as a Go module. The go get commands look like:

$ GO111MODULE=on go get github.com/grafana/tanka/cmd/tk
...
# install JsonnetBundler as it's recommended
$ GO111MODULE=on go get github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
...
$ tk --version
tk version dev

Then we can initialize a hello-world project for tanka:

$ mkdir tanka-hello
$ cd tanka-hello
$ tk init
GET https://github.com/ksonnet/ksonnet-lib/archive/0d2f82676817bbf9e4acf6495b2090205f323b9f.tar.gz 200
GET https://github.com/grafana/jsonnet-libs/archive/4d4b5b1ce01003547a110f93cc86b8b7afb282a6.tar.gz 200
Directory structure set up! Remember to configure the API endpoint:
`tk env set environments/default --server=https://127.0.0.1:6443`

The initial directory structure looks like this:

$ tree
.
├── environments
│   └── default
│       ├── main.jsonnet # <-- this is where we will do the hello-world
│       └── spec.json
├── jsonnetfile.json
├── jsonnetfile.lock.json
├── lib
│   └── k.libsonnet
└── vendor
    ├── github.com
    │   ├── grafana
    │   │   └── jsonnet-libs
    │   │       └── ksonnet-util
    │   │           ├── grafana.libsonnet
    │   │           ├── kausal.libsonnet
    │   │           ├── k-compat.libsonnet
    │   │           ├── legacy-custom.libsonnet
    │   │           ├── legacy-noname.libsonnet
    │   │           ├── legacy-subtypes.libsonnet
    │   │           ├── legacy-types.libsonnet
    │   │           └── util.libsonnet
    │   └── ksonnet
    │       └── ksonnet-lib
    │           └── ksonnet.beta.4
    │               ├── k8s.libsonnet
    │               └── k.libsonnet
    ├── ksonnet.beta.4 -> github.com/ksonnet/ksonnet-lib/ksonnet.beta.4
    └── ksonnet-util -> github.com/grafana/jsonnet-libs/ksonnet-util

To make this demo more intuitive, I ran an inotifywait command in the left pane and vim environments/default/main.jsonnet in the right pane:

# the inotifywait command explained: every time the file is saved in vim,
# a moved_to event fires on the watched directory; the changed filename and
# event name are read but not actually used -- we just re-render the
# environment and print the generated YAML
$ inotifywait -q -m -e moved_to environments/default/ | \
  while read -r filename event; do
    echo -e '\n\n'
    tk show environments/default/
  done

As shown in the screenshot, after I understood a minimal set of Tanka syntax, I could get a Deployment YAML out of a few lines of Jsonnet code.
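
For reference, a minimal main.jsonnet along those lines might look like the sketch below; it assumes the ksonnet-util library vendored by tk init above, and the app name and image are placeholders:

// environments/default/main.jsonnet -- a minimal sketch
local k = import 'ksonnet-util/kausal.libsonnet';

{
  local deployment = k.apps.v1.deployment,
  local container = k.core.v1.container,
  local port = k.core.v1.containerPort,

  hello: {
    deployment: deployment.new(
      name='hello-world',
      replicas=1,
      containers=[
        container.new('hello', 'nginxdemos/hello')
        + container.withPorts([port.new('http', 80)]),
      ],
    ),
  },
}

Running tk show environments/default/ against this renders a complete apps/v1 Deployment.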

🙂

Rebuild a Kubernetes Node Without Downtime

When I built my in-house Kubernetes cluster with Raspberry Pis, I followed the kubeadm instructions and installed Raspberry Pi OS on the Pis. It was all good, except that Raspberry Pi OS is 32-bit. Now I want to install Ubuntu 20.04 Server ARM64 on this Pi. Below are the steps with which I rebuilt the node with Ubuntu, without disrupting the workloads running in my cluster.

First, I didn't need to shut down the running node, because I had a spare MicroSD card on which to prepare the Ubuntu image. The instructions for writing the image to the MicroSD card are here. Once the card was prepared by the Imager, I kept it in the card reader, because I wanted to set a static IP address instead of the automatic one assigned by default. A fixed IP makes more sense if I want to connect to the node, right?

To set a static IP on the Ubuntu MicroSD card, open the system-boot/network-config file with a text editor and put in something like this:

version: 2
ethernets:
  eth0:
    # Rename the built-in ethernet device to "eth0"
    match:
      driver: bcmgenet smsc95xx lan78xx
    set-name: eth0
    addresses: [192.168.1.82/24]
    gateway4: 192.168.1.1
    nameservers:
      addresses: [192.168.1.1]
    optional: true

Now the new OS is ready. To gracefully shut down the node, drain it with

# --ignore-daemonsets is usually needed, as DaemonSet pods (e.g. kube-proxy)
# can't be evicted
kubectl drain node-name --ignore-daemonsets
# wait until it finishes
# the pods on this node will be evicted and re-deployed onto other nodes
kubectl delete node node-name

Then I powered down the Pi, replaced the MicroSD card with the one I had just prepared, and powered it back on. After a minute or two, I was able to SSH into the node with

# wipe the previous trusted server signature
ssh-keygen -R 192.168.1.82
# login, default password is ubuntu and will be changed upon first login
ssh ubuntu@192.168.1.82
# install ssh key, login with updated password
ssh-copy-id ubuntu@192.168.1.82

The node needs to be prepared for kubeadm; I used my good old Ansible playbook for this task. The ansible-playbook command looks like

ansible-playbook -i inventory/cluster -l node-name kubeadm.yaml

At the moment I have to install older versions of Docker and kubeadm, to keep the new node compatible with the existing cluster.
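
A sketch of what that pinning looks like with apt; the version strings here are hypothetical and need to match whatever the existing cluster is running:

# hypothetical version pins -- match them to the existing cluster
apt-get install -y docker-ce=5:19.03.14~3-0~ubuntu-focal \
  kubelet=1.19.7-00 kubeadm=1.19.7-00 kubectl=1.19.7-00
# stop apt from upgrading them behind my back
apt-mark hold docker-ce kubelet kubeadm kubectl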

When running the kubeadm join command I encountered an error message saying CGROUPS_MEMORY: missing. This can be fixed with this; on Ubuntu for a Raspberry Pi it typically boils down to appending cgroup_enable=memory cgroup_memory=1 to the kernel command line in /boot/firmware/cmdline.txt and rebooting. One more thing is to create a new token on the master node with the command:

kubeadm token create
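
Alternatively, kubeadm can print the whole join command, including the discovery token CA cert hash, in one go:

kubeadm token create --print-join-command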

Finally, the new node can join the cluster with the command:

kubeadm join 192.168.1.80:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

The node will then be bootstrapped in a few minutes. I can tell it's now ARM64:

# `k` is an alias for kubectl; rg is ripgrep
k get node node-name -o yaml |rg arch
    beta.kubernetes.io/arch: arm64
    kubernetes.io/arch: arm64
...

🙂

Ethereum(Crypto) Mining with Nvidia 3070(Ampere) on Ubuntu 20.04 with Renewable Energy

Snagged a pair of RTX 3070 cards. With only 2 cards this is more like an experiment than an investment.

I've done crypto mining before, and since the price is now almost at an all-time high I'll do it again, but only with my solar energy. Mining with dirty coal power isn't ethical any more, as climate change has accelerated in the past few years.

To start ETH mining, here are some prerequisites:

  • Energy-efficient video cards; in this case I got the RTX 3070. The 3060 Ti is also a good choice but it was already sold out
  • A desktop computer where you can attach multiple video cards to PCI Express slots. I'm not focusing on hardware installation here, i.e. not showing how to install the cards and connect cables, etc.
  • My OS is Ubuntu 20.04, so I chose the t-rex miner, which has better support for the Nvidia Ampere architecture. The releases can be found here

Here are the steps with which I set up t-rex miner on my Ubuntu 20.04 desktop:

# as root user
sudo -i
# install nvidia 460 driver for Ubuntu
apt install nvidia-driver-460
# install t-rex to /opt/t-rex
mkdir /opt/t-rex
wget https://github.com/trexminer/T-Rex/releases/download/0.19.9/t-rex-0.19.9-linux-cuda11.1.tar.gz
tar -zxvf t-rex-0.19.9-linux-cuda11.1.tar.gz -C /opt/t-rex
# change ownership for security reasons
chown -R nobody:nogroup /opt/t-rex
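
Before going further, it's worth confirming that the driver actually sees the cards (a reboot may be needed right after installing it):

# list every GPU visible to the driver
nvidia-smi -L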

Now in the /opt/t-rex directory there are many shell script (.sh) files. I was going to use ethermine.org, so I had a look at ETH-ethermine.sh, which contains:

#!/bin/sh
./t-rex -a ethash -o stratum+tcp://eu1.ethermine.org:4444 -u <ETH wallet address> -p x -w <worker name>

Since I'm proudly an experienced Linux user, I chose to create a systemd service out of that shell script:

# cat /etc/systemd/system/ethminer.service 
[Unit]
Description=Ethereum Miner

[Service]
Type=simple
User=nobody
ExecStart=/opt/t-rex/t-rex -a ethash -o stratum+tcp://us2.ethermine.org:4444 -u <my ETH wallet address> -p "" -w <my worker name, can be hostname>

Restart=on-failure

[Install]
WantedBy=multi-user.target

I chose the us2 node as it's geographically close to me. The service runs as the nobody user, so it can't do much harm to my system even if it wanted to. The service can then be enabled and started with systemctl:

# reload systemd as a new service is added
# following commands run as root user
systemctl daemon-reload
# enable the service so it starts automatically
systemctl enable ethminer
# start the service
systemctl start ethminer
# check status
systemctl status -l ethminer
# watch the logs
journalctl -f |grep t-rex
Jan 24 13:55:30 hostname t-rex[6621]: 20210124 13:55:30 Mining at us2.ethermine.org:4444, diff: 4.00 G
...

According to other miners online, the power limit of the 3070 is better set to 50% (130W): the card runs hotter at higher wattage, but that doesn't make it compute any faster. Here's how I use a cron job to set the power limit, except when I'm playing a game (assuming I'll stop the miner when playing a game on this machine).

EDIT: 115W is good enough.

# still as root user, as only root can use the nvidia-smi command
# crontab -l
# only set the power limit while t-rex is running; note that
# >>/var/log/nvidia.log 2>&1 sends both stdout and stderr to the log
*/10 * * * * /bin/ps -C t-rex && /usr/bin/nvidia-smi -i 0 -pl 115 >>/var/log/nvidia.log 2>&1

This can be verified in t-rex's logs:

journalctl -f |grep t-rex
Jan 24 13:55:30 hostname t-rex[6621]: 20210124 13:55:30 GPU #0: Gigabyte RTX 3070 - 52.07 MH/s, [T:53C, P:129W, F:60%, E:404kH/W], 1370/1370 R:0%
# it's running at 129W, the temperature is 53°C, and the fan speed is cruising at 60%
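
The power limit can also be checked with nvidia-smi directly, independent of the miner logs:

# query the live power draw and the enforced limit of GPU 0
nvidia-smi -i 0 --query-gpu=power.draw,power.limit --format=csv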

Regarding mining solely with solar energy, there are three approaches:

  • Switch electricity suppliers to a renewable-friendly one such as Ember, so you can use solar energy generated by the community, enjoy the low wholesale price, and mine crypto when the sun shines. This requires API access from the supplier, so you know when the energy is renewable and cheap
  • Install your own solar and use it to mine crypto when the sun shines. This requires API access to your inverter, so you know when there's enough solar generation to start mining (see the sketch after this list)
  • Install solar plus a battery, so mining with your own solar energy is guaranteed, until the battery runs flat of course
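
For the second option, a minimal sketch of the control loop could be a cron-driven shell script like the one below; the inverter endpoint, JSON field, and wattage threshold are all hypothetical, as every inverter vendor exposes this differently:

#!/bin/sh
# start the miner only while the (hypothetical) inverter reports enough solar output
WATTS=$(curl -s http://192.168.1.50/api/live | jq -r '.production_watts')
if [ "$WATTS" -gt 300 ]; then
  systemctl start ethminer
else
  systemctl stop ethminer
fi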

I’m going with the last option 🙂