Kubernetes: How to Use Affinity

Affinity is a great feature in Kubernetes for assigning pods to nodes based on labels. In my case I have a hybrid Kubernetes cluster where half of the nodes are X86 and the other half are ARM, and I need to deploy X86-only containers onto the X86 nodes. Of course I could build multi-arch containers to get rid of this restriction too, but let’s see how Affinity works first.

All nodes carry a label describing their CPU architecture, and those labels can be printed out like this:

# the trick in the jsonpath is to escape the dot "." and slash "/" in the label key, in this example kubernetes.io/arch
kubectl get node -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.io\/arch}{"\n"}{end}'
kmaster	arm
knode1	arm
knode2	arm
knode3	amd64
knode4	amd64
knode5	amd64
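
A simpler alternative that avoids the jsonpath escaping is to let kubectl print the label as an extra column:

kubectl get nodes -L kubernetes.io/arch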

To deploy a Pod (or Deployment, StatefulSet, etc.) with affinity, put it into the pod’s spec, e.g.

# this is only a partial example of a deployment with affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values:
                    - amd64

Pods from the Deployment above will only be scheduled onto nodes of X86 (amd64) architecture.
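
To double check, list the pods with their assigned nodes and compare against the node labels from earlier:

kubectl get pods -o wide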

Note: requiredDuringSchedulingIgnoredDuringExecution is a hard requirement; if it’s not met the pod won’t be scheduled at all. For a soft requirement, preferredDuringSchedulingIgnoredDuringExecution should be used instead.
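
As a sketch, a soft preference for amd64 nodes would look roughly like this (partial pod spec; the weight of 100 is arbitrary):

      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              preference:
                matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64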

🙂

Kubernetes and GitOps with Flux CD V2.0

GitOps could be the next big thing in cloud automation, so I thought I’d give it a try on my in-house hybrid Kubernetes cluster. Flux CD was recommended to me, and there’s a good reference project initiated by my colleague: k8s-gitops.

However, in order to fully understand how Flux CD works, I chose to start from scratch. Following the official instructions, it didn’t take long to enable GitOps on my cluster. Here’s how I did it on my laptop running Ubuntu:

First, create a GitHub PAT (Personal Access Token) with full repository permissions. Details can be read here. Also make sure you can create a private repository on GitHub (everyone gets one for free). Export your GitHub username and PAT as environment variables as follows:

export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>

The latest Flux2 CLI can be downloaded here. You can also use the installation script from Flux, if you fully trust it:

curl -s https://toolkit.fluxcd.io/install.sh | sudo bash
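
Either way, verify the CLI is available afterwards:

flux --version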

From this step onward you will need access to a Kubernetes cluster, e.g. the kubectl cluster-info command works and returns cluster information. Check Flux2’s prerequisites with:

flux check --pre
► checking prerequisites
✔ kubectl 1.18.6 >=1.18.0
✔ Kubernetes 1.18.9 >=1.16.0
✔ prerequisites checks passed

Then the Flux2 command below can be executed to bootstrap a private GitHub repository called flux-gitops using your GitHub PAT. This repository becomes your cluster-as-code command center for GitOps practice, and the CRDs (Custom Resource Definitions) and controllers for Flux2 are installed into the current cluster:

flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=flux-gitops \
  --branch=main \
  --path=home-cluster \
  --personal

In the generated flux-gitops repository, the file structure looks like

flux-gitops
  - home-cluster
    - flux-system

Now you can simply add Helm charts or Kustomization templates to this repository and the changes will be applied to the cluster automatically. The following commands create a manifest for a simple namespace and then register it with Flux2. After the changes are pushed to GitHub, the Flux2 controllers will apply them and create the new namespace.

cd flux-gitops/home-cluster
mkdir my-test
cd my-test
kustomize create
kubectl create namespace my-test --dry-run=client -o yaml > ns.yaml
kustomize edit add resource ns.yaml
cd .. # in home-cluster
flux create kustomization my-test --source=flux-system --path=home-cluster/my-test --prune=true --validation=client --interval=2m --export > my-test.yaml
# check-in everything to test GitOps
git add my-test my-test.yaml
git commit -m "Added my-test"
git push
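
For reference, the exported my-test.yaml is just a Flux2 Kustomization custom resource. It should look roughly like the following (treat this as a sketch; field names can differ between Flux2 releases, so check what your flux CLI actually generated):

apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-test
  namespace: flux-system
spec:
  interval: 2m0s
  path: ./home-cluster/my-test
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  validation: client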

Then you can use a watch command to see how the new change gets applied:

watch flux get kustomizations
NAME                    READY   MESSAGE                                                         REVISION                                        SUSPENDED
flux-system             True    Applied revision: main/529288eed6105909a97f0d3539bc68e5e934418a main/529288eed6105909a97f0d3539bc68e5e934418a   False
my-test                 True    Applied revision: main/529288eed6105909a97f0d3539bc68e5e934418a main/529288eed6105909a97f0d3539bc68e5e934418a   False

That’s it, the Flux2 Hello-world. 🙂

Build Multi-arch Docker Images on Ubuntu Linux

Since I’ve made my Raspberry PI Kubernetes cluster hybrid, I now have good reasons to build multi-arch (multiple CPU architecture) Docker images, so I don’t have to care whether my pod is deployed to a Raspberry PI node or an X86 node.

I followed a lot of instructions from this guide and finally made it work on my Ubuntu Linux laptop. Here are the relevant steps for Ubuntu:

First, just in case, the package docker.io needs to be installed with

# installation
sudo apt install docker.io

# add the current user to the docker group, so you don't need sudo to run docker commands
# you may need to log out and log back in for this to take effect
sudo usermod -a -G docker $(whoami)

# verification
docker version
Client:
 Version:           19.03.8
...

I followed this tutorial to install buildx on Ubuntu. The exact commands are:

# instructions to build the buildx plugin for docker CLI
export DOCKER_BUILDKIT=1
docker build --platform=local -o . git://github.com/docker/buildx
mkdir -p ~/.docker/cli-plugins
mv buildx ~/.docker/cli-plugins/docker-buildx

# verification
docker buildx create --help

Usage:  docker buildx create [OPTIONS] [CONTEXT|ENDPOINT]
...
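
Before running a multi-platform build, buildx typically needs a dedicated builder instance (the default builder can't push multi-arch manifests) plus QEMU binfmt emulation so that ARM binaries can run during the build on an X86 host. A sketch of what that looks like on Ubuntu (the package names and the builder name "multiarch" are my assumptions, adjust to your setup):

# QEMU user-mode emulation, so foreign-architecture binaries can run during the build
sudo apt install qemu-user-static binfmt-support

# create and select a builder instance backed by the docker-container driver
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap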

The simplest possible “Hello world” Golang app will be used to build this multi-arch Docker image:

# main.go
package main

import (
  "fmt"
)

func main() {
  fmt.Println("Hello!")
}

The Dockerfile for this golang app looks like

ARG ARCH=
FROM ${ARCH}golang:1.13.1 AS builder

ENV CGO_ENABLED=0 GOOS=linux
WORKDIR /app
COPY . .
RUN go build -a -installsuffix cgo -o hello main.go

FROM scratch

WORKDIR /app
COPY --from=builder /app/hello .

ENTRYPOINT ["/app/hello"]
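
Before going multi-arch, the Dockerfile can be sanity-checked with a plain, native docker build (the hello:local tag is just a placeholder):

# native build and run, to iron out any Dockerfile errors first
docker build -t hello:local .
docker run --rm hello:local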

As shown above, this can be tested with the default docker build command first, just to iron out any errors before going into multi-arch. The buildx command will look like:

docker buildx build --push --platform linux/arm/v7,linux/arm64/v8,linux/amd64 -t <docker user>/<repo>:<tag> .

It might take a while to build all three images. After that, the same image tag should run on either AMD64 or ARM platforms.

# in AMD64 or ARM environment
docker run --rm <docker user>/<repo>:<tag>
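
To confirm the pushed tag really contains all three architectures, the manifest list can be inspected with buildx (same image placeholder as above):

docker buildx imagetools inspect <docker user>/<repo>:<tag>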

🙂

Hybrid Kubernetes Cluster (X86 + ARM)

My old ASUS 15″ laptop bought in 2014. It has a sub-woofer!

The one in the picture was my old laptop, then my daughter’s for a few years. Now that she’s got a nice new 2-in-1 ultrabook the school asked us parents to buy, this clunky one was gathering dust on a shelf. I tried to sell it but got no one’s attention, despite its i7 CPU and 16GB of memory.

So I was thinking: this has the same amount of memory as 4 x Raspberry PI 4s, but I probably won’t be able to sell it for enough to pay for the PIs. Why not just use it as a glorified Raspberry PI? I measured its power consumption and, to my surprise, this gen-4 i7 only asks for 10W when idle with the screen off, not bad at all. In comparison, 4 x PI 4s probably need 20W to stay up.

Let’s do it then!

I re-installed the OS with Ubuntu Server 20.04 LTS and prepared it for kubeadm with my ansible playbooks here. Since I’ve updated my playbook to handle both Raspbian on ARM and Ubuntu on X86_64, it was fairly easy to get the laptop (called knode3 from here on) ready.

I haven’t locked down versions in my playbook, so the installed docker and kubeadm were much newer than the ones in my existing Raspberry PI cluster, and there would be compatibility issues if I didn’t match them. I used the following commands to downgrade docker and kubeadm:

apt remove docker-ce --purge
apt install docker-ce=5:19.03.9~3-0~ubuntu-focal
apt install kubeadm=1.18.13-00
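
To keep apt from upgrading these pinned packages again later, they can be put on hold (standard apt behaviour, nothing Kubernetes-specific):

# add kubelet and kubectl too if they were also downgraded
apt-mark hold docker-ce kubeadm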

The kubeadm join command I ran earlier on the other nodes didn’t work anymore; it complained about the token. Of course the token had expired after a year or so. Here’s the command to issue a new token from the master node:

kubeadm token create
xxxxxx.xxx...

Grab the new token and replace the one in the join command

kubeadm join <master IP>:6443 --token <new token xxx> --discovery-token-ca-cert-hash sha256:<hash didn't change>
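
Alternatively, kubeadm can print a complete join command, with a fresh token and the CA cert hash included:

kubeadm token create --print-join-command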

For debugging purposes I ran journalctl -f in another terminal tab to watch the output. When the join command finished, I ran kubectl get nodes in my local terminal session to verify the result:

kubectl get node
NAME      STATUS   ROLES    AGE    VERSION
kmaster   Ready    master   89d    v1.18.8
knode1    Ready    <none>   89d    v1.18.8
knode2    Ready    <none>   89d    v1.18.8
knode3    Ready    <none>   3m     v1.20.1

The Kubernetes version on the new node is a bit newer, so I should probably upgrade the old nodes soon. Now I have a node with 16GB of memory 🙂

PS: to keep the laptop running when the lid is closed, I used this tweak.