Golang and Docker Multi-Stage Build MK2

In my previous post I used Docker’s multi-stage build technique to produce a container image that holds only the Go executable on a tiny Alpine Linux base image. This time I’ll go further and use the scratch base image, which contains basically nothing.

Here’s the Dockerfile I tested on my own project. I’ve also added comments to help explain the important lines:

FROM golang:1.13.1 AS builder

# ENVs to ensure golang will need no external libraries
ENV CGO_ENABLED=0 GOOS=linux
WORKDIR /app
COPY . .
# build switches to ensure golang will need no external libraries
RUN go build -a -installsuffix cgo -o myapp main.go && \
# create non-privileged user and group to run the app
  addgroup --system --gid 2000 golang && \
  adduser --system --gid 2000 --uid 2000 golang

FROM scratch
# some sample ENVs for the app
ENV API_KEY=xxx \
  API_EMAIL=xxx
WORKDIR /app
# copy the golang executable over
COPY --from=builder /app/myapp .
# scratch has no adduser command so just copy the files from builder
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
# use the CA cert from builder to enable HTTPS access
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
USER golang
# run the executable directly
ENTRYPOINT ["/app/myapp"]

The resulting image is only 7.7MB, just 1MB larger than the Go executable itself. 🙂
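
To reproduce the size check (the image tag myapp below is just an illustrative name):

$ docker build -t myapp .
$ docker images myapp
# the SIZE column should show roughly 7.7MB for this build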

Deploy WordPress to Kubernetes with Kustomize

I’ve just migrated this blog site itself into the kubernetes cluster I built with Raspberry Pi 4s, and this post is about the steps and approach I used to achieve this goal. Yes, what you have been reading is served by one of the Raspberry Pi boards.

First of all, a bit of an introduction to kustomize: it’s a bit late to the game, but better late than never. Since kubectl v1.14, kustomize has been merged in as a switch, e.g. kubectl apply -k <directory containing a kustomization.yaml> ...

I guess the reason behind something like kustomize is that when someone like me deploys apps into k8s clusters, it’s actually a lot of “YAML engineering”, i.e. writing YAML files for the Namespace, Deployment, Service, etc. It’s OK to do it for the first time, but very soon I felt I was repeating myself with all those metadata and annotation tags.

helm is the other tool to manage kubernetes schema files, and it started quite a bit earlier than kustomize. I never liked it though. A sample helm chart template looks like this. The main reason I don’t like it is that it brings placeholders like {{ .Values.stuff }} into the YAML, and they are everywhere, just like the good old asp/jsp templates; this means the template is no longer valid YAML. Also I’m not a fan of putting a lot of logic into configuration files.
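
To illustrate the point, a helm Deployment template is typically sprinkled with placeholders like the fragment below (a generic, hypothetical example, not taken from any particular chart):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

Until helm renders it with a values file, none of this is valid YAML, which is exactly what kustomize avoids.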

Here’s a very good blog on how to kustomize. With kustomize I can put values associated with certain conditions, e.g. git branches or ops environments, into separate YAML files without any effort to template the original schema, which enables me to check the base schema into a public repository without worrying about leaking a database password.
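
For example, an overlay patch only needs the fields that differ per environment, matched by kind and name, while the base schema stays generic. A hypothetical sketch (the resource name and host are illustrative):

# patch only the Ingress host; everything else comes from the base
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wordpress
spec:
  rules:
    - host: blog.example.com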

Here’s the github repository where I store the YAML files I used to deploy wordpress into my cluster, including the following implementations:

  • Namespace for each installation
  • typical WordPress on PHP7.2-FPM and nginx containers, running as a non-root user
  • K8s PersistentVolume on an NFS shared partition for files, e.g. photos, plug-ins, etc. (see the sketch after this list)
  • Redis cache for PHP session
  • Ingress routing for nginx-ingress-controller
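
For reference, a PersistentVolume backed by NFS looks roughly like the sketch below; the server address, path and size are placeholders, not the values used in the repository:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: wordpress-files
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-server-ip>    # placeholder NFS server address
    path: /export/wordpress    # placeholder exported path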

The wordpress-base directory has almost all of the schema, with some dummy values. The wordpress-site directory has the kustomize patch files which should hold your domain name, NFS server address for storage, etc.

To reuse my schema, you can simply duplicate the wordpress-site directory alongside the wordpress-base directory and put in real configuration as you see fit. Such as:

pik8s/
  + wordpress-base/
  + wordpress-site/
  + wordpress-mysite/
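
The overlay’s kustomization.yaml then points at the base and carries your patches. A rough sketch (the patch file names here are illustrative; check the repository for the real ones):

# pik8s/wordpress-mysite/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: wordpress-mysite
bases:
  - ../wordpress-base
patchesStrategicMerge:
  - ingress.yaml    # e.g. your real domain name
  - volumes.yaml    # e.g. your NFS server address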

Then, assuming you’ve already configured kubectl, the database and NFS, you can preview the wordpress deployment with:

# in pik8s/wordpress-mysite/
$ kubectl apply -k . --dry-run -o yaml |less

And do the real thing without the --dry-run switch.

But the secret referenced in deploy.yaml is obviously not checked in. You need to create it manually with:

# prepare files to be used in the secret
$ echo -n 'mydbpass' > dbpass
# do the similar for dbhost, dbname, dbuser
...
# then create the secret
$ kubectl create secret --namespace wordpress-mysite generic wordpress-secret --from-file=dbuser --from-file=dbhost --from-file=dbname --from-file=dbpass
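
For reference, the deployment typically consumes such a secret through environment variables, roughly like the fragment below (the env variable name is illustrative; check deploy.yaml for the real ones):

containers:
  - name: wordpress
    image: wordpress:php7.2-fpm
    env:
      - name: WORDPRESS_DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: wordpress-secret
            key: dbpass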

🙂

Kubernetes at Home on Raspberry Pi 4, Part 3

Continuing from part 2, this post is mostly about installing an ingress controller. In short, an ingress controller is like a single entry point for all ingress connections into the cluster.

The reason I chose Flannel over other CNIs is that it’s lightweight and not bloated with features; I would like to keep things on the Pi 4s simple before they are given any real work. For the same reason I’ll install nginx-ingress-controller to take control of ingress. MetalLB looks like a good fit for a Raspberry Pi cluster, but I’ll pass on it for now because this is more of a hobby project; if the load gets really high and redundancy becomes necessary, I’ll probably move to AWS or GCP, which have decent load balancers.

The official nginx-ingress-controller image at quay.io doesn’t seem to support the armhf/armv7 architecture, so I built one myself here. To deploy the official ingress controller schema but with my own container image, I chose kustomize for that little tweak. (Also, kustomize has been integrated into kubectl v1.14+.)

First I downloaded the official nginx-ingress-controller schema:

$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml

Then I used kustomize to replace the container image with my own:

$ cat <<EOF >kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - mandatory.yaml
  - service-nodeport.yaml 
images:
  - name: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
    newName: raynix/nginx-ingress-controller-arm
    newTag: 0.25.1
EOF
# then there should be 3 files in current directory
$ ls
kustomization.yaml  mandatory.yaml  service-nodeport.yaml
# install with kubectl
$ kubectl apply -k .

To see the node port for this ingress controller, do

$ kubectl get --namespace ingress-nginx svc
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.100.0.246   <none>        80:32283/TCP,443:30841/TCP   7d14h
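
Since it’s a NodePort service, the ingress controller is also reachable through any node’s LAN IP on those ports. A quick check, assuming one of the nodes is at 192.168.1.101 and using the HTTP node port from the output above:

$ curl -H "Host: nonexist.com" http://192.168.1.101:32283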

On the upstream nginx or HAProxy box, a static route can be set to route traffic to the ingress controller:

$ sudo ip route add 10.100.0.0/24 \
  nexthop via <node1 LAN IP> dev <LAN interface> weight 1 \
  nexthop via <node2 LAN IP> dev <LAN interface> weight 1
$ sudo ip route
...
10.100.0.0/24 
     nexthop via 192.168.1.101 dev enp0xxx weight 1 
     nexthop via 192.168.1.102 dev enp0xxx weight 1  
...

To make the above route permanent, add the following line to /etc/network/interfaces (this is for Ubuntu; other distros may differ):

iface enp0s1f1 inet static
...
up ip route add 10.100.0.0/24 nexthop via 192.168.1.81 dev enp0s1f1 weight 1 nexthop via 192.168.1.82 dev enp0s1f1 weight 1

To test whether the ingress controller is reachable from the upstream box, do:

$ curl -H "Host: nonexist.com" http://10.100.0.246
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>

Now the ingress controller works 🙂

Kubernetes at Home on Raspberry Pi 4, Part 2

Continuing from part 1.

It’s recommended to change the default password on every Pi and run ssh-copy-id against each of them to enable SSH public key login.
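
For example, assuming the master Pi is at 192.168.1.80 and still uses the default pi user (adjust the address and user to your own):

# change the password on the Pi with passwd first, then from your workstation:
$ ssh-copy-id pi@192.168.1.80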

There are lots of steps to prepare before kubeadm can be installed, so I made this ansible repository to simplify this repetitive process (a sample invocation follows the task list below). Please see here. The ansible role will do the following tasks:

  • set host name, update /etc/hosts file
  • enable network bridge
  • disable swap, kubeadm doesn’t like it!
  • set timezone. You may want to change it to yours
  • install docker community edition
  • install kubeadm
  • use iptables-legacy (Reference here)
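
A sample invocation would look roughly like this, assuming an inventory file listing the Pi hosts and a playbook at the top of the repository (both names here are illustrative; check the repository for the actual layout):

$ ansible-playbook -i hosts site.yml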

Just to emphasise: at this moment Raspbian ships iptables 1.8, a newer strain built on nftables (the new netfilter tables framework). The original iptables has been renamed to iptables-legacy. You can use my ansible role to switch to iptables-legacy, or do it with:

# update-alternatives --set iptables /usr/sbin/iptables-legacy 
# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 
# update-alternatives --set arptables /usr/sbin/arptables-legacy 
# update-alternatives --set ebtables /usr/sbin/ebtables-legacy

This is absolutely necessary because current CNI implementations only work with the legacy iptables.
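
To confirm which variant is active after switching:

$ sudo iptables --version
# should print something like: iptables v1.8.x (legacy)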

Once the ansible playbook finishes successfully, kubeadm is ready for some action to set up the kubernetes master node, a.k.a. the control plane.

# the following command is to be run in the master node
# I prefer to use flannel as the CNI (container network interface) because it's lightweight compared to others like weave.net, so the CIDR is to be set as follows
$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16

Then, as kubeadm finishes, it will print some instructions to continue. The first thing is to copy admin.conf so that the kubectl command can authenticate with the control plane. Also save the kubeadm join 192.168.1.80:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx instruction, as it will be needed later.

$ mkdir -p ~/.kube
$ sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config
$ kubectl get nodes
...
$ kubectl get pods --all-namespaces
...

The coredns pods will be in the Pending state; this is expected and will be fixed automatically once the CNI is installed. The next step is to install a CNI, in my case flannel.

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
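
The rollout can be watched from the master node:

$ kubectl get pods --namespace kube-system -w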

In a few minutes the flannel and coredns pods should be up and running. Then run the join command saved earlier on the other Pi nodes:

$ sudo kubeadm join 192.168.1.80:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

And back on the master node, you should be able to see the new worker node in the output:

$ kubectl get nodes

TBC