Deploy WordPress to Kubernetes with Kustomize

I’ve just migrated this blog site itself into the Kubernetes cluster I built with Raspberry Pi 4s, and this post is about the steps and approach I used to achieve this goal. Yes, what you have been reading is served by one of the Raspberry Pi boards.

First of all, a bit of an introduction to kustomize: it arrived a bit late to the game, but better late than never. Since kubectl v1.14, kustomize has been merged in as a switch, e.g. kubectl apply -k <directory containing a kustomization.yaml> ...

I guess the reason behind something like kustomize is that when someone like me deploys apps into k8s clusters, it’s actually a lot of “YAML engineering”, i.e. writing YAML files for the Namespace, Deployment, Service, etc. It’s OK to do it the first time, but very soon I felt I was repeating myself with all those metadata and annotation tags.

helm is the other tool to manage Kubernetes schema files, and it started quite a bit earlier than kustomize. I never liked it though. A sample helm chart template looks like this. The main reason I don’t like it is that it brings placeholders like {{ .Values.stuff }} into the YAML and they are everywhere, just like good old asp/jsp templates; this makes the template no longer valid YAML at all. Also, I’m not a fan of putting a lot of logic into configuration files.
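To illustrate the point, here is a made-up fragment in the helm template style (the value names are hypothetical, not from any real chart):

apiVersion: apps/v1
kind: Deployment
metadata:
  # the release name is injected by helm at render time
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"

Until helm renders it, none of this parses as YAML, which is exactly what bothers me.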

Here’s a very good blog on how to kustomize. With kustomize I can put values associated with some conditions, e.g. git branch or ops environment, into YAML files without any effort to template the original schema, which enables me to check the base schema into a public repository without worrying whether I’ve put any database password in there.

Here’s the github repository where I store the YAML files I used to deploy WordPress into my cluster, including the following implementations:

  • Namespace for each installation
  • typical WordPress on PHP7.2-FPM and nginx containers, running as a non-root user
  • K8s PersistentVolume on an NFS shared partition for files, e.g. photos, plug-ins, etc.
  • Redis cache for PHP session
  • Ingress routing for nginx-ingress-controller

The wordpress-base directory has almost all the schema, with some dummy values. And the wordpress-site directory has kustomize patch files which should hold your domain name, NFS server address for storage, etc.

To reuse my schema, you can simply duplicate the wordpress-site directory alongside the wordpress-base directory and put in real configuration as you see fit. Such as:

pik8s/
  + wordpress-base/
  + wordpress-site/
  + wordpress-mysite/
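Based on this layout, a minimal kustomization.yaml inside wordpress-mysite might look like the following sketch (the patch file names are my assumptions, not necessarily what’s in the repository):

# pik8s/wordpress-mysite/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# every resource gets namespaced per installation
namespace: wordpress-mysite
bases:
  - ../wordpress-base
# patches overriding the dummy values in the base schema
patchesStrategicMerge:
  - ingress.yaml   # hypothetical patch carrying the real domain name
  - volume.yaml    # hypothetical patch carrying the real NFS server address

The point is that only the overrides live here; the base stays untouched and safe to publish.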

Then, assuming you’ve already configured kubectl, the database and NFS, you can preview the WordPress deployment with:

# in pik8s/wordpress-mysite/
$ kubectl apply -k . --dry-run -o yaml |less

And do the real thing without the --dry-run switch.

But the secret referenced in deploy.yaml is obviously not checked in. You need to create it manually with:

# prepare files to be used in the secret
$ echo -n 'mydbpass' > dbpass
# do the similar for dbhost, dbname, dbuser
...
# then create the secret
$ kubectl create secret --namespace wordpress-mysite generic wordpress-secret \
    --from-file=dbuser --from-file=dbhost --from-file=dbname --from-file=dbpass
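To double-check that all four keys made it in (the values stay hidden), kubectl describe works:

# lists the keys and their sizes without printing the values
$ kubectl describe secret wordpress-secret --namespace wordpress-mysite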

🙂

Kubernetes at Home on Raspberry Pi 4, Part 3

Continuing from part 2, this post is mostly about installing the ingress controller. In short, an ingress controller is like a single entry point for all ingress connections into the cluster.

The reason I chose Flannel over other CNIs is that it’s lightweight and not bloated with features; I would like to keep the load on the Pi 4s light before they are tasked with anything. For the same reason I’ll install nginx-ingress-controller to take control of ingress. MetalLB looks like a good fit for a Raspberry Pi cluster, but I’ll pass for the moment because this is more of a hobby project; if the load ever gets really high and redundancy becomes necessary, I’ll probably use AWS or GCP, which have decent load balancers.

The official nginx-ingress-controller image at quay.io doesn’t seem to support the armhf/armv7 architecture, so I built one myself here. To deploy the ingress controller schema but with my own container image, I chose kustomize for this little tweak. (Also, kustomize has been integrated into kubectl v1.14+.)

First I downloaded the official nginx-ingress-controller schema:

$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml

Then I used kustomize to replace the container image with my own:

$ cat <<EOF >kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - mandatory.yaml
  - service-nodeport.yaml 
images:
  - name: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
    newName: raynix/nginx-ingress-controller-arm
    newTag: 0.25.1
EOF
# then there should be 3 files in current directory
$ ls
kustomization.yaml  mandatory.yaml  service-nodeport.yaml
# install with kubectl
$ kubectl apply -k .
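To check that the controller pod came up with the replacement image:

# the pod should reach Running state on one of the arm nodes
$ kubectl get pods --namespace ingress-nginx -o wide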

To see the node port for this ingress controller, do

$ kubectl get --namespace ingress-nginx svc
 NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
 ingress-nginx   NodePort   10.100.0.246   <none>        80:32283/TCP,443:30841/TCP   7d14h
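If you only need the HTTP node port, jsonpath can pull it out directly (assuming the http port is listed first, as above):

# prints the node port number, e.g. 32283
$ kubectl get svc ingress-nginx --namespace ingress-nginx \
    -o jsonpath='{.spec.ports[0].nodePort}'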

On the upstream nginx or HAProxy box, a static route can be set to send traffic to the ingress controller:

$ sudo ip route add 10.100.0.0/24 \
  nexthop via <node1 LAN IP> dev <LAN interface> weight 1 \
  nexthop via <node2 LAN IP> dev <LAN interface> weight 1
$ sudo ip route
...
10.100.0.0/24 
     nexthop via 192.168.1.101 dev enp0xxx weight 1 
     nexthop via 192.168.1.102 dev enp0xxx weight 1  
...

To make the above route permanent, add the following to /etc/network/interfaces (this is for Ubuntu; other distros may differ):

iface enp0s1f1 inet static
...
up ip route add 10.100.0.0/24 nexthop via 192.168.1.81 dev enp0s31f6 weight 1 nexthop via 192.168.1.82 dev enp0s1f1 weight 1

To test if the ingress controller is visible from the upstream box, do

$ curl -H "Host: nonexist.com" http://10.100.0.246
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>

Now the ingress controller works 🙂

Kubernetes at Home on Raspberry Pi 4, Part 2

Continuing from part 1.

It’s recommended to change every Pi’s password, and also to run ssh-copy-id [email protected] to enable SSH public key login.

There are lots of steps to prepare before kubeadm is installed, so I made this ansible repository to simplify the repetitive process. Please see here; a sample invocation is sketched after the list. The ansible role will do the following tasks:

  • set host name, update /etc/hosts file
  • enable network bridge
  • disable swap, kubeadm doesn’t like it!
  • set timezone. You may want to change it to yours
  • install docker community edition
  • install kubeadm
  • use iptables-legacy (Reference here)
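For reference, a typical run might look like the sketch below; the inventory and playbook file names here are assumptions, so check the repository for the real ones:

# run the preparation role against all Pi nodes listed in the inventory
$ ansible-playbook -i inventory.ini site.yaml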

Just to emphasise: at this moment Raspbian has iptables 1.8, a newer variant backed by what used to be called netfilter tables, or nftables. The original iptables is renamed iptables-legacy. You can use my ansible role to switch to iptables-legacy, or do it manually with:

# update-alternatives --set iptables /usr/sbin/iptables-legacy 
# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 
# update-alternatives --set arptables /usr/sbin/arptables-legacy 
# update-alternatives --set ebtables /usr/sbin/ebtables-legacy

This is absolutely necessary because, at the time of writing, the current CNI implementations only work with the legacy iptables.
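You can verify which variant is active; the legacy build reports itself in the version string:

# should print something like: iptables v1.8.x (legacy)
$ sudo iptables --version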

Once the ansible playbook finishes successfully, kubeadm is ready for some action to set up the Kubernetes master node, aka the control plane.

# the following command is to be run on the master node
# I prefer flannel as the CNI (container network interface) because it's lightweight compared to others like weave.net, so the pod network CIDR is set as follows
$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16

Then as kubeadm finishes, it will print some instructions to continue. The first thing is to copy admin.conf so the kubectl command can authenticate with the control plane. Also save the kubeadm join 192.168.1.80:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx instruction, as it will be needed later.

$ mkdir -p ~/.kube
$ sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config
$ kubectl get node
...
$ kubectl get pods
...

The coredns pods will be stuck in Pending state; this is expected, and it will be fixed automatically once a CNI is installed. The next step is to install a CNI, in my case flannel:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

In a few minutes the flannel and coredns pods should be in Running state. Then run the join command saved earlier on the other Pi nodes:

$ sudo kubeadm join 192.168.1.80:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

And back on the master node, you should be able to see the new worker node in the output:

$ kubectl get nodes
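With all the workers joined, the output should look something like this (the names, ages and versions here are illustrative):

NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   1h    v1.15.3
node1     Ready    <none>   10m   v1.15.3
node2     Ready    <none>   5m    v1.15.3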

TBC

Kubernetes at Home on Raspberry Pi 4, Part 1

3 x Raspberry Pi 4

I mostly followed/was inspired by this tutorial, but with some tweaks/fixes for recent (Sep 2019) software versions. Also, this is a pure Linux walk-through, as I don’t use a Mac.

For a long time I had planned to build a home Kubernetes (k8s) cluster and migrate my home servers, including the one this blog is running on, to it. But the Raspberry Pi 2 has only 1GB of memory, which is not quite appealing for any practical purpose. (I know, I know, we used to run computers with megabytes of memory…) When the Raspberry Pi 4 with 4GB of memory became available, I believed I needed to wait no more.

The 3 Pi 4s I got are from eBay; surprisingly, this time the offer on eBay was better than Amazon’s! I didn’t think I needed cases for the Pis, because I’d heard the Pi 4 is more powerful and can get hotter than previous models.

I chose Raspbian for now, as it supports all the devices in the Pi 4. Ubuntu Server could be a better choice, but it only supports up to the Pi 3. And as a command line veteran, I use this one-liner to flash the MicroSD cards:

# if you copy and paste you may need to verify the file name and the card reader device in your computer, ie. I'm not responsible for anything :)
$ unzip -p 2019-07-10-raspbian-buster-lite.zip |sudo dd bs=4M of=/dev/mmcblk0
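If you’re not sure which device the card reader is, running lsblk before and after inserting the card makes it obvious:

# the device that appears after inserting the card (e.g. mmcblk0) is the one to flash
$ lsblk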

To enable SSH access at first boot, create an empty file called ssh in the /boot partition:

# once again, this path could be different on your system.
$ sudo touch /run/media/raynix/boot/ssh

After this, use the sync command to make sure everything has been written to the card. Then you can pull the MicroSD card out of your card reader slot and put it into the Pi 4.

A few things are required before powering up the Pi 4:

  • Pi 4 connected to the router/switch with Ethernet cable
  • 5V power supply with USB-C connector
  • DHCP enabled in LAN

After the Pi 4 is powered up, the green LED should flash a bit before you can see raspberrypi.localdomain online (the localdomain part is usually the default for some routers, but can be something else depending on your router setup). Then you should be able to:

# default user is pi, and password is raspberry
$ ssh [email protected]
$ cat <<EOF |sudo tee -a /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.1.200/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF

This will set the Pi 4 to a static IP address after reboot. Repeat this step for each Pi 4, but obviously they should have different IPs, e.g. the master has 192.168.1.200 and node1 has 192.168.1.201, etc.
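To apply the static address and confirm it sticks, reboot and reconnect on the new IP:

$ sudo reboot
# wait for the Pi to come back online, then
$ ssh [email protected]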

TBC.