Kubernetes at Home on Raspberry Pi 4, Part 3

Continuing from part 2, this post is mostly about installing an ingress controller. In short, an ingress controller is a single entry point for all ingress connections into the cluster.

The reason I chose Flannel over other CNIs is that it’s lightweight and not bloated with features; I’d like to keep the load on the Pi 4s light before they are tasked with anything. For the same reason I’ll install nginx-ingress-controller to handle ingress. MetalLB looks like a good fit for a Raspberry Pi cluster, but I’ll pass on it for now since this is more of a hobby project; if the load ever gets really high and redundancy becomes necessary, I’ll probably move to AWS or GCP, which have decent load balancers.

The official nginx-ingress-controller image at quay.io doesn’t seem to support the armhf/armv7 architecture, so I built one myself here. To deploy the official ingress controller manifests but with my own container image, I chose kustomize for the little tweak. (Kustomize has also been integrated into kubectl since v1.14.)

First I downloaded the official nginx-ingress-controller manifests:

$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml

Then I used kustomize to replace the container image with my own:

$ cat <<EOF >kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - mandatory.yaml
  - service-nodeport.yaml 
images:
  - name: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
    newName: raynix/nginx-ingress-controller-arm
    newTag: 0.25.1
EOF
# then there should be 3 files in current directory
$ ls
kustomization.yaml  mandatory.yaml  service-nodeport.yaml
# install with kubectl
$ kubectl apply -k .
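
To confirm the controller actually came up on the ARM image, check the pods in the ingress-nginx namespace (the namespace created by mandatory.yaml):

# expect one nginx-ingress-controller-xxx pod to reach Running state
$ kubectl get pods --namespace ingress-nginx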

To see the node port for this ingress controller, do

$ kubectl get --namespace ingress-nginx svc
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.100.0.246   <none>        80:32283/TCP,443:30841/TCP   7d14h
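
Because the service type is NodePort, the controller should also answer on the HTTP node port (32283 above, yours will differ) at any node’s LAN IP, which is a quick way to check from another machine:

# expect the controller's default 404 backend to respond
$ curl -H "Host: nonexist.com" http://<node1 LAN IP>:32283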

On the upstream nginx or HAProxy box, a static route can be set to route traffic to the ingress controller:

$ sudo ip route add 10.100.0.0/24 \
  nexthop via <node1 LAN IP> dev <LAN interface> weight 1 \
  nexthop via <node2 LAN IP> dev <LAN interface> weight 1
$ sudo ip route
...
10.100.0.0/24 
     nexthop via 192.168.1.101 dev enp0xxx weight 1 
     nexthop via 192.168.1.102 dev enp0xxx weight 1  
...
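
With the route in place, the upstream nginx box can simply proxy to the ingress controller’s cluster IP. A minimal sketch of such a server block, assuming the cluster IP 10.100.0.246 from above and a hypothetical /etc/nginx/conf.d/k8s.conf path:

$ cat <<'EOF' | sudo tee /etc/nginx/conf.d/k8s.conf
server {
    listen 80;
    location / {
        # keep the Host header so Ingress rules inside the cluster can match
        proxy_set_header Host $host;
        proxy_pass http://10.100.0.246;
    }
}
EOF
$ sudo nginx -s reload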

To test if the ingress controller is visible from the upstream box, do

$ curl -H "Host: nonexist.com" http://10.100.0.246
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>

Now the ingress controller works 🙂
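
With the controller working, routing an actual service through it only needs an Ingress resource. A minimal sketch, assuming a hypothetical Service named blog on port 80 in the default namespace (the networking.k8s.io/v1beta1 API matches the kubectl and controller versions used here):

$ cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: blog
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: blog.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: blog
              servicePort: 80
EOF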

Kubernetes at Home on Raspberry Pi 4, Part 2

Continuing from part 1.

It’s recommended to change each Pi’s password and also run ssh-copy-id pi@<each Pi's address> to enable SSH public-key login.

There are lots of steps to prepare before kubeadm is installed, so I made an ansible repository to simplify this repetitive process. Please see here. The ansible role will do the following tasks (a sketch of a typical run follows the list):

  • set host name, update /etc/hosts file
  • enable network bridge
  • disable swap, kubeadm doesn’t like it!
  • set timezone. You may want to change it to yours
  • install docker community edition
  • install kubeadm
  • use iptables-legacy (Reference here)
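
A sketch of a typical run against all the Pis, assuming a hypothetical hosts.ini inventory and a site.yml playbook (check the repository for the actual file names):

# hosts.ini and site.yml are placeholders, not necessarily the names in my repo
$ ansible-playbook -i hosts.ini site.yml --become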

Just to emphasise: at the moment Raspbian ships iptables 1.8, which defaults to the new backend known as netfilter tables, or nftables. The original iptables has been renamed iptables-legacy. You can use my ansible role to switch to iptables-legacy, or do it manually with:

# update-alternatives --set iptables /usr/sbin/iptables-legacy 
# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy 
# update-alternatives --set arptables /usr/sbin/arptables-legacy 
# update-alternatives --set ebtables /usr/sbin/ebtables-legacy

This is absolutely necessary because current CNI implementations only work with the legacy iptables.
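
A quick way to verify which backend is active after the switch (the version number will vary):

$ sudo iptables -V
iptables v1.8.2 (legacy)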

Once the ansible playbook finishes successfully, kubeadm is ready for some action to set up the Kubernetes master node, a.k.a. the control plane.

# the following command is to be run on the master node
# I prefer flannel as the CNI (container network interface) because it's lightweight compared to others like weave.net, so the pod network CIDR is set as follows
$ sudo kubeadm init --pod-network-cidr 10.244.0.0/16

Then, as kubeadm finishes, it will print some instructions to continue. The first thing is to copy admin.conf so the kubectl command can authenticate with the control plane. Also save the kubeadm join 192.168.1.80:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx instruction, as it will be needed later.

$ mkdir -p ~/.kube
$ sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config
$ kubectl get node
...
$ kubectl get pods
...

The coredns pods will be in Pending state; this is expected and will be fixed automatically once the CNI is installed. The next step is to install a CNI, in my case flannel.

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
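
To watch the pods come up (in that manifest the flannel DaemonSet pods land in kube-system, alongside coredns):

# ctrl-c to stop watching once everything is Running
$ kubectl get pods --namespace kube-system -w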

In a few minutes the flannel and coredns pods should be in Running state. Then run the join command saved earlier on the other Pi nodes:

$ sudo kubeadm join 192.168.1.80:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx

And back on the master node, you should be able to see the new worker node in the output:

$ kubectl get nodes

TBC

Kubernetes at Home on Raspberry Pi 4, Part 1

3 x Raspberry Pi 4

I mostly followed / was inspired by this tutorial, but with some tweaks and fixes for recent (Sep 2019) software versions. Also, this is a pure Linux walk-through, as I don’t use a Mac.

I had planned for a long time to build a home Kubernetes (k8s) cluster and migrate my home servers, including the one this blog runs on, to it. But the Raspberry Pi 2 has only 1GB of memory, which isn’t quite appealing for any practical purpose. (I know, I know, we used to run computers with megabytes of memory…) When the Raspberry Pi 4 with 4GB of memory became available, I believed I needed to wait no more.

The 3 Pi 4s I got are from eBay; surprisingly, this time the offer on eBay was better than Amazon! I didn’t think I needed cases for the Pis, because I heard the Pi 4 is more powerful and can get hotter than previous models.

I chose Raspbian for now, as it supports all the devices in the Pi 4. Ubuntu Server could be a better choice, but it only supports up to the Pi 3. And as a command line veteran I use this one-liner to flash the MicroSD cards:

# if you copy and paste you may need to verify the file name and the card reader device in your computer, ie. I'm not responsible for anything :)
$ unzip -p 2019-07-10-raspbian-buster-lite.zip |sudo dd bs=4M of=/dev/mmcblk0

To enable SSH access at first boot, create an empty file called ssh in the /boot partition:

# once again, this path could be different on your system.
$ sudo touch /run/media/raynix/boot/ssh

After this, use the sync command, as shown below, to make sure everything has been written to the card. Then you can pull the MicroSD card out of your card reader and put it into the Pi 4.
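
That is simply:

# flush buffered writes to the card before pulling it out
$ sync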

Some things required before powering up the Pi 4:

  • Pi 4 connected to the router/switch with Ethernet cable
  • 5V power supply with USB-C connector
  • DHCP enabled in LAN

After the Pi 4 is powered up, the green LED should flash a bit before you can see raspberrypi.localdomain come online (the localdomain part is usually the default on some routers, but can be something else depending on your router setup). Then you should be able to:

# default user is pi, and password is raspberry
$ ssh pi@raspberrypi.localdomain
$ cat <<EOF |sudo tee -a /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.1.200/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
EOF

This will give the Pi 4 a static IP address after a reboot. Repeat this step for each Pi 4, but obviously they should have different IPs, e.g. master has 192.168.1.200 and node1 has 192.168.1.201, etc.

TBC.

Installing GroundWork on Debian 6

GroundWork is a nice front-end for Nagios and adds many usability features (configuring plain Nagios is a headache, right?). Under GW’s current sales model, it’s free to use for managing up to 50 devices; just provide an email address:

http://www.gwos.com/downloads/core/

There are still a few small issues when installing GW on Debian 6: PostgreSQL complains that SHMMAX (the maximum shared memory segment size) is not large enough. Adjust it as follows and the installation can proceed:

sysctl -w kernel.shmmax=2147483648
sysctl -w kernel.shmall=524288
sysctl -p
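
Note that sysctl -w only changes the running kernel; to keep the values across reboots they can also be appended to /etc/sysctl.conf, for example:

# persist the same values so they survive a reboot
cat <<EOF >> /etc/sysctl.conf
kernel.shmmax=2147483648
kernel.shmall=524288
EOF
sysctl -p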

On first run it will ask for a License; if you submitted an email address earlier you should already have received it, and you can just copy it in.

Also, the check_icmp command sometimes fails with a "setuid or root" kind of error, causing false alerts. It can be fixed as follows.

chown root:nagios check_icmp
chmod 4750 check_icmp

No other problems found so far. 😀