Continuing from part 2, this post is mostly about installing an ingress controller. In short, an ingress controller is a single entry point for all ingress connections into the cluster.
The reason I chose Flannel over other CNIs is that it's lightweight and not bloated with features; I'd like to keep the Pi 4s lean before they are tasked with anything. For the same reason I'll install nginx-ingress-controller to handle ingress. MetalLB looks like a good fit for a Raspberry Pi cluster, but I'll pass on it for now because this is more of a hobby project. If the load ever gets really high and redundancy becomes necessary, I'll probably move to AWS or GCP, which have decent load balancers.
The official nginx-ingress-controller image at quay.io doesn't seem to support the armhf/armv7 architecture, so I built one myself here. To deploy the official ingress controller manifests but with my own container image, I chose kustomize for that little tweak. (Also, kustomize has been integrated into kubectl since v1.14.)
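For reference, this is roughly how an arm image could be cross-built with docker buildx on an x86 machine. This is just a sketch, not how the official image is produced: it assumes a Dockerfile prepared for armv7 in the current directory, QEMU binfmt support on the host, and a Docker Hub login; the tag matches the one used in the kustomization below.
# create and use a builder that can target other architectures
$ docker buildx create --use
# cross-build for 32-bit arm and push the result to Docker Hub
$ docker buildx build --platform linux/arm/v7 \
    -t raynix/nginx-ingress-controller-arm:0.25.1 --push .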
First I downloaded the official nginx-ingress-controller manifests:
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
Then I used kustomize to replace the container image with my own:
$ cat <<EOF >kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- mandatory.yaml
- service-nodeport.yaml
images:
- name: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
  newName: raynix/nginx-ingress-controller-arm
  newTag: 0.25.1
EOF
# then there should be 3 files in current directory
$ ls
kustomization.yaml  mandatory.yaml  service-nodeport.yaml
# install with kubectl
$ kubectl apply -k .
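Before applying, the rendered manifests can be previewed to confirm the image has actually been swapped (kubectl kustomize is built in since v1.14). The grep output below is just what I'd expect to see:
# render the kustomized manifests without applying them
$ kubectl kustomize . | grep image:
        image: raynix/nginx-ingress-controller-arm:0.25.1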
To see the node port for this ingress controller, do
$ k get --namespace ingress-nginx svc
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.100.0.246   <none>        80:32283/TCP,443:30841/TCP   7d14h
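Since this is a NodePort service, ports 32283 (HTTP) and 30841 (HTTPS) are open on every node, so the controller can already be reached from the LAN via any node's IP without extra routing. For example (assuming 192.168.1.101 is one of the nodes, as in the route example below), a 404 from the default backend means the controller is up:
# hit the ingress controller through the HTTP node port on node1
$ curl -H "Host: nonexist.com" http://192.168.1.101:32283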
On the upstream nginx or HAProxy box, a static route can be set up to route traffic to the ingress controller:
$ sudo ip route add 10.100.0.0/24 \
    nexthop via <node1 LAN IP> dev <LAN interface> weight 1 \
    nexthop via <node2 LAN IP> dev <LAN interface> weight 1
$ sudo ip route
...
10.100.0.0/24
        nexthop via 192.168.1.101 dev enp0xxx weight 1
        nexthop via 192.168.1.102 dev enp0xxx weight 1
...
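A quick way to check which of the two nexthops the kernel would pick for the service IP (using the ingress-nginx ClusterIP from above) is ip route get; it should print one of the node IPs as the gateway:
# ask the kernel which path it would use to reach the ClusterIP
$ ip route get 10.100.0.246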
To make the above route permanent, add the following to /etc/network/interfaces (this is for Ubuntu; other distros may differ):
iface enp0s1f1 inet static
  ...
  up ip route add 10.100.0.0/24 nexthop via 192.168.1.81 dev enp0s31f6 weight 1 nexthop via 192.168.1.82 dev enp0s1f1 weight 1
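A quick check that ifupdown re-adds the route (assuming this interface is managed by ifupdown, and being careful if the box is accessed over SSH through it):
# cycle the interface, then confirm the multipath route is back
$ sudo ifdown enp0s1f1 && sudo ifup enp0s1f1
$ ip route show 10.100.0.0/24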
To test if the ingress controller is visible from the upstream box, do
$ curl -H "Host: nonexist.com" http://10.100.0.246 <html> <head><title>404 Not Found</title></head> <body> <center><h1>404 Not Found</h1></center> <hr><center>openresty/1.15.8.1</center> </body> </html>
Now the ingress controller works 🙂
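From here, Ingress resources can be created to route traffic to actual services behind the controller. A minimal sketch (the service name my-app and host my-app.example.com are hypothetical; the annotation just pins the rule to this nginx controller):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
With something like that applied, the upstream nginx or HAProxy box only needs to proxy my-app.example.com to 10.100.0.246, and the ingress controller takes care of routing to the right service inside the cluster.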