Use Variables in Kustomize

Variables in Kustomize are handy helpers from time to time. With these variables I can link settings together that should always share the same value. Without variables I would probably need a template engine like Jinja2 to do the same trick.

Here's an example.

In my case, there's a bug in kustomize as of now (3.6.1) where configMap object names don't get properly suffixed in a patch file (the issue is tracked on the kustomize GitHub repository). I can, however, use a variable to work around this bug. Imagine a scenario where I have a configMap in a base template and it is referenced in a patch file:

# common/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
  - name: common
    literals:
      - TEST=YES

# test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test
bases:
  - ../base
  - ../common
nameSuffix: -raynix
patchesStrategicMerge:
  - patch.yaml

# test/patch.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  template:
    spec:
      volumes:
        - name: common
          configMap:
            name: common
            # this should be linked to the configMap in common/kustomization.yaml but it won't be updated with a hash and suffix.
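
Building this overlay shows the symptom: the generated ConfigMap gets the suffix and a content hash, but the reference inside the patched Deployment keeps the literal name (the hash below is illustrative):

kustomize build test
# renders a ConfigMap named something like common-raynix-7g2hf5tm99,
# while the Deployment volume still references the plain name:
#   configMap:
#     name: common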

Using a variable can get around this bug. Please see the following example:

# common/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configurations:
  - configuration.yaml
configMapGenerator:
  - name: common
    literals:
      - TEST=YES
vars:
  - name: COMMON
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: common
    fieldref:
      # this can be omitted as metadata.name is the default fieldPath 
      fieldPath: metadata.name
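
The configuration.yaml listed under configurations: tells kustomize where $(COMMON) may be expanded; by default, vars are only substituted into a fixed set of well-known fields, and a volume's configMap name isn't one of them. A minimal sketch of what it could contain (the path must match where the variable is used):

# common/configuration.yaml
varReference:
  - kind: Deployment
    path: spec/template/spec/volumes/configMap/name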

# test/kustomization.yaml unchanged

# test/patch.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  template:
    spec:
      volumes:
        - name: common
          configMap:
            name: $(COMMON)
            # now $(COMMON) will be updated with whatever the real configmap name is
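
Now a build renders the final generated name, hash and all, in both the ConfigMap and the volume reference (hash again illustrative):

kustomize build test | grep -A 2 'configMap:'
#   configMap:
#     name: common-raynix-7g2hf5tm99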

Problem solved 🙂

5G + Public IP with OpenVPN

I've done a proof of concept with SSH tunneling to add a public IP to my 5G home broadband connection. It works for my garage-hosted blogs, but it's not a complete solution. Since I still have free credit in my personal Google Cloud account, I decided to make an improvement with OpenVPN. The diagram looks like:

        [CloudFlare] 
             |
            HTTP
             |
     [VM:35.197.x.x:80]
             |
       [iptables DNAT]
             |
      [OpenVPN tunnel]
             |
[local server tun0 interface: 10.8.0.51:80]

Following an outstanding tutorial on DigitalOcean, I set up an OpenVPN server on Debian 10 running in a Google Cloud Compute instance. There were a few more things to do for my case.

First I needed to add port forwarding from the public interface of the OpenVPN server to the home server's tunnel interface. Here's my ufw configuration file:

# this is /etc/ufw/before.rules
# START OPENVPN RULES
# NAT table rules
*nat
:PREROUTING ACCEPT [0:0]
# port forwarding to home server
-A PREROUTING -i eth0 -p tcp -d <public ip> --dport 80 -j DNAT --to 10.8.0.51:80

:POSTROUTING ACCEPT [0:0] 
# Allow traffic from OpenVPN client to eth0, ie. internet access
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

Make sure to restart ufw after this.
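
One way is to disable and re-enable it, which re-reads the NAT rules in before.rules:

sudo ufw disable && sudo ufw enable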

Then on my home server, the OpenVPN client can be configured to run as a service:

# this is /etc/systemd/system/vpnclient.service
[Unit]
Description=Setup an openvpn tunnel to kite server
After=network.target

[Service]
ExecStart=/usr/sbin/openvpn --config /etc/openvpn/client1.conf
RestartSec=5
Restart=always

[Install]
WantedBy=multi-user.target

To enable it and start immediately:

sudo systemctl daemon-reload
sudo systemctl enable vpnclient
sudo systemctl start vpnclient
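
To confirm the tunnel came up:

sudo systemctl status vpnclient
# look for 'Initialization Sequence Completed' in the log output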

Also, I need my home server to have a fixed IP for its tun0 network interface, so the nginx server can proxy traffic to this IP reliably. I followed this guide, except that it suggested setting up client-config-dir on both the server and client sides; I only did it on the server side and it worked for me:

# this is /etc/openvpn/server.conf
# uncomment the following line
client-config-dir ccd

# this is /etc/openvpn/ccd/client1
ifconfig-push 10.8.0.51 255.255.255.255

After this the OpenVPN server on the VM needs to be restarted:

sudo systemctl restart openvpn@server.service
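
Once the client reconnects, the fixed address can be verified on the home server:

ip addr show tun0
# the inet address should now be 10.8.0.51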

Reload the nginx server and it should be working. I tested it with curl -H "Host: raynix.info" 35.197.x.x and the request hit my home server.

🙂

Using Sealed Secrets in a Raspberry Pi Kubernetes Cluster

Sealed Secrets is a Kubernetes operator from Bitnami that one-way encrypts secrets into sealed secrets, so that they can be safely checked into GitHub or any other VCS. It's rather easy to install and use Sealed Secrets in a Kubernetes cluster on the AMD64 architecture, but not so on my Raspberry Pi cluster.

First, the container image for the sealed-secrets-controller wasn't built for the ARM architecture. I managed to build it on my Raspberry Pi 2 with the following commands:

git clone https://github.com/bitnami-labs/sealed-secrets.git
cd sealed-secrets
# golang build tools are needed here
make controller.image
# you can tag it to your docker registry instead of mine
docker tag quay.io/bitnami/sealed-secrets-controller:latest raynix/sealed-secrets-controller-arm:latest
docker push raynix/sealed-secrets-controller-arm

The next step is to use kustomize to override the default sealed-secrets deployment so it uses my newly built container image that runs on ARM:

# kustomization.yaml
# controller.yaml is from https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.7/controller.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: sealed-secrets
images:
  - name: quay.io/bitnami/sealed-secrets-controller
    newName: raynix/sealed-secrets-controller-arm
    newTag: latest
patchesStrategicMerge:
  - patch.yaml

resources:
  - controller.yaml
  - ns.yaml

# ns.yaml
# I'd like to install the controller into its own namespace
apiVersion: v1
kind: Namespace
metadata:
  name: sealed-secrets

# patch.yaml
# apparently the controller running on Raspberry Pi 4 needs more time to initialize
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sealed-secrets-controller
spec:
  template:
    spec:
      containers:
        - name: sealed-secrets-controller
          readinessProbe:
            initialDelaySeconds: 100

Then the controller can be deployed with the command kubectl apply -k .
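
It may take a while on a Raspberry Pi, but the pod should eventually report ready (names will differ):

kubectl -n sealed-secrets get pods
# sealed-secrets-controller-xxxxxxxxxx-xxxxx   1/1   Running   0   2m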

The CLI installation is much easier on a Linux laptop. After kubeseal is installed, the public key used to encrypt secrets can be obtained from the controller deployed above. Since I installed the controller in its own namespace sealed-secrets instead of the default kube-system, the command to encrypt secrets is a bit different:

kubectl create secret generic test-secret --from-literal=username=admin --from-literal=password=password --dry-run -o yaml | \
  kubeseal --controller-namespace=sealed-secrets -o yaml > sealed-secrets.yaml
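
The resulting file looks roughly like this; the encryptedData values are long ciphertext blobs, abbreviated here:

# sealed-secrets.yaml (abbreviated)
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: test-secret
  namespace: default
spec:
  encryptedData:
    password: AgBy8mZ...
    username: AgCtr4N...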

Then the generated file sealed-secrets.yaml can be deployed with kubectl apply -f sealed-secrets.yaml, and a secret called test-secret will be created. Now feel free to check sealed-secrets.yaml into a public GitHub repository!
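
The controller's public key can also be exported once with kubeseal's --fetch-cert flag, then used to seal secrets offline:

kubeseal --controller-namespace=sealed-secrets --fetch-cert > cert.pem
# from here on, no cluster access is needed to seal
kubeseal --cert cert.pem -o yaml < secret.yaml > sealed-secrets.yaml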

🙂

Customize the Kustomize for Kubernetes CRDs

I introduced Kustomize in this earlier post; now I feel even happier because Kustomize can be customized even further for some CRDs (Custom Resource Definitions). For instance, Kustomize doesn't know how to handle Istio's VirtualService object, but with some simple YAML-style configuration it 'learns' to handle it easily.

# k8s service, ie. service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-svc
spec:
  selector:
    app: wordpress
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: NodePort

# istio virtual service, ie. virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wordpress-vs
spec:
  hosts:
    - wordpress
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: wordpress-svc
            port:
              number: 80

# kustomize name reference, ie. name-reference.yaml
nameReference:
  - kind: Service
    version: v1
    fieldSpecs:
      - path: spec/http/route/destination/host
        kind: VirtualService

# main kustomize entry, ie. kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configurations:
  - name-reference.yaml
resources:
  - service.yaml
  - virtual-service.yaml
namespace: wordpress
nameSuffix: -raynix

So with name-reference.yaml, kustomize learns that the host property in a VirtualService is linked to the metadata.name of a Service. When the name suffix -raynix is applied to the Service, it will also be applied to the VirtualService, eg.

kind: Service
metadata:
  name: wordpress-svc-raynix
...

kind: VirtualService
spec:
  http:
    - route:
        - destination:
            host: wordpress-svc-raynix
...
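
A quick way to confirm that both objects picked up the suffix, assuming the files above sit in the current directory:

kustomize build . | grep wordpress-svc-raynix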

For more information: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md

🙂