Using Sealed Secrets in a Raspberry Pi Kubernetes Cluster

Sealed Secrets is a Bitnami Kubernetes operator that encrypts regular Secrets into SealedSecret resources, so that they can be safely checked into GitHub or any other VCS: only the controller holds the private key needed to decrypt them. It’s rather easy to install and use Sealed Secrets in a Kubernetes cluster on the AMD64 architecture, but not so on my Raspberry Pi cluster.

First, the container image for the sealed-secrets-controller wasn’t built for the ARM architecture. I managed to build it on my Raspberry Pi 2 with the following commands:

git clone https://github.com/bitnami-labs/sealed-secrets.git
cd sealed-secrets
# golang build tools are needed here
make controller.image
# you can tag it to your docker registry instead of mine
docker tag quay.io/bitnami/sealed-secrets-controller:latest raynix/sealed-secrets-controller-arm:latest
docker push raynix/sealed-secrets-controller-arm

The next step is to use kustomize to override the default sealed-secrets deployment manifest so that it uses my newly built container image that runs on ARM:

# kustomization.yaml
# controller.yaml is from https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.9.7/controller.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: sealed-secrets
images:
  - name: quay.io/bitnami/sealed-secrets-controller
    newName: raynix/sealed-secrets-controller-arm
    newTag: latest
patchesStrategicMerge:
  - patch.yaml

resources:
  - controller.yaml
  - ns.yaml
# ns.yaml
# I'd like to install the controller into its own namespace
apiVersion: v1
kind: Namespace
metadata:
  name: sealed-secrets
# patch.yaml
# apparently the controller running on Raspberry Pi 4 needs more time to initialize
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sealed-secrets-controller
spec:
  template:
    spec:
      containers:
        - name: sealed-secrets-controller
          readinessProbe:
            initialDelaySeconds: 100

Then the controller can be deployed with the command kubectl apply -k .
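Before moving on, it’s worth confirming that the ARM image actually starts. A quick check, assuming the namespace and deployment names from the manifests above, could look like:

```shell
# Wait for the controller to pass its (deliberately delayed) readiness probe,
# then confirm the pod is Running:
kubectl -n sealed-secrets rollout status deployment/sealed-secrets-controller
kubectl -n sealed-secrets get pods
```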

The CLI installation is much easier on a Linux laptop. After kubeseal is installed, the public key used to encrypt secrets can be obtained from the controller deployed above. Since I installed the controller in its own namespace, sealed-secrets, instead of the default kube-system, the command to encrypt secrets is a bit different:

kubectl create secret generic test-secret --from-literal=username=admin --from-literal=password=password --dry-run -o yaml | \
  kubeseal --controller-namespace=sealed-secrets -o yaml > sealed-secrets.yaml

Then the generated file sealed-secrets.yaml can be deployed with kubectl apply -f sealed-secrets.yaml, and a Secret called test-secret will be created by the controller. Now feel free to check sealed-secrets.yaml into a public GitHub repository!
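For reference, the generated sealed-secrets.yaml is an ordinary Kubernetes manifest of kind SealedSecret, roughly like the sketch below. The encryptedData values are long ciphertext blobs (abbreviated here) that only the in-cluster controller can decrypt:

```yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: test-secret
  namespace: default
spec:
  encryptedData:
    username: AgBy...   # abbreviated ciphertext
    password: AgCt...   # abbreviated ciphertext
```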

🙂

Customize the Kustomize for Kubernetes CRDs

I’ve introduced Kustomize in this earlier post; now I feel even happier because Kustomize can be customized further for some CRDs (Custom Resource Definitions). For instance, Kustomize doesn’t know how to handle Istio’s VirtualService object, but with some simple YAML-style configuration it ‘learns’ to handle it easily.

# k8s service, ie. service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-svc
spec:
  selector:
    app: wordpress
  ports:
    - name: http
      port: 80
      targetPort: 8080
  type: NodePort
# istio virtual service, ie. virtual-service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wordpress-vs
spec:
  hosts:
    - wordpress
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: wordpress-svc
            port:
              number: 80
# kustomize name reference, ie. name-reference.yaml
nameReference:
  - kind: Service
    version: v1
    fieldSpecs:
      - path: spec/http/route/destination/host
        kind: VirtualService
# main kustomize entry, ie. kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configurations:
  - name-reference.yaml
namespace: wordpress
nameSuffix: -raynix

So with name-reference.yaml, Kustomize learns that the host property in a VirtualService is linked to the metadata.name of a Service. When the name suffix -raynix is applied to the Service, it will also be applied to the VirtualService, eg.

kind: Service
metadata:
  name: wordpress-svc-raynix
...

kind: VirtualService
spec:
  http:
    - route:
      - destination:
          host: wordpress-svc-raynix
...

For more information: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/transformerconfigs/README.md
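To see the transformer in action without touching a cluster, the overlay can be rendered locally. A quick sketch, assuming the kustomize CLI is installed and the files above sit in the current directory:

```shell
# Render the manifests and check that the -raynix suffix propagated
# into the VirtualService's route destination as well as the Service name:
kustomize build . | grep -E 'name: wordpress-svc|host: wordpress-svc'
```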

🙂

Use fzf to Supercharge Your kubectl Command

First let’s have a look at fzf, a super fast command-line fuzzy finder. It’s mostly written in Golang and there are many ways to use it. In this note, I’ll just use it to select a value from a list in a terminal and then pass it to the next command.

When working with Kubernetes, I often need to switch between namespaces in a cluster. The raw kubectl command for this is:

kubectl config set-context --current --namespace='my-namespace'

But I don’t remember every namespace, do I? If auto-completion for kubectl is enabled, I can double-tab and get a list of namespaces, but that’s still not quite convenient. So I created this simple Bash function using fzf to speed things up:

# select k8s namespace
function kns() {
  kubectl config set-context --current \
    --namespace=$(kubectl get namespaces --output=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | fzf)
}

The kubectl get ... command in the subshell above simply outputs a list of available Kubernetes namespaces and pipes it to fzf for interactive selection. The selected namespace is then passed back to the kubectl config ... command, which sets it as the namespace for the current context. The result looks like this:

PS. I’ve put this and other handy shorthand functions here.
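The same pattern works for any kubectl lookup that returns a list. For example, a hypothetical kctx function (not from the original post, but following the same recipe) could switch between whole contexts instead of namespaces:

```shell
# select a k8s context with fzf: `kubectl config get-contexts -o name`
# prints one context name per line, and fzf turns that into an interactive menu
function kctx() {
  kubectl config use-context "$(kubectl config get-contexts -o name | fzf)"
}
```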

Fedora 31, Optimus, Bumblebee and Steam

My laptop (XPS 15) has an Nvidia 1050 Ti, but I never used it to play games, because I replaced Windows 10 with Fedora Linux on the first day, without even keeping a dual boot. I tried Bumblebee to run Linux-native games on the Nvidia card a few years ago, but it wasn’t stable enough.

After I upgraded the laptop to Fedora 31, I decided to give Bumblebee another try. Luckily I got it working just by following the instructions here. With very little effort, I got Nvidia Settings and GLXSpheres working.

And it’s almost a no-brainer to launch Steam Linux-native games with an extra launch option: optirun -b primus %command%

That’s it, now I can play my Steam games on my Linux laptop 🙂