Grant a Service Account an IAM Role in AWS/GCP

How do you grant a pod running in a Kubernetes cluster the permissions it needs to access cloud resources such as S3 buckets? The most straightforward approach is to save an API key in the pod and use it to authenticate against cloud APIs, but that means keeping long-lived credentials around. If the cluster is running inside the cloud, an IAM role can instead be bound to a service account in the cluster, which is both convenient and safe.

I’ll compare how an IAM role is bound to a Kubernetes service account in AWS/EKS and in GCP/GKE.

AWS/EKS

EKS is the managed Kubernetes service in AWS. To bind an EKS service account to an AWS IAM role:

  1. Create an IAM OIDC provider for your cluster
  2. Create an IAM role which can be assumed by the EKS service account
  3. Annotate the EKS service account to assume the IAM role, as shown below
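
The annotation in step 3 looks like this (a minimal sketch; the service account name, namespace, account id and role name are placeholders for the real values):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: my-namespace
  annotations:
    # the IAM role created in step 2
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-name>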

GCP/GKE

GKE is the managed Kubernetes service in GCP. In GCP this binding is called Workload Identity (WLI); in a nutshell it binds a GKE service account to a GCP IAM service account, so it’s a bit different from the EKS approach above. The full instructions are here, but in short:

  1. Enable WLI for the GKE cluster
  2. Create or update a node pool with WLI enabled
  3. Create an IAM service account and assign it roles with the necessary permissions
  4. Allow the IAM service account to be impersonated by the GKE service account
  5. Annotate the GKE service account to impersonate the GCP service account (example commands below)
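
Steps 4 and 5 boil down to something like the following (a sketch; PROJECT_ID, GSA_NAME, NAMESPACE and KSA_NAME are placeholders for the GCP project, the IAM service account and the GKE namespace/service account):

# step 4: allow the GKE service account to impersonate the IAM service account
gcloud iam service-accounts add-iam-policy-binding GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT_ID.svc.id.goog[NAMESPACE/KSA_NAME]"

# step 5: point the GKE service account at the IAM service account to impersonate
kubectl annotate serviceaccount KSA_NAME -n NAMESPACE \
  iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com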

🙂

I Farmed Some Chia (XCH)

Chia is a relatively new cryptocurrency which can be ‘mined’ with hard disks. It’s advertised as a green crypto because hard disks consume way less energy compared to mining rigs with high-end graphics cards.

I installed Chia on my Ubuntu Linux desktop computer because it has some vacant SATA ports that I can use for mining. The official installation instructions are here.

In a nutshell, the mining power of Chia is determined by how many plots I have on disk, and they are huge – about 100GB each. The process of generating plots is very IO intensive, so it’s best done on the fastest SSD available (and expect that SSD to wear out quickly).

I used an old 512GB M.2 SSD as the plotting disk and 2 x 8TB HDDs to store the generated plots, e.g.
/var/chia/tmp -> 512GB SSD
/var/chia/barn1 -> 8TB HDD
/var/chia/barn2 -> 8TB HDD

At the moment 512GB isn’t big enough to run 2 plotting processes in parallel, so I used a simple bash loop to keep generating plots one at a time until the destination disk is full, and then started it again with the next storage destination.

# this can be run in a screen to keep it running in background
while chia plots create -k 32 -n 1 -t /var/chia/tmp -d /var/chia/barn1 ; do sleep 1; done
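
When the first destination fills up and the loop exits, the same loop can simply be started again with the next destination, e.g.

# same single-threaded loop, pointed at the second HDD
while chia plots create -k 32 -n 1 -t /var/chia/tmp -d /var/chia/barn2 ; do sleep 1; done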

The chia command to check farming status is

chia farm summary

With 2 x 8TB (about 14T usable) of storage, I have put 138 plots on them. It sounds like a lot at first, but with so many people plotting, including some professional miners, the estimated time to win has kept increasing, from 2 months to nearly a year now. But I’ve had some serious luck here, as I was awarded 2 XCH coins the other day!

chia farm summary
Farming status: Farming
Total chia farmed: 2.0
User transaction fees: 0.0
Block rewards: 2.0
Last height farmed: 185xxx
Plot count: 138
Total size of plots: 13.660 TiB
Estimated network space: 15766.681 PiB
Expected time to win: 7 months and 2 weeks

A typical HDD consumes around 10W of power, and it can be set to spin down when idle. To my understanding that is worthwhile because the plots only very rarely get challenged, which means most of the time the drives have no activity. Compared to the power consumption of PoW (Proof of Work) cryptocurrencies such as BTC and ETH (for now), Chia (XCH) is indeed much, much greener.
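
For example, on Linux the spin-down timeout can be set with hdparm (a sketch; /dev/sdb is a placeholder for the actual farming HDD):

# 241 puts the drive into standby after 30 minutes of inactivity
sudo hdparm -S 241 /dev/sdb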

A warning here though: the plotting disk will wear heavily and might even reach the end of its life during the process.

🙂

TLS Full Site Encryption with Istio and Let’s Encrypt

These are the steps to easily install TLS certs, obtained with Let’s Encrypt’s awesome certbot, into a Kubernetes cluster that uses the Istio service mesh as its ingress controller.

Installation of certbot (on Ubuntu Linux 20.04 LTS)

certbot can be installed via snap on Ubuntu Linux:

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/local/bin/certbot
certbot --version
certbot 1.15.0

By default certbot needs to write to system directories, which I thought unnecessary. I use this alias to run certbot as a normal user:

mkdir ~/.certbot
alias certbot="certbot --config-dir ~/.certbot/ --work-dir ~/.certbot/ --logs-dir ~/.certbot"

Generate a new cert

Here’s an example of using certbot’s CloudFlare DNS plugin to create a certificate for domains hosted at CloudFlare. See here for more info on the plugin.

# install the plugin first
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install certbot-dns-cloudflare

# save a cloudflare API token for the plugin to use
echo "dns_cloudflare_api_token = xxxx" > ~/.cloudflare.ini
# restrict permissions, otherwise certbot warns that the credentials file is world-readable
chmod 600 ~/.cloudflare.ini

# generate the cert
# cert and key will be in ~/.certbot/live/raynix.info
certbot certonly --dns-cloudflare -d raynix.info -d '*.raynix.info' --dns-cloudflare-credentials ~/.cloudflare.ini
ls ~/.certbot/live/raynix.info/ -lht
total 4.0K
-rw-rw-r-- 1 ray ray 692 May 10 11:52 README
lrwxrwxrwx 1 ray ray  35 May 10 11:52 cert.pem -> ../../archive/raynix.info/cert1.pem
lrwxrwxrwx 1 ray ray  36 May 10 11:52 chain.pem -> ../../archive/raynix.info/chain1.pem
lrwxrwxrwx 1 ray ray  40 May 10 11:52 fullchain.pem -> ../../archive/raynix.info/fullchain1.pem
lrwxrwxrwx 1 ray ray  38 May 10 11:52 privkey.pem -> ../../archive/raynix.info/privkey1.pem

Install the cert to an Istio gateway

The cert and the key will be put into a Kubernetes secret in the istio-system namespace:

# assuming kubectl is installed and configured
kubectl create secret -n istio-system tls wild-cert --key ~/.certbot/live/raynix.info/privkey.pem --cert ~/.certbot/live/raynix.info/fullchain.pem

Now the Istio gateway object needs to use this secret as its TLS credential:

cat <<EOF >gw.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: wordpress-gateway
  namespace: wordpress
spec:
  selector:
    # default istio ingress gateway
    istio: ingressgateway
  servers:
  - hosts:
    - raynix.info
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: wild-cert
      mode: SIMPLE
  - hosts:
    - raynix.info
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
EOF
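
Apply the manifest (assuming kubectl is pointed at the same cluster):

kubectl apply -f gw.yaml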

Then this can be locally tested with curl

curl -v -HHost:raynix.info --resolve "raynix.info:<TLS node port>:<node IP>" "https://raynix.info:<TLS node port>"

🙂

A Kubernetes ClusterSecret

No, at this moment ClusterSecret, unlike ClusterRole, doesn’t officially exist in any version of Kubernetes yet. I’ve seen some discussion like this, so it looks like it will be a while before we have a ClusterSecret.

But why do I need a ClusterSecret in the first place? The reason is very simple: To be DRY. Imagine I have a few apps deployed into several different namespaces and they all need to pull from my private docker registry. This looks like:

├── namespace-1
│   ├── image-pull-secret
│   └── deployment-app-1
├── namespace-2
│   ├── image-pull-secret
│   └── deployment-app-2
...

It’s fairly obvious that all the image-pull-secret secrets are the same, but as there’s no ClusterSecret they have to be duplicated all over the place. And to make things worse, if the private registry changes its token, all of these secrets need to be updated at once.

Of course I’m not the first one to be frustrated by this, and the community has already built tools for it. The ClusterSecret operator is one of them, but when I looked at kubernetes-reflector I immediately liked its simple approach: it can reflect 1 source secret or configmap to many mirrored ones across namespaces! It’s also easy to integrate reflector with an existing SealedSecret operator.

Here’s how to install kubernetes-reflector quickly with all default settings (copied from its README). I chose to save this manifest and let my FluxCD install it for me.

kubectl apply -f https://github.com/emberstack/kubernetes-reflector/releases/latest/download/reflector.yaml

Now I can create an image pull secret for my private docker registry in the kube-system namespace, and the reflector will copy it to the namespaces that match the regex in the namespace whitelist.

The command to create an image pull secret is

kubectl create secret docker-registry image-pull-secret -n kube-system --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

The full command to generate the sealed secret is

# --dry-run=client -o yaml prints the secret manifest instead of creating it, so kubeseal can encrypt it
kubectl create secret docker-registry image-pull-secret -n kube-system --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email> \
  --dry-run=client -o yaml | \
  kubeseal --controller-namespace=sealed-secrets --controller-name=sealed-secrets -o yaml > image-pull-secret.yaml

Then I’ll add a few magic annotations to let the reflector pick up the job:

# this is image-pull-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: image-pull-secret
  namespace: kube-system
spec:
  encryptedData:
    .dockerconfigjson: AgA4E6mcpri...
  template:
    metadata:
      creationTimestamp: null
      name: image-pull-secret
      namespace: kube-system
      annotations:
        reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: "wordpress-.*"
    # keep the secret type so the unsealed secret works as an imagePullSecret
    type: kubernetes.io/dockerconfigjson
status: {}

So when I deploy this file, the SealedSecret operator will first decrypt it into a normal secret with those annotations (note: adding annotations won’t break the decryption, but changing the name or namespace could). Then the reflector will create the image-pull-secret secret in every namespace that starts with the wordpress- prefix.
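
For reference, a workload in one of those namespaces can then pull from the private registry by referencing the mirrored secret (a minimal sketch; the namespace, app name and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-1
  namespace: wordpress-app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      # the secret mirrored into this namespace by kubernetes-reflector
      imagePullSecrets:
        - name: image-pull-secret
      containers:
        - name: app-1
          image: <your-registry-server>/app-1:latest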

Mission accomplished 🙂