Grant a Service Account an IAM Role in AWS/GCP

How do you grant a pod running in a Kubernetes cluster the permissions it needs to access cloud resources such as S3 buckets? The most straightforward approach is to save an API key in the pod and use it to authenticate against cloud APIs, but if the cluster runs inside the cloud, an IAM role can instead be bound to a service account in the cluster, which is both more convenient and safer.

I’ll compare how an IAM role is bound to a service account in AWS/EKS and in GCP/GKE.


EKS is the managed Kubernetes service in AWS. To bind an EKS service account to an AWS IAM role:

  1. Create an IAM OIDC provider for your cluster
  2. Create an IAM role which can be assumed by the EKS service account
  3. Annotate the EKS service account to assume the IAM role
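Step 3 boils down to a single annotation on the service account. As a sketch (the account ID and role name below are hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    # hypothetical role ARN; pods using this service account can now assume it
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-s3-reader
```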


GKE is the managed Kubernetes service in GCP. There this mechanism is called Workload Identity (WLI); in a nutshell it binds a GKE service account to a GCP IAM service account, so it’s a bit different from the EKS approach above. The full instructions are in GCP’s documentation, but in short:

  1. Enable WLI for the GKE cluster
  2. Create or update node-pool to enable WLI
  3. Create IAM service account and assign roles with necessary permissions
  4. Allow IAM service account to be impersonated by a GKE service account
  5. Annotate the GKE service account to impersonate the GCP service account
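Steps 4 and 5 can be sketched as follows (the project, namespace and account names are hypothetical):

```
# allow the GKE service account default/my-app to impersonate the IAM service account
gcloud iam service-accounts add-iam-policy-binding my-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[default/my-app]"

# annotate the GKE service account with the IAM service account to impersonate
kubectl annotate serviceaccount my-app -n default \
  iam.gke.io/gcp-service-account=my-gsa@my-project.iam.gserviceaccount.com
```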


TLS Full Site Encryption with Istio and Let’s Encrypt

These are the steps to easily install TLS certs into a Kubernetes cluster that uses the Istio service mesh as its ingress controller, with certs provided by Let’s Encrypt’s awesome certbot.

Installation of the certbot (on Ubuntu Linux 20.04 LTS)

The certbot can be installed via snap on Ubuntu Linux:

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/local/bin/certbot
certbot --version
certbot 1.15.0

By default certbot needs to write to system directories, which I thought unnecessary. I use this alias to run certbot as a normal user:

mkdir ~/.certbot
alias certbot="certbot --config-dir ~/.certbot/ --work-dir ~/.certbot/ --logs-dir ~/.certbot"

Generate a new cert

Here’s an example that uses a certbot plugin to create a certificate for domains hosted at Cloudflare. See the plugin’s documentation for more info.

# install the plugin first
sudo snap set certbot trust-plugin-with-root=ok
sudo snap install certbot-dns-cloudflare

# save a cloudflare API token
echo "dns_cloudflare_api_token = xxxx" > ~/.cloudflare.ini

# generate the cert
# cert and key will be in ~/.certbot/live/
certbot certonly --dns-cloudflare -d -d '*' --dns-cloudflare-credentials ~/.cloudflare.ini
ls ~/.certbot/live/ -lht
total 4.0K
-rw-rw-r-- 1 ray ray 692 May 10 11:52 README
lrwxrwxrwx 1 ray ray  35 May 10 11:52 cert.pem -> ../../archive/
lrwxrwxrwx 1 ray ray  36 May 10 11:52 chain.pem -> ../../archive/
lrwxrwxrwx 1 ray ray  40 May 10 11:52 fullchain.pem -> ../../archive/
lrwxrwxrwx 1 ray ray  38 May 10 11:52 privkey.pem -> ../../archive/

Install the cert to an Istio gateway

The cert and the key will be put into a Kubernetes secret in the istio-system namespace:

# assuming kubectl is installed and configured
kubectl create secret -n istio-system tls wild-cert --key ~/.certbot/live/ --cert ~/.certbot/live/

Now the Istio gateway object needs to use this secret as its TLS credential:

cat <<EOF >gw.yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: wordpress-gateway
  namespace: wordpress
spec:
  selector:
    # default istio ingress gateway
    istio: ingressgateway
  servers:
  - hosts:
    - "*"  # the host list was elided in the original; "*" is a placeholder
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: wild-cert
      mode: SIMPLE
  - hosts:
    - "*"  # placeholder, see above
    port:
      name: http
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
EOF

Then this can be tested locally with curl:

curl -v --resolve "<host>:<TLS node port>:<node IP>" "https://<host>:<TLS node port>"


A Kubernetes ClusterSecret

No — at this moment ClusterSecret, unlike ClusterRole, doesn’t officially exist in any version of Kubernetes yet. I’ve seen some discussion about it, so it looks like it will be a while before we have a ClusterSecret.

But why do I need a ClusterSecret in the first place? The reason is very simple: to be DRY. Imagine I have a few apps deployed into several different namespaces and they all need to pull from my private Docker registry. This looks like:

├── namespace-1
│   ├── image-pull-secret
│   └── deployment-app-1
├── namespace-2
│   ├── image-pull-secret
│   └── deployment-app-2

It’s plain to see that all the image-pull-secret secrets are the same, but as there’s no ClusterSecret they have to be duplicated all over the place. To make things worse, if the private registry rotates its token, all of these secrets need to be updated at once.
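Without a ClusterSecret, keeping the copies in sync means something like the following (a sketch; the namespace names and registry values are hypothetical):

```
# recreate the same pull secret in every namespace that needs it
for ns in namespace-1 namespace-2; do
  kubectl create secret docker-registry image-pull-secret -n "$ns" \
    --docker-server=registry.example.com \
    --docker-username=me \
    --docker-password=my-token
done
```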

Of course I’m not the first one to be frustrated by this, and there are tools built by the community already. The ClusterSecret operator is one of them, but when I looked at kubernetes-reflector I immediately liked its simple approach: it can reflect one source secret or configmap to many mirrored ones across namespaces! It’s also easy to integrate the reflector with an existing SealedSecret operator.

Here’s how to install kubernetes-reflector quickly with all default settings (copied from its README). I chose to save this file and let my FluxCD install it for me:

kubectl apply -f

Now I can create an image pull secret for my private Docker registry in the kube-system namespace, and the reflector will copy it to the namespaces that match the namespace whitelist regex.

The command to create an image pull secret is:

kubectl create secret docker-registry image-pull-secret -n kube-system --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

The full sealed-secret command will be (note the added --dry-run=client -o yaml, so kubectl prints the secret manifest for kubeseal instead of creating it):

kubectl create secret docker-registry image-pull-secret -n kube-system --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email> --dry-run=client -o yaml | \
  kubeseal --controller-namespace=sealed-secrets --controller-name=sealed-secrets -o yaml > image-pull-secret.yaml

Then I’ll add a few magic annotations to let the reflector pick up the job:

# this is image-pull-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  creationTimestamp: null
  name: image-pull-secret
  namespace: kube-system
spec:
  encryptedData:
    .dockerconfigjson: AgA4E6mcpri...
  template:
    metadata:
      creationTimestamp: null
      name: image-pull-secret
      namespace: kube-system
      annotations:
        reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
        reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
        reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "wordpress-.*"
status: {}

So when I deploy this file, first the SealedSecret operator will decrypt it into a normal secret carrying those annotations (note: adding annotations won’t break the encryption, but changing the name or namespace would). Then the reflector will create the image-pull-secret secret in every namespace whose name starts with the wordpress- prefix.

Mission accomplished 🙂

Hello World, Grafana Tanka

I liked YAML a lot, until it got longer and longer, and even longer.

There are tools to make YAML ‘DRY’; the popular ones are Helm and Kustomize. But neither of them can claim to have gotten the job done.

To be honest, I didn’t like Helm much from the start. Helm uses a templating syntax similar to Jinja2, or PHP, or the ASP of 20 years ago. It tries to solve a recent problem with an ancient approach: generate text and substitute placeholders with values from variables. Obviously it works and a lot of people are using it, but it bothers me that there are issues Helm can hardly resolve due to this design:

  • It’s a mixture of YAML and scripts. When working on a chart, developers have to maintain the YAML indentation inside the code fragments too, to ensure the YAML is generated as expected.
  • The values of a Helm chart are the parts that can be customized when reusing the chart. This hides the complexity of the full chart from end users, but if what a user wants isn’t in the values, the user has to fork the whole chart.
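To illustrate the first point, here’s a typical fragment from a Helm template (a sketch with a hypothetical chart name); the nindent call exists purely to keep the generated YAML indented correctly:

```yaml
# templates/deployment.yaml (fragment)
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        {{- include "mychart.labels" . | nindent 8 }}
```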

I liked Kustomize a bit more because it doesn’t break the elegance of YAML, i.e. the base and overlay Kustomize templates are both valid YAML files. But this is also its Achilles’ heel, because YAML is data, not code: it’s OK to have a few variables, but loops, functions, modules? Those are too much to ask.
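For instance, a Kustomize overlay is itself valid YAML (a minimal sketch; the paths and names are hypothetical):

```yaml
# overlays/production/kustomization.yaml
resources:
  - ../../base
patches:
  - target:
      kind: Deployment
      name: my-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 3
```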

I was introduced to Jsonnet a while ago, but why would I go back to JSON when I’m better off with YAML? Then I happened to come across the Grafana Tanka project; after a few minutes’ read I thought Tanka had solved most of the problems that Helm and Kustomize had not. After I watched this presentation (to my surprise the view count on the video is shockingly low) I decided to give it a go!

The first step is to install the tanka binary. I personally installed it as a Go module. The go get commands look like:

$ GO111MODULE=on go get
# install JsonnetBundler as it's recommended
$ GO111MODULE=on go get
$ tk --version
tk version dev

Then we can initialize a hello-world project for tanka:

$ mkdir tanka-hello
$ cd tanka-hello
$ tk init
GET 200
GET 200
Directory structure set up! Remember to configure the API endpoint:
`tk env set environments/default --server=`

The initial directory structure looks like this:

$ tree
├── environments
│   └── default
│       ├── main.jsonnet # <-- this is where we will do the hello-world
│       └── spec.json
├── jsonnetfile.json
├── jsonnetfile.lock.json
├── lib
│   └── k.libsonnet
└── vendor
    ├── grafana
    │   └── jsonnet-libs
    │       └── ksonnet-util
    │           ├── grafana.libsonnet
    │           ├── kausal.libsonnet
    │           ├── k-compat.libsonnet
    │           ├── legacy-custom.libsonnet
    │           ├── legacy-noname.libsonnet
    │           ├── legacy-subtypes.libsonnet
    │           ├── legacy-types.libsonnet
    │           └── util.libsonnet
    ├── ksonnet
    │   └── ksonnet-lib
    │       └── ksonnet.beta.4
    │           ├── k8s.libsonnet
    │           └── k.libsonnet
    ├── ksonnet.beta.4 ->
    └── ksonnet-util ->

To make this demo more intuitive, I ran an inotifywait command in the left pane and vim environments/default/main.jsonnet in the right pane:

# the inotifywait command explained:
# every time the file is saved in vim, a moved_to event is emitted;
# we read (and ignore) the event details, then print the generated YAML
$ inotifywait -q -m -e moved_to environments/default/ \
  | while read -r path event filename; do
      echo -e '\n\n'; tk show environments/default/
    done

As shown in the screenshot, after I understood a minimum set of Tanka syntax, I could get a Deployment YAML out of a few lines of Jsonnet code.
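For reference, a minimal environments/default/main.jsonnet along those lines might look like this (a sketch following the Tanka tutorial, using the k.libsonnet vendored by tk init; the app name and image are hypothetical):

```jsonnet
local k = import "k.libsonnet";

{
  local deployment = k.apps.v1.deployment,
  local container = k.core.v1.container,

  // a single-replica Deployment generated from a few lines of Jsonnet
  hello: {
    deployment: deployment.new(
      name="hello-world",
      replicas=1,
      containers=[container.new("hello-world", "nginx:latest")],
    ),
  },
}
```

Running tk show environments/default/ renders this into the familiar (and much longer) Deployment YAML.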