A Few Things I’m Grateful for About Microsoft Windows

Minesweeper on Windows 3.1

TL;DR: This is my story with Microsoft Windows, from 3.1 to 7.

Many years after my mom bought me my first PC, I realized the Windows 3.1 installed by the vendor was a cracked copy, a pirated copy if you will. But Windows 3.1 was really amazing and I liked it a lot. There was so much innovation in the UI, and the Minesweeper game was completely next level. I played a lot of Minesweeper and compared scores with friends, so much fun, not to mention it strengthened my ability to reason.

Then, after I got a job and saved some money, I decided to buy my own copy of Windows. It was Windows XP at that time. Again, Windows XP was just amazing: everything in XP was smoother, including Minesweeper. At a software store in Beijing I saw two kinds of packaging, the simple OEM box and the beautiful retail box; of course the beautiful one was more expensive. Predictably, I chose the cheaper OEM copy of Windows XP.

I was really happy when I started to install my own copy of Windows XP immediately after I got home, especially the part where I looked up and entered the CD key — it was unique and it belonged only to me! Then, with an internet connection, the copy was activated, which means the CD key had been registered along with my PC’s hardware details. So good luck to my roommate if he ever tried to use my XP disc and CD key to install it on his PC.

So far so good. A few years later I saved enough money to upgrade my PC to a Pentium 4 (quite a hot one at the time, literally). After everything was plugged in properly, I turned on my shiny new Pentium 4 PC and started to install Windows XP. To my surprise, I couldn’t activate my copy of Windows XP, because it thought I was trying to activate it on another PC. To be fair, it technically was another PC: the CPU and motherboard were all new.

But it was my only PC (I had sold my old gear online) and I intended to use it. So I called the Microsoft support hotline, and I didn’t like the answer: an OEM copy of Windows can only be activated once. Without the previous CPU and motherboard, I was no longer the legitimate owner of my Windows XP OEM copy. Cool, lesson learned! I would only buy the retail version from that day onwards; with a retail version I could deactivate it on my old PC and activate it on the new one. Problem solved!

Years later, yes years later, I finally decided to upgrade my Windows XP retail version to the Windows 7 retail version. I don’t know what happened with Windows Vista, but it wasn’t cool, so I waited a long time for the next Windows, and even migrated to Australia in the meanwhile. Then I saw Windows 7, which had that kind of good Windows quality, the same as Windows XP and 3.1 before it.

So I carefully deactivated my Windows XP retail version and reinstalled my PC with the Windows 7 retail version; of course, everything was awesome again. Looking at my decade-old Windows XP retail CD, I decided to sell it on eBay. To my surprise, eBay promptly took down my listing; the reason, as I only vaguely remember, was that I as an individual was not entitled to resell Microsoft products.

That was just the second strike, but later that night I gifted my Windows 7 retail CD to a friend and started to use Linux at home.

Fast forward: Windows 10 was released, and it actually has Linux as a component, called WSL. I feel like I cut out the middleman, since I use Linux directly 🙂

First Month With Sanden Heat Pump

TL;DR: This thing is purely awesome!

An ordinary day’s power consumption of the Sanden Heat Pump unit, marked in red lines

It’s been almost a month since my Sanden heat pump system was installed.

The unit is scheduled to start running after 9AM. It probably should start a bit later than that, because my solar system can barely produce 900W of power at 9AM in late winter, and that’s when it’s sunny. I think it should start in the afternoon, so that even on a cloudy winter day the solar system can carry the heat pump without struggling.

But looking at the home energy chart, the heat pump probably consumes less than 2kWh each day, given that we take a few hot showers and do the usual daily cooking and cleaning routines, so it’s not a big deal at all even without solar.

Water is pre-heated and stored in the well-insulated steel water tank, so at the nearest tap the hot water is almost instant, even more instantaneous than with my old gas heater. I guess it takes a few seconds for a gas heater to warm up its heat-exchange pipes, which obviously isn’t needed with a water tank.

The unit is almost silent when it’s running. I can only pick up its noise when I’m within 5 meters of it. From my kitchen, I can certainly tell when the fridge is running, but not the heat pump, which sits just outside the kitchen wall.

This heat pump also makes a good combo with my induction cooktop. For instance, when I start to cook some noodles, I use hot water from the water tank, which is an instant boost from 10°C to 60°C. This saves me some time waiting for the water to boil, and also some energy thanks to the heat pump’s very high coefficient of performance: only about 1/6th of the energy is used to heat the water to 60°C, then 100% energy from 60°C to 100°C.
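A rough back-of-the-envelope check, assuming water’s specific heat of about 4.2kJ/(kg·K) and a COP of around 6 (my reading of the 1/6th figure, not an official spec), for 1kg of water:

10°C → 60°C: 4.2 × 50 = 210kJ of heat, ÷6 COP ≈ 35kJ of electricity
60°C → 100°C: 4.2 × 40 = 168kJ on the induction cooktop at ~100% efficiency
Total: ≈ 203kJ, versus 4.2 × 90 = 378kJ boiling from 10°C on the cooktop alone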

Now I only hope this thing runs reliably and won’t have a break down. Fingers crossed 🙂

Update GCP IAM Adaptively with Terraform Data Sources

In a scenario where a service account in a central GCP project needs to be accessible by a group of GKE service accounts across multiple GCP projects, the IAM part in Terraform HCL could look like this:

resource "google_service_account" "service_account" {
  account_id   = "sa-${var.environment}"
  display_name = "Test Service Account"
  project      = var.project_id
}

resource "google_service_account_iam_binding" "service_account_workload_identity_binding" {
  service_account_id = google_service_account.service_account.name
  role               = "roles/iam.workloadIdentityUser"

  members = [
    "serviceAccount:xxx.svc.id.goog[k8s-namespace/k8s-sa]",
    "serviceAccount:yyy.svc.id.goog[k8s-namespace/k8s-sa]",
    ...
  ]
}

I can make a variable for the members, so it becomes:

variable "project_ids" {
  type = list(string)
}

resource "google_service_account_iam_binding" "service_account_workload_identity_binding" {
  service_account_id = google_service_account.service_account.name
  role               = "roles/iam.workloadIdentityUser"

  members = [
    for project_id in var.project_ids: "serviceAccount:${project_id}.svc.id.goog[k8s-namespace/k8s-sa]"
  ]
}

But the project_ids variable still needs to be populated in a tfvars file with hard-coded project IDs. Is there a more flexible way to do this, so that I don’t need to add or remove a project ID from the list as projects come and go?
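For illustration, this is roughly what the hard-coded tfvars would look like (the project IDs are placeholders):

# terraform.tfvars
project_ids = [
  "xxx",
  "yyy",
]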

With the google_projects data source, I can list and filter project IDs based on a filter string; however, I couldn’t find a filter for the condition “the project has a GKE cluster with Workload Identity enabled”, such as

# this does NOT work! Just wishful thinking
data "google_projects" "cas_projects" {
  filter = "gke_workload_identity: true"
}

Then the last hope, as always, is the external data source. I use the google_projects data source to get an initial list of project IDs, then use a bash script as an external data source to filter down to the GCP projects that have GKE clusters with Workload Identity enabled.

First, the google_projects data source filters with GCP folder IDs:

variable "gcp_folder_ids" {
  type = list(string)
}

data "google_projects" "gcp_projects" {
  filter = join(" OR ", [ for folder_id in var.gcp_folder_ids: "parent.id: ${folder_id}"])
}
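For example, with gcp_folder_ids = ["12345", "67890"] (made-up folder IDs), the join() above produces this filter string:

parent.id: 12345 OR parent.id: 67890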

Then the external data source picks up the project IDs and filters them further with the bash script.

data "external" "gcp_projects_with_wli" {
  program = ["bash", "${path.module}/scripts/project-ids-with-wli-enabled.sh"]

  query = {
    project_ids = join(",", [ for proj in data.google_projects.gcp_projects.projects: proj.project_id ])
  }
}

The bash script requires gcloud and jq to run; it also needs to impersonate a service account that has permission to list and query all the GCP projects under the organization.

#!/bin/bash
# this is scripts/project-ids-with-wli-enabled.sh
# set -e
if [[ -z "${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}" ]]; then
  export CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$HOME/.config/gcloud/application_default_credentials.json
else
  gcloud config set auth/impersonate_service_account "${GOOGLE_IMPERSONATE_SERVICE_ACCOUNT}"
fi

function filter_gcp_project() {
  # read the comma-separated project IDs from the query JSON on stdin
  for project_id in $(jq -rc '.project_ids' | tr ',' ' '); do
    # look up the Workload Identity pool of the project's first GKE cluster, if any
    pool_id=$(
      gcloud container clusters list --project "$project_id" --format json \
        | jq -r '.[0].workloadIdentityConfig.workloadPool'
    )
    # the project qualifies when the pool matches <project-id>.svc.id.goog
    [[ $pool_id == "${project_id}.svc.id.goog" ]] && echo "$project_id"
  done
}

declare -a VERIFIED_PROJECT_IDS=()
VERIFIED_PROJECT_IDS+=( $(filter_gcp_project) )
# @csv double-quotes each ID, so strip the escaped quotes to leave a plain comma-separated string
jq -rn '{ "verified_project_ids": $ARGS.positional|@csv }' --args ${VERIFIED_PROJECT_IDS[*]} | sed 's|\\\"||g'
# sample output
# { "verified_project_ids": "projectid1,projectid2" }

Unfortunately, the external data source only supports strings as input and output values, so all the project IDs have to be joined into a single string as input, then split again into a list, and so on.

Finally, the updated IAM binding block uses the external data source, with a lot of string manipulation 🙂

resource "google_service_account_iam_binding" "service_account_workload_identity_binding" {
  service_account_id = google_service_account.service_account.name
  role               = "roles/iam.workloadIdentityUser"

  members = [
    for proj_id in split(",", data.external.gcp_projects_with_wli.result.verified_project_ids) : "serviceAccount:${proj_id}.svc.id.goog[cert-manager/ksa-google-cas-issuer]"
  ]
}

Kubernetes Jobs and Istio

Note: the Job in the title refers to the Job resource in a Kubernetes cluster.

At the time of writing, the Istio sidecar doesn’t play well with a Job or a CronJob: the istio-proxy might not be ready when the Job starts (which causes connection issues for the Job), and it won’t exit after the Job finishes (which leaves the Job stuck, never marked as complete).

Here’s a simple Bash script for a Job, assuming the Job’s container image has Bash and curl:

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    metadata:
      name: db-migrate
    spec:
      restartPolicy: Never
      containers:
        - name: db-migrate
          image: "some-image-with-curl:v0.1"
          command:
            - /bin/bash
            - -c
            - |
              # wait for the istio-proxy to become ready
              until curl -fsI http://localhost:15021/healthz/ready; do
                echo 'Waiting for Sidecar...'
                sleep 1
              done
              # do the job here
              bundle exec rails db:migrate
              # ask the istio-proxy to exit
              curl -fsI -X POST http://localhost:15020/quitquitquit
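
To run it, something like this should do (the manifest file name is mine):

kubectl apply -f db-migrate-job.yaml
kubectl wait --for=condition=complete --timeout=300s job/db-migrate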

And if the Job’s container image doesn’t have curl, I use a curl image as another container in the pod to get the job done:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  template:
    metadata:
      name: db-migrate
    spec:
      restartPolicy: Never
      volumes:
        - name: flags
          emptyDir: {}
      containers:
        - name: curl
          image: curlimages/curl:7.78.0
          command:
            - /bin/sh
            - -c
            - |
              # test istio-proxy
              until curl -fsI http://localhost:15021/healthz/ready; do
                echo 'Waiting for Sidecar...'
                sleep 1
              done
              # touch the flag in tmp dir
              touch /tmp/flags/istio-proxy-ready
              # then wait for the job to finish
              until [ -f /tmp/flags/done ]; do
                echo 'Waiting for the job to finish...'
                sleep 1
              done
              # ask istio-proxy to exit
              curl -fsI -X POST http://localhost:15020/quitquitquit
          volumeMounts:
            - name: flags
              mountPath: /tmp/flags
        - name: db-migrate
          image: "some-image-without-curl:v0.1"
          command:
            - /bin/bash
            - -c
            - |
              # wait for the flag of istio-proxy
              until [[ -f /tmp/flags/istio-proxy-ready ]]; do
                echo 'Waiting for Sidecar...'
                sleep 1
              done
              # do the job
              bundle exec rails db:migrate
              # set the flag so curl can shut down istio-proxy
              touch /tmp/flags/done
          volumeMounts:
            - name: flags
              mountPath: /tmp/flags

🙂