Time Machine for Arch Linux

I’ve been using Arch Linux for some years, and it’s still my favorite Linux distribution. The feature that distinguishes Arch from the others is its rolling release model, which means there’s no such thing as a version in Arch. Running the latest packages is the norm.

However, living on the edge isn’t quite safe. After I installed a bunch of updates including Gnome Shell 3.28, my XPS 15 laptop had trouble bringing up an external monitor. It even froze when I hot-plugged the HDMI cable.

I tried to revert some packages individually, like

sudo pacman -U /var/cache/pacman/pkg/some-package-1.0.xx.pkg.tar.xz

But it didn’t solve the problem because there were hundreds of packages in the last update.

On the verge of panic, I found these instructions for reverting all packages to a snapshot in time, and they actually worked wonders for me.
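From what I can tell, those instructions are the Arch Linux Archive approach: point every repository in the mirrorlist at a dated snapshot. The date below is just a placeholder, use the last known-good day:

# /etc/pacman.d/mirrorlist
Server=https://archive.archlinux.org/repos/2018/04/01/$repo/os/$arch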

The only surprise was that, while downgrading packages, I saw errors like

 package-name: /path/to/package-file exists in filesystem

I guess it’s a safeguarding mechanism in pacman, but since I knew what I was doing, I simply deleted those files. The final command is

sudo pacman -Syyuu

which brings Arch Linux back to that point in time. The issue has been fixed 🙂
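One more note: once the broken packages are fixed upstream, the snapshot pin should be undone by restoring the regular mirrorlist (assuming a backup copy like mirrorlist.backup was saved before editing) and syncing again:

sudo cp /etc/pacman.d/mirrorlist.backup /etc/pacman.d/mirrorlist
sudo pacman -Syyu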

Kubernetes External Service with HTTPS

This is a quick example of assigning an SSL certificate to a Kubernetes external service (which is an ELB in AWS). Tested with kops 1.8 and Kubernetes 1.8.

apiVersion: v1
kind: Service
metadata:
  name: my-https-service
  namespace: my-project
  labels:
    app: my-website-ssl
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:ap-southeast-2:xxx:certificate/xxx..."
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
spec:
  type: LoadBalancer
  selector:
    app: my-website
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 80
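To try it out, save the manifest (the file name here is arbitrary), apply it, and wait for the ELB hostname to appear in the EXTERNAL-IP column:

kubectl apply -f my-https-service.yaml
kubectl get service my-https-service --namespace my-project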


Get access to a container in a Kubernetes cluster

With Kubernetes (K8s), there’s no need to do ssh user@host anymore since everything is running in containers. Still, there are occasions when I need shell access to a container to do some troubleshooting.

With Docker I can do

docker exec -ti <container_id> /bin/bash

It’s quite similar in K8s, except that the target is a pod:

kubectl exec -ti <pod_name> -- /bin/bash

However, in K8s pods get randomly generated name suffixes, so I need to look up the pod name first

kubectl get pods

Then I can grab the pod name and run the `kubectl exec` command. This is hard to automate because picking out the expected pod name with `grep` and `awk` can fail if the matching pattern is too strict.
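For example, a naive one-liner like this (a sketch of the fragile approach) grabs the first pod whose name merely contains my-app, which may well be the wrong one:

kubectl get pods | grep my-app | awk '{print $1}' | head -n 1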

Given a deployment like this

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: my-app

the K8s way to query the pod is

kubectl get pods --selector=app=my-app -o jsonpath='{.items[0].metadata.name}'

A chained one-liner could be

kubectl exec -ti $(kubectl get pods --selector=app=my-app -o jsonpath='{.items[0].metadata.name}') -- /bin/bash

This doesn’t check for errors, e.g. if no pod matching `app=my-app` was found, but a better script can easily be crafted from here.
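As a starting point, here is a minimal sketch with a guard clause, using the app=my-app selector from the deployment above:

#!/bin/bash
# Look up the first pod matching the selector; empty if nothing matches.
pod=$(kubectl get pods --selector=app=my-app -o jsonpath='{.items[0].metadata.name}')
if [ -z "$pod" ]; then
  echo "no pod found for app=my-app" >&2
  exit 1
fi
kubectl exec -ti "$pod" -- /bin/bash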


Notes: BuildKite and Kubernetes Rolling Update

This is kind of a textbook case of containers being much more efficient than VMs. The CI pipeline it replaces uses AWS CloudFormation to build new VMs and drain old ones for a rolling update, which takes around 10 minutes even if just 1 line of code has changed. I built a new pipeline with BuildKite and Kubernetes, and a deploy is done within 2 minutes.

The key points to make the pipeline fast are:

  1. In the Dockerfile, the parts that change more frequently should go at the bottom of the file, so that Docker can maximise build speed by reusing cached intermediate layers (see the sketch after this list).
  2. Reload Kubernetes config maps with this:
    kubectl create configmap nginx-config --from-file path/to/nginx.conf -o yaml --dry-run |kubectl replace -f -
  3. Reload containers with this (I use ECR). $BUILDKITE_BUILD_NUMBER is the build number environment variable provided by BuildKite:

    kubectl set image deployment/my_deployment \
     nginx=my.ecr.amazonaws.com/nginx:$BUILDKITE_BUILD_NUMBER
  4. Finally watch rolling update progress with this command:
    kubectl rollout status deployment/my_deployment
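Back to point 1, here is a minimal Dockerfile sketch for the nginx image above (the paths and versions are made up): slow-changing layers stay at the top, and the frequently edited site content is copied last, so most builds only rebuild the final layer.

FROM nginx:1.13
# Rarely changes: OS-level extras near the top keep the layer cache warm
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
# Changes on almost every commit: copy the site content last
COPY dist/ /usr/share/nginx/html/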