Golang and Docker Multi-Stage Build MK2

In my previous post I used the Docker multi-stage build technique to produce a container image that holds only the Golang executable, on a tiny Alpine Linux base image. This time I'll go further and use the scratch base image, which contains basically nothing.

Here's the Dockerfile I tested on my own project; I've added comments to explain the important lines:

FROM golang:1.13.1 AS builder

# ENVs to ensure golang will need no external libraries
ENV CGO_ENABLED=0 GOOS=linux
WORKDIR /app
COPY . .
# build switches to ensure golang will need no external libraries
RUN go build -a -installsuffix cgo -o myapp main.go && \
# create non-privileged user and group to run the app
  addgroup --system --gid 2000 golang && \
  adduser --system --gid 2000 --uid 2000 golang

FROM scratch
# some sample ENVs for the app
ENV API_KEY=xxx \
  API_EMAIL=xxx
WORKDIR /app
# copy the golang executable over
COPY --from=builder /app/myapp .
# scratch has no adduser command so just copy the files from builder
COPY --from=builder /etc/passwd /etc/passwd
COPY --from=builder /etc/group /etc/group
# use the CA cert from builder to enable HTTPS access
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
USER golang
# run the executable directly
ENTRYPOINT ["/app/myapp"]

The resulting image is only 7.7MB, just 1MB larger than the Golang executable itself.
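If you want to verify the size yourself, a quick check looks like this (the image tag is just an example, not anything the project mandates):

$ docker build -t myapp:scratch .
$ docker images myapp:scratch

🙂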

Deploy WordPress to Kubernetes with Kustomize

I've just migrated this blog site itself into the Kubernetes cluster I built with Raspberry Pi 4s, and this post is about the steps and approach I used to achieve that. Yes, what you have been reading is served by one of the Raspberry Pi boards.

First of all, a bit of an introduction to kustomize: it's a bit late to the game, but better late than never. Since kubectl v1.14, kustomize has been merged in and can be used via a switch, e.g. kubectl apply -k <kustomization directory> ...

I guess the reason behind something like kustomize is that when someone like me deploys apps into k8s clusters, a lot of it is really "YAML engineering", i.e. writing YAML files for the Namespace, Deployment, Service, etc. It's OK to do it for the first time, but very soon I found myself repeating the same metadata and annotation tags over and over.

helm is the other tool for managing Kubernetes schema files, and it started quite a bit earlier than kustomize. I never liked it, though. A sample helm chart template looks like this. The main reason I don't like it is that it brings placeholders like {{ .Values.stuff }} into the YAML, and they end up everywhere, just like good old ASP/JSP templates; the template is no longer valid YAML. Also, I'm not a fan of putting a lot of logic into configuration files.

Here's a very good blog post on how to use kustomize. With kustomize I can keep values tied to particular conditions, e.g. a git branch or an ops environment, in overlay YAML files without having to template the original schema at all. This lets me check the base schema into a public repository without worrying about leaking a database password.

Here's the GitHub repository where I store the YAML files I used to deploy WordPress into my cluster. It includes the following:

  • Namespace for each installation
  • typical WordPress on PHP7.2-FPM and nginx containers running as a non-root user
  • K8s PersistentVolume on an NFS shared partition for files, e.g. photos, plug-ins, etc.
  • Redis cache for PHP sessions
  • Ingress routing for nginx-ingress-controller

The wordpress-base directory has almost all of the schema, with some dummy values, and the wordpress-site directory has kustomize patch files which hold your domain name, the NFS server address for storage, etc.

To reuse my schema, you can simply duplicate the wordpress-site directory alongside the wordpress-base directory and put in real configuration as you see fit, such as:

pik8s/
  + wordpress-base/
  + wordpress-site/
  + wordpress-mysite/
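
To give an idea of what goes into such an overlay, a kustomization.yaml in the copied directory would look roughly like this (a sketch only; the exact patch file names in my repository may differ):

# pik8s/wordpress-mysite/kustomization.yaml (illustrative sketch)
bases:
  - ../wordpress-base
patchesStrategicMerge:
  # site-specific values: domain name, NFS server address, etc.
  - deploy.yaml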

Then, assuming you've already configured kubectl, the database and NFS, you can preview the WordPress deployment with:

# in pik8s/wordpress-mysite/
$ kubectl apply -k . --dry-run -o yaml |less

And then do the real thing by dropping the --dry-run switch:
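
# in pik8s/wordpress-mysite/
$ kubectl apply -k .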

But the secret referenced in deploy.yaml is obviously not checked in. You need to create it manually with:

# prepare files to be used in the secret
$ echo -n 'mydbpass' > dbpass
# do the similar for dbhost, dbname, dbuser
...
# then create the secret
$ kubectl create secret --namespace wordpress-mysite generic wordpress-secret \
    --from-file=dbuser --from-file=dbhost \
    --from-file=dbname --from-file=dbpass
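
For reference, this is roughly how a deployment consumes such a secret as environment variables (an illustrative excerpt only; the container and variable names are examples, not necessarily what my deploy.yaml uses):

# excerpt of a container spec consuming the secret (illustrative)
containers:
  - name: wordpress
    env:
      - name: WORDPRESS_DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: wordpress-secret
            key: dbpass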

🙂

Golang and Docker Multi-Stage Build

I have noticed a common pattern amongst some newer utilities such as kubectl, kops and terraform: there's only a single executable file to install, and by 'install' I mean it can be put anywhere, as long as it's in $PATH. I noticed this before I had learned any Golang, but it's easy to find out that the reason behind this pattern is that they are all written in Go.

And in the container realm, the newish multi-stage build feature that Docker released in 2017 is super beneficial for Golang containers. A TL;DR example looks like this:

  1. use a ~1GB Debian-based container with all the Golang tools and build dependencies to build the Golang executable (the FROM ... AS line in the sample).
  2. copy the executable into a tiny run-time container such as Alpine Linux, resulting in a < 20MB container image, depending on the size of the app obviously (the COPY --from line in the sample).

A multi-stage ‘hello world’ Dockerfile looks like:

FROM golang:1.12.5-alpine3.9 as builder
ENV GO111MODULE=on
# git is needed for fetching Go module dependencies
RUN apk update --no-cache && \
  apk add git
WORKDIR /app
ADD ./ /app
RUN go build -o golang-test .

FROM alpine:3.9.4
WORKDIR /app
# create a non-privileged user and group to run the app
RUN addgroup -g 2000 golang && \
  adduser -D -u 2000 -G golang golang
USER golang
# copy only the executable from the builder stage
COPY --from=builder /app/golang-test .
CMD ["/app/golang-test"]
EXPOSE 8000

Note: To be able to use the multi-stage feature, the Docker version has to be 17.05 or newer.
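
To try it out, the build-and-run sequence looks something like this (the image tag is arbitrary, and I'm assuming the app listens on port 8000, as the EXPOSE line suggests):

$ docker build -t golang-test .
$ docker run --rm -p 8000:8000 golang-test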

🙂

Working with a Big Corporation

So it's been a while since I started this job at a big corporation. I always enjoy new challenges, and now my wish has been granted, though not in a very good way.

Things work in quite a different manner here. There are big silos and layers between teams and departments, so the challenges are not really technical in nature. How unexpected.

Still, there are lots of things that can be improved with technology; here's one example. When I was migrating an old web application stack from on-premises infrastructure to AWS, the AWS landing zone had already been provisioned with a dual-VPC setup. I really, really miss the days of working with Kubernetes clusters, when I could just run kubectl exec -ti ... and quickly get a terminal session.

Now things look like the year 2000 again and I need to use an SSH ProxyCommand, though without old-school static IP addresses. Ansible dynamic inventory is quite handy in most cases, but it failed here due to some unknown corporate firewall rules. I still have bash, aws-cli and jq, so here is my handy bash function for connecting to an instance of an auto scaling group via a bastion host (both can be rebuilt and change IP):

#!/bin/bash
# look up the private IP of the first instance in a given CloudFormation stack
function get_stack_ip(){
  aws ec2 describe-instances \
    --filters "Name=tag-key,Values=aws:cloudformation:stack-name" "Name=tag-value,Values=$1" \
    | jq '.Reservations[] | select(.Instances[0].PrivateIpAddress != null) | .Instances[0].PrivateIpAddress' \
    | tr -d '"'
}

Then it’s easy to use this function to get IPs of the bastion stack and the target stack, such as:

IP_BASTION=$(get_stack_ip bastion_stack)
IP_TARGET=$(get_stack_ip target_stack)
# swap ec2-user for whatever SSH user your instances use
ssh -o ProxyCommand="ssh ec2-user@$IP_BASTION nc %h %p" ec2-user@$IP_TARGET

🙂