Golang and Docker Multi-Stage Build

I have noticed a common pattern among some newer utilities such as kubectl, kops and terraform: there’s only a single executable file to install, and by ‘install’ I mean it can be put anywhere as long as it’s in $PATH. This was before I learned some Golang, but it’s easy to find out that the reason behind this pattern is that they are all written in Go.

And in the container realm, the new-ish multi-stage build feature Docker released in 2017 is super beneficial for Golang containers. A TL;DR example looks like:

  1. use a full-sized build container with all the Golang tools and build dependencies (the Debian-based golang images are around 1GB) to build the Golang executable ( FROM ... AS in the sample below ).
  2. copy the executable into a tiny run-time container such as Alpine Linux, resulting in a < 20MB container image, depending on the size of the app obviously ( COPY --from in the sample below ).

A multi-stage ‘hello world’ Dockerfile looks like:

FROM golang:1.12.5-alpine3.9 as builder
ENV GO111MODULE=on
RUN apk update --no-cache && \
    apk add git
WORKDIR /app
ADD ./ /app
RUN go build -o golang-test .

FROM alpine:3.9.4
WORKDIR /app
RUN addgroup -g 2000 golang && \
    adduser -D -u 2000 -G golang golang
USER golang
COPY --from=builder /app/golang-test .
CMD ["/app/golang-test"]
EXPOSE 8000

Note: To be able to use the multi-stage feature, the Docker version has to be 17.05 or newer.
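To try it out, something like the following should work (a rough sketch: golang-test is just the image tag used here, and the app is assumed to actually listen on port 8000 as the EXPOSE line suggests):

docker build -t golang-test .
docker images golang-test                 # the run-time image should be tiny compared to the builder
docker run --rm -p 8000:8000 golang-test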

🙂

Working with a Big Corporation

So it’s been a while since I started this job at a big corporation. I always enjoy new challenges, and now my wish has been granted, though not in a very good way.

Things work in quite a different manner here: there are big silos and layers between teams and departments, so the challenges are not really technical in nature. How unexpected.

Still, there are lots of things that can be improved with technology; here’s one example. When I was migrating an old web application stack from on-premises infrastructure to AWS, the AWS landing zone had already been provisioned with a dual-VPC setup. I really miss the days of working with Kubernetes clusters, when I could just run kubectl exec -ti ... and get a terminal session quickly.

Now things look like the year 2000 again and I need to use an SSH ProxyCommand, though without old-school static IP addresses. Ansible dynamic inventory is quite handy in most cases, but it failed here due to some unknown corporate firewall rules. I still have bash, aws-cli and jq, so here is my handy bash script to connect to an instance of an auto scaling group via a bastion host (both can be rebuilt and change IP).

#!/bin/bash
# Print the private IP of an instance belonging to the given CloudFormation stack,
# matched via the aws:cloudformation:stack-name tag.
function get_stack_ip(){
  aws ec2 describe-instances \
    --filters "Name=tag-key,Values=aws:cloudformation:stack-name" "Name=tag-value,Values=$1" \
    |jq '.Reservations[] |select(.Instances[0].PrivateIpAddress != null).Instances[0].PrivateIpAddress' \
    |tr -d '"' \
    |head -n 1 # take a single IP in case the auto scaling group has several instances
}

Then it’s easy to use this function to get IPs of the bastion stack and the target stack, such as:

IP_BASTION=$(get_stack_ip bastion_stack)
IP_TARGET=$(get_stack_ip target_stack)
ssh -o ProxyCommand="ssh ec2-user@$IP_BASTION nc %h %p" ec2-user@$IP_TARGET
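With a reasonably recent OpenSSH (7.3 or newer), the same two-hop connection can also be written with ProxyJump instead of a ProxyCommand; a minimal sketch, assuming ec2-user is the right login on both hosts:

# jump through the bastion with -J (ProxyJump) instead of nc in a ProxyCommand
ssh -J ec2-user@$IP_BASTION ec2-user@$IP_TARGET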

🙂

Run Fedora 29 on Dell XPS 15 9570

Here’s a list of things to do to get Fedora 29 running optimally on Dell XPS 15 9570:

First, disable Secure Boot from the stock Windows 10 and in the BIOS, otherwise the Fedora installer on a USB stick won’t boot. I still don’t really see the necessity of Secure Boot, except to buy more time for Windows, obviously.

Then I needed to change the SATA mode from RAID to AHCI in the BIOS, or the Linux installer can’t find the drive. The SATA mode was set to RAID ON by default, which would probably make more sense if there were one more drive in the laptop.

Hit F12 to choose the boot device and install Fedora 29 from a USB drive; the laptop will then boot into Fedora Live.

There were some warnings regarding the nouveau driver, so I had to disable nouveau and turn off the nvidia device at startup. The way bbswitch is installed has changed a bit, so I installed it following this. With the nvidia device disabled at boot, the laptop is much, much quieter.
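In case it’s useful, one common way to disable nouveau (separate from the bbswitch setup above) is to blacklist it in modprobe.d and rebuild the initramfs; a rough sketch, assuming stock Fedora paths:

# blacklist the nouveau module so it never loads
cat <<'EOF' | sudo tee /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
EOF
sudo dracut --force    # rebuild the initramfs so the blacklist takes effect at boot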

According to the Arch Linux Wiki, the laptop uses S2 suspend (s2idle) instead of S3 by default. This can be fixed by adding mem_sleep_default=deep to the kernel parameters and then

grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

and reboot. The result can be verified by

$ cat /sys/power/mem_sleep
s2idle [deep]
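For reference, the kernel parameter above can be added by appending it to GRUB_CMDLINE_LINUX in /etc/default/grub before regenerating the config; a rough sketch, assuming the stock Fedora 29 GRUB setup:

# append mem_sleep_default=deep to the existing GRUB_CMDLINE_LINUX line
sudo sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"$/GRUB_CMDLINE_LINUX="\1 mem_sleep_default=deep"/' /etc/default/grub
sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg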

I’ll see how long the battery can hold up. 🙂

Fixed tracker-store’s CPU hogging issue on Arch Linux

I think it was after I updated Arch Linux a while ago that the tracker-store process became CPU-hogging; it can drain the battery pretty quickly and turn my laptop into a heater.

Obviously other people have run into this issue already, but most of the suggestions I found were about disabling the Gnome tracker altogether. I’d hesitate to do that, because the tracker’s purpose is to index stuff so that when I hit the Super key and type, relevant things come up quickly. Also, as a big fan of Gnome Shell, I trust the team wouldn’t just release a buggy program and leave it broken for months.

My troubleshooting 101: if I want to see what a program is complaining about, run it from the command line!

tracker daemon -s
Starting miners…
** (tracker daemon:24398): CRITICAL **: 10:08:51.616: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.RSS: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:09:16.640: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Files: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:09:41.663: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Extract: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Applications: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.RSS'
✗ RSS/ATOM Feeds (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Files'
✗ File System (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Extract'
✗ Extractor (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Applications'
✗ Applications (perhaps a disabled plugin?)

I don’t think those are the root causes though.

Later I found this issue reported for Gnome 3.12 and I think it could be my case as well. To my surprise, the fix there solved my tracker-store issue easily.

First, stop tracker-store gracefully with tracker daemon -t. Then I found:

ls ~/.cache/tracker/ -lht
total 908M
-rw-r--r-- 1 ray users 32K Dec 26 10:10 meta.db-shm
-rw-r--r-- 1 ray users 132M Dec 26 10:10 meta.db-wal
-rw-r--r-- 1 ray users 776M Oct 4 09:20 meta.db
-rw-r--r-- 1 ray users 11 Oct 4 09:17 locale-for-miner-apps.txt
-rw-r--r-- 1 ray users 22 Aug 2 13:49 parser-version.txt
-rw-r--r-- 1 ray users 354K Apr 13 2018 ontologies.gvdb
-rw-r--r-- 1 ray users 11 Apr 13 2018 db-locale.txt
-rw-r--r-- 1 ray users 6 Apr 8 2017 first-index.txt
-rw-r--r-- 1 ray users 10 Apr 8 2017 last-crawl.txt
-rw-r--r-- 1 ray users 40 Apr 8 2017 parser-sha1.txt
-rw-r--r-- 1 ray users 2 Apr 8 2017 db-version.txt

ls ~/.local/share/tracker/data -lht
total 77M
-rw-r----- 1 ray users 16M Oct 4 09:17 tracker-store.journal
-rw-r--r-- 1 ray users 8.8M Apr 13 2018 tracker-store.journal.7.gz
-rw-r--r-- 1 ray users 8.6M Apr 13 2018 tracker-store.journal.6.gz
-rw-r--r-- 1 ray users 9.1M Apr 13 2018 tracker-store.journal.5.gz
-rw-r--r-- 1 ray users 8.5M Apr 13 2018 tracker-store.journal.4.gz
-rw-r----- 1 ray users 148K Apr 13 2018 tracker-store.ontology.journal
-rw-r--r-- 1 ray users 8.3M Jun 19 2017 tracker-store.journal.3.gz
-rw-r--r-- 1 ray users 9.1M Jun 18 2017 tracker-store.journal.2.gz
-rw-r--r-- 1 ray users 8.8M Apr 8 2017 tracker-store.journal.1.gz

The 776MB meta.db is surely bloated, as I only have around 200GB of personal files. Since these files are only metadata, deleting them won’t hurt.

rm -rf ~/.cache/tracker/
rm -rf ~/.local/share/tracker/

tracker daemon -s
Starting miners…
✓ RSS/ATOM Feeds
✓ File System
✓ Extractor
✓ Applications

All good now 🙂