Building Dynamic CI Pipelines with Buildkite

I was inspired by this Buildkite pipeline sample given by the support team:

# .buildkite/pipeline.yml
steps:
  - command: echo building a thing
  - block: Test the thing?
  - command: echo testing a thing
  - wait
  - command: buildkite-agent pipeline upload .buildkite/pipeline.deploy.yml

# .buildkite/pipeline.deploy.yml
steps:
  - block: Deploy the thing?
  - command: echo deploy the thing

So in the above case, if the first two commands succeed, pipeline.deploy.yml is loaded into the main CI pipeline. This implementation is just brilliant. I’m not sure whether a Jenkinsfile can do a dynamic pipeline like this, but at the very least a Jenkinsfile won’t look as elegant as YAML.

Since buildkite-agent pipeline upload .buildkite/pipeline.deploy.yml is just another shell command, I can even use it in a script and put more logic into it, such as a git-flow implementation:

#!/bin/bash
CHOICE=$(buildkite-agent meta-data get "next-section")

case "$CHOICE" in
deploy)
  buildkite-agent pipeline upload .buildkite/pipeline.qa.yml
  ;;
signoff)
  # feature finish: raise a PR from the feature branch to develop
  if [[ $BUILDKITE_BRANCH == feature* ]]; then
    python .buildkite/scripts/github_ci.py \
      --action pr \
      --repo flow-work \
      --head "$BUILDKITE_BRANCH" \
      --base develop

  # release start: cut a release branch (FULL_VERSION is set elsewhere in the build)
  elif [[ $BUILDKITE_BRANCH == develop ]]; then
    git checkout -b "release/$FULL_VERSION"
    git push --set-upstream origin "release/$FULL_VERSION"

  # release finish
  elif [[ $BUILDKITE_BRANCH == release* ]]; then
    buildkite-agent pipeline upload .buildkite/pipeline.pass.yml
  fi
  ;;
reject)
  # mark the build as failed
  exit 1
  ;;
esac

FYI: the example was tested with Buildkite agent version 3.2.0.
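The next-section meta-data the script reads can be populated by a block step with a select field. A minimal sketch — the step labels, option values, and the next_section.sh name are my assumptions; only the next-section key has to match the script:

```yaml
# .buildkite/pipeline.yml (sketch)
steps:
  - command: echo building a thing
  - block: "What's next?"
    fields:
      - select: "Next section"
        key: "next-section"        # read back via `buildkite-agent meta-data get`
        options:
          - label: "Deploy to QA"
            value: "deploy"
          - label: "Sign off"
            value: "signoff"
          - label: "Reject"
            value: "reject"
  - command: .buildkite/scripts/next_section.sh   # the case script above
```

Whoever unblocks the build picks an option, and the chosen value drives the case statement in the script.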

🙂

Playing with Kubernetes Ingress Controller

It’s very easy to use Kubernetes (K8s) to provision an external service with an AWS ELB; there’s one catch though (at least for now, in 2018).

An AWS ELB is usually used with an auto scaling group and a launch configuration. With K8s, however, EC2 instances won’t get spun up directly — only pods will, which is called horizontal scaling. K8s issues AWS API calls to update the ELBs, so there’s no need for auto scaling groups or launch configurations.

This worked like a charm until things got busy. There was a brief downtime on one of the ELBs managed by K8s, because all instances behind the ELB were marked as unhealthy — even though they were in fact healthy at that moment. With help from the AWS Support team, the culprit seems to be similar to this case: https://github.com/kubernetes/kubernetes/issues/47067.

Luckily for me, I had a gut feeling that the simple ELB implementation wasn’t best practice and had started to adopt the K8s Ingress Controller. I believe an ingress can avoid this kind of downtime because the routing is done internally in the K8s cluster, which doesn’t involve AWS API calls. Moreover, an ingress can serve many apps through one ELB, which is good because ELBs are expensive.

Here are the steps to deploy an nginx ingress controller as an HTTP (L7) load balancer:

Deploy the mandatory manifests. The default replica number for the controller is 2; I changed it to 3 so there’s one in each availability zone:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

Then apply some customisation for an L7 load balancer on AWS; remember to use your own SSL cert if you need HTTPS termination:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml
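Instead of editing the manifest, the replica bump from 2 to 3 can also be done with kubectl. This is a sketch — in this version of ingress-nginx the deployment is named nginx-ingress-controller in the ingress-nginx namespace; confirm yours with kubectl get deploy first:

```shell
# scale the controller to 3 replicas, one per availability zone
kubectl -n ingress-nginx scale deployment nginx-ingress-controller --replicas=3

# confirm the pods are spread across nodes/zones
kubectl -n ingress-nginx get pods -o wide
```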

Then an ingress for an app can be deployed:

$ cat .k8s/prod/ingress.yaml 
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-prod
  annotations:
    kubernetes.io/ingress.class: prod
spec:
  rules:
    - host: my.domain.elb
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80
    - host: my.domain.cdn
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service
              servicePort: 80

Notes:

  • my-service is a normal NodePort service with port 80 exposed
  • the kubernetes.io/ingress.class annotation allows multiple ingress controllers in the same K8s cluster, eg. one for dev and the other for prod
  • for now I have to duplicate the host block for each domain, because wildcards and regexes are not supported by the K8s ingress specification
  • at last, find the ELB this ingress controller created and point my.domain.elb to it; the CDN domain can then use my.domain.elb as its origin.
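The last step — finding the ELB — can be scripted from the controller’s Service status. A sketch, assuming the L7 service from service-l7.yaml is named ingress-nginx in the ingress-nginx namespace:

```shell
# print the DNS name of the ELB fronting the ingress controller
kubectl -n ingress-nginx get svc ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```

Then point my.domain.elb at the printed hostname with a CNAME record.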

🙂

Profiling Tomcat Remotely with Java Mission Control

I was interested to see why a Tomcat app was running very slowly. For tooling I picked Java Mission Control (jmc), because it’s built into Oracle Java 8.

To enable jmc and its flight recorder, I added the following Java switches to Tomcat’s setenv.sh file:

CATALINA_OPTS="$CATALINA_OPTS \
  -XX:+UnlockCommercialFeatures \
  -XX:+FlightRecorder \
  -Dcom.sun.management.jmxremote=true \
  -Dcom.sun.management.jmxremote.rmi.port=7091 \
  -Dcom.sun.management.jmxremote.port=7091 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"

Having restarted tomcat, double check if the port is open:

netstat -tlnp |grep 7091

Then I can run jmc on my laptop and connect to the Tomcat box’s port 7091 (the default remote JMX port). You need to ensure the port is reachable from your network though, eg. via firewall rules or port forwarding over SSH.
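When the GUI isn’t handy, a flight recording can also be driven on the Tomcat box itself with jcmd, which ships with the same Oracle JDK and uses the flags already set in setenv.sh. The PID lookup is a sketch — Tomcat’s main class is org.apache.catalina.startup.Bootstrap:

```shell
# find the Tomcat JVM's pid (jcmd with no arguments lists running JVMs)
PID=$(jcmd | grep Bootstrap | cut -d' ' -f1)

# record 2 minutes of profiling data to a file, to be opened in jmc later
jcmd "$PID" JFR.start duration=120s filename=/tmp/tomcat.jfr
```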

🙂

Don’t Need Ngrok When I Have SSH

I was trying to create a Slack app. To let Slack send REST requests to my dev environment, eg. http://localhost:9000, I searched a bit and found ngrok. Ngrok is very handy for this kind of setup:

Slack -> xyz.ngrok.io -> localhost

However, I just didn’t want to install anything, so I turned to Google — and to my surprise, SSH can do exactly this (and has been able to for who knows how many years). I knew I could forward a local port to a remote host to reach a service behind a firewall, such as a database; this was my first attempt at forwarding a remote port to my local machine so the Slack API could contact my localhost.

Here’s a better article which explains how to do port forwarding in both directions with SSH.

In short, to forward a remote port to my localhost, I need to:

1, update sshd_config on the remote host to enable GatewayPorts, then restart the SSH service:

GatewayPorts yes

2, in a local terminal, run the following command, replacing my.remote.host with your server’s domain or IP:

ssh -nNT -R 9800:localhost:9000 my.remote.host

Then test it with

curl -i http://my.remote.host:9800

The request should be forwarded to your localhost:9000.
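For reference, here are the two forwarding directions side by side, using the same host and ports as above (the database ports in the -L example are assumptions for illustration):

```shell
# local -> remote (-L): reach a service behind the firewall, eg. a Postgres
# database, by connecting to localhost:5433 on the laptop
ssh -nNT -L 5433:localhost:5432 my.remote.host

# remote -> local (-R): expose the laptop's port 9000 as port 9800 on the
# server, so Slack can call back in; needs GatewayPorts enabled on the server
ssh -nNT -R 9800:localhost:9000 my.remote.host
```

In both cases -nNT just keeps the session as a pure tunnel: no remote command, no TTY, stdin closed.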

🙂