2017: Light at the End of the Tunnel

Compared with the hectic 2016, 2017 was a fruitful yet well-balanced year.

In March, our family took the pledge and became Australian citizens. This is something we had been looking forward to for years, though when the moment actually came our feelings were mixed. At the citizenship ceremony we also received a gift: a pot of flowers native to Australia. I figured a native species should be hardy enough, so I casually left it in the garden. It died before winter was even over…

With our new "identity", I thought that if I had the chance to work at a very "Aussie" company I would surely learn a lot of the local "essence". Not long after, I was lucky enough to get a job offer from the AFL, without stumbling at the final interview round as I had in the past. I won't go into what I gained at work here; many of my earlier notes cover that.

At the beginning of the year our younger daughter developed fluid in her middle ears, which left her almost deaf; she even forgot the babbling she had learned. The doctor initially suspected she was autistic, which worried us terribly. My wife was the strong-willed one: "However many problems she has, we will raise her", "We have no other choice". Fortunately, after wearing hearing aids for several months, her hearing seems to be recovering and she has gradually started responding when we call her. She isn't autistic; she simply couldn't hear anything and was living in a silent world.

The biggest gain belongs to our elder daughter, Xiaoxiao, who, after much carrot-and-stick persuasion from her mum, finally took an interest in the piano we had bought more than two years ago. We also found her a piano teacher near where we live, so she is learning the piano and staff notation together. A few months later she performed in a recital organised by the teacher. Xiaoxiao has made great progress in drawing too. I asked her to scan her works and upload them to her own blog, but she doesn't seem very interested; the last update was in March… Her end-of-year school assessment was also very good, with every subject better than last year (partly because we spent little time with her last year and the year before).

Another big task completed this year: I lodged the immigration application for my parents, hoping we can be reunited in a few years. The application was basically DIY; the forms we filled out could cover the floor. Thanks go to my wife's support and the experience shared by friends who had been through it. Immigrating to Australia is indeed getting harder and harder.

🙂

Internal Service in Kubernetes Cluster

In a Kubernetes (k8s) cluster, one or more containers form a pod, and every container in a pod can access the other containers' ports just like apps running on the same local host. For example:

- pod1
  - nginx1
  - gunicorn1, port:8000

- pod2
  - nginx2
  - gunicorn2, port:8000

So nginx1 can access gunicorn1's port via localhost:8000, and nginx2 can access gunicorn2 the same way. However, nginx1 can't see gunicorn2 via localhost, because they are in different pods.
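A minimal sketch of such a two-container pod — the manifest and image names are illustrative, not from the original post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: nginx1
      image: nginx:1.13
      ports:
        - containerPort: 80
    - name: gunicorn1
      image: my-django-app:latest   # hypothetical app image
      ports:
        - containerPort: 8000       # reachable from nginx1 as localhost:8000
```

Both containers share the pod's network namespace, which is why localhost works between them but not across pods.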

When it comes to a cache like redis, I would like a shared redis container (or a cluster later) so that less memory is wasted and the cache doesn't need to be warmed up for each pod. The structure will look like:

- pod1
  - nginx1
  - gunicorn1
- pod2
  - nginx2
  - gunicorn2
- pod3
  - redis

To allow both gunicorn containers to access redis, redis needs to be exposed as a service. Note that for purely in-cluster access the default ClusterIP type would suffice; NodePort additionally publishes the port on every node:

---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis
    role: cache
spec:
  type: NodePort
  ports:
    - port: 6379
  selector:
    app: redis
    role: cache

And the deployment to support the service looks like:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: cache
  template:
    metadata:
      labels:
        app: redis
        role: cache
    spec:
      containers:
        - name: redis
          image: redis:4
          resources:
            requests:
              memory: 200Mi
          ports:
            - containerPort: 6379

Then in the settings.py of the Django app running in the gunicorn container, redis can be accessed via the environment variables that k8s sets up for each service:

import os

CACHES = {
  "default": {
    "BACKEND": "django_redis.cache.RedisCache",
    "LOCATION": "redis://{0}:{1}/1".format(
      os.environ['REDIS_SVC_SERVICE_HOST'],
      os.environ['REDIS_SVC_SERVICE_PORT'],
    ),
    "OPTIONS": {
      "CLIENT_CLASS": "django_redis.client.DefaultClient"
    },
  }
}
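These env vars are only injected into pods created after the service exists. A sketch of the alternative, which builds the same LOCATION string from the service's cluster DNS name instead (the `default` namespace is an assumption here):

```python
# k8s gives every service a DNS name: <service>.<namespace>.svc.cluster.local.
# The short form "redis-svc" also resolves from pods in the same namespace.
redis_host = "redis-svc.default.svc.cluster.local"
redis_port = 6379

# Same LOCATION format as in settings.py, but independent of start order:
location = "redis://{0}:{1}/1".format(redis_host, redis_port)
print(location)  # redis://redis-svc.default.svc.cluster.local:6379/1
```

DNS-based discovery is generally preferred because it keeps working when the service is recreated.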

🙂

Build a Chrome Extension with VueJS

It turns out to be quite easy to build a Chrome extension. Basically, an extension is a web application containing HTML, JS, images, etc. I tried building a simple extension using VueJS, which colleagues had recommended.

There's just one limitation that affected me: the eval JS function is disabled in Chrome extensions for security reasons. That means there can be no dynamic templates, only pre-compiled Vue components. Here's a blog post that explains this in detail: https://dzone.com/articles/what-i-learned-about-vuejs-from-building-a-chrome
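Vue's runtime template compiler relies on eval-like code generation, so components have to ship as render functions instead. A sketch (not from the original post) of what a pre-compiled component boils down to:

```javascript
// A single-file component compiles down to a plain object whose render
// function builds the vnode tree directly, so no runtime template
// compilation (and thus no eval) is needed.
const Popup = {
  data() {
    return { count: 0 };
  },
  render(h) {
    // h is Vue's createElement; the click handler bumps the counter.
    return h(
      'button',
      { on: { click: () => { this.count += 1; } } },
      'Clicked ' + this.count + ' times'
    );
  }
};
```

The build step (vue-loader or vueify) does this compilation for you; you keep writing normal `.vue` templates.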

And this blog post has very good instructions on how to start Chrome extension development with NPM and Vue: https://blog.damirmiladinov.com/vuejs/building-chrome-extension-with-vue.html

After finishing all the steps in the above blog post, I started working on the `app/scripts.babel/popup/Popup.vue` file, adding UI and functions to it while `gulp watch` ran to rebuild the project whenever a watched file changed. There was one issue: gulp couldn't watch that many files, so here's the fix:

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
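To confirm the new limit took effect:

```shell
# Print the current inotify watch limit; after the fix it should report 524288.
sysctl -n fs.inotify.max_user_watches
```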

🙂

Kops: Add Policies for Migrated Apps

When migrating old applications to a Kubernetes (k8s) cluster provisioned by kops, a lot of things might break, and one of them is a missing IAM policy on the nodes.

By default, nodes of a kops-provisioned k8s cluster have the following IAM permissions:

ec2:Describe*
ecr:GetAuthorizationToken
ecr:BatchCheckLayerAvailability
ecr:GetDownloadUrlForLayer
ecr:GetRepositoryPolicy
ecr:DescribeRepositories
ecr:ListImages
ecr:BatchGetImage
route53:ListHostedZones
route53:GetChange
// The following permissions are scoped to the AWS Route53 hosted zone used to bootstrap the cluster:
// arn:aws:route53:::hostedzone/$hosted_zone_id
route53:ChangeResourceRecordSets
route53:ListResourceRecordSets
route53:GetHostedZone

Additional policies can be added to the nodes' role by editing the cluster spec:

kops edit cluster ${CLUSTER_NAME}

Then adding something like:

spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["dynamodb:*"],
          "Resource": ["*"]
        },
        {
          "Effect": "Allow",
          "Action": ["es:*"],
          "Resource": ["*"]
        }
      ]

The change takes effect after running:

kops update cluster ${CLUSTER_NAME} --yes

The new policy can then be reviewed in the AWS IAM console.

Most lines were copied from here: https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md

🙂