Internal Service in Kubernetes Cluster

In a Kubernetes (K8s) cluster, one or more containers form a pod, and every container in a pod can access the other containers' ports just like apps sharing the same local host. For example:

- pod1
  - nginx1
  - gunicorn1, port:8000

- pod2
  - nginx2
  - gunicorn2, port:8000

So nginx1 can access gunicorn1's port via localhost:8000, nginx2 can access gunicorn2 the same way, and so on. However, nginx1 cannot see gunicorn2 through localhost.
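
A quick way to see this, using the pod and container names above (hypothetical, and assuming curl is available in the nginx images):

# From nginx1's network namespace, localhost:8000 reaches gunicorn1 ...
kubectl exec pod1 -c nginx1 -- curl -s http://localhost:8000/
# ... while the same address from nginx2 reaches gunicorn2 instead;
# pod1 has no localhost route to gunicorn2 at all.
kubectl exec pod2 -c nginx2 -- curl -s http://localhost:8000/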

When it comes to a cache like Redis, I would like a shared Redis container (or a cluster later) so that less memory is wasted and the cache does not need to be warmed separately for each pod. The structure will look like:

- pod1
  - nginx1
  - gunicorn1
- pod2
  - nginx2
  - gunicorn2
- pod3
  - redis

To grant both gunicorns access to Redis, Redis needs to be exposed as a Service (the default ClusterIP type would be enough for in-cluster access; the NodePort type used below additionally opens the port on every node):

---
apiVersion: v1
kind: Service
metadata:
  name: redis-svc
  labels:
    app: redis
    role: cache
spec:
  type: NodePort
  ports:
    - port: 6379
  selector:
    app: redis
    role: cache
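
After applying the manifest, the service and its cluster IP can be checked with (the filename here is hypothetical):

kubectl apply -f redis-svc.yaml
kubectl get svc redis-svc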

And the deployment to support the service looks like:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: cache
  template:
    metadata:
      labels:
        app: redis
        role: cache
    spec:
      containers:
        - name: redis
          image: redis:4
          resources:
            requests:
              memory: 200Mi
          ports:
            - containerPort: 6379
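
With both the service and the deployment applied, Redis should answer through the service name. A quick smoke test, reusing redis-cli from the Redis container itself:

# Find the redis pod by its labels, then ping Redis through the service DNS name.
REDIS_POD=$(kubectl get pod -l app=redis,role=cache -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$REDIS_POD" -- redis-cli -h redis-svc ping   # expect: PONG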

Then, in the settings.py of the Django app running in the gunicorn container, Redis can be accessed via environment variables set up by K8s (note that K8s only injects these variables into pods created after the service exists; the cluster DNS name redis-svc works regardless):

import os

CACHES = {
  "default": {
    "BACKEND": "django_redis.cache.RedisCache",
    "LOCATION": "redis://{0}:{1}/1".format(
      os.environ["REDIS_SVC_SERVICE_HOST"],
      os.environ["REDIS_SVC_SERVICE_PORT"],
    ),
    "OPTIONS": {
      "CLIENT_CLASS": "django_redis.client.DefaultClient"
    },
  }
}
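
The injected variables can be inspected in a running gunicorn pod (the pod and container names are hypothetical):

kubectl exec pod1 -c gunicorn1 -- env | grep REDIS_SVC
# e.g. REDIS_SVC_SERVICE_HOST=10.96.23.42
#      REDIS_SVC_SERVICE_PORT=6379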

🙂

Build a Chrome Extension with VueJS

It turns out to be quite easy to build a Chrome extension. Basically, an extension is a web application containing HTML, JS, images, etc. I tried to build a simple one using VueJS, which colleagues recommended.

There's just one limitation that affected me: the eval JS function is disabled in Chrome extensions for security reasons. That means there can be no runtime-compiled templates, only pre-compiled Vue components. Here's a blog post that explains it in detail: https://dzone.com/articles/what-i-learned-about-vuejs-from-building-a-chrome

And this blog post has very good instructions on how to start Chrome extension development with NPM and Vue: https://blog.damirmiladinov.com/vuejs/building-chrome-extension-with-vue.html#.We7W8OBxW-Y

After finishing all the steps in the blog above, I started working on the `app/scripts.babel/popup/Popup.vue` file, adding UI and functions to it while `gulp watch` was running to rebuild the project whenever a watched file changed. There was one issue: gulp could not watch that many files because the inotify watch limit was too low, so here's the fix:

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
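
To confirm the new limit is active (this just reads the kernel setting back):

cat /proc/sys/fs/inotify/max_user_watches   # should print 524288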

🙂

Kops: Add Policies for Migrated Apps

When migrating some old applications to a Kubernetes (k8s) cluster provisioned by kops, a lot of things might break, and one of them is missing IAM policies on the nodes.

By default, the nodes of a cluster provisioned by kops have the following IAM permissions:

ec2:Describe*
ecr:GetAuthorizationToken
ecr:BatchCheckLayerAvailability
ecr:GetDownloadUrlForLayer
ecr:GetRepositoryPolicy
ecr:DescribeRepositories
ecr:ListImages
ecr:BatchGetImage
route53:ListHostedZones
route53:GetChange
// The following permissions are scoped to the AWS Route53 hosted zone
// used to bootstrap the cluster: arn:aws:route53:::hostedzone/$hosted_zone_id
route53:ChangeResourceRecordSets
route53:ListResourceRecordSets
route53:GetHostedZone

Additional policies can be added to the nodes' role by running:

kops edit cluster ${CLUSTER_NAME}

and then adding something like:

spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["dynamodb:*"],
          "Resource": ["*"]
        },
        {
          "Effect": "Allow",
          "Action": ["es:*"],
          "Resource": ["*"]
        }
      ]

The change takes effect after running:

kops update cluster ${CLUSTER_NAME} --yes

The new policy can be reviewed in the AWS IAM console, or from the command line.
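
A sketch with the AWS CLI, assuming kops' usual nodes.<cluster name> role naming (the inline policy name is whatever shows up in the listing):

# List the inline policies attached to the node role.
aws iam list-role-policies --role-name "nodes.${CLUSTER_NAME}"
# Fetch one of them by the name shown above.
aws iam get-role-policy --role-name "nodes.${CLUSTER_NAME}" --policy-name <name-from-listing>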

Most lines were copied from here: https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md

🙂

Resolved: Arch Linux WiFi issue

When I connected my laptop running Arch Linux to a new WiFi network this morning, it worked for a brief moment, then all connections dropped. Connecting to the same WiFi with a phone or a MacBook worked fine, so the problem was on Arch Linux (AL)'s end.

Then I noticed that running route actually showed two entries for the LAN. I took a closer look and saw there were two IPs on the wireless interface!
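
For reference, the symptoms looked roughly like this (the interface name is just an example):

route -n              # two routes for the same LAN subnet
ip addr show wlp3s0   # two inet addresses on the wireless interface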

Strangely enough, when I connected to my phone's hotspot, AL also got two IPs, but that connection kept working.

So I googled a bit about why there would be two IPs; here's what I found:

https://raspberrypi.stackexchange.com/questions/13359/where-does-my-secondary-ip-come-from

Finally, the problem was fixed after I stopped and disabled the dhcpcd service and restarted NetworkManager:

sudo systemctl disable dhcpcd
sudo systemctl stop dhcpcd
sudo systemctl restart NetworkManager

Apparently both dhcpcd and NetworkManager were running a DHCP client on the same interface, hence the two IPs; I guess some WiFi APs are just more tolerant of that than others.

🙂