Induction Cooktop

Recently, having some spare time at home, I replaced my old natural gas stove with an induction cooktop, and I immediately regretted not doing it sooner. Why did it take me so long to make the switch? Below is a quick comparison of the pros and cons of gas stoves and induction cooktops for your reference.

Gas stove (natural gas / coal gas)
Pros:
– The heat can be turned up very high
– Works with extra-large pots and pans
– You can toss the wok, which looks cool and professional
– No restrictions on cookware material: steel pots and clay pots both work
Cons:
– Using gas comes with plenty of hazards; a gas leak can be fatal (make sure to buy a stove with flame-failure protection)
– In normal use the open flame is at 1000 to 2000 degrees Celsius, so there’s a risk of burns and scalds
– The range hood has to be powerful and the room well ventilated, otherwise there’s a risk of carbon monoxide poisoning
– Low thermal efficiency, because a lot of the hot gas is never absorbed by the pot and simply escapes
– The heat is hard to control precisely, and during peak hours the gas pressure can drop, weakening the flame
– Dirty: the bottoms of pots and the stovetop end up covered in soot
– And it’s not easy to clean
– Carbon emissions accelerate climate change. In ten years maybe nothing will have changed, or maybe extreme weather will be the new normal

Induction cooktop
Pros:
– Clean: no soot, and stainless steel cookware still looks brand new after use
– Super easy to clean, since it’s essentially one big sheet of tempered glass
– No exhaust gases; unless you’re doing heavy, smoky stir-frying there’s basically no need for ventilation
– Very high thermal efficiency (around 90%), because the heat is generated directly in the base of the pot, which gets hot the moment you switch it on
– No open flame, and the maximum temperature stays below 300 degrees Celsius
– Most models have no-load protection: the cooktop switches off automatically if the pot boils dry or is taken away
– Apart from the pot and the patch of glass touching its base, nothing else gets hot
– The power is fully controllable, with as many as 20 levels, and it stays stable
– Built-in timers and countdown functions are very handy
– If you have a solar power system at home, it can offset part of the electricity used for cooking
– Even without solar at home, it’s still a win for the environment as the share of renewables (wind/solar) in the grid grows every year
Cons:
– Requires iron or steel cookware; alloy pans and clay pots can’t be used on an induction cooktop
– Generally doesn’t support pots more than 30 cm in diameter
– No wok tossing, because the pot stops receiving energy the moment it leaves the surface, and the cooktop then switches off due to no-load protection

For me, the induction cooktop is a big step up from the gas stove. What do you think?

🙂

Renew Certificates Used in Kubeadm Kubernetes Cluster

It’s been more than a year since I built my Kubernetes cluster with some Raspberry Pis. There were a few times when I needed to power down everything to let electricians do their work, and the cluster came back online and seemed to be OK afterwards, even though I didn’t shut down the Pis properly at all.

Recently I found that I had lost contact with the cluster; it looked like this:

$ kubectl get node
The connection to the server 192.168.x.x:6443 was refused - did you specify the right host or port?

The first thought that came to my mind was that the cluster must have been hacked, since it had been on auto-pilot for months. But I could still SSH into the master node, so it wasn’t that bad. Then I saw this error in the logs of kubelet.service:

Sep 23 15:58:05 kmaster kubelet[1233]: E0923 15:58:05.341773    1233 bootstrap.go:263] Part of the existing bootstrap client certificate is expired: 2020-09-15 10:40:36 +0000 UTC

That makes perfect sense! The cluster’s first anniversary was just a few days ago, and the certificates kubeadm generates only last a year. Here’s the StackOverflow answer that I found very helpful for this issue.
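
To confirm it really was just an expired certificate, the end date can be checked with openssl (the path below is the kubeadm default):

$ openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt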

I ran the following commands on the master node and the API server came back to life:

$ mkdir -p /tmp/backup
$ cd /etc/kubernetes/pki/
# move the expired certificates and keys out of the way
$ mv {apiserver.crt,apiserver-etcd-client.key,apiserver-kubelet-client.crt,front-proxy-ca.crt,front-proxy-client.crt,front-proxy-client.key,front-proxy-ca.key,apiserver-kubelet-client.key,apiserver.key,apiserver-etcd-client.crt} /tmp/backup
# regenerate the certificates from the existing CA
$ kubeadm init phase certs all --apiserver-advertise-address <IP>
$ cd /etc/kubernetes/
# back up and regenerate the kubeconfig files that embed the old certs
$ mv {admin.conf,controller-manager.conf,kubelet.conf,scheduler.conf} /tmp/backup
$ kubeadm init phase kubeconfig all
$ systemctl restart kubelet.service

I’m not sure if all the new certs will be distributed to the nodes automatically, but at least the API server didn’t complain anymore. I might do a kubeadm upgrade soon.

$ kubectl get node
NAME      STATUS     ROLES    AGE    VERSION
kmaster   NotReady   master   372d   v1.15.3
knode1    NotReady   <none>   372d   v1.15.3
knode2    NotReady   <none>   372d   v1.15.3

EDIT: After the certs were renewed, the kubelet service couldn’t authenticate anymore and the nodes appeared NotReady. This can be fixed by deleting the obsolete kubelet client certificate:

$ ls /var/lib/kubelet/pki -lht
total 28K
-rw------- 1 root root 1.1K Sep 23 19:12 kubelet-client-2020-09-23-19-12-52.pem
lrwxrwxrwx 1 root root   59 Sep 23 19:12 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2020-09-23-19-12-52.pem
-rw------- 1 root root 2.7K Sep 23 19:12 kubelet-client-2020-09-23-19-12-51.pem
-rw------- 1 root root 1.1K Jun 17 00:56 kubelet-client-2020-06-17-00-56-59.pem
-rw------- 1 root root 1.1K Sep 16  2019 kubelet-client-2019-09-16-20-41-53.pem
-rw------- 1 root root 2.7K Sep 16  2019 kubelet-client-2019-09-16-20-40-40.pem
-rw-r--r-- 1 root root 2.2K Sep 16  2019 kubelet.crt
-rw------- 1 root root 1.7K Sep 16  2019 kubelet.key
$ rm /var/lib/kubelet/pki/kubelet-client-current.pem
$ systemctl restart kubelet.service
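
If everything went well, kubelet should bootstrap a fresh client certificate on restart and recreate the symlink, which can be confirmed with a quick listing (same default path as above):

$ ls -lht /var/lib/kubelet/pki/kubelet-client-current.pem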

🙂

Use Fluentd and Elasticsearch to Analyse Squid Proxy Traffic

TL;DR This is a quick guide to setting up Fluentd + Elasticsearch integration to analyse Squid Proxy traffic. In the example below, Fluentd (td-agent) is installed on the same host as Squid Proxy and Elasticsearch is installed on another host. The OS is Ubuntu 20.04.

Useful links:
– Fluentd installation: https://docs.fluentd.org/installation/install-by-deb
– Elasticsearch installation: https://www.elastic.co/guide/en/elasticsearch/reference/current/deb.html

The Squid logs need to be readable by td-agent, which can be done by adding the td-agent user to the proxy group:

$ sudo usermod --groups proxy -a td-agent
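
As a quick sanity check (assuming the default Squid log location), it’s worth confirming the group membership took effect and that the log file is readable:

$ id td-agent
$ sudo -u td-agent head -n1 /var/log/squid/access.log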

The configuration for td-agent (/etc/td-agent/td-agent.conf) looks like this:

<source>
  @type tail
  @id squid_tail
  <parse>
    @type regexp
    expression /^(?<timestamp>[0-9]+)[\.0-9]* +(?<elapsed>[0-9]+) (?<userIP>[0-9\.]+) (?<action>[A-Z_]+)\/(?<statusCode>[0-9]+) (?<size>[0-9]+) (?<method>[A-Z]+) (?<URL>[^ ]+) (?<rfc931>[^ ]+) (?<peerStatus>[^ ]+)/(?<peerIP>[^ ]+) (?<mime>[^ ]+)/
    time_key timestamp
    time_format %s
  </parse>
  path /var/log/squid/access.log
  tag squid.access
</source>

<match squid.access>
  @type elasticsearch
  host <elasticsearch server IP>
  port 9200
  logstash_format true
  flush_interval 10s
  index_name fluentd
  type_name fluentd
  include_tag_key true
  user elastic
  password <elasticsearch password>
</match>
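
After updating the config, td-agent needs a restart to pick up the new source and match sections; its own log is the first place to look if events don’t show up (paths below assume a stock td-agent package install):

$ sudo systemctl restart td-agent
$ tail -f /var/log/td-agent/td-agent.log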

The key is to get the regular expression to match the Squid access log format, which looks like this:

1598101487.920 240256 192.168.10.111 TCP_TUNNEL/200 1562 CONNECT www.google.com.au:443 - HIER_DIRECT/142.250.66.163 -

Then I can use the fields defined in the regex, such as userIP or URL, in Elasticsearch queries.
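
Since logstash_format is enabled, records land in daily logstash-YYYY.MM.DD indices, so a quick query confirms that data is flowing. For example (host placeholder and field names as above):

$ curl -s -u elastic 'http://<elasticsearch server IP>:9200/logstash-*/_search?pretty&size=1&q=userIP:192.168.10.111'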

🙂

Use Variables in Kustomize

Variables in Kustomize are handy helpers from time to time; with them I can link together settings that should always share the same value. Without variables I would probably need a template engine like Jinja2 to do the same trick.

Some examples here.

In my case, there’s a bug in Kustomize as of now (3.6.1) where configMap object names don’t get properly suffixed in a patch file. The issue is here. I can however use a variable to work around this bug. Imagine a scenario where I have a configMap in a base template and it’s referenced in a patch file:

# common/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

configMapGenerator:
  - name: common
    literals:
      - TEST=YES

# test/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: test
bases:
  - ../base
  - ../common
nameSuffix: -raynix
patchesStrategicMerge:
  - patch.yaml

# test/patch.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  template:
    spec:
      volumes:
        - name: common
          configMap:
            name: common
            # this should be linked to the configMap in common/kustomization.yaml but it won't be updated with a hash and suffix.
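
Building the overlay makes the mismatch obvious: the generated configMap gets the name suffix and hash while the Deployment volume still points at plain common (this assumes the base referenced in test/kustomization.yaml exists and the command is run from the directory containing base, common and test):

$ kustomize build test | grep 'name: common'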

Using a variable gets around this bug. Please see the following example:

# common/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configurations:
  - configuration.yaml
configMapGenerator:
  - name: common
    literals:
      - TEST=YES
vars:
  - name: COMMON
    objref:
      apiVersion: v1
      kind: ConfigMap
      name: common
    fieldref:
      # this can be omitted as metadata.name is the default fieldPath 
      fieldPath: metadata.name

# test/kustomization.yaml unchanged

# test/patch.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  template:
    spec:
      volumes:
        - name: common
          configMap:
            name: $(COMMON)
            # now $(COMMON) will be updated with whatever the real configmap name is
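
One file not shown above is configuration.yaml, which common/kustomization.yaml pulls in via configurations:. Kustomize only substitutes vars in fields listed in a varReference, so the file presumably looks something like this (assumed content; the path has to match the field patched above):

# common/configuration.yaml (assumed)
varReference:
  - kind: Deployment
    path: spec/template/spec/volumes/configMap/name

With that in place, kustomize build test renders the volume’s configMap name as the real generated name, something like common-raynix-<hash>.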

Problem solved 🙂