Reluctantly Saying Goodbye to 2018

2018 is already over, and I've only just realised it. When everything goes smoothly, time flies, as if life were on fast-forward.

One thing in particular taught me a lot.

At the end of 2017 my boss got a better position and left the AFL, which disappointed me a little, because I felt we got along quite well. With my angular temperament, not many people do. When the new boss arrived I thought: this one probably won't get along with me. Luckily, things didn't turn out as badly as I had predicted.

The new boss is of Italian descent and grew up in Australia. To my surprise he knows China, and Sichuan in particular, quite well, because he once led a team at a foreign company in Chongqing. Faced with a food street lined end to end with red chillies, he was utterly at a loss, and was just grateful that Coca-Cola doesn't contain chilli.

I took part in the boss's first few small projects and completed them all smoothly, which was a good start. During one work discussion, I raised some lingering problems in the team's workflow. He asked: why has this never been fixed? I said: the difficulty isn't technical; the senior staff don't approve of the change, so it probably needs intervention from management. He didn't object, so I expected him to raise the issue at a team meeting and push the change through, but that never happened.

Later the boss organised a team-building offsite: a full day away from the office to discuss internal issues, including the workflow problem. I was puzzled: the reform plan was obvious, so why not just have the top decide and the rest follow? Why spend a whole day? In the meeting, the conservatives gave plenty of historical reasons why the workflow had never changed, while I, as the lead of the reformers, argued hard for the advantages of the reformed workflow. Then the whole team voted, and the reformers' proposal won by a single vote!

Afterwards the boss told me that of course he knew the reformers' proposal was better, but this is where Australian values show: everyone's opinion matters, so winning majority support is what counts most. He said with a smile that compared with China, Australia really does move slowly; a project can be debated for ages before it's approved, and if the residents who would need to be relocated for a new road don't agree, the road just has to find a way around them. This was quite a revelation for me, because although I consider myself a supporter of democracy, when it comes to the crunch I still want to impose my own views on others.

When in Rome, do as the Romans do. There is still plenty to learn. :)

PS4 SSD Upgrade Made Easy With Linux

PS4 + SSD

Even the latest PS4 Pro model comes with an HDD. I can't remember the last time a laptop shipped with an HDD, but I can imagine what an SSD upgrade would do for an old PS4.

The only issue is that if I simply swap in the new SSD, I'll have to reinstall the PS4 OS and then re-download everything. A good opportunity to show off my Linux skills, is it not?

I plugged the old PS4 HDD and the new SSD into my workstation, which runs Ubuntu Linux; then only one command is needed to copy the disk:

dd if=/dev/sdd of=/dev/sdc bs=1M status=progress

The copy took about 2.5 hours to finish, but before you start, make sure /dev/sdd is the old drive and /dev/sdc is the new drive in your setup, because dd is very destructive if /dev/sdc happens to be the old drive.
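To reduce the risk of mixing the drives up, a quick size comparison before the copy can help. This is only a sketch: the byte counts below are hypothetical example values; on a real system you would read them with `blockdev --getsize64 /dev/sdX` or `lsblk -b`.

```shell
# Sketch: refuse to copy if the target drive is smaller than the source.
# The byte counts are hypothetical; read real ones with:
#   blockdev --getsize64 /dev/sdd   (and the same for /dev/sdc)
src_bytes=1000204886016   # old HDD (example value)
dst_bytes=1000204886016   # new SSD (example value)
if [ "$dst_bytes" -ge "$src_bytes" ]; then
    echo "OK to copy"
else
    echo "Target smaller than source, aborting" >&2
    exit 1
fi
```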

Then I put the SSD into the PS4 and it booted up with all my games intact, only much faster!

🙂

Fixed tracker-store’s CPU hogging issue on Arch Linux

I think it was after I updated Arch Linux a while ago that the tracker-store process became a CPU hog; it can drain the battery pretty quickly and turn my laptop into a heater.

Obviously other people have hit this issue already, but most of those I found were trying to disable the Gnome tracker entirely. I'd hesitate to do that, because the tracker's purpose is to index things, so that when I hit the Super key and type, relevant results come up quickly. Also, as a big fan of Gnome Shell, I trust the team wouldn't just release a buggy program and leave it broken for months.

My troubleshooting 101: if I want to see what a program is complaining about, run it from the command line!

tracker daemon -s
Starting miners…
** (tracker daemon:24398): CRITICAL **: 10:08:51.616: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.RSS: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:09:16.640: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Files: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:09:41.663: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Extract: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Applications: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.RSS'
✗ RSS/ATOM Feeds (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Files'
✗ File System (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Extract'
✗ Extractor (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Applications'
✗ Applications (perhaps a disabled plugin?)

I don’t think those are the root causes though.

Later I found this issue reported against Gnome 3.12, and I thought it could be my case as well. To my surprise, the same fix resolved my tracker-store issue easily.

First, stop tracker-store gracefully with tracker daemon -t. Then I found:

ls ~/.cache/tracker/ -lht
total 908M
-rw-r--r-- 1 ray users 32K Dec 26 10:10 meta.db-shm
-rw-r--r-- 1 ray users 132M Dec 26 10:10 meta.db-wal
-rw-r--r-- 1 ray users 776M Oct 4 09:20 meta.db
-rw-r--r-- 1 ray users 11 Oct 4 09:17 locale-for-miner-apps.txt
-rw-r--r-- 1 ray users 22 Aug 2 13:49 parser-version.txt
-rw-r--r-- 1 ray users 354K Apr 13 2018 ontologies.gvdb
-rw-r--r-- 1 ray users 11 Apr 13 2018 db-locale.txt
-rw-r--r-- 1 ray users 6 Apr 8 2017 first-index.txt
-rw-r--r-- 1 ray users 10 Apr 8 2017 last-crawl.txt
-rw-r--r-- 1 ray users 40 Apr 8 2017 parser-sha1.txt
-rw-r--r-- 1 ray users 2 Apr 8 2017 db-version.txt

ls ~/.local/share/tracker/data -lht
total 77M
-rw-r----- 1 ray users 16M Oct 4 09:17 tracker-store.journal
-rw-r--r-- 1 ray users 8.8M Apr 13 2018 tracker-store.journal.7.gz
-rw-r--r-- 1 ray users 8.6M Apr 13 2018 tracker-store.journal.6.gz
-rw-r--r-- 1 ray users 9.1M Apr 13 2018 tracker-store.journal.5.gz
-rw-r--r-- 1 ray users 8.5M Apr 13 2018 tracker-store.journal.4.gz
-rw-r----- 1 ray users 148K Apr 13 2018 tracker-store.ontology.journal
-rw-r--r-- 1 ray users 8.3M Jun 19 2017 tracker-store.journal.3.gz
-rw-r--r-- 1 ray users 9.1M Jun 18 2017 tracker-store.journal.2.gz
-rw-r--r-- 1 ray users 8.8M Apr 8 2017 tracker-store.journal.1.gz

The 776MB meta.db is surely bloated, given that I only have around 200GB of personal files. Since these files are only metadata, deleting them won't hurt:

rm -rf ~/.cache/tracker/
rm -rf ~/.local/share/tracker/
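A more cautious variant of the removal above (a sketch, not what I actually ran) parks the two directories in a temporary location instead of deleting them outright, so the old databases can be restored if the rebuild goes wrong:

```shell
# Park the tracker databases instead of deleting them (same paths as above).
cache_dir=~/.cache/tracker
data_dir=~/.local/share/tracker
backup=$(mktemp -d)
for d in "$cache_dir" "$data_dir"; do
    if [ -d "$d" ]; then
        mv "$d" "$backup/"
    fi
done
echo "old tracker data (if any) parked in $backup"
```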

tracker daemon -s
Starting miners…
✓ RSS/ATOM Feeds
✓ File System
✓ Extractor
✓ Applications

All good now 🙂

Nicer Deployment with Kubernetes

The default rolling-update strategy in a Kubernetes Deployment is to reduce the capacity of the current replica set and then add capacity to the new replica set. This means the app's total processing power can dip during a deployment.

I was a bit surprised to find that the default strategy works this way, but luckily it's not hard to fine-tune. According to the doc here: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment only a few lines are needed to change the strategy:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy
  namespace: my-project
spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 40%
  revisionHistoryLimit: 3
With maxUnavailable: 0, the total capacity of the deployment is never reduced, and maxSurge: 40% means the new replica set can add up to 40% of the desired capacity on top, before the current replica set becomes the old one and is drained.
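To make the numbers concrete, here is a sketch of the pod counts during such a rollout. The replica count of 10 is a hypothetical example; Kubernetes rounds a percentage maxSurge up to a whole pod.

```shell
# Sketch: pod counts during a rollout with maxUnavailable=0, maxSurge=40%.
replicas=10       # hypothetical desired replica count
surge_pct=40
# maxSurge is rounded up: ceil(replicas * surge_pct / 100)
max_surge=$(( (replicas * surge_pct + 99) / 100 ))
echo "peak pods during rollout: $(( replicas + max_surge ))"   # 10 + 4 = 14
echo "minimum ready pods:       $replicas"                     # maxUnavailable=0
```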

Not a big improvement, but revisionHistoryLimit: 3 keeps only 3 old replica sets around for rolling back a deployment. The default is unlimited, which is quite over-provisioned from my point of view.

🙂