PS4 SSD Upgrade Made Easy With Linux

PS4 + SSD

Even the latest PS4 Pro model still ships with an HDD. I can’t remember the last time a laptop shipped with an HDD, but I can imagine what an SSD upgrade would bring to an old PS4.

The only issue is that if I put the new SSD in straight away, I’d have to reinstall the PS4 OS and then re-download everything. It’s a good opportunity to show off my Linux skills, is it not?

I plugged both the old PS4 HDD and the new SSD into my workstation, which runs Ubuntu Linux; from there, only one command is needed to copy the disk:

dd if=/dev/sdd of=/dev/sdc bs=1M status=progress

This took about 2.5 hours to finish. Before you start, make sure /dev/sdd is the old drive and /dev/sdc is the new drive in your setup, because dd is very destructive: if /dev/sdc happens to be the old drive, you’ll wipe it.
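A quick way to double-check which device is which before running dd (the device names above are from my setup; yours will likely differ):

# size, model and serial number are usually enough to tell the two drives apart
lsblk -o NAME,SIZE,MODEL,SERIAL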

Then I put the SSD into the PS4 and it booted up with all my games intact, only much faster!

🙂

Fixed tracker-store’s CPU hogging issue on Arch Linux

I think it was after I updated Arch Linux a while ago that the tracker-store process became a CPU hog; it can drain the battery pretty quickly and turn my laptop into a heater.

Obviously other people are experiencing this issue already, but most of the ones I found were trying to disable the Gnome tracker entirely. I’d hesitate to do that, because the tracker’s purpose is to index stuff so that when I hit the Super key and type, relevant things come up quickly. Also, as a big fan of Gnome Shell, I trust the team wouldn’t release a buggy program and leave it broken for months.

My troubleshooting 101: if I want to see what a program is complaining about, run it from the command line!

tracker daemon -s
Starting miners…
** (tracker daemon:24398): CRITICAL **: 10:08:51.616: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.RSS: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:09:16.640: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Files: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:09:41.663: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Extract: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: Could not create proxy on the D-Bus session bus, Error calling StartServiceByName for org.freedesktop.Tracker1.Miner.Applications: Timeout was reached
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.RSS'
✗ RSS/ATOM Feeds (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Files'
✗ File System (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Extract'
✗ Extractor (perhaps a disabled plugin?)
** (tracker daemon:24398): CRITICAL **: 10:10:06.687: No D-Bus proxy found for miner 'org.freedesktop.Tracker1.Miner.Applications'
✗ Applications (perhaps a disabled plugin?)

I don’t think those are the root cause, though.

Later I found this issue reported against Gnome 3.12 and figured it could be my case as well. To my surprise, the same fix sorted out my tracker-store problem easily.

First, stop tracker-store gracefully with tracker daemon -t. Then I found:

ls ~/.cache/tracker/ -lht
total 908M
-rw-r--r-- 1 ray users 32K Dec 26 10:10 meta.db-shm
-rw-r--r-- 1 ray users 132M Dec 26 10:10 meta.db-wal
-rw-r--r-- 1 ray users 776M Oct 4 09:20 meta.db
-rw-r--r-- 1 ray users 11 Oct 4 09:17 locale-for-miner-apps.txt
-rw-r--r-- 1 ray users 22 Aug 2 13:49 parser-version.txt
-rw-r--r-- 1 ray users 354K Apr 13 2018 ontologies.gvdb
-rw-r--r-- 1 ray users 11 Apr 13 2018 db-locale.txt
-rw-r--r-- 1 ray users 6 Apr 8 2017 first-index.txt
-rw-r--r-- 1 ray users 10 Apr 8 2017 last-crawl.txt
-rw-r--r-- 1 ray users 40 Apr 8 2017 parser-sha1.txt
-rw-r--r-- 1 ray users 2 Apr 8 2017 db-version.txt

ls ~/.local/share/tracker/data -lht
total 77M
-rw-r----- 1 ray users 16M Oct 4 09:17 tracker-store.journal
-rw-r--r-- 1 ray users 8.8M Apr 13 2018 tracker-store.journal.7.gz
-rw-r--r-- 1 ray users 8.6M Apr 13 2018 tracker-store.journal.6.gz
-rw-r--r-- 1 ray users 9.1M Apr 13 2018 tracker-store.journal.5.gz
-rw-r--r-- 1 ray users 8.5M Apr 13 2018 tracker-store.journal.4.gz
-rw-r----- 1 ray users 148K Apr 13 2018 tracker-store.ontology.journal
-rw-r--r-- 1 ray users 8.3M Jun 19 2017 tracker-store.journal.3.gz
-rw-r--r-- 1 ray users 9.1M Jun 18 2017 tracker-store.journal.2.gz
-rw-r--r-- 1 ray users 8.8M Apr 8 2017 tracker-store.journal.1.gz

The 776MB meta.db is surely bloated, given that I only have around 200GB of personal files. Since they are only metadata, deleting them won’t hurt; the index will simply be rebuilt.
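If you’d rather keep a way back, moving the directories aside achieves the same reset and is reversible (same paths as in the listings above):

mv ~/.cache/tracker ~/.cache/tracker.bak
mv ~/.local/share/tracker ~/.local/share/tracker.bak

I wasn’t attached to the old index, so I simply removed the directories: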

rm -rf ~/.cache/tracker/
rm -rf ~/.local/share/tracker/
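Side note: depending on your tracker version, a single reset command may do the same cleanup (kill the daemons and remove the databases); I haven’t verified it on my setup, so treat it as an assumption:

# assumed one-step equivalent of the manual cleanup above
tracker reset --hard

Either way, start the miners again afterwards: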

tracker daemon -s
Starting miners…
✓ RSS/ATOM Feeds
✓ File System
✓ Extractor
✓ Applications

All good now 🙂

Nicer Deployment with Kubernetes

The default rolling-update strategy for a Kubernetes Deployment reduces the capacity of the current replica set before adding capacity to the new one, which means the app’s total processing power can dip a bit during a deployment.

I’m a bit surprised that the default strategy works this way, but luckily it’s not hard to fine-tune. According to the doc here: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment only a few lines are needed to change the strategy:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy
  namespace: my-project
spec:
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 40%
  revisionHistoryLimit: 3

maxUnavailable: 0 means the deployment’s total capacity will never be reduced, and maxSurge: 40% allows the deployment to run up to 40% more pods than the desired count during the update, so the new replica set can come up before the old one is drained.
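To watch the surge behaviour during a rollout, something like this should do (my-deploy, my-project and the manifest filename are the placeholders from the example above):

kubectl apply -f my-deploy.yaml
kubectl rollout status deployment/my-deploy -n my-project
kubectl get replicaset -n my-project --watch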

Not a big improvement, but revisionHistoryLimit: 3 keeps only the 3 most recent old replica sets for rolling back the deployment. The default keeps an unlimited history, which is quite over-provisioned from my point of view.
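And since revisionHistoryLimit only matters when rolling back, the matching commands are the usual rollout ones (same placeholder names; pick the revision number from the history output):

kubectl rollout history deployment/my-deploy -n my-project
kubectl rollout undo deployment/my-deploy -n my-project --to-revision=2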

🙂

Don’t Panic When the Kubernetes Master Failed

It was business as usual when I was upgrading our Kubernetes cluster from 1.9.8 to 1.9.10, until it wasn’t.

$ kops rolling-update cluster --yes
...
node "ip-10-xx-xx-xx.ap-southeast-2.compute.internal" drained
...
I1024 08:52:50.388672   16009 instancegroups.go:188] Validating the cluster.
...
I1024 08:58:22.725713   16009 instancegroups.go:246] Cluster did not validate, will try again in "30s" until duration "5m0s" expires: error listing nodes: Get https://api.my.kops.domain/api/v1/nodes: dial tcp yy.yy.yy.yy:443: i/o timeout.
E1024 08:58:22.725749   16009 instancegroups.go:193] Cluster did not validate within 5m0s

error validating cluster after removing a node: cluster did not validate within a duation of "5m0s"

From the AWS console I can see that the new master instance is running and the old one has been terminated. There’s one catch though: the IP yy.yy.yy.yy is not the IP of the new master instance!

I manually updated the api and api.internal CNAMEs of the Kubernetes cluster in Route 53 and the issue went away quickly. I assume the DNS update for the new master failed for some reason, but I was happy to see that everything else worked as expected.
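For the record, this is roughly how the mismatch could be confirmed before touching Route 53 (api.my.kops.domain is the placeholder domain from the error above, and the Name tag filter is an assumption based on how kops usually names master instances):

# what the cluster's API record currently resolves to
dig +short api.my.kops.domain

# public IP of the new master instance, assuming kops' default Name tag
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=master-ap-southeast-2a.masters.my.kops.domain" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PublicIpAddress' \
  --output text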

🙂