All the images Trump supporters uploaded to Parler still have GPS coordinates in them. This could be the best set of evidence in history.
I probably shouldn't post anything meaningful to an SNS, but when I do anyway, here are some things I do to better protect my privacy.
First, the GPS coordinates in photos are added by smartphones, and this can usually be turned off. Search for "location tags" or a similar term in the camera settings.
For existing images, the GPS data can be deleted with the following commands on Ubuntu Linux:
# install exiftool
sudo apt install libimage-exiftool-perl
# remove GPS data from 1 photo
exiftool -gps:*= my-photo.jpg
# remove GPS data from all .jpg in current directory
find . -name \*.jpg -exec exiftool -gps:*= {} \;
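To double-check that the tags are really gone, exiftool can print just the GPS group; no output means nothing is left:
# print any remaining GPS tags from the same example photo
exiftool -gps:all my-photo.jpg
# note: exiftool keeps a backup named my-photo.jpg_original by default;
# add -overwrite_original to the removal command to skip the backup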
When trying to run gsutil in a Kubernetes Job as the nobody user to back up data to Google Cloud Storage (GCS), I encountered a terse error message like
OSError: Permission denied
But it didn't say where or how permission was denied! It worked fine when the container ran as the root user, so the problem wasn't with Google Cloud. After some searching, I found two places where gsutil needs access to the local disk.
The first one is the gcloud profile on the local file system. Before using gsutil, I have to authenticate first.
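In a Kubernetes Job this is usually done with a service-account key; a representative example, where the key path is a placeholder:
# authenticate with a service-account key file
gcloud auth activate-service-account --key-file=/path/to/key.json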
The command above creates a CloudSDK configuration directory in the current user's home directory. Obviously the nobody user doesn't have a home directory, so this fails. To fix it, an environment variable can be set to point somewhere writable:
export CLOUDSDK_CONFIG=/tmp/.gcloud
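For the Kubernetes Job itself, the same variable can be set in the pod spec. A partial sketch, where the Job name, container name and image are placeholders:
# partial Job spec with only the relevant bits
apiVersion: batch/v1
kind: Job
metadata:
  name: gcs-backup
spec:
  template:
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 65534             # the nobody user
      containers:
      - name: backup
        image: google/cloud-sdk:slim # any image that ships gsutil works
        env:
        - name: CLOUDSDK_CONFIG
          value: /tmp/.gcloud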
The next one is harder to find. I suspected an option called state_dir was the place to look, and it turned out I was right: from the source code, state_dir defaults to the .gsutil directory in the user's home directory, which is again a problem for the nobody user. The fix is to override the option on the gsutil command line.
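Roughly like this, where the source path and bucket are placeholders:
# point gsutil's state directory at a writable location for this run
gsutil -o "GSUtil:state_dir=/tmp/.gsutil" rsync -r /data gs://my-bucket/backup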
Snagged a pair of RTX 3070 cards. With only two cards, this is more of an experiment than an investment.
I've done crypto mining before, and since the price is now near an all-time high I'll do it again, but only with my solar energy. Mining with dirty coal power isn't ethical any more, as climate change has accelerated over the past few years.
To start ETH mining, here are some prerequisites:
Energy-efficient video cards; in this case I got RTX 3070s. The 3060 Ti is also a good choice but it was already sold out.
A desktop computer with spare PCI Express slots to attach multiple video cards. I'm not covering hardware installation here, i.e. how to install the cards and connect the cables, etc.
My OS is Ubuntu 20.04, so I chose the t-rex miner, which has better support for the Nvidia Ampere architecture. The releases can be found here.
Here are the steps I followed to set up the t-rex miner on my Ubuntu 20.04 desktop:
# as root user
sudo -i
# install nvidia 460 driver for Ubuntu
apt install nvidia-driver-460
# install t-rex to /opt/t-rex
mkdir /opt/t-rex
wget https://github.com/trexminer/T-Rex/releases/download/0.19.9/t-rex-0.19.9-linux-cuda11.1.tar.gz
tar -zxvf t-rex-0.19.9-linux-cuda11.1.tar.gz -C /opt/t-rex
# change ownership for security reasons
chown -R nobody:nogroup /opt/t-rex
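After installing the driver (a reboot is usually needed before the new driver loads), it's worth confirming that both cards are visible; nvidia-smi comes with the driver packages:
# should list both RTX 3070 cards once the driver is loaded
nvidia-smi -L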
Now the /opt/t-rex directory contains a bunch of shell scripts (.sh files). I was using ethermine.org, so I had a look at ETH-ethermine.sh, which contains:
#!/bin/sh
./t-rex -a ethash -o stratum+tcp://eu1.ethermine.org:4444 -u <ETH wallet address> -p x -w <worker name>
Since I'm proudly an experienced Linux user, I chose to create a systemd service out of that shell script:
# cat /etc/systemd/system/ethminer.service
[Unit]
Description=Ethereum Miner
[Service]
Type=simple
User=nobody
ExecStart=/opt/t-rex/t-rex -a ethash -o stratum+tcp://us2.ethermine.org:4444 -u <my ETH wallet address> -p "" -w <my worker name, can be hostname>
Restart=on-failure
[Install]
WantedBy=multi-user.target
I chose the us2 node as it's geographically close to me. The user is set to nobody, so the miner can't do much harm to my system even if it wanted to. Then the service can be enabled and started with systemctl:
# reload systemd as a new service is added
# following commands run as root user
systemctl daemon-reload
# enable the service so it starts automatically
systemctl enable ethminer
# start the service
systemctl start ethminer
# check status
systemctl status -l ethminer
# watch the logs
journalctl -f |grep t-rex
Jan 24 13:55:30 hostname t-rex[6621]: 20210124 13:55:30 Mining at us2.ethermine.org:4444, diff: 4.00 G
...
According to other miners online, the power limit of the 3070 is better set to about 50% (130W): higher wattage makes the card run hotter but doesn't make it compute faster. Here's how I use a cron job to set the power limit to 130W, except when I'm playing a game (assuming I stop the miner when playing a game on this machine):
# still as root user, as only root can change the GPU power limit with nvidia-smi
# crontab -l
*/10 * * * * /bin/ps -C t-rex >/dev/null && /usr/bin/nvidia-smi -i 0 -pl 130 >>/var/log/nvidia.log 2>&1
This can be verified in t-rex's logs:
journalctl -f |grep t-rex
Jan 24 13:55:30 hostname t-rex[6621]: 20210124 13:55:30 GPU #0: Gigabyte RTX 3070 - 52.07 MH/s, [T:53C, P:129W, F:60%, E:404kH/W], 1370/1370 R:0%
# it's running at 129W, the temperature is 53°C, and the fan speed is cruising at 60%
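The power limit can also be checked directly with an nvidia-smi query:
# show the current draw against the configured limit for GPU 0
nvidia-smi -i 0 --query-gpu=power.draw,power.limit,temperature.gpu,fan.speed --format=csv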
Regarding mining solely with solar energy, there are three possible approaches:
Switch your electricity supplier to a renewable-friendly one such as Ember, so you can use solar energy generated by the community, enjoy the low wholesale price, and mine crypto when the sun shines. This requires API access from the supplier so you know when the energy is renewable and cheap.
Install your own solar panels and mine crypto when the sun shines. This requires API access to your inverter so you know when there is enough solar energy to start mining (see the sketch after this list).
Install solar panels plus a battery, so mining is guaranteed to run on your own solar energy, until the battery runs flat of course.
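As a sketch of the second approach (the inverter API URL, the JSON field and the 1000W threshold below are all made-up examples; every inverter exposes this differently), a small script run from cron can start or stop the miner service based on the current solar output:
#!/bin/sh
# /usr/local/bin/solar-miner-check.sh -- hypothetical example
# poll the inverter's (made-up) local API and only mine when there is enough sun
POWER=$(curl -s http://inverter.local/api/status | jq -r '.solar_power_watts')
THRESHOLD=1000   # watts: enough for the two cards plus the rest of the box
if [ "${POWER:-0}" -ge "$THRESHOLD" ]; then
    systemctl start ethminer
else
    systemctl stop ethminer
fi
# run it every 10 minutes from root's crontab:
# */10 * * * * /usr/local/bin/solar-miner-check.sh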
Affinity is a great Kubernetes feature for assigning pods to nodes based on labels. In my case, I have a hybrid Kubernetes cluster where half of the nodes are x86 and the other half are ARM, and I need to deploy the x86-only containers to the x86 nodes. Of course I could also build multi-arch containers to get rid of this restriction, but let's see how affinity works first.
All the nodes carry a label with their architecture, and those labels can be printed out like this:
# the trick with jsonpath is to escape the dot "." and slash "/" in the label key, in this example kubernetes.io/arch
k get node -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.io\/arch}{"\n"}{end}'
kmaster arm
knode1 arm
knode2 arm
knode3 amd64
knode4 amd64
knode5 amd64
To deploy a Pod (or Deployment, StatefulSet, etc.), the affinity should be put into the pod's spec, e.g.
# this is only a partial example of a deployment with affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
The Deployment above will only be scheduled onto nodes running the x86 (amd64) architecture.
Note: requiredDuringSchedulingIgnoredDuringExecution is a hard requirement; if it's not met, the pod won't be scheduled. For a soft requirement, preferredDuringSchedulingIgnoredDuringExecution should be used instead.
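For completeness, the soft version of the same rule would look roughly like this inside the pod's spec (the weight value here is arbitrary):
# prefer amd64 nodes, but allow scheduling elsewhere if none are available
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64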