Ethereum (Crypto) Mining with Nvidia 3070 (Ampere) on Ubuntu 20.04 with Renewable Energy

Snagged a pair of RTX 3070 cards. With only 2 cards this is more like an experiment than an investment.

I’ve done crypto mining before, and since the price is now almost at an all-time high I’ll do it again, but only with my solar energy. Mining with dirty coal power isn’t ethical any more, as climate change has accelerated in the past few years.

To start ETH mining here are some prerequisites:

  • Energy-efficient video cards; in this case I got the RTX 3070. The 3060 Ti is also a good choice but it was already sold out
  • A desktop computer where you can attach multiple video cards to PCI Express slots. I’m not covering hardware installation here, i.e. not showing how to install the cards and connect the cables, etc.
  • My OS is Ubuntu 20.04, so I chose the t-rex miner, which has better support for the Nvidia Ampere architecture. The releases can be found here

Here are the steps with which I set up t-rex miner on my Ubuntu 20.04 desktop:

# as root user
sudo -i
# install nvidia 460 driver for Ubuntu
apt install nvidia-driver-460
# install t-rex to /opt/t-rex
mkdir /opt/t-rex
wget https://github.com/trexminer/T-Rex/releases/download/0.19.9/t-rex-0.19.9-linux-cuda11.1.tar.gz
tar -zxvf t-rex-0.19.9-linux-cuda11.1.tar.gz -C /opt/t-rex
# change ownership for security reasons
chown -R nobody:nogroup /opt/t-rex
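
The new driver needs a reboot to load. Once back in, nvidia-smi should list the card along with the driver version:

# after the reboot
nvidia-smi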

Now in the /opt/t-rex directory there are many shell script (.sh) files. I was using ethermine.org, so I had a look at ETH-ethermine.sh, which contains:

#!/bin/sh
./t-rex -a ethash -o stratum+tcp://eu1.ethermine.org:4444 -u <ETH wallet address> -p x -w <worker name>

Since I’m proudly an experienced Linux user, I chose to create a systemd service out of that shell script:

# cat /etc/systemd/system/ethminer.service 
[Unit]
Description=Ethereum Miner

[Service]
Type=simple
User=nobody
ExecStart=/opt/t-rex/t-rex -a ethash -o stratum+tcp://us2.ethermine.org:4444 -u <my ETH wallet address> -p "" -w <my worker name, can be hostname>

Restart=on-failure

[Install]
WantedBy=multi-user.target

I chose the us2 node as it’s geographically close to me. The user is set to nobody so the miner can’t do any harm to my system even if it wanted to. Then the service can be enabled and started with the systemctl command:

# reload systemd as a new service is added
# following commands run as root user
systemctl daemon-reload
# enable the service so it starts automatically
systemctl enable ethminer
# start the service
systemctl start ethminer
# check status
systemctl status -l ethminer
# watch the logs
journalctl -f |grep t-rex
Jan 24 13:55:30 hostname t-rex[6621]: 20210124 13:55:30 Mining at us2.ethermine.org:4444, diff: 4.00 G
...

According to other miners online, the power limit (TDP) of the 3070 is better set at 50% (130W), because higher wattage only makes the card run hotter without making it compute faster. Here’s how I use a cron job to set the power limit to 130W except when I’m playing a game (assuming I’ll stop the miner when playing a game on it):

# still as root user, as only root can change the power limit with nvidia-smi
# crontab -l
# every 10 minutes: if t-rex is running, cap GPU 0's power limit at 130W
*/10 * * * * /bin/ps -C t-rex && /usr/bin/nvidia-smi -i 0 -pl 130 >>/var/log/nvidia.log 2>&1

This can be verified in t-rex’s logs:

journalctl -f |grep t-rex
Jan 24 13:55:30 hostname t-rex[6621]: 20210124 13:55:30 GPU #0: Gigabyte RTX 3070 - 52.07 MH/s, [T:53C, P:129W, F:60%, E:404kH/W], 1370/1370 R:0%
# it's running at 129W, the temperature is 53°C and the fan speed is cruising at 60%
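
The power limit can also be checked directly with nvidia-smi:

# query power draw, power limit, temperature and fan speed of GPU 0
nvidia-smi -i 0 --query-gpu=power.draw,power.limit,temperature.gpu,fan.speed --format=csv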

Regarding mining solely with solar energy, there are 3 possible approaches:

  • Switch your electricity supplier to a renewable-friendly one such as Ember, so you can use solar energy generated by the community, enjoy the low wholesale price and mine crypto when the sun shines. This requires API access from the supplier so you know when the energy is cheap and comes from renewables
  • Install your own solar panels and use the energy to mine crypto when the sun shines. This requires API access to your inverter so you know when there’s enough solar energy to start mining (see the sketch after this list)
  • Install solar panels and a battery, so mining is guaranteed to run on your own solar energy, until the battery runs flat of course
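
For the second option, a minimal sketch of the control loop could look like the script below, run from cron every few minutes. The inverter endpoint and the JSON field are placeholders for illustration only; every inverter brand exposes this data differently:

#!/bin/sh
# requires curl and jq
# NOTE: the URL and the .output_watts field are made-up placeholders,
# replace them with whatever your inverter's API actually provides
WATTS=$(curl -s http://inverter.local/api/live | jq '.output_watts | floor')
if [ "$WATTS" -gt 300 ]; then
  # enough sunshine to cover the ~130W the card draws plus the rest of the rig
  systemctl start ethminer
else
  systemctl stop ethminer
fi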

I’m going with the last option 🙂

Kubernetes: How to Use Affinity

Affinity is a great feature in Kubernetes for assigning pods to nodes based on labels. In my case, I have a hybrid Kubernetes cluster where half of the nodes are of x86 architecture and the other half of ARM architecture, and I need to deploy the x86-only containers to the x86 nodes. Of course I could also build multi-arch containers to get rid of this restriction, but let’s see how affinity works first.

All the nodes are labelled with their architecture, and those labels can be printed out like this:

# the trick in the jsonpath is to escape the dot "." and slash "/" in the label key, in this example kubernetes.io/arch
k get node -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels.kubernetes\.io\/arch}{"\n"}{end}'
kmaster	arm
knode1	arm
knode2	arm
knode3	amd64
knode4	amd64
knode5	amd64

To deploy a Pod, Deployment, StatefulSet, etc., the affinity should be put into the pod’s spec, e.g.

# this is only a partial example of a deployment with affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                - key: kubernetes.io/arch
                  operator: In
                  values:
                    - amd64

The Deployment above will only be scheduled onto nodes running the x86 (amd64) architecture.
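
Once applied, this is easy to verify by listing the nodes that satisfy the match expression and checking the NODE column of the pods:

# nodes that satisfy the affinity expression
kubectl get node -l kubernetes.io/arch=amd64
# the NODE column of the web pods should only show those nodes
kubectl get pod -o wide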

Note: requiredDuringSchedulingIgnoredDuringExecution is a hard requirement, and if it’s not met the pod won’t be scheduled. For a soft requirement, preferredDuringSchedulingIgnoredDuringExecution should be used instead.

🙂

Kubernetes and GitOps with Flux CD V2.0

GitOps could be the next big thing in cloud automation, so I thought I’d give it a try with my in-house hybrid Kubernetes cluster. I was recommended to try Flux CD, and there’s a good reference project initiated by my colleague: k8s-gitops.

However, in order to fully understand how to use Flux CD, I chose to start from scratch. Following the official instructions it didn’t take me long to fully enable GitOps on my cluster. Here’s how I did it on my laptop running Ubuntu:

First, create a GitHub PAT (Personal Access Token) with full repository permissions. Details can be read here. Also make sure you can create a private repository on GitHub (everyone gets one for free). Export your GitHub username and PAT as environment variables as follows:

export GITHUB_TOKEN=<your-token>
export GITHUB_USER=<your-username>

The latest Flux2 CLI can be downloaded here. You can also use the installation script from Flux if you fully trust it:

curl -s https://toolkit.fluxcd.io/install.sh | sudo bash
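
Either way, verify the CLI works before moving on:

flux --version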

From this step onward, you will need access to a Kubernetes cluster, e.g. the kubectl cluster-info command works and returns cluster information. Check Flux2’s prerequisites with:

flux check --pre
► checking prerequisites
✔ kubectl 1.18.6 >=1.18.0
✔ Kubernetes 1.18.9 >=1.16.0
✔ prerequisites checks passed

Then the Flux2 command below can be executed to bootstrap a private GitHub repository called flux-gitops using your GitHub PAT. This repository will be your cluster-as-code command center for the GitOps practice, and the CRDs (Custom Resource Definitions) and controllers for Flux2 will be installed into the current cluster:

flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=flux-gitops \
  --branch=main \
  --path=home-cluster \
  --personal
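
Once the bootstrap finishes, the Flux controllers should be running in the flux-system namespace, which can be confirmed with:

flux check
# or look at the controller pods directly
kubectl get pods -n flux-system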

In the generated flux-gitops repository, the file structure looks like this:

flux-gitops
  - home-cluster
    - flux-system

Now you can simply add Helm charts or Kustomization templates to this repository and the changes will be applied to the cluster automatically. The following commands will create a simple namespace in the cluster and register it with Flux2. After the changes are pushed to GitHub, the Flux2 controllers will apply them and create the new namespace.

cd flux-gitops/home-cluster
mkdir my-test
cd my-test
kustomize create
kubectl create namespace my-test --dry-run=client -o yaml > ns.yaml
kustomize edit add resource ns.yaml
cd .. # in home-cluster
# this step is redundant as any directory with a kustomization.yaml file in it will be automatically applied.
flux create kustomization my-test --source=flux-system --path=home-cluster/my-test --prune=true --validation=client --interval=2m --export > my-test.yaml
# check-in everything to test GitOps
# only add my-test directory if the above step is skipped
git add my-test my-test.yaml
git commit -m "Added my-test"
git push

Then you can use the watch command to see how the new change gets applied:

watch flux get kustomizations
NAME                    READY   MESSAGE                                                         REVISION                                        SUSPENDED
flux-system             True    Applied revision: main/529288eed6105909a97f0d3539bc68e5e934418a main/529288eed6105909a97f0d3539bc68e5e934418a   False
my-test                 True    Applied revision: main/529288eed6105909a97f0d3539bc68e5e934418a main/529288eed6105909a97f0d3539bc68e5e934418a   False
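
And the namespace defined in Git should now exist in the cluster:

kubectl get namespace my-test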

That’s it, the Flux2 Hello-world. 🙂

Hybrid Kubernetes Cluster (X86 + ARM)

My old ASUS 15″ laptop bought in 2014. It has a sub-woofer!

The one in the picture was my old laptop, then my daughter’s for a few years. Now that she’s got a nice new 2-in-1 ultrabook the school asked us parents to buy, this clunky one was gathering dust on a shelf. I tried to sell it but got no one’s attention, despite it having an i7 CPU and 16GB of memory.

So I was thinking: this has the same amount of memory as 4 x Raspberry Pi 4, but I probably won’t be able to sell it to pay for the Pis. Why not just use it as a glorified Raspberry Pi? I measured its power consumption and to my surprise, this one with a 4th-gen i7 only asks for 10W when idle with the screen off, not bad at all. In comparison, 4 x Pi 4 probably need 20W to stay up.

Let’s do it then!

I re-installed the OS with Ubuntu Server 20.04 LTS and prepared it for kubeadm to run with my ansible playbooks here. Since I’ve updated my playbook to handle both Raspbian on ARM and Ubuntu on x86_64, it was fairly easy to get the laptop (calling it knode3 from now on) ready.

I hadn’t locked down versions in my playbook, so the installed docker and kubeadm were much newer than the ones in my existing Raspberry Pi cluster, and there would be compatibility issues if I didn’t match them. I used the following commands to downgrade docker and kubeadm:

apt remove docker-ce --purge
apt install docker-ce=5:19.03.9~3-0~ubuntu-focal
apt install kubeadm=1.18.13-00
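
To stop apt from upgrading these packages again later, they can be held at the pinned versions (including kubelet and kubectl, if the playbook installed them too):

apt-mark hold docker-ce kubeadm kubelet kubectl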

The kubeadm join command I had run earlier on the other nodes didn’t work anymore and complained about the token. Of course, the token had expired after a year or so. Here’s the command to issue a new token from the master node:

kubeadm token create
xxxxxx.xxx...

Grab the new token and replace the one in the join command:

kubeadm join <master IP>:6443 --token <new token xxx> --discovery-token-ca-cert-hash sha256:<hash didn't change>
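
Alternatively, the master can print a ready-to-use join command with a fresh token and the CA cert hash included:

kubeadm token create --print-join-command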

For debugging purposes I ran journalctl -f in another terminal tab to watch the output. When the join command finished, I ran kubectl get nodes in my local terminal session to verify the result:

kubectl get node
NAME      STATUS   ROLES    AGE    VERSION
kmaster   Ready    master   89d    v1.18.8
knode1    Ready    <none>   89d    v1.18.8
knode2    Ready    <none>   89d    v1.18.8
knode3    Ready    <none>   3m     v1.20.1

The Kubernetes version on the new node is a bit newer, so maybe I’ll upgrade the old nodes soon. Now I have a node with 16GB of memory 🙂

PS: to keep the laptop running when the lid is closed, I used this tweak.
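
For reference, a common way to achieve this on Ubuntu is to tell systemd-logind to ignore the lid switch, which may well be what the linked tweak boils down to:

# in /etc/systemd/logind.conf set:
#   HandleLidSwitch=ignore
#   HandleLidSwitchExternalPower=ignore
# then restart logind to pick up the change
systemctl restart systemd-logind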