Using Nginx to Negate Brute Force Attacks on WordPress Sites

Istio Ingress Requests. Data source: Prometheus

Thanks to the Prometheus + Grafana combo I set up earlier for my Kubernetes cluster, I noticed a steep increase in requests to this blog that started a few days ago. I checked my Google Analytics dashboard; sadly, my blog hadn’t become any more popular at all. So it must be some sort of bot activity.

Funny though, Cloudflare didn’t think this was brutal enough, so it let the attack through.

By the way, stern is a great tool for monitoring logs from Kubernetes pods based on selectors, whereas the default kubectl logs command only pulls logs from one pod at a time. In this case I used the following command:

stern -l app=wordpress
wordpress-569bf4bd4-7x8wj nginx - - [18/Oct/2021:23:14:58 +0000] "POST //xmlrpc.php HTTP/1.1" 200 236 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" ","
wordpress-569bf4bd4-7x8wj nginx - - [18/Oct/2021:23:15:00 +0000] "POST //xmlrpc.php HTTP/1.1" 200 236 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" ","
wordpress-569bf4bd4-7x8wj nginx - - [18/Oct/2021:23:15:00 +0000] "POST //xmlrpc.php HTTP/1.1" 200 236 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" ","
wordpress-569bf4bd4-7x8wj nginx - - [18/Oct/2021:23:15:01 +0000] "POST //xmlrpc.php HTTP/1.1" 200 236 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36" ","

This looks quite obvious: some bot was running a brute-force attack on my WordPress blog. It had been going on for days and I didn’t know! Luckily, as is a good habit among IT professionals, I don’t use any meaningful password, so at the current rate it might take this bot several dozen centuries to crack it.

But there’s no reason just to let this bot keep wasting my electricity, is there?

I’ve used nginx to limit request rates before, so for now I’ll just brush up my nginx skills and get this done quickly. I use the nginx + php-fpm combo to run WordPress, and adding a rate limiter takes only a couple of lines of nginx configuration:

# in http scope
http {
  # Note: I can't use the default $binary_remote_addr as the key, because
  # requests are proxied by Istio, so the remote address is always the
  # sidecar's, as shown in the logs. rate=15r/m means a maximum of
  # 15 requests per minute from one IP address.
  limit_req_zone $http_x_forwarded_for zone=mylimit:20m rate=15r/m;

  # in server scope
  server {
    location ~ \.php$ {
      # This puts a limiter on every .php request. The first 60 requests
      # from a distinguishable source are exempted, so normal visitors
      # won't get punished.
      limit_req zone=mylimit burst=60 nodelay;
    }
  }
}
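Optionally, nginx can also change how rejected requests are reported. I didn’t deploy this part, but the limit_req_status and limit_req_log_level directives are worth knowing:

```nginx
# optional tuning for the same location block
location ~ \.php$ {
    limit_req zone=mylimit burst=60 nodelay;
    limit_req_status 429;      # reply "429 Too Many Requests" instead of the default 503
    limit_req_log_level warn;  # log rejections at warn; delayed requests are logged one level lower
}
```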

After the rules were deployed, the logs changed to a new pattern:

wordpress-cf9f6959-l5lhv nginx 2021/10/20 02:31:20 [warn] 20#20: *1166 delaying request, excess: 0.320, by zone "mylimit", client:, server: _, request: "POST //xmlrpc.php HTTP/1.1", host: ""
wordpress-cf9f6959-l5lhv nginx 2021/10/20 02:31:21 [warn] 20#20: *1166 delaying request, excess: 0.840, by zone "mylimit", client:, server: _, request: "POST //xmlrpc.php HTTP/1.1", host: ""
wordpress-cf9f6959-l5lhv nginx 2021/10/20 02:31:25 [warn] 20#20: *1166 delaying request, excess: 0.692, by zone "mylimit", client:, server: _, request: "POST //xmlrpc.php HTTP/1.1", host: ""
wordpress-cf9f6959-l5lhv nginx 2021/10/20 02:31:28 [warn] 20#20: *1166 delaying request, excess: 0.668, by zone "mylimit", client:, server: _, request: "POST //xmlrpc.php HTTP/1.1", host: ""
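That “excess” number is nginx’s leaky-bucket counter: it grows by one per request, drains at the configured rate, and once it would pass the burst allowance the request is rejected. A toy simulation of that bookkeeping (my own sketch of the algorithm, not nginx’s code) shows how a one-request-per-second bot fares against rate=15r/m with burst=60:

```shell
#!/usr/bin/env bash
# Toy model of nginx's limit_req bucket (my own sketch, not nginx code):
# "excess" drains at 0.25 req/s (15r/m) and grows by 1 per request;
# once it would pass burst=60, the request is rejected.
awk 'BEGIN {
  rate = 15 / 60.0; burst = 60; excess = 0; last = 0
  for (t = 1; t <= 90; t++) {    # one request per second, like the bot
    excess -= (t - last) * rate  # drain since the last request
    if (excess < 0) excess = 0
    last = t
    if (excess + 1 > burst) { rejected++ } else { excess += 1; allowed++ }
  }
  printf "allowed=%d rejected=%d final_excess=%.2f\n", allowed, rejected, excess
}'
# prints: allowed=82 rejected=8 final_excess=59.75
```

In other words, the bucket fills after roughly 80 seconds of hammering, and from then on the bot only gets through at the configured 15 requests per minute.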

Looks like it’s effective. Grafana agreed as well:

I might try other ways to harden my blog against attacks next, namely with Istio.


Mining Ethereum with Multiple AMD 6600 XT Cards on Ubuntu Linux

Warning: Ethereum (ETH) will migrate to the PoS (Proof of Stake) algorithm in the near future, maybe within a year, so jumping into ETH mining now might not be profitable. Also, I encourage crypto mining with renewable energy sources, and a Tesla Powerwall 2 is just a few RTX 3090s away :)

Note: This is a follow-up to my previous post on Ethereum mining with the AMD 6600 XT on Ubuntu. If you haven’t got your 6600 XT mining on Ubuntu yet, please refer to that post first.

I ordered two AMD 6600 XTs back in August and tried to mine Ethereum with them. I only managed to mine with one of them, even though both were visible in the system. I raised a support ticket with AMD, and the answer was far from satisfying:

“yes, it is expected behavior although you use 2 RX 6600 XT cards clinfo you can use only 1 opencl available platform.”

AMD Global Customer Care

I almost gave up on AMD and planned to sell those 6600 XT cards on eBay. However, huge thanks to Google search, this Reddit thread was brought to my attention.

TL;DR: the ‘culprit’ is not the AMD driver itself but the bundled OpenCL libraries, and according to the thread, the bundled OpenCL from an older version (20.40) can support multiple 6600 XTs for mining. Thanks to the OP for this good news!

I definitely wanted to verify this grafting method, but it does require a lot of steps:

  • downgrade Linux kernel
  • change grub settings to boot to the older kernel
  • install amdgpu-pro driver version 20.40
  • save the /opt/amdgpu-pro/lib/x86_64-linux-gnu directory
  • uninstall driver 20.40
  • boot back to latest kernel
  • install driver 21.30
  • point /opt/amdgpu-pro/lib/x86_64-linux-gnu directory to the 20.40 backup directory
  • reboot
  • test with clinfo

Looking at my current driver library(version 21.30) directory:

(an ls -lht listing of the OpenCL .so libraries and their symlinks, dated Jul 28)

“What are these .so files?” I can imagine someone asking. The .so files are shared objects on Linux, usually pre-compiled binaries. According to the OP, swapping these out for the ones from version 20.40 fixed the issue, so I presume these binaries are not tightly coupled with AMD’s kernel modules. In that case, if I can simply extract the relevant files from the 20.40 package, I might not need to do all those kernel-version switcheroos.

Here are my steps(bash commands):

# assuming version 20.40 has been downloaded to the current directory
tar xvf amdgpu-pro-20.40-1147286-ubuntu-20.04.tar.xz
# get into the 20.40 directory
cd amdgpu-pro-20.40-1147286-ubuntu-20.04
# there are dozens of .deb packages in a release; I simply extract all of them
for i in *.deb; do dpkg-deb -xv "$i" ./deb-files; done
# the relevant files are in ./deb-files/opt/amdgpu-pro/lib/x86_64-linux-gnu
ls -lht ./deb-files/opt/amdgpu-pro/lib/x86_64-linux-gnu
total 453M
(a listing of the extracted .so libraries and symlinks)
# I believe not all files are needed, but saving a few hundred MBs is not the priority here
# move this directory to the amdgpu-pro installation directory
sudo mv ./deb-files/opt/amdgpu-pro/lib/x86_64-linux-gnu /opt/amdgpu-pro/lib/x86_64-linux-gnu-20.40
# rename current 21.30 library directory
cd /opt/amdgpu-pro/lib
sudo mv x86_64-linux-gnu x86_64-linux-gnu-21.30
# use 20.40 as current library
sudo ln -s /opt/amdgpu-pro/lib/x86_64-linux-gnu-20.40 /opt/amdgpu-pro/lib/x86_64-linux-gnu
# refresh library cache
sudo ldconfig
# check links
ls -lht  /opt/amdgpu-pro/lib
total 8.0K
drwxr-xr-x 3 ray  ray  4.0K Oct 15 14:23 x86_64-linux-gnu-20.40
lrwxrwxrwx 1 root root   42 Oct 15 14:22 x86_64-linux-gnu -> /opt/amdgpu-pro/lib/x86_64-linux-gnu-20.40
drwxr-xr-x 2 root root 4.0K Sep 12 22:07 x86_64-linux-gnu-21.30

Does this really work? There’s only one way to find out: the clinfo command.

clinfo -l
Platform #0: AMD Accelerated Parallel Processing
 +-- Device #0: gfx1032
 `-- Device #1: gfx1032

Success!! But to get teamredminer working properly, I still did a reboot. Here’s the result (I didn’t overclock, so laugh at my MH/s please):

[2021-10-15 21:02:23] GPU 0 [62C, fan 30%]       ethash: 28.44Mh/s, avg 28.42Mh/s, pool 25.68Mh/s a:122 r:0 hw:0
[2021-10-15 21:02:23] GPU 1 [57C, fan 20%]       ethash: 28.45Mh/s, avg 28.40Mh/s, pool 32.63Mh/s a:155 r:0 hw:0
[2021-10-15 21:02:23] Total                      ethash: 56.89Mh/s, avg 56.82Mh/s, pool 58.32Mh/s a:277 r:0 hw:0

Strangely, the rocm-smi command I used to limit power consumption to 55W doesn’t see the second card, so I had to change the power profile for the 2nd card to save some power:

# this sets the 2nd card to 64W with the same MH/s as the 1st card
echo "profile_standard" | sudo tee /sys/class/drm/card1/device/power_dpm_force_performance_level


Running Minecraft Server in Kubernetes Cluster

My own Minecraft server 🙂

A month ago I had an idea to run a Minecraft server in my garage Kubernetes lab. I thought it might get my little Minecraft player at home interested in some Kubernetes and GitOps stuff, but that failed miserably. At least now I know how to host a Minecraft server in Kubernetes, with ArgoCD too.

First step: of course, the server, which is a Java app, needs to be packed into a Docker image. From the official installation guide, the essential steps are:

  • Use OpenJDK 16+ as base image
  • Download the server.jar
  • Run it with java -jar server.jar nogui
  • Make sure port 25565/tcp is open

That’s it! The Dockerfile looks like:

FROM openjdk:16.0.2

WORKDIR /minecraft

RUN curl -O

ADD config/* ./
RUN chown -R nobody /minecraft

USER nobody
CMD ["java", "-jar", "server.jar", "nogui"]

And the GitHub Action pipeline:

    runs-on: [ ubuntu-20.04 ]
    steps:
      - uses: actions/checkout@v2

      - name: build number
        run: |
          echo "build_number=dev-${GITHUB_RUN_ID}" >> $GITHUB_ENV

      - name: Docker build kubecraft-server
        run: |
          docker build -t$build_number .

      - name: Docker push
        run: |
          echo ${{ secrets.GHCR_PAT }} | docker login -u $GITHUB_ACTOR --password-stdin
          docker push$build_number
      # The GitHub Action pipeline will update the Kustomize patch file so the most recent tag will be referred to. 
      - name: Update deployment version
        run: |
          sed -i -E "s|(image: .*:).*|\1$build_number|g" kustomize/

      - name: Auto commit & push changes
        run: |
          git config --global '***'
          git config --global '***'
          git commit -am "Automated commit"
          git push

For more information, here’s my GitHub repository containing the docker build pipeline using GitHub Actions. There is also a kustomize directory where all necessary Kustomize templates reside.
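For reference, the essential piece of those templates is a Service exposing 25565/tcp in front of the Deployment. A rough sketch of what such a Service looks like (names and labels here are my assumptions, not necessarily what’s in the repo):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kubecraft
  namespace: kubecraft
spec:
  type: LoadBalancer      # or NodePort, depending on the cluster
  selector:
    app: kubecraft        # assumed pod label
  ports:
    - name: minecraft
      protocol: TCP
      port: 25565
      targetPort: 25565
```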

The next step is to register the Kustomize templates as an app in ArgoCD. It can be done declaratively like:

apiVersion:
kind: Application
metadata:
  name: kubecraft
  namespace: argocd
spec:
  destination:
    namespace: kubecraft
    server: https://kubernetes.default.svc
  project: default
  source:
    path: kustomize/
    targetRevision: HEAD
  syncPolicy:
    automated:
      prune: true

Here’s what it looks like in ArgoCD:

Since it’s not mentioned in the official documentation how to configure session sharing for multiple Minecraft server instances, I doubt there’s much point in creating more than one pod.

TO-DO: I haven’t included a persistent volume in my Kustomize templates, which means the server will lose the state of the game world when re-deployed.

EDIT: A PersistentVolume is a nice-to-have. A PV sample made with NFS + CSI is here.


My nVidia RTX 3080 ThermalRight Upgrade

Gigabyte Aorus Master RTX 3080 with Arctic MX-4 thermal paste and ThermalRight 2.0mm thermal pads

Recently I traded my 2x Gigabyte RTX 3070 for an Aorus Master RTX 3080, for various reasons:

  • For Ethereum crypto mining, a 3080 can achieve ~100MH/s, which is very close to what 2x 3070 can do
  • One 3080 definitely consumes less power than 2x 3070
  • If I play games or VR, only one video card is needed

After I installed the 3080 in my PC running Ubuntu 20.04, I turned on my t-rex miner. Within a few seconds, the three fans on the gigantic video card went flat out, and it was quite noisy. I quickly checked the GPU temperature using nvidia-smi; to my surprise, the core temperature was only about 40°C.

I turned off the miner, just in case something burst into flames, and did a bit of research. The fan speed is controlled by a chip on the board that checks the GPU, memory, and power-supply temperatures. Thanks to nVidia treating Linux as second class, the nvidia-smi command from its Linux driver doesn’t report the memory junction temperature at all. But logically, the memory junction must have been very hot, and the fans were set to maximum.

Later that day my hypothesis was confirmed: my miner friends who run Windows had the same issue when they started mining with 3080 cards, and the memory junction temperature can reach 108°C if the miner is left running. My friends had already replaced the thermal pads between the memory chips and the heat sink, and after the upgrade the memory junction temperature dropped to about 80°C.

I quickly ordered ThermalRight 2.0mm thermal pads and Arctic MX-4 thermal paste, and watched some YouTube tutorials (you should watch these too before doing anything to your card) on how to disassemble the video card, apply new thermal pads, and most importantly, put everything back in one piece.

Aorus Master RTX 3080 with heat sink and fans off

I didn’t change the thermal pads for the power-supply components, because I don’t think they emit a lot of heat. But if they were to be upgraded, 1mm thermal pads would probably be needed. In addition to replacing all the thermal pads on the memory chips (the 11 chips surrounding the GPU core), I stuffed thermal pads between the board and the back plate too, to further improve heat dissipation and to give the circuit board balanced pressure from each side.

Don’t forget to clean the surface of the GPU core and the little copper plateau, and re-apply an adequate amount of Arctic MX-4 thermal paste between them. I didn’t do this properly on the first go, so I had to disassemble and reassemble the card again…

After putting everything back where it was, I turned on my PC and started the miner again. The fans didn’t even start to spin until the core temperature reached 60°C, which was a good sign. This card is a bit strange though: the fans only start when the temperature is over 60°C, and I found no option to make them spin earlier. I still couldn’t tell exactly how hot the memory chips were, thanks to nVidia again… but the fans constantly cruised at 63% speed, solid proof that the memory chips were in a better environment.

I happen to have an infrared thermal gun, and it read about 60°C from the back plate. There’s no need for temperature anxiety any more 🙂