Playing a Bit with Kubernetes Using Minikube

I’ve just played a bit with Kubernetes on my Arch Linux laptop, using Minikube. It’s easier than I thought.

Since I already had VirtualBox installed, I could use Minikube right after installing it with:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

The command I used to start minikube is

minikube start --cpus 4 --memory 4096 --insecure-registry "10.0.0.0/8"

As the options suggest, I wanted to allocate 4 CPUs and 4GB of memory to the Kubernetes cluster. The `--insecure-registry` option is useful when I want to use a local Docker registry without SSL.

Check if the cluster is ready with

minikube status

And launch the Kubernetes Dashboard with

minikube dashboard

After all that, it’s time to play with Kubernetes via the kubectl command. However, the local Kubernetes cluster doesn’t have DNS management like its cloud counterparts do. I wanted a load balancer too, so I needed to get the cluster’s IP and then tell kubectl to use that IP for the load balancer service:

minikube ip
192.168.99.104

Then in the YAML file for kubectl:

---
apiVersion: v1
kind: Service
metadata:
...
spec:
  externalIPs:
    - 192.168.99.104
...
  type: LoadBalancer

For now I found this necessary; otherwise the service would be stuck in the Pending state.
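Putting the pieces together, a complete manifest might look like the following sketch (the service name, ports, and selector are hypothetical; the IP is the one reported by `minikube ip`):

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-web            # hypothetical service name
spec:
  type: LoadBalancer
  externalIPs:
    - 192.168.99.104      # the IP from `minikube ip`
  ports:
    - port: 80            # port exposed on the external IP
      targetPort: 8080    # port the pods listen on
  selector:
    app: my-web           # must match the pods' labels
```

After `kubectl create -f` on a file like this, `kubectl get svc` should show the service with an external IP instead of staying in Pending.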

🙂

Linux and Wake on LAN

Internet servers are usually on 24×7; that’s probably why I never needed to use the Wake-on-LAN (WoL) feature on a computer before.

I’ve just built a home server running Ubuntu Linux, using consumer-grade PC parts. To avoid a big surge in my next electricity bill, I plan to only turn on the server when the sun is shining or during off-peak hours when electricity is cheaper. Shutting down a Linux server via SSH is trivial; to my surprise, turning one on using WoL isn’t any harder.

First, on the server, make sure the line `ethernet-wol g` exists under the interface stanza in `/etc/network/interfaces`, e.g.

auto enp0s31f6
iface enp0s31f6 inet static
address 192.168.1.51
netmask 255.255.255.0
gateway 192.168.1.1
ethernet-wol g

Save it and restart networking (or just reboot), then run `sudo ethtool enp0s31f6`; if the following line appears in the output, it's a success!

Wake-on: g
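The `ethernet-wol` option relies on the ifupdown hooks shipped with the `ethtool` package, so if it doesn't take effect, an equivalent approach (a sketch, using the same interface name as above) is to set the flag explicitly every time the interface comes up:

```
# in /etc/network/interfaces, under the same iface stanza:
post-up /sbin/ethtool -s enp0s31f6 wol g
```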

The next step is to turn on WoL in the BIOS. Different BIOSes may call it different names, but generally it’s the option that allows the system to be powered on by PCI/network devices.

On Arch Linux, install `wol`, the command to wake up a WoL-enabled computer:

sudo pacman -Sy wol
sudo wol <MAC ADDRESS>
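Under the hood, what `wol` sends is a UDP "magic packet": 6 bytes of 0xff followed by the target's MAC address repeated 16 times. A minimal sketch in bash (the MAC address below is a made-up example):

```shell
# Build the magic packet as a hex string: 6 x 0xff, then the MAC x 16.
mac="11:22:33:44:55:66"           # hypothetical target MAC
hex="${mac//:/}"                  # strip colons -> 112233445566
packet="ffffffffffff"             # 6 bytes of 0xff
for _ in $(seq 16); do packet="${packet}${hex}"; done

# 102 bytes total = 204 hex characters; one way to broadcast it
# (bash-only /dev/udp trick, UDP port 9 is the conventional choice):
#   echo -n "$packet" | xxd -r -p > /dev/udp/192.168.1.255/9
echo "${#packet}"                 # -> 204
```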

Reference: https://wiki.archlinux.org/index.php/Wake-on-LAN

That’s it 🙂

Install Fluentd with Ansible

Fluentd has been a popular open-source log aggregation framework for a while now. I’ll give it a spin with Ansible. There are quite a few existing Ansible playbooks for installing Fluentd out there, but I’d like to do it from scratch just to understand how it works.

From the installation guide page, I can grab the script and dependencies and then translate them into Ansible tasks:

---
# roles/fluentd-collector/tasks/install-xenial.yml
- name: install os packages
  package:
    name: '{{ item }}'
    state: latest
  with_items:
    - libcurl4-gnutls-dev
    - build-essential

- name: install fluentd on debian/ubuntu
  raw: "curl -L https://toolbelt.treasuredata.com/sh/install-ubuntu-xenial-td-agent2.sh | sh"

Then it can be included by the main task:

# roles/fluentd-collector/tasks/main.yml
# (incomplete)
- include: install-xenial.yml
  when: ansible_os_family == 'Debian'
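With the role in place, a minimal playbook to apply it could look like this sketch (the playbook name and host group are hypothetical):

```yaml
---
# site.yml (hypothetical) -- apply the collector role to the log clients
- hosts: log_clients
  become: yes
  roles:
    - fluentd-collector
```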

On the log-collecting end, I need to configure `/etc/td-agent/td-agent.conf` to let Fluentd (the stable release is called td-agent) receive syslog messages, tail other logs, and then forward the data to the central collector. Here’s some sample configuration with Jinja2 template placeholders:

<match *.**>
  type forward
  phi_threshold 100
  hard_timeout 60s
  <server>
    name mycollector
    host {{ fluent_server_ip }}
    port {{ fluent_server_port }}
    weight 10
  </server>
</match>
<source>
  type syslog
  port 42185
  tag {{ inventory_hostname }}.system
</source>

{% for tail in fluentd.tails %}
<source>
  type tail
  format {{ tail.format }}
  time_format {{ tail.time_format }}
  path {{ tail.file }}
  pos_file /var/log/td-agent/pos.{{ tail.name }}
  tag {{ inventory_hostname }}.{{ tail.name }}
</source>
{% endfor %}
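The loop above assumes a `fluentd.tails` variable defined somewhere in the inventory, e.g. in host or group vars. A hypothetical example for tailing an nginx access log:

```yaml
fluentd:
  tails:
    - name: nginx
      file: /var/log/nginx/access.log
      format: nginx
      time_format: '%d/%b/%Y:%H:%M:%S %z'
```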

At the aggregator’s end, a sample configuration using the fluent-plugin-elasticsearch output plugin can look like:

<source>
  type forward
  port {{ fluentd_server_port }}
</source>

<match *.**>
  @type elasticsearch
  logstash_format true
  flush_interval 10s
  index_name fluentd
  type_name fluentd
  include_tag_key true
  user {{ es_user }}
  password {{ es_pass }}
</match>

Then fluentd/td-agent can aggregate all the logs from its peers and forward them to Elasticsearch in Logstash format.

🙂