
Accurate, Local Home Energy Monitoring: Part 1 - Hardware

This article is part of the Home Energy Monitoring series.

Ever wondered where the energy is going in your house, and wanted to know exactly when and which circuit is consuming the most electricity? How much is your air conditioning unit costing you each month in kWh?

Home energy monitors are devices you can use to track how much energy you’re using at any given point in time. You can use them to figure out how much each device or circuit consumes overnight versus during the day. If your energy costs differ between day and night, you can use them to make sure devices run at the lower-cost times, use them as part of a smart home automation to automatically notify you when your washing machine is done, or even identify when you need to upgrade a circuit because your server room is pulling too much power.
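To put rough numbers on that last question: a unit that averages 1,200 W and runs 6 hours a day uses about 1.2 kW x 6 h x 30 days = 216 kWh a month, which at $0.15/kWh is roughly $32. Those figures are made up for illustration; the point of a monitor is to replace the guesses with measured numbers per circuit.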

A Wireguard VPN from a home lab to Kubernetes cluster

In addition to my home lab K8s cluster, I have two dedicated servers in the cloud running a separate Kubernetes cluster. This cluster runs my production services, like this blog, Postfix, DNS, etc. I wanted to add a VPN between my home network and my prod k8s network for two reasons:

  1. All data should be encrypted between these networks. While I use HTTPS when possible, some traffic like DNS isn’t encrypted.
  2. My servers outside the NAT should be able to access servers running behind my NAT. I run a Prometheus instance at home that I want my primary Prometheus instance to be able to scrape. Using a VPN can help bypass the NAT and firewall on my router so it can scrape. Additionally, I wanted to be able to access pods directly from my home as needed.

I came across a number of guides for basic Wireguard VPN tunnel configurations, which were fine, but they didn’t describe how to solve some of the more advanced issues like BGP routing for MetalLB or how to encrypt traffic to the host itself.
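As a rough point of reference, the home-side half of a bare-bones point-to-point tunnel looks something like the sketch below. The keys, addresses, hostname, and subnets are placeholders rather than my actual values, and none of the advanced routing is shown here:

    # /etc/wireguard/wg0.conf on the home side (illustrative values only)
    [Interface]
    Address = 10.99.0.2/24                    # this peer's address inside the tunnel
    PrivateKey = <home-private-key>

    [Peer]
    # Cloud k8s node with a public IP
    PublicKey = <cloud-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.99.0.1/32, 10.42.0.0/16   # tunnel peer + (placeholder) cluster network
    PersistentKeepalive = 25                  # keeps the NAT mapping on my router open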

CenturyLink Gigabit service on Mikrotik RouterOS with PPPoE and IPv6

I recently helped my friends configure their CenturyLink Gigabit fiber service so they can use their own hardware instead of the provided hardware. This gave them a lot of flexibility in how the network is configured; however, instead of natively routing your packets, CenturyLink requires you to enable PPPoE and use 6RD for IPv6, so you have to jump through some hoops. I’m sure there’s some reason why their network works like that, but I figured I’d document what needs to be done and explain how it works.
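At a high level, the RouterOS configuration boils down to the pieces sketched below. The interface names, VLAN ID, credentials, and 6RD parameters are placeholders and assumptions, not CenturyLink's actual values:

    # WAN VLAN for PPPoE (assuming the ONT hands the session off on a tagged VLAN)
    /interface vlan add interface=ether1 vlan-id=201 name=vlan-wan

    # PPPoE client using the credentials from CenturyLink
    /interface pppoe-client add name=pppoe-out1 interface=vlan-wan \
        user=<pppoe-user> password=<pppoe-password> add-default-route=yes

    # 6RD rides on a 6to4-style tunnel to the provider's border relay
    /interface 6to4 add name=6rd-tunnel local-address=<public-wan-ip> \
        remote-address=<6rd-border-relay-ipv4>

    # Hand a /64 from the 6RD prefix (derived from the public IPv4 address) to the LAN
    /ipv6 address add address=<derived-6rd-prefix>::1/64 interface=bridge advertise=yes

    # Default IPv6 route out the tunnel
    /ipv6 route add dst-address=2000::/3 gateway=6rd-tunnel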

The one where Rancher ruined my birthday

Feb 2026 Update:

This post is from 2021. Since then, I’ve made many changes to my cluster, and Rancher has also made many changes to their product. Some are better, some are still challenging. I haven’t had to rebuild my cluster from scratch since then, which is a positive improvement. Now my issues are mostly due to self-hosted Kubernetes cluster + Longhorn PVC issues instead of Rancher. They deprecated RKE1 without an in-place migration plan, but I figured out how to migrate to NixOS. Every upgrade to Rancher fixes one issue, then adds another. I stopped using Rancher Fleet because it was buggy and started using cdk8s+Helm. While it had its own issues, I was able to navigate them more easily. Rancher v2.11 broke copy from the view YAML screen. v2.13 got rid of the combined Workloads screen, which pulled in deployments, jobs, etc., because of supposed performance issues. I used that feature way too much, so I’ve remained on v2.11.

Home Lab - Using the bridge CNI with Systemd

This article is part of the Home Lab series.

After running my home lab for a while, I’ve started switching to a more up-to-date Linux distribution (instead of RancherOS). I’m currently testing Ubuntu Server, which leverages Systemd. Systemd-networkd is responsible for managing the network interface configuration, and it differs in behavior from NetworkManager enough that we need to update the Home Lab Bridge CNI to handle it.

Previously, the CNI created a bridge network adapter when the first container started up, but this causes problems with systemd: resolved (the DNS resolver component) would eventually fail to make DNS queries, and networkd would duplicate IP addresses on both eth0 (the actual uplink adapter) and cni0, because we were copying the address over.
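One way to sidestep the duplicate-address problem is to let systemd-networkd own the bridge from boot and put the uplink’s address on the bridge only, rather than copying it over when the first container starts. The file and interface names below are illustrative, not necessarily what this setup ends up with:

    # /etc/systemd/network/10-cni0.netdev -- create the bridge at boot, before any container starts
    [NetDev]
    Name=cni0
    Kind=bridge

    # /etc/systemd/network/20-eth0.network -- enslave the uplink to the bridge, no address of its own
    [Match]
    Name=eth0

    [Network]
    Bridge=cni0

    # /etc/systemd/network/30-cni0.network -- the host address lives on the bridge only
    [Match]
    Name=cni0

    [Network]
    DHCP=ipv4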