In addition to my home lab K8s cluster, I have two dedicated servers in the cloud running a separate Kubernetes cluster. This cluster runs my production services, like this blog, Postfix, DNS, etc. I wanted to add a VPN between my home network and my prod k8s network for two reasons:

- All data should be encrypted between these networks. While I use HTTPS when possible, some traffic, like DNS, isn’t encrypted.
- My servers outside the NAT should be able to access servers running behind my NAT, as in the sketch below.
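Something along these lines covers both requirements. This is a minimal sketch assuming WireGuard; every key, hostname, and subnet below is a placeholder rather than my real topology. On the cloud side, which is reachable at a stable address:

```
# Cloud side: /etc/wireguard/wg0.conf (placeholder keys and subnets)
[Interface]
Address = 10.99.0.1/24
ListenPort = 51820
PrivateKey = <cloud-private-key>

[Peer]
# The home gateway is behind NAT, so no Endpoint is listed here;
# it initiates the tunnel, and the tunnel provides the route back in.
PublicKey = <home-public-key>
AllowedIPs = 10.99.0.2/32, 192.168.0.0/24
```

And on the home side, behind the NAT:

```
# Home side: /etc/wireguard/wg0.conf
[Interface]
Address = 10.99.0.2/24
PrivateKey = <home-private-key>

[Peer]
PublicKey = <cloud-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.99.0.1/32, 10.0.0.0/24
# Keep the NAT mapping open so traffic from the cloud side gets through
PersistentKeepalive = 25
```

Everything inside the tunnel is encrypted regardless of protocol, which covers the plaintext DNS traffic, and PersistentKeepalive is what lets the servers outside the NAT reach back behind it.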
I recently helped my friends configure their CenturyLink Gigabit fiber service so they can use their own hardware instead of the provided hardware. This gives them a lot of flexibility in how the network is configured; however, CenturyLink requires you to enable PPPoE and to use 6RD for IPv6 instead of supporting native IPv6, so you have to jump through some hoops. I’m sure there’s some reason why their network works like that, but I figured I’d document what needs to be done and explain how it works.
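The 6RD half is the less familiar part. On Linux it boils down to a sit tunnel whose IPv6 prefix is derived from your IPv4 address. This is only a sketch: the 6rd prefix, border relay, and WAN address are illustrative stand-ins for the parameters your ISP actually publishes, and it assumes the PPPoE session is already up.

```
WAN_IP=203.0.113.10   # IPv4 address learned over the PPPoE session
ip tunnel add tun6rd mode sit local "$WAN_IP" ttl 64
ip tunnel 6rd dev tun6rd 6rd-prefix 2602::/24
# 6RD embeds your IPv4 address in the IPv6 prefix: the 32 bits of
# 203.0.113.10 (cb00:710a) appended to the /24 ISP prefix give the
# delegated prefix 2602:cb:71:a00::/56.
ip addr add 2602:cb:71:a00::1/56 dev tun6rd
ip link set tun6rd up
# All IPv6 traffic is carried over IPv4 to the ISP’s border relay
ip route add ::/0 via ::192.0.2.1 dev tun6rd
```

The prefix arithmetic is the whole trick: the ISP can hand every customer a /56 without keeping any per-customer state, because your IPv4 address is the state.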
Other titles: “You were supposed to bring balance to Kubernetes, Rancher, not destroy it” and “Et tu, Rancher?”

I’ve been maintaining my own dedicated servers for around 7 years now as a way to learn and improve my skills, and to have a place to run my various web sites, mail servers, and even this blog. Over the years the hardware has changed, and I’ve moved from hosting Rails applications directly on the OS, to Docker, and finally to Kubernetes.
This article is part of the Home Lab series. After running my home lab for a while, I’ve started switching to a more up-to-date Linux distribution (instead of RancherOS). I’m currently testing Ubuntu Server, which leverages systemd. systemd-networkd is responsible for managing the network interface configuration, and it differs in behavior from NetworkManager enough that we need to update the Home Lab Bridge CNI to handle it.

Previously the CNI created the bridge network adapter when the first container started up, but this causes problems under systemd: resolved (the DNS resolver component) would eventually fail to make DNS queries, and networkd was duplicating IP addresses on both eth0 (the actual uplink adapter) and cni0, because we were copying the address over.
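One piece of the fix, sketched here as an assumption about the general approach rather than the exact change the series makes, is telling networkd that the bridge exists but that its addressing is someone else’s job:

```
# /etc/systemd/network/10-cni0.network — hypothetical drop-in that
# stops systemd-networkd from configuring cni0; the CNI plugin stays
# the sole owner of the bridge’s addresses.
[Match]
Name=cni0

[Link]
Unmanaged=yes
```

The other half is not copying the uplink’s address onto the bridge in the first place, so eth0 and cni0 never claim the same IP.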
Long ago, I installed Longhorn onto my Kubernetes cluster using Helm 2. Eventually Helm 3 was released and helm 2to3 was made available. However, I was not able to use helm 2to3 because Rancher didn’t deploy Tiller in the way that this CLI expected, and Rancher did not provide an upgrade mechanism to handle this either. Then Rancher 2.6 was released, which dropped Helm 2 support entirely, and I was stuck with a cluster where Longhorn was deployed but not managed by a working Helm installation.
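For anyone stuck in the same spot, Helm 3 (since 3.2) can adopt resources it didn’t create, provided they carry ownership metadata. The release name, namespace, and the single deployment below are illustrative; a real chart renders many more resources, and each one needs the same treatment.

```
# Mark an existing resource as owned by a Helm release
kubectl -n longhorn-system label deployment longhorn-manager \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n longhorn-system annotate deployment longhorn-manager \
  meta.helm.sh/release-name=longhorn \
  meta.helm.sh/release-namespace=longhorn-system --overwrite

# Once every rendered resource carries the metadata, a fresh install
# adopts the existing objects instead of refusing to overwrite them
helm repo add longhorn https://charts.longhorn.io
helm install longhorn longhorn/longhorn --namespace longhorn-system
```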