Proxy ARP is broken on Unifi U7 Lite

For several years, I had 2x Unifi U6 Lite access points and they worked great. I had a special Wi-Fi network for my phones and laptops with a number of settings enabled, but when I upgraded to the U7 Lite I immediately started having issues where my phone would disconnect. I got frustrated enough to break out my handy toolbox to figure out what was going wrong.

Auto enable user namespaces in Kubernetes

When you run a container, the process IDs are namespaced and different inside the container than on the host, the network stack is namespaced, and the filesystem mounts are namespaced, but a process running as root in the container is also running as root outside the container. This is a risk because many privilege escalation vulnerabilities in Linux can be exploited through this shared user ID.

Linux user namespaces aim to mitigate the risks of running a process as root, or as any user ID shared with the host, where a vulnerability could allow a containerized process to escape its namespace and hold those privileges on the host. For example, without user namespaces, a process running as root in a container is also root on the host, and the same applies to any process whose UID exists on the host.

User namespaces attempt to fix this by mapping UID=0 inside the container to, say, UID=12356231 on the host. Thus, a breakout is not as bad as it could be without user namespaces.
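
Kubernetes exposes this through the hostUsers field on the pod spec (depending on your Kubernetes version it may still sit behind the UserNamespacesSupport feature gate, and it needs a container runtime that supports user namespaces). A minimal sketch, with made-up names:

apiVersion: v1
kind: Pod
metadata:
  name: userns-demo        # hypothetical name for illustration
spec:
  hostUsers: false         # run this pod in a user namespace; root inside maps to an unprivileged host UID
  containers:
  - name: app
    image: nginx           # placeholder image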

In this post, I’m going to walk through how I use Kyverno, a Kubernetes-native policy engine, to automatically enable user namespaces on pods wherever possible.
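
As a rough sketch of where this is going, a Kyverno mutate rule can add hostUsers: false to pods that don’t already set it (the policy name here is made up and real-world exclusions are omitted):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enable-user-namespaces     # hypothetical policy name
spec:
  rules:
  - name: default-host-users-false
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          # the +() anchor only adds the field if the pod doesn't set it already
          +(hostUsers): false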

Guaranteed Quality of Service in my Home Lab

A few times in my Kubernetes clusters, I’ve encountered situations where some process consumes all of the CPU or RAM and starves other, critical services of resources. For example, in one situation Longhorn consumed all of the CPU and RAM on a machine and the pi-hole running on that machine stopped being able to answer DNS requests. Other issues have included shutting down one of my worker nodes and finding that the remaining nodes didn’t have enough capacity to take on its pods, so important pods never got scheduled, or a mistake where I changed the pod selector labels and Kubernetes just spawned thousands of pods.

The graph below shows the Disk I/O of a node with excessive disk writes because the OS is swapping RAM out to disk and back.

[Graph: Disk I/O climbing as the host swaps RAM to disk, until the node finally fails]

My home lab servers are now running what I consider to be “business critical” services and I don’t want those to be impacted. Kubernetes has several different knobs we can turn here, such as leveraging Linux’s cgroups to ensure that specific pods get a certain amount of CPU and RAM. It also supports prioritization, so that important pods get scheduled and less important pods get evicted if there isn’t enough space.
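
For example, a pod whose CPU and memory requests equal its limits lands in the Guaranteed QoS class, which makes it the last to be evicted under resource pressure. A minimal sketch, with made-up names and sizes:

apiVersion: v1
kind: Pod
metadata:
  name: pihole              # hypothetical name for illustration
spec:
  containers:
  - name: pihole
    image: pihole/pihole    # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:
        cpu: 500m           # requests == limits for every container => Guaranteed QoS
        memory: 256Mi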

Lately, I’ve even been hitting the default limit of 110 pods per node on my single-node cluster. Not everything is important, and I want to make sure certain cron jobs always run even if I’m also running some low-priority jobs. It turns out it is entirely possible to be running 110 different pods.
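
Scheduling priority is expressed with a PriorityClass; the scheduler places higher-priority pods first and can preempt lower-priority ones when a node is full. A minimal sketch, with a made-up class name and value:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: home-critical       # hypothetical class name
value: 1000000              # higher values win when the scheduler has to choose
globalDefault: false
description: "Pods that must keep running, like DNS"
---
apiVersion: v1
kind: Pod
metadata:
  name: nightly-backup      # hypothetical pod for illustration
spec:
  priorityClassName: home-critical
  containers:
  - name: backup
    image: alpine           # placeholder image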

Better Vault for Postgres access in my Home Lab

In my previous post on Vault, I showed how Hashicorp’s Vault can be used to protect important static passwords that don’t change frequently. Vault can do much more than this: it can even automatically create temporary accounts and rotate passwords for database users.

Today, I’m using long-lived passwords that I generate once when I add a new service. Like most people, I just insert those passwords into the environment like this:

spec:
  containers:
  - env:
    - name: DATABASE_URL
      value: >-
        postgresql://username:mypassword@postgres:5432/database

That’s not secure at all. You can store passwords in Kubernetes Secrets, but Secrets aren’t encrypted at rest by default, and even with encryption enabled they’re readable by anybody with access to the cluster and are never rotated. This simply won’t do. In this post, I’m going to walk through how I switch to Vault for dynamic, short-lived Postgres credentials.
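
One way to get there is the Vault Agent Injector, which renders short-lived credentials from Vault’s database secrets engine into the pod at startup. A minimal sketch, assuming a Kubernetes auth role and a database role both named myapp (every name and path here is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/role: "myapp"                                         # Vault Kubernetes auth role (assumed name)
    vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/myapp"  # dynamic Postgres credentials (assumed mount and role)
spec:
  serviceAccountName: myapp
  containers:
  - name: app
    image: myapp:latest     # placeholder; the app reads its credentials from /vault/secrets/db-creds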

Git pushes can be surprising

I was recently working on an open source project (tryfi/hass-tryfi, a Home Assistant integration for pulling data from my dog’s collar using the TryFi API) and I found out that Git pushes can behave in a surprising way after I accidentally pushed a bunch of testing commits to the wrong branch.