Migrating from Google Location History to OwnTracks

I’ve been slowly reducing the amount of data I share with Google. I’ve used Google Location History since 2013 and found it genuinely useful; I could look up which restaurant I went to on a trip, among any number of other things.

I found OwnTracks, an open-source solution for storing my own location history. It’s not nearly as polished as Google Maps, which natively integrates your location history, but step one is owning my data; step two can be better UIs.
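To give a rough sense of what the migration involves, here’s a minimal Python sketch that converts records from a Google Takeout Records.json export into OwnTracks-style location payloads. It assumes the newer Takeout format with latitudeE7/longitudeE7 coordinates and ISO 8601 timestamps (older exports used timestampMs), and the output file name and tracker ID are arbitrary; how you actually load the result into the OwnTracks Recorder is out of scope here.

```python
import json
from datetime import datetime

def takeout_to_owntracks(record, tid="gl"):
    """Convert one Google Takeout location record to an OwnTracks-style payload.

    Assumes the Records.json format where coordinates are stored as
    latitudeE7/longitudeE7 (degrees * 1e7) and the timestamp is ISO 8601.
    """
    ts = datetime.fromisoformat(record["timestamp"].replace("Z", "+00:00"))
    return {
        "_type": "location",
        "lat": record["latitudeE7"] / 1e7,
        "lon": record["longitudeE7"] / 1e7,
        "acc": record.get("accuracy"),   # metres, when present
        "tst": int(ts.timestamp()),      # OwnTracks uses a Unix timestamp
        "tid": tid,                      # two-character tracker ID
    }

with open("Records.json") as f:
    takeout = json.load(f)

with open("owntracks.json", "w") as out:
    for rec in takeout["locations"]:
        out.write(json.dumps(takeout_to_owntracks(rec)) + "\n")
```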

Content-Security-Policy for Home Assistant

Content-Security-Policy is a security feature (MDN Web Docs) in modern web browsers that restricts the kinds of content a page is allowed to load, which helps protect against certain types of attacks, such as Cross-Site Scripting (XSS). Since my Home Assistant instance has significant access to my home network and is reasonably well-known, I wanted to take some steps to prevent malicious actors from using XSS or other injection attacks to take over my network. In addition, there have been a few CVEs (HA Security Disclosures) in Home Assistant that allowed for XSS.
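To show what a policy looks like, here’s a small Python sketch that builds a Content-Security-Policy header value. The directives and sources below are illustrative assumptions, not the exact policy the Home Assistant frontend requires; in practice the policy has to be tuned against the browser’s CSP violation reports, since the frontend and some integrations need specific sources.

```python
# Illustrative only: the directive values are assumptions, not the exact
# policy Home Assistant needs.
CSP_DIRECTIVES = {
    "default-src": ["'self'"],
    "script-src": ["'self'"],
    "style-src": ["'self'", "'unsafe-inline'"],
    "img-src": ["'self'", "data:"],
    "connect-src": ["'self'", "wss://ha.example.com"],  # hypothetical hostname
    "frame-ancestors": ["'none'"],
}

def build_csp(directives):
    """Join directives into a single Content-Security-Policy header value."""
    return "; ".join(f"{name} {' '.join(values)}" for name, values in directives.items())

# The resulting header would be added by whatever reverse proxy sits in
# front of Home Assistant.
print("Content-Security-Policy:", build_csp(CSP_DIRECTIVES))
```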

Auto disable Kubernetes' service LB NodePorts

In a previous post, I noticed that all my Kubernetes Services with type=LoadBalancer were also allocating NodePorts, which meant I might be exposing internal services to the Internet on high ports. I was running Kubernetes directly on my dedicated servers rather than behind a load balancer; Kubernetes expects everyone to sit behind an LB, which often requires a NodePort.

The solution was to set the Service’s spec.allocateLoadBalancerNodePorts field to false when the Service is created. This works if I can set it at creation time; however, Helm-based templates often don’t let me set it, and once it defaulted to true and the NodePort was allocated, it was difficult to deallocate.

In this post, I walk through using a Kubernetes mutating webhook to automatically set the value for all Services.
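As an illustration of the core of such a webhook (a full setup also needs TLS, a Deployment, and a MutatingWebhookConfiguration, which I won’t sketch here), here’s the admission-review handling logic in Python. The function name and structure are my own; the essential part is returning a base64-encoded JSON Patch that sets spec.allocateLoadBalancerNodePorts to false for LoadBalancer Services.

```python
import base64
import json

def mutate(admission_review):
    """Build an AdmissionReview response that disables LoadBalancer NodePorts.

    Takes the AdmissionReview dict that the Kubernetes API server POSTs to
    the webhook and returns the response dict to send back.
    """
    request = admission_review["request"]
    service = request["object"]

    response = {"uid": request["uid"], "allowed": True}

    # Only patch LoadBalancer Services that don't already set the field.
    spec = service.get("spec", {})
    if spec.get("type") == "LoadBalancer" and "allocateLoadBalancerNodePorts" not in spec:
        patch = [{
            "op": "add",
            "path": "/spec/allocateLoadBalancerNodePorts",
            "value": False,
        }]
        response["patchType"] = "JSONPatch"
        response["patch"] = base64.b64encode(json.dumps(patch).encode()).decode()

    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```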

Improving bad on-call with the Snowball Effect

I’ve worked on several different teams over the past 8 years at Amazon. Each of them had an on-call rotation in which engineers took turns keeping the system running 24/7 for a week at a time. If something broke at 2am, they’d get paged to fix it.

Now, Amazon’s a big company, and on-call varied quite a bit. Some teams had a heavy ops load, others had barely any. I had my fair share of weeks with lots of tickets, but I usually sought out teams where on-call was more manageable. Engineers on the heavier rotations, however, frequently struggled to get anywhere, playing a bit of hot potato with the next on-call. Sadly, Amazon largely did not use SREs or dedicated support groups except for its most critical systems; I wish it had.