
202510121159 GitOps and IaC-ing the Homelab

While recently attempting to fix an annoying longhorn-uninstall Job that kept running in my k3s cluster, I inadvertently deleted the cluster’s PersistentVolumeClaims. It was frustrating, of course, and although I had backups, I didn’t feel compelled to attempt a restore. Since most of my services were already using external databases from a prior migration, I figured it wouldn’t be so hard to rebuild the cluster, right?

Indeed, I was right. After the earlier phase of modularising the cluster with 202501180024 Achieving Portability with Ansible and fundamentally restructuring my homelab through IaC (202501202243 Homelab Portability with IaC), the rebuild took less than 3 days’ worth of work (excluding procrastination).

However, it did reveal a few cracks in my processes, namely the lack of GitOps as the final piece of this homelabbing methodology. Prior to this, I applied manifests imperatively, by hand, through Tanka’s tk apply. Initially the goal was just to get the services up and running in the cluster. After a while, I noticed a few annoyances:

  1. Tanka’s abstraction is an overhead
    1. The abstraction introduces templating through functional programming as a way to generate manifests from code. This is great if you have a team (like where I work) deploying many microservices with the same configuration patterns.
  2. The abstraction forces me to write templating logic, rather than spend time on Kubernetes abstractions
    1. Earlier this year, most of my time was spent writing functions that output a Kubernetes object such as a Service or an EndpointSlice.
  3. The process is entirely imperative
    1. I needed to know (mentally) the correct sequence of steps to deploy my services, such as deploying infrastructure services (e.g. Tailscale) before my ingress controller Traefik; a declarative fix for this ordering is sketched at the end of this note. At some point, it felt ridiculous. I thought the role of Kubernetes was to declaratively define my infrastructure? Apparently only some of it.

Better late than never: Flux

With Flux, changes are made declaratively and delivered through a pull-based operation: the cluster reconciles itself against a Git repository instead of waiting for me to push manifests at it.
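A minimal sketch of that loop, assuming a hypothetical repository URL and path layout rather than my actual setup: a GitRepository tells Flux what to watch, and a Kustomization reconciles the manifests found there.

```yaml
# Hypothetical Flux source: the cluster polls the repository itself,
# rather than me pushing manifests at it.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/homelab
  ref:
    branch: main
---
# Reconcile everything under this path; prune deletes cluster objects
# whose manifests are removed from Git.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./clusters/homelab/apps
  prune: true
```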

I won’t share many details about the migration itself. I’ve moved over to Flux and use Kustomization overlays to bootstrap manifests, either as raw YAML or as Helm resources.
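To illustrate the raw-YAML-or-Helm split, here is a hedged sketch; the directory layout, chart version, and the HelmRepository it references are placeholders, not my actual repository:

```yaml
# clusters/homelab/apps/kustomization.yaml (hypothetical layout)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../bases/monitoring     # plain YAML manifests
  - traefik-helmrelease.yaml   # a chart, handled by Flux's helm-controller
---
# clusters/homelab/apps/traefik-helmrelease.yaml
# Assumes a HelmRepository named "traefik" already exists in flux-system.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: traefik
  namespace: traefik
spec:
  interval: 1h
  chart:
    spec:
      chart: traefik
      version: "30.x"          # placeholder version
      sourceRef:
        kind: HelmRepository
        name: traefik
        namespace: flux-system
```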

Using Flux has been quite a change. I no longer need to manage changes by hand or introduce new kinds of scripts to apply them to my cluster. The cluster is thus able to restore itself based on the configuration defined in code.
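It also dissolves the ordering problem from point 3 above: rather than remembering to deploy Tailscale before Traefik, a Flux Kustomization can declare the dependency explicitly. A sketch with placeholder names:

```yaml
# Hypothetical: Traefik's Kustomization is only reconciled after the
# "tailscale" Kustomization has been applied and reports ready.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: traefik
  namespace: flux-system
spec:
  dependsOn:
    - name: tailscale
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./clusters/homelab/traefik
  prune: true
```

The sequencing I used to carry around in my head now lives in the repository, and Flux enforces it on every reconciliation.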