I did something similar a few years back and wrote a blog series about it[0].
Ultimately, I ripped it apart and stuck to using my x86 servers and now run Talos Linux[1] which is currently my favourite way to do Kubernetes on bare metal.
With Pi4, the cluster services just used too much of the available compute and, though it was a fun project, it wasn't practical for my home lab.
Now the Pi5 is available (and I have some), I might look at adding them to my existing clusters for some mixed-architecture fun.
I wanted to have a look at that for storage when I was using Pis, as it theoretically should be lighter-weight than Ceph, but who knows. I didn't get around to it though.
The k8s hater squad loves to neglect the fact that k8s can be, just like Linux, fun. The complexity is endearing in a special way. Kubernetes is Linux underneath, and learning how that manifests is a lovely journey to go on.
k3s is a perfect choice for a home lab. The setup is extremely simple and it removes a lot of the bloatware that comes with k8s (for instance, the storage plugins for various cloud providers).
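For anyone curious how simple the setup is, this is roughly the documented install flow (the script URL and token path come from the k3s docs; the server address and token below are placeholders you'd replace with your own):

```shell
# Single-node server: one command, one binary.
curl -sfL https://get.k3s.io | sh -

# The join token for additional nodes lives on the server here:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each agent node, point at the server and pass that token
# (replace <server-ip> and <token> with your values):
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```

That's the whole multi-node bootstrap; no separate CRI install or kubeadm dance.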
What is the trait in certain people (most here I would guess) that makes these kinds of projects... fun. No, not fun, that's not the right word. Necessary? Appealing? Rewarding?
I went through my own journey to get an understanding of k8s in a homelab setting. It started with MicroK8s on a VM running on a 2011 MacBook Pro I had lying around. That was painful, but I eventually had something useful running, until it weirdly self-combusted one day and weeks of hard work resulted in ssh failing to connect because the k8s instance seemed to have deleted itself from the VM. I think it was trying to tell me something.
I decided to persist, but this time with k3s, and no VM on an outdated Mac. Instead I went with an outdated HP thin client, a bargain on eBay, with extra RAM. That worked much better. Now I have a bunch of great self-hosted software for home media, development servers, git repos, Docker servers, CI/CD pipelines. With storage managed by a NAS. All managed via simple Helm charts. It's really useful. I discovered Tailscale along the way, and that opened up a whole new world of self-hosting abilities.
I get the nerd sniping, I've seen k8s abused in work situations too, but underneath it, people just wanted to learn.
I had a similar situation recently trying to bake shokupan for the first time. There was a new bread maker. It made perfect rising normal bread. But trying the shokupan recipe produced a damp brick. A total failure. I persisted, read a ton of recipes and blogs, watched countless YouTube videos. I was convinced it was an equipment or method issue. Eventually, 3 damp bricks later, I realised the yeast was out of date. This was the first thing people recommended to check, but I thought I knew better because "it worked on my machine" with the normal bread. Anyway, the 4th attempt with new yeast produced a perfect loaf. And I learnt a ton of other useful info for baking great bread along the way. The elation when it finally worked, and I had amazing fluffy homemade bread, was a very similar sense of reward and accomplishment to finally getting some program or system to work.
Is there much to be gained by using physical hardware like this as opposed to a bunch of VMs? I get that plugging in cables and flashing lights is fun and stuff, but let's say you already have a homelab and are over that. Are there lessons about k8s you can only learn on "real" hardware and not VMs?
I thought about this a while back when there was a global shortage of Raspberry Pis. I'd see people with like 8 of them in a toy cluster and thought it was a shame because some people might have a real use for one but couldn't get one.
You can use things like docker-machine or Vagrant to easily spin up a bunch of VMs for things like this. Also you can use Rancher to automatically provision a cluster for you (but I guess you won't learn as much that way).
Lower power draw, if you have stuff sitting mostly idle in the cluster but still need the RAM to keep it resident. Mini PCs also work.
These are also portable and quiet so you can stick it in your bedroom and leave it running 24/7 without feeling like you're in a data center (so it's more accessible to people in shared housing)
But what size of a NUC do you need to have the power of 3 RPis, and virtualize on that for learning? I'd expect just about any decent x86 box to be able to pull that off, and they can be silent / fanless / suspend too.
Actually, you might have a better chance of finding a fanless NUC than running latest RPi fanless.
NUCs are a good option but even if they're used they're usually more expensive if you want multi node.
I was thinking more of these use cases where it's "quantity" over "quality": you want multiple nodes to test/learn a topology, but they end up sitting idle (well, idle CPU and using some RAM).
> I run the Orange Pi Boards without a Micro SD Card, they boot up via PXE with an NFS root file system, so I cannot use overlayfs2. Therefore during installation I provide an extra parameter to use the native snapshotter. Also some etcd timeouts are raised.
It's funny that so many people feel the need to run a container orchestrator like Kubernetes on bare metal. I did it myself with a Raspi cluster, but in retrospect you are wasting a lot of time with little return, while there are great solutions like microk8s or kind available that can be set up in minutes. Anyway, have fun, if you think it's worth it :)
I actually think running k8s on beefy dedicated servers is the way to go, rather than using VMs in a shared box, considering the overhead of k8s (at least 1 GB of memory, last I checked).
After a certain scale, of course; no need for k8s when an app can run on a single VM and barely gets 1 rps.
I think bare metal makes more sense in big setups, where you then don't have the overhead of a hypervisor. For home setups, I think it's easier to get going with VMs, since the acquisition cost and setup time are lower once you have a single, larger piece of hardware to run it all on.
Does anyone know whether k8s is making inroads in "edge"/low-resource compute environments? It seems that despite the work on projects like k3s, people still think of k8s as too "heavyweight" for the "edge".
Nope, architecturally it's fundamentally unsuited. Way too many moving parts, high resource consumption, specific networking requirements unsuited for many Edge scenarios where you often have flakey or failing networking, surprisingly low limits in terms of total numbers of nodes/pods, etc etc.
Many have tried forcing it (when all you know is Kube, everything looks like a Kube problem) though.
Ahhh I’m familiar with the Chick-fil-A use case! But they’re running NUCs; I almost think of that as a different kind of larger edge: distributed/federated but not really resource-constrained.
Both single-node (yeah, sue me) and multi-node have been working great, required minimal maintenance, and have a super straightforward install procedure.
I wonder why k3s doesn't require all the tedious configuration, such as disabling swap, enabling IP forwarding, installing a CRI, etc. I was blown away by how well it worked right out of the box.
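The short answer is that k3s ships as a single binary that bundles containerd and sets sane kubelet defaults (it reportedly runs with swap tolerated rather than refusing to start). For comparison, this is a sketch of the typical host prep a vanilla kubeadm install expects, which k3s does for you:

```shell
# Typical manual prep for vanilla kubeadm that k3s makes unnecessary
# (commands are standard Linux tooling; exact requirements vary by version):
sudo swapoff -a                                      # stock kubelet refuses to run with swap on
sudo modprobe br_netfilter                           # make bridged traffic visible to iptables
sudo sysctl -w net.ipv4.ip_forward=1                 # required for pod-to-pod routing
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
# ...plus installing and configuring a CRI such as containerd or CRI-O.
```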
[0] https://2byt.es/post/bantamcloud/01-build/ [1] https://www.talos.dev/