I would agree here - k3s is much better suited for RPi than kubeadm, this looks like a rehash of my original tutorial from several years ago -> https://github.com/teamserverless/k8s-on-raspbian/
Today, I point folks here to get everything up and running with minimal overheads -> https://alexellisuk.medium.com/walk-through-install-kubernet...
The physical cabling, underlying operating system, bootstrapping, etc. strike me as the least important part of the K8s learning experience, and the only advantage a toy cluster on Pis has over LXD. Not to mention that most K8s deployments these days run on a cloud provider of choice, where most of this is handled for you... In that case, LXD on a local machine with software network bridging probably more closely approximates what most people will go to production with anyway.
I'm a little sour on the whole "LXD only officially distributed via snap" thing too, but at least on Ubuntu 18.04 and later, getting a toy X-node Kubernetes cluster up and running is trivial and costs nothing more than RAM and disk space. LXD doesn't even require a hypervisor, so the hardware requirements are (essentially) any x86_64 machine. It's also fun to spin up/destroy any number of instances at will just using the command line.
MicroK8s even provides an LXD how-to that should work for your flavor of choice (with a little adaptation, of course):
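As a rough sketch of what that how-to boils down to (container and profile names are my own; Kubernetes inside a container needs extra privileges, and your distro may need further tweaks):

```shell
# Sketch: run MicroK8s inside an LXD container (names are arbitrary).
# Kubernetes needs nesting/privileges that plain containers lack.
lxc profile create microk8s
lxc profile set microk8s security.nesting true
lxc profile set microk8s security.privileged true

# Launch an Ubuntu container with that profile and install MicroK8s in it.
lxc launch ubuntu:20.04 k8s-node-1 --profile default --profile microk8s
lxc exec k8s-node-1 -- snap install microk8s --classic
lxc exec k8s-node-1 -- microk8s status --wait-ready
```

Repeat the launch step for as many "nodes" as your RAM allows.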
Of course if you're actually doing some kind of edge deployment or whatever on actual Raspberry Pi/armv7/arm64 hardware that's the obvious way to go (or you can just run LXD on your Pi) :).
So you can get a LoadBalancer etc.
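For example, on bare metal one common way to get a working LoadBalancer is MetalLB, which MicroK8s ships as an addon (the address range and deployment name below are assumptions for your own LAN/app):

```shell
# Enable the MetalLB addon with a pool of spare LAN addresses (example range).
microk8s enable metallb:192.168.1.240-192.168.1.250

# Any Service of type LoadBalancer now gets an external IP from that pool
# ("my-app" is a placeholder deployment).
microk8s kubectl expose deployment my-app --type=LoadBalancer --port=80
```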
In my experience memory is a much bigger bottleneck than CPU, net IO or disk iops.
At the moment my favorite home lab cluster is a bunch of old laptops. They're quad-cores too, but I've dropped 16 GB of memory into each and it's doing a lot better. Right now the cluster is using 21 GB of memory.
I'm able to run everything I've wanted as well as the occasional experiment to try something out.
If I could leave you with something - get more memory than you think you will need. That's been my experience anyway.
Sold all those, recovered most of the money, and bought a decommissioned laptop from my employer at the time: 3rd-gen quad-core i7, 16 GB RAM (later upgraded to 32), 480 GB SSD + 750 GB HDD, for waaaay less (just the residual price).
Installed Proxmox, created small VMs, and played with k8s and a lot more stuff in a cheaper and more performant way (full gigabit Ethernet where the RPi has USB-shared Ethernet, SSD-grade I/O performance, Core i7 compute performance).
If you want to experiment, the Raspberry Pi is the dumbest thing that you can buy.
Go for old hardware and virtualization, you'll also learn more (containers are here to stay, but VMs aren't going away either).
Power usage was also surely higher than a single RPi, but negligible anyway: laptop Intel processors use SpeedStep to throttle power draw down, and can go as low as something like 15-20 watts.
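For rough numbers (my assumptions: 20 W continuous draw and $0.15/kWh, both placeholders for your own rates), that works out to a couple of dollars a month:

```shell
#!/bin/sh
# Back-of-the-envelope monthly energy cost for a ~20 W laptop node.
WATTS=20
HOURS=720   # 24 h * 30 days
# kWh = W * h / 1000; cost at an assumed $0.15/kWh
KWH=$(awk -v w="$WATTS" -v h="$HOURS" 'BEGIN { printf "%.1f", w * h / 1000 }')
COST=$(awk -v k="$KWH" 'BEGIN { printf "%.2f", k * 0.15 }')
echo "$KWH kWh/month, about \$$COST at \$0.15/kWh"
# -> 14.4 kWh/month, about $2.16 at $0.15/kWh
```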
Pulled the trigger on that three times now.
My first cluster was Raspis and ODroids. The two issues I had - not everything runs on Arm (yet) and memory constraints were tough to work with.
My second cluster - 4 Atomic Pis. The (Intel Atom) quad cores were fine but the 2GB of memory was a real problem when I started playing with more interesting workloads.
The current "production" home cluster is 4 old laptops. They're earlier-gen i7s and I've dropped 16GB of memory and 1TB spinning disks in each one. I've never been happier with a cluster. I've been free to spin up whatever I want and it just works. Using RKE to set up Kubernetes takes about 5 minutes to build a cluster, and I'm using Longhorn for replicated container-native storage. I've run experiments for work and for my own learning. Whether it's a Redis / Cassandra cluster or a Kubernetes Operator I wrote, it can always handle it.
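For reference, an RKE bring-up really is just a small YAML file plus one command (the addresses, user, and key path below are placeholders for your own machines, and Docker is assumed to already be installed on each node):

```shell
# Minimal cluster.yml sketch: one all-in-one node plus one worker.
cat > cluster.yml <<'EOF'
nodes:
  - address: 192.168.1.101
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 192.168.1.102
    user: ubuntu
    role: [worker]
ssh_key_path: ~/.ssh/id_rsa
EOF

# Provision the cluster; RKE writes a kubeconfig next to cluster.yml.
rke up --config cluster.yml
```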
Also using RKE, it looks like the best cluster 'distro' around.
I used Longhorn but found it a little slow; putting it on SSD was a bit of a waste. I tried Ceph, which was much faster, but I just don't understand how it works well enough to fix anything if the cluster goes wrong.
I just had to switch to an SFF desktop PC with Ubuntu on it and never looked back. For toying around, rPis can be fun, but in real-life applications they were getting extremely hot and I had to add extra fans, etc.
I still use my rPi4s, one of them connected to my 3D printer with Octoprint installed on it, but that's it.
Now that I've switched to x86, I'm even sharing my Plex library with a few more people (looking at the Tautulli stats, it's been transcoding a lot) without any problems.
If it becomes an issue, I’d add a GPU but for now a 6 core AMD is doing a very good job.
Raspberry Pi isn't as laughable as that, but the fact is that what you are getting out of this is a learning opportunity, not a reliable or high-capacity computing platform.
And even there, 4 Pi4Bs with 8GB will cost you $400 and more, once you add in power supplies and SD cards and desiderata. The first hit on Craigslist just now was a $200 4 core i7 with 8GB of RAM (upgradable) and a 256GB SSD in a nice small case. Two of those will get you better overall performance and leave room for expansion.
I run microk8s on KVM running in Mesos with openvswitch and Mikrotik CRS/CCRs.
In fact, you'll not only save yourself time, you'll also save us time too (as then we won't have to read your useless comments).
What hardware are you using?
Concerned about the ARM aspect, though. With Docker at least, the ecosystem felt a lot smaller on ARM.
Gosh, that resonates... :)
Also very excited to read part 2!
Thanks for these articles. I once again realized I should really not do this (for now)