Raspberry Pi Homelab with Kubernetes (amithm.ca)
116 points by amitpm 11 months ago | 50 comments



I've found k3s (https://k3s.io/) to be extremely easy to use for setting up and running kubernetes on a group of Raspberry Pis. Specifically, I followed this guide to get my clusters up and running quickly and it's worked out pretty well: https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/
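
The core of that guide boils down to k3s's install script. Roughly (a sketch; the token path and flags may shift between versions, so check the k3s docs):

    # On the server (control plane) Pi:
    curl -sfL https://get.k3s.io | sh -

    # Read the join token off the server:
    sudo cat /var/lib/rancher/k3s/server/node-token

    # On each worker Pi, pointing at the server:
    curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -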


Thank you.

I would agree here - k3s is much better suited for RPi than kubeadm. This looks like a rehash of my original tutorial from several years ago -> https://github.com/teamserverless/k8s-on-raspbian/

Today, I point folks here to get everything up and running with minimal overheads -> https://alexellisuk.medium.com/walk-through-install-kubernet...


It's easy, but it uses 50% CPU at idle, and it would probably use more if it weren't blocked on SD card writes (it uses 100% of the SD card's bandwidth). I tested this multiple times on an RPi 3B this last week with different individual Pis and different SD cards. There are also many bug reports to the same general sentiment.


I understand the allure of real hardware and doing this on a physical cluster but I'm always surprised not to see K8s/k3s/whatever running under (inside?) LXD as a learning/experimentation tool discussed more often.

The physical cabling, underlying operating system, bootstrapping, etc. strike me as the least of the K8s learning experience and the only advantage of a toy cluster running on Pis vs LXD. Not to mention most K8s deployments these days will be in some cloud provider of choice where most of this is handled... In that case, LXD on a local machine with software network bridging, etc. probably more closely approximates what most people will go to production with anyway.

I'm a little sour on the whole "LXD only officially distributed via snap" thing too, but at least on Ubuntu 18.04 and forward, getting a toy X-node Kubernetes cluster up and running is trivial and costs nothing more than RAM and disk space. As is commonly known, LXD doesn't even require a hypervisor, so the hardware requirement is (essentially) anything x86_64. It's also fun to spin up/destroy any number of instances at will just using the command line.

MicroK8s even provides an LXD how-to that should work for your flavor of choice (with a little adaptation, of course):

https://microk8s.io/docs/lxd
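
The gist of that doc, as a rough sketch (it relies on a custom LXD profile, named "microk8s" here, that grants the container the kernel/cgroup access MicroK8s needs; see the doc for the profile contents):

    # After creating the "microk8s" LXD profile from the doc above:
    lxc launch -p default -p microk8s ubuntu:20.04 microk8s-1
    lxc exec microk8s-1 -- snap install microk8s --classic
    lxc exec microk8s-1 -- microk8s status --wait-ready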

Of course if you're actually doing some kind of edge deployment or whatever on actual Raspberry Pi/armv7/arm64 hardware that's the obvious way to go (or you can just run LXD on your Pi) :).


That's how I learned and experimented with k8s. I tried VMs first, but they were too heavy, so I used LXD. The idea of building hardware just to run k8s, even Pis, is too much work for me. Some folks say you can power it off; I say you can lxc stop. I used https://github.com/sebiwi/kubernetes-coreos, though it's dated now.


Rancher has a sibling open source project called k3d that allows you to run single- or multi-node K3S clusters entirely within Docker containers.

https://k3d.io/
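
A throwaway multi-node cluster is a one-liner (a sketch, assuming a recent k3d; older releases used "k3d create"):

    # One server and two agents, each a Docker container
    k3d cluster create demo --servers 1 --agents 2
    kubectl get nodes

    # Tear it all down when finished
    k3d cluster delete demo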


As has become my habit lately, I'll chime in and say if you're interested in accessing your self-hosted services from the internet, IMO tunneling is the way to go: https://github.com/anderspitman/awesome-tunneling


Most of my services are only available by using a vpn into my home network. I can understand why you might need a public facing service but I avoid it like the plague.


I have an EC2 instance running caddy that proxies through to my RPI cluster via VPN. It was pretty easy to set up. And Caddy handles HTTPS and HTTP->HTTPS redirection out of the box.
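
The Caddyfile for that kind of setup is tiny. A sketch with placeholder names (home.example.com and the VPN address are hypothetical):

    # /etc/caddy/Caddyfile on the EC2 instance
    home.example.com {
        # Caddy obtains the TLS cert automatically; traffic is
        # proxied over the VPN to the cluster's ingress
        reverse_proxy 10.8.0.2:80
    }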


Thank you. Caddy sounds like something I was looking for recently.


I see it as an important step to a more decentralized future. For example, I know a few people who maintain Plex servers for their friends and family. This works quite well, but getting the server on the internet is the trickiest part. You can sink a lot of time into configuring routers, managing certs, NAT, DMZ, LMNOP. Or you can use a tunneling service that manages all of it for you.


The way to a more decentralized future is through yet another third-party service?


Huh? Are you referring to the VPS provider?


I use chisel for dancing across firewalls and across Big Corp's (TM) network policies. Chisel is fantastic. It wraps an SSH tunnel for proxying TCP (or reverse proxying) in HTTP, which I expose over TLS.

https://github.com/jpillora/chisel
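
The basic pattern, as a sketch (host name and ports are made up):

    # On the reachable box, accept reverse tunnels:
    chisel server --port 8080 --reverse

    # Inside the restricted network, publish local port 8000
    # back through the server:
    chisel client https://tunnel.example.com R:8000:localhost:8000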


I'm the author of inlets and would suggest you take a look at that. It's great for the use-case in question and built as a cloud native application with Docker images, Kubernetes YAML files and an operator available.

So you can get a LoadBalancer etc.

https://blog.alexellis.io/ingress-for-your-local-kubernetes-...
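
With the operator installed, exposing a workload is just the stock Kubernetes flow. A sketch (the deployment name is hypothetical):

    # The operator watches LoadBalancer services and provisions
    # a tunnel plus public IP for each one
    kubectl expose deployment my-app --port=80 --type=LoadBalancer
    kubectl get svc my-app    # EXTERNAL-IP is filled in by the operator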


If chisel is working for them why do you think they should switch? Does inlets have any advantages in non-kubernetes environments?


If you're using Kube, I'm building a service to solve this for home hosting. Check out https://KubeSail.com (YC S19) - we forward traffic to your ingress controller over a tunnel so that you can host public apps on the internet from home without dynamic DNS or port forwarding. Feedback welcome :)


Quite a coincidence running into you on HN. I recently did need to setup tunneling to access my home cluster and stumbled on your list. I found frp and was up and running in a jiffy. Thanks for the list!


For my personal homelab Nginx Proxy Manager has been great (https://nginxproxymanager.com/). It provides a dead simple UI for configuring Nginx to expose internal services, even websocket servers, and integrates easy HTTPS cert management (with Let's Encrypt wildcard subdomain support!).
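
Getting it running is a short compose file plus one command. A sketch trimmed from their quick start (check the site for the current recommended config):

    # docker-compose.yml
    version: '3'
    services:
      app:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'      # proxied HTTP
          - '443:443'    # proxied HTTPS
          - '81:81'      # admin UI
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt

Then docker compose up -d and the UI is on port 81.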


NPM looks solid, but it still requires you to set up port forwarding and hope your ISP doesn't block ports 80/443, right?


I've been using Tailscale for this and am very happy with it.


Tailscale seems to be a great option if you only need to access your services from your own devices. Public or shared access looks to be trickier/expensive.


I've been looking into doing an SBC K8s cluster, but had my eyes on the ODroid N2+. Its big.LITTLE design with 4 high-performance cores and 2 low-performance ones would not only boost the compute capacity vs the RPi4, but also perhaps cgroups on the nodes could be configured such that pods only run on the 4 high-performance cores, leaving the 2 low-performance ones available for system daemons, kubelet, etc. The biggest drawback with the N2 is that it maxes out at 4GB of RAM, which might not be sufficient for the cluster master nodes. So maybe the master nodes could be 8GB RPi4s, while all the worker nodes are N2s.
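
That core-pinning idea maps onto real kubelet settings. A minimal sketch (assuming the two little cores show up as CPUs 0-1, which you'd want to verify on the N2):

    # Fragment of the kubelet config file
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    # Pin pods to exclusive cores via the static CPU manager policy
    cpuManagerPolicy: static
    # Reserve the (assumed) little cores for system daemons and the
    # kubelet, leaving the four big cores for pods
    reservedSystemCPUs: "0,1"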


I had a cluster of ODroid C2s. It was nice - quad core, 2GB memory. But honestly, only for some workloads.

In my experience memory is a much bigger bottleneck than CPU, net IO or disk iops.

At the moment my favorite home lab cluster is a bunch of old laptops. They're quad cores too but I've dropped 16 GB of memory in each. It's doing a lot better. Right now the cluster is using 21GB of memory.

I'm able to run everything I've wanted as well as the occasional experiment to try something out.

If I could leave you with something - get more memory than you think you will need. That's been my experience anyway.


I've been using Rock64 boards (4GB RAM models). There is plenty of free memory on the master nodes.


I have what I feel is an irrational desire to build an RPi k8s cluster. I feel like I would be better served by a small x86 box, but something about having a real cluster really appeals to me.


Been there, dropped 200+€ on RPis only to find out I then had to spend almost the same amount on microSD cards, power supplies, cables, and all the other stuff.

Sold all those, recovered most of the money, and bought a decommissioned laptop from my employer at the time: 3rd-gen quad-core i7, 16GB RAM (later upgraded to 32), 480GB SSD + 750GB HDD, for waaaay less (just the residual price).

Installed Proxmox, created small VMs, and played with k8s and a lot more stuff in a cheaper and more performant way (full gigabit Ethernet where the RPi has USB-shared Ethernet, SSD-grade I/O performance, Core i7 compute performance).

If you want to experiment, the Raspberry Pi is the dumbest thing that you can buy.

Go for old hardware and virtualization, you'll also learn more (containers are here to stay, but VMs aren't going away either).

Power usage was also surely higher than a single RPi's, but negligible anyway: laptop Intel processors have SpeedStep technology that lowers power draw, and they can go down to something like 15-20 watts.


I know that feeling!

Pulled the trigger on that three times now.

My first cluster was Raspis and ODroids. The two issues I had: not everything runs on Arm (yet), and memory constraints were tough to work with.

My second cluster - 4 Atomic Pis. The (Intel Atom) quad cores were fine but the 2GB of memory was a real problem when I started playing with more interesting workloads.

The current "production" home cluster is 4 old laptops. They're earlier gen i7s and I've dropped 16GB of memory and 1TB spinning disks in each one. I've never been happier with a cluster. I've been free to spin whatever I want up and it just works. Using RKE to setup Kubernetes takes about 5 minutes to build a cluster and I'm using Longhorn for replicated container native storage. I've run experiments for work and my own experience. Whether its Redis / Cassandra cluster or a Kubernetes Operator I wrote, it can always handle it.


This sounds like a neat setup - I used to run a set of 3 old computers, a NUC, and old laptops. But they were quite low-spec, so I sold them and bought a small HTPC instead: quad-core, with 16 gigs of RAM and a 512GB NVMe.

Also using RKE, it looks like the best cluster 'distro' around.

I used Longhorn but found it a little slow; putting it on SSD was a bit of a waste. I tried Ceph - it was much faster, but I just don't understand how it works well enough to fix anything if the cluster goes wrong.


I think many of us have the same feeling. I have a basic x86_64 home server that I keep dreaming of converting to a redundant cluster (x86_64 machines or Raspberry Pis, it probably doesn't matter).


Data scientist Holden Karau has done this and posts videos about it on YouTube.


Same! It's all about the cluster! I had a strong desire to build one of those PS3 clusters that were pretty wild years ago, but I couldn't convince my school they needed it.


I really tried! But maybe because my setup was a bit demanding, the RPi4 couldn't handle it well (Home Assistant, AdGuard Home, Nextcloud, Plex, the *arr apps, and many more on Docker).

I just had to switch to an SFF desktop PC with Ubuntu on it and never looked back. For toying around RPis can be fun, but in real-life applications they were getting extremely hot. I had to add extra fans, etc.

I still use my RPi4s, one of them connected to my 3D printer running OctoPrint, but that's it.


What kind of quality did you get running Plex? I doubt a cluster of Pis could keep up with the transcoding.


Very poor. The cooling is a big problem as well.

Now that I've switched to x86, I'm even sharing my Plex library with a few more people (looking at the Tautulli stats, it's been transcoding a lot) without any problems.

If it becomes an issue, I’d add a GPU but for now a 6 core AMD is doing a very good job.


For those like me who are a great deal fainter of heart than the author, there's always k3s [1] (installed e.g. via k3sup [2]) to take care of the "hard" Kubernetes part, at least until something breaks, though I've been lucky so far. It will run great on a Pi 4 and decently well on a 3/3+. The only big issue I've had so far is finding 32-bit armv7 images (I'm running Raspbian; I hear Ubuntu can do 64-bit just fine); most projects only ever publish arm64 ones.

[1] https://k3s.io/

[2] https://github.com/alexellis/k3sup
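
The whole bootstrap over SSH is only a couple of commands. A rough sketch with placeholder IPs:

    # Install k3s on the server node
    k3sup install --ip 192.168.0.100 --user pi

    # Join a worker to it
    k3sup join --ip 192.168.0.101 --server-ip 192.168.0.100 --user pi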


I spent some time getting pi-hole to run on a raspi k3s cluster last year and wrote about it. Hopefully there's something useful in my investigation for your current project.

https://medium.com/@subtlepseudonym/pi-hole-on-kubernetes-87...


This person sets his host names to Foundation Trilogy planet names. This is good stuff.


Once upon a time the cool thing to do was to build RAIDs out of USB sticks. Or SD cards. Or floppy disk drives.

Raspberry Pi isn't as laughable as that, but the fact is that what you are getting out of this is a learning opportunity, not a reliable or high-capacity computing platform.

And even there, 4 Pi4Bs with 8GB will cost you $400 or more once you add in power supplies, SD cards, and desiderata. The first hit on Craigslist just now was a $200 4-core i7 with 8GB of RAM (upgradable) and a 256GB SSD in a nice small case. Two of those will get you better overall performance and leave room for expansion.


Why are people still posting these Pi homelabs? They have been done to death.

I run MicroK8s on KVM running in Mesos with Open vSwitch and MikroTik CRS/CCRs.


Because they have a lot of Raspberry Pis, they want to set up Kubernetes, and they have fun doing it and want to share that with the rest of the internet?


It's been done since 2014, when K8s came out. Move on, please. Still, I guess it's better than AKS.


You are always free to simply not read the articles!

In fact, you'll not only save yourself time, you'll save us time too (as then we won't have to read your useless comments).


So what? Scroll to the next article. Every article posted may be an improvement; maybe there is something done in a new way. Everyone has their own way of doing things.


Is this on a single server? Is the reason to learn, or does it have a practical purpose as well?

What hardware are you using?


Of course it's not a single server.


Awesome. Gonna give this a go. Recently acquired a 3rd RPi4 to have a quorum of fast-ish ones.

Concerned about the ARM aspect though. On Docker, at least, the ecosystem felt a lot smaller on ARM.
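
One mitigation when an image you need lacks an ARM build is to rebuild it yourself as a multi-arch image. A sketch with a hypothetical image name:

    # Build and push one image manifest covering three architectures
    docker buildx build \
      --platform linux/amd64,linux/arm64,linux/arm/v7 \
      -t example/my-app:latest --push .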


"so that you have something pretty to show your non-technically inclined significant other as the output of your hard work"

Gosh, that resonates... :)

Also very excited to read part 2!


I am going to print that xkcd and pin it above my desk.

Thanks for these articles. I once again realized I should really not do this (for now).


This is the future



