Hacker News

Having a Raspberry Pi lying idle in a drawer somewhere, I wonder if others have installed MicroK8s or k3s on it.

What kind of workloads are you running on it? For those who have OpenFaaS installed on their Pi, what type of functions are you running?




I mostly gave up on containers on Pis due to the overhead (disk space, deployment hassles with private registries, etc.), so I built my own mini-Heroku and run all my services/functions there: https://github.com/piku
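For anyone curious what that workflow looks like: piku deploys via a plain `git push`, with a Procfile describing what to run. The sketch below is illustrative only; the host name, app name, and process commands are placeholders, not anything from piku's docs:

```shell
# On your dev machine, assuming piku is already installed on the server:
git remote add piku piku@my-server:myapp
git push piku main

# A Procfile at the repo root tells piku what to run, e.g.:
#   web: gunicorn app:app
#   worker: python tasks.py
```

On push, the server-side hook builds the app and (re)starts its processes, so there is no registry or image to manage.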


+1 to piku. I use it on a homelab server (not a Raspberry Pi) and just love how simple it is. It sets up a Python deploy environment for web apps that is exactly how I'd personally hand-configure the same in production (nginx, uwsgi, git). The best part of the project is that the entire thing is 1,000 lines of open source and readable Python code, so there's truly no magic whatsoever.


My blog used to run on it until I turned it into a static site, but the builder “function” now happily lives alongside a Node-RED install and a bunch of other things on a tinier VM.


This is honestly just awesome. I love that it uses plain git: no web interface or complex configuration needed.


Thanks! I got fed up with massive manifests and was enamoured with Heroku's approach, so I looked at uwsgi, realised it was a damn good process supervisor, and just did it.
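For context on the "uwsgi as process supervisor" point: uWSGI's Emperor mode watches a directory of per-app ini files ("vassals") and starts, stops, and reloads processes as those files change. The vassal below is a hedged sketch with invented names, roughly the shape such a config takes:

```ini
; One vassal file per app; the Emperor supervises it.
[uwsgi]
master = true
processes = 2
; Serve a Python WSGI app directly...
module = app:application
http = :8000
; ...or supervise an arbitrary long-lived process instead:
; attach-daemon = python worker.py
```

Dropping a file like this into the watched directory deploys the app; deleting it stops the processes.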


Can you run microk8s or k3s on a single server/device? Even if you can, seems like the wrong tool for the job with unnecessary complexity...


I run k3s on a single node. It used to be two, but I consolidated because this isn't a production use case for me. Availability isn't the point. If I have to turn the thing off for an hour every year or two to fix something, sure, fine.

The real value I get is Infra as Code, and having access to the same powerful tools that are standard everywhere else. I can drop my current (dedicated) hardware and go publish all my services on a new box in under an hour, which I did recently.

From my point of view, I already pay the complexity cost by virtue of k8s being The Standard. The two costs of complexity are 1) Learning the Damn Thing and 2) Picking up the pieces when it breaks under its own weight. 1) I pay regardless, and 2) I'm happy to pay for the flexibility I get in return.
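To illustrate the infra-as-code payoff: a whole self-hosted service on single-node k3s reduces to a few declarative manifests you can re-apply on any new box. The names and image below are made up for the example:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myblog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myblog
  template:
    metadata:
      labels:
        app: myblog
    spec:
      containers:
        - name: myblog
          image: registry.example.com/myblog:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myblog
spec:
  selector:
    app: myblog
  ports:
    - port: 80
      targetPort: 8080
```

Keep manifests like this in git, `kubectl apply -f` them on fresh hardware, and the "new box in under an hour" claim follows.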


Both k3s and MicroK8s support single node and multi node setups. Now, there's an open debate whether running K8s locally is the right approach for container development. Kelsey Hightower and James Strachan actually shared opposite opinions on this in a recent industry report published by Canonical. https://juju.is/cloud-native-kubernetes-usage-report-2021#ku...

(Disclaimer: I work for Canonical)
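For reference, the documented single-node installs for both are one-liners (commands as published by each project; try them on a disposable machine first):

```shell
# k3s: single binary that installs and starts a one-node cluster
curl -sfL https://get.k3s.io | sh -
sudo k3s kubectl get nodes

# MicroK8s: distributed as a snap
sudo snap install microk8s --classic
microk8s kubectl get nodes
```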



