
Stripped-down Kubernetes on the Raspberry Pi - alexellisuk
https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/
======
squarefoot
If network speed and low latency are so important, then other boards should
probably be considered, since even the fastest Raspberry Pi still implements
its Gigabit Ethernet through USB and is limited to about 300 Mbps; its CPU
performance also lags behind many newer and often cheaper boards.

This list might help.
[https://www.hackerboards.com/home.php](https://www.hackerboards.com/home.php)

------
nergal
I built a k8s cluster on RPi 3s once and wrote a blog post about it. In Swedish, though.
[https://www.cygate.se/blogg/quake-hur-jag-byggde-ett-raspberry-pi3-kluster-med-kubernetes-och-docker-hemma/](https://www.cygate.se/blogg/quake-hur-jag-byggde-ett-raspberry-pi3-kluster-med-kubernetes-och-docker-hemma/)

------
BinaryArcher
Has anyone looked at Rio from Rancher yet? It's like a lightweight overlay on k3s.

------
glennpratt
But why? Is this solving any real problem people have with home networks or
software development?

~~~
jjeaff
I have a Raspberry Pi that runs a great deal of my home automation. When it
has a problem, lots of stuff stops working. It would be nice to have k3s with
more than one Raspberry Pi as a failover.

------
twright
Whenever I hear 'cluster' I think of scientific computing applications and
wonder how these fit in there. The author addresses a nice use case in
another post:

[https://blog.alexellis.io/build-your-own-bare-metal-arm-cluster/#35dorealwork](https://blog.alexellis.io/build-your-own-bare-metal-arm-cluster/#35dorealwork)

~~~
colek42
I am about to start work on a data science platform with Argo and K8s for a
client. It's a really strong use case for embarrassingly parallel applications.

~~~
GarMan
Why Argo and not Kubeflow Pipelines?

------
Abishek_Muthian
Why an SD card, when USB boot is available for the Raspberry Pi 3B/3B+ & RPi
2B v1.2 [1]? SD cards are the abomination of single-board computers. I
understand it's a choice made to limit cost, and I'm grateful for that, but SD
cards are not designed to run an OS.

One might argue that the RPi's USB 2.0 interface doesn't add much value when a
USB SSD is connected to it, but benchmarks show that it at least doubles
performance [2].

[1]: [https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md](https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md)

[2]: [https://jamesachambers.com/raspberry-pi-storage-benchmarks/](https://jamesachambers.com/raspberry-pi-storage-benchmarks/)

~~~
fpgaminer
I stumbled on that second link a while ago when I was curious about the best
SD card for Raspberry Pis. There's a newer post now:
[https://jamesachambers.com/raspberry-pi-storage-benchmarks-2019-benchmarking-script/](https://jamesachambers.com/raspberry-pi-storage-benchmarks-2019-benchmarking-script/)

At first I thought the recommendation to use an SSD with a Raspberry Pi was
crazy, but the benchmarks do indeed show an improvement. And, as that article
points out, since the SSD's own speed isn't the limiting factor (you hit the
USB 2.0 ceiling), you can grab the cheapest decent SSD, which comes in at just $20!

Though I still like SD cards for the form factor. There are newer A1-class
SD cards with better minimum IOPS, which I'd be totally fine with. But
reliability is still such a big issue with SD cards. I had one die a few days
ago; always annoying.

~~~
Abishek_Muthian
Just to clarify: by USB SSD I mean a 2.5" SSD with a USB adapter, not an
expensive purpose-built USB SSD like the Samsung T5.

Regarding the cheap SSD: yes, the benchmarks also show the cheapest SSD
(Crucial BX500) on top. I'm not certain why this is so; I suspect people pick
the cheapest SSDs because of the RPi's limitations, but then again the Samsung
SSDs in those benchmarks score lower despite scoring high in actual PC SSD
benchmarks.

I think conditions in the RPi don't bring out the best performance from
Samsung's controller but do from Crucial's/JMicron's. SD cards are supposed to
be much more reliable than SSDs; it's our improper application that causes
them to fail, especially in overclocked RPis.

------
MrBuddyCasino
This is the first time I've heard of k3s. It's like Minikube, as far as I can
see? Is it a drop-in replacement?

~~~
sirn
k3s is basically a stripped-down version of Kubernetes: legacy and alpha
features are removed, and a few components are replaced with lighter-weight
alternatives (e.g., SQLite instead of etcd). Minikube, by contrast, is a local
deployment of full Kubernetes in a VM for development.
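
To give a sense of how little ceremony is involved, here's a minimal sketch of
bringing up a two-node cluster (the install URL, env vars, and token path are
from the k3s docs; the addresses are placeholders):

    # on the server node: install and start k3s (a single binary)
    curl -sfL https://get.k3s.io | sh -

    # print the join token the server generated
    sudo cat /var/lib/rancher/k3s/server/node-token

    # on each agent node: same installer, pointed at the server
    curl -sfL https://get.k3s.io | \
        K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -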

~~~
polskibus
Is it safe to use for multiple nodes if etcd is removed? Won't the cluster
become incoherent without it?

~~~
sirn
I believe that would be the case if the node that runs SQLite goes down.
There's also an option to run k3s with etcd, though.

------
GordonS
Does anyone know how you say "k3s"? I had a look at the docs, but didn't find
it.

~~~
wtmt
It seems like there’s no official pronunciation (yet). There are some options
discussed in a GitHub issue that was opened to ask about this. [1]

[1]:
[https://github.com/rancher/k3s/issues/55](https://github.com/rancher/k3s/issues/55)

------
the_duke
k3s is nice for local development.

It's faster and much more lightweight than minikube, since you can quite
easily set it up on your machine and avoid a VM.
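
A minimal local-dev sketch, assuming the k3s binary is already on your PATH:

    sudo k3s server &             # single-node cluster, no VM involved
    sudo k3s kubectl get nodes    # bundled kubectl, pre-wired to the local server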

~~~
xur17
Are you building custom docker images as part of your local development? I'm
curious if you've found a good way to push locally built images into the
cluster since it doesn't provide a private registry server.

~~~
epynonymous
[https://goharbor.io/](https://goharbor.io/)
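
If you'd rather not run a registry at all, another workflow is to export
locally built images and import them on the node. A sketch: k3s bundles
containerd and exposes it via the "k3s ctr" subcommand; the image name and
host below are hypothetical.

    # build locally, then ship the image straight into k3s's containerd
    docker save myapp:dev -o myapp.tar
    scp myapp.tar pi@node:
    ssh pi@node 'sudo k3s ctr images import myapp.tar'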

------
polskibus
The article mentions that running kubernetes over WiFi is not a good idea.
What's a better alternative for IoT with RPi that can ease deployment in a
similar way but over WiFi? Is docker swarm a better idea?

~~~
pletnes
Take a look at balenaCloud if you want to deploy containers to IoT devices.
(Disclaimer: happy customer). Their free tier is free forever, too.

~~~
fpgaminer
A question I couldn't find an answer to (I haven't used balenaCloud yet): how
are updates to the OS managed? Will, for example, Raspberry Pis running
balenaOS update themselves and reboot nightly, or something like that?

I have a few Raspberry Pis floating around the house doing odds and ends, and
balena seems like a nice option for reducing their management needs. But I
really want them to be able to get the latest security/etc. updates without me
having to manually update each of them from time to time.

~~~
petrosagg
balena founder here. balenaOS comes with all the infrastructure needed for
robust host OS updates. We expose this functionality to our users via a button
in the web dashboard. We don't yet have an automated, rolling-upgrade-style
mechanism.

The main consideration for a feature like this is that containers sometimes
have dependencies on interfaces exposed by the operating system which are not
always stable. This is especially true for IoT use cases, because containers
will typically interface with some device connected to the system.

Tangential to this, we're working on an extended-support release schedule (à
la Firefox) for balenaOS. I could see us building an automated OS update
mechanism on top of that. We'll definitely think about it; thanks a lot for
your feedback :)

~~~
pletnes
Can you use your API to perform the updates?

Btw balena is a great service, good job!

------
panpanna
This is the first time I've heard of k3s. Can it be configured to use lxc/lxd
instead of Docker?

(Nothing wrong with Docker, it's just that some nodes are already running lxc.)

~~~
captn3m0
There are a few other options available for container runtimes in Kubernetes
(kubelet doesn't support lxc directly). Not everything will run on a Raspberry
Pi (because of ARM), but here's a list:
[https://kubedex.com/kubernetes-container-runtimes/](https://kubedex.com/kubernetes-container-runtimes/)
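
On k3s specifically, the runtime is selectable at startup; a quick sketch (the
--docker flag is from the k3s README; the default is the bundled containerd):

    # default: k3s's bundled containerd
    k3s server

    # or point it at an existing Docker daemon instead
    k3s server --docker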

------
rcarmo
I've been playing around with k3s myself on Azure:

[https://github.com/rcarmo/azure-k3s-cluster](https://github.com/rcarmo/azure-k3s-cluster)

...as well on my own ARM cluster, with a private registry:

[https://taoofmac.com/space/blog/2019/05/18/2034](https://taoofmac.com/space/blog/2019/05/18/2034)

I find it refreshingly straightforward for personal and testing setups (and
more practical than microk8s for me right now), and am waiting for rio to hit
a couple of stable milestones:

[https://github.com/rancher/rio](https://github.com/rancher/rio)

(I try OpenFaaS now and then, but after contributing a deployment template
early on, I lost my enthusiasm for it; it also ran the gateway and the admin
UI in the same process, which I considered a design flaw.)

~~~
wiradikusuma
Have you tried minikube? If you have, how's it compared to k3s/microk8s? The
reason I ask is, k3s is not supported on Windows.

Also, Rio sounds like "host your own Google App Engine" — am I right?

~~~
jrockway
I've never seen microk8s work. It depends very heavily on iptables rules, and
I suspect that if you have routes to anything on 172.16.0.0/12 it will work
unpredictably. (I had a similar problem with a VPC that had subnets that
conflicted with what Docker chose to use.) Obviously microk8s works for
someone, but it's never worked for me. But I work at an ISP and our route
table on the corp network is excessively large.

One of my coworkers tried to use microk8s instead of minikube, and we debugged
it extensively for a couple of days but ended up baffled. We had to set up
some rules to forward localhost:5000 into the cluster for docker push; instead
we got a random nginx instance, and we couldn't figure out where it was
running. Even after uninstalling microk8s, we still had a ton of random
iptables rules, and localhost:5000 was still nginx... It was weird.

Minikube works great, however. You will still need some infrastructure to push
to its docker container registry in order to run locally-developed code. Out
of the box, you can persuade your local machine to use minikube's docker for
building, but it runs in a VM and unless you use non-default minikube
provisioning settings, it doesn't have access to all the host machine's cores,
which is kind of slow. I ended up making minikube's container registry a
NodePort so that every node (all 1 of them) can get at localhost:5000 to pull
things. I then added some iptables rules to make localhost:5000 port-forward
to $MINIKUBE_IP:5000 so that "docker push localhost:5000/my-container" works.
It's kind of a disaster.
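
For the curious, a hedged sketch of the kind of iptables rules that forwarding
involves (exact chains vary by distro and Docker setup; $MINIKUBE_IP comes
from "minikube ip", and extra sysctl tweaks may be needed):

    # rewrite locally generated traffic to 127.0.0.1:5000 toward minikube
    sudo iptables -t nat -A OUTPUT -p tcp -d 127.0.0.1 --dport 5000 \
        -j DNAT --to-destination "$(minikube ip):5000"

    # masquerade so return traffic routes back correctly
    sudo iptables -t nat -A POSTROUTING -p tcp -d "$(minikube ip)" \
        --dport 5000 -j MASQUERADE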

I also had to write an HTTP proxy that produces a proxy.pac that says "direct
*.kube.local at $MINIKUBE_IP" so that you can visit stuff in your k8s cluster
in a web browser and test your ingress controller's routing.
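
The PAC file itself can be tiny; a hypothetical version, generated from the
minikube IP (PAC files are JavaScript by definition; port 80 assumes an
ingress controller listening there):

    MINIKUBE_IP=$(minikube ip)
    cat > proxy.pac <<EOF
    // send *.kube.local into the cluster, everything else direct
    function FindProxyForURL(url, host) {
        if (dnsDomainIs(host, ".kube.local")) return "PROXY ${MINIKUBE_IP}:80";
        return "DIRECT";
    }
    EOF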

After those two things, I quite like it.

I still don't think minikube is a good platform for developing microservices,
though. The build/deploy times are too long (and things like ksync don't work
reliably, even if you generate a docker container that can reliably hot-reload
your app, which kind of involves a lot of setup). I once again wrote something
that takes a service description and a list of its dependent services,
allocates internal and external ports, puts them in environment variables,
starts Envoy for incoming and service-to-service traffic, and then runs the
apps wired up to receive requests from Envoy and make requests to other
services through Envoy. It took a while but now that I have it, it's great. I
can work on a copy of our entire stack locally, it starts up in seconds, and
is basically identical to production minus the k8s machinery.

I am still surprised I had to solve all these problems myself, but now that
they're solved, I'm very happy.

~~~
milofeynman
I'd like to do this with Envoy. I found this blog post:
[https://blog.turbinelabs.io/local-development-with-lots-of-microservices-part-1-c2059a1e9906](https://blog.turbinelabs.io/local-development-with-lots-of-microservices-part-1-c2059a1e9906)
Did you do something similar? Thank you!

~~~
jrockway
There are similarities and differences. The thing I wrote to run everything
locally obviously doesn't call out to external services; it runs everything it
needs locally. I also didn't use the xDS Envoy APIs, instead opting to
statically generate a config file (though with the envoyproxy/go-control-plane
library, because I do plan on implementing xDS at some point in the future).

What I have is as follows. Every app in our repository is in its own
directory. Every app gets a config file that says how to run each binary the
app is composed of (we use grpc-web, so there's usually a webpack-dev-server
frontend and a Go backend). Each binary names what ports it wants, and what
the Envoy route table should look like to get traffic from the main server to
those ports. The directory config also declares dependencies on other
directories.

We then find free ports for each port declared in a config file, allocating
one for the service to listen on (only Envoy will talk to it on this port),
and one for other services to use to talk to that service. The service
listening addresses become environment variables named like $PORTNAME_PORT,
only bound for that app. The Envoy listener becomes $APPNAME_PORTNAME_ADDRESS,
for other services to use.

Once Envoy has started up, we then start up each app. The order they start in
doesn't matter anymore, because any gRPC clients the apps create can just
start talking to Envoy without caring whether or not the other apps are ready
yet. And, because each app can contribute routes to a global route table, you
can visit the whole thing in your browser and every request goes to the right
backend.

I used Envoy instead of just pointing the apps at each other directly with
FailFast turned off because I needed the ability to send / to a webpack
frontend and /api/ through a grpc-web to grpc transcoder, and would have used
Envoy for that anyway. This strategy makes it feel like you're just running a
big monolith, while getting all the things that you'd expect with
microservices; retries via Envoy, statistics for every edge on the service
mesh, etc. And it's fast, unlike rebuilding all your containers and pushing to
minikube.
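
For flavor, a hypothetical fragment of the sort of route table this generates
(all names here are invented; in reality the file comes out of
go-control-plane, not a heredoc):

    cat >> envoy.yaml <<'EOF'
    route_config:
      virtual_hosts:
      - name: myapp
        domains: ["myapp.kube.local"]
        routes:
        - match: { prefix: "/api/" }
          route: { cluster: myapp_grpc }     # grpc-web -> gRPC transcoding
        - match: { prefix: "/" }
          route: { cluster: myapp_webpack }  # webpack-dev-server frontend
    EOF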

It kind of solves the same problems as docker-compose, but without using
Docker.

~~~
milofeynman
Thank you for taking the time to write this up. It is extremely helpful on
getting me on my way.

------
zambal
Maybe it's beating a dead horse and the article itself seems fine, but the
submission title sounds a bit ridiculous.

"Install this software on your server to make it serverless!"

~~~
blueside
I'll never forget one time, in a large meeting, when the technical experts
were asked by one of the C-suite what serverless meant. One brave soul rose to
the challenge and, after some fumbling around in their answer, ended with "you
basically just upload the code to the server and then the server runs the code
for you".

Took a while for the cringing to go away on that one.

~~~
pushpop
I don’t think it’s a hard thing to explain, though:

Serverless is a marketing term that loosely describes a shared-tenancy
environment where you don’t manage the host.

~~~
majewsky
This describes e.g. Heroku.

~~~
pushpop
The opening bit, “marketing term that loosely...”, makes clear that it’s
pretty arbitrary when a service is called “serverless” and that the decision
is usually just a marketing one. Which is precisely true.

There’s no formal definition of “serverless”; it literally is just a marketing
term that loosely describes a shared-tenancy environment, and the only reason
Heroku isn’t “serverless” is that they don’t market themselves as such.

So it’s a pointless exercise nitpicking anyone’s definition, since there isn’t
an actual formal definition. E.g., AWS uses the term serverless to describe
services other than Lambda.

The whole term is just made-up marketing bullshit for shared tenancy.

In the 70s we used to call it time-sharing. But I doubt many people these days
will remember that term.

~~~
detaro
A commonly made distinction between the two is that Heroku still exposes
instances to you (you buy a number of "dynos"), whereas a "serverless"
solution doesn't; e.g., Lambda just spins up workers as needed and bills you
for the CPU time used.

~~~
pushpop
In that case it falls under the exception I made, where you do manage the
host; albeit the management is just “buy an instance”.

