
Domesticating Kubernetes - ClumsyPilot
https://blog.quickbird.uk/domesticating-kubernetes-d49c178ebc41
======
oppositelock
Why would you do this for simple home apps? k8s is complete overkill. At home,
there is little reason to drag in all that complexity to host a blog or
whatever.

Kubernetes drags in a tremendous amount of complexity, background knowledge,
and strange constraints, even on small installations. I really do not see its
benefit for small apps and projects.

Are you hosting your blog? You don't need k8s. Are you running a small app
server at home? Nope, don't need it. Are you running an auto-scaling app with
many hundreds or thousands of worker instances, and need to support
non-disruptive rolling upgrades across multiple clouds, with many services
that need to scale independently? Then maybe you need k8s, but only in some
circumstances.

For the record, I run many k8s clusters across several clouds, using 1,000 -
50,000 cores at any one time, so I've dealt with quite a bit of k8s
complexity, and I'm still on the fence about whether it's the right answer for
us. It has, however, allowed us to standardize our software on k8s and worry
only about getting that running well in each deployment, which puts the
cross-cloud work on the infrastructure teams rather than the software
development teams. The price you pay is that you still need to do non-k8s work
whenever you need cloud-specific resources beyond simple compute and routing,
so you end up with both k8s and cloud-specific code.

~~~
g-clef
I'll bite.

One thing I found in running my home lab was that I kept having to burn it to
the ground & rebuild it regularly anyway. For example, dist-upgrade never
really works: every time I try I just end up wasting a couple days wrestling
with it and then giving up and rebuilding the machine from scratch. Even if I
just assumed that I'd have to build from scratch every time, differing app and
library versions meant that I couldn't count on a new build being a simple,
clean install.

So going with the regular "run things as a daemon on a server" model wasn't
actually saving me that much time.

Basically I could do one of two things:

1. keep using purpose-configured machines and spend a bunch of time writing
ansible scripts to automatically re-create them when everything goes
pear-shaped, and then re-write the scripts when a new version changes stuff.

2. container-ize all my tasks, and make everything else vanilla and
effectively disposable. I have to spend some initial time to rewrite my stuff
in container-speak, but that's a one-time cost.

Presented like that, option 2 looked like a better option. When a machine has
a problem or needs to be rebuilt, I build it with the completely vanilla setup
(ubuntu lts, conjure-up k8s) and push my k8s jobs and pods up to it. That's 2
hours instead of a day and a half. (Yes, in theory I could docker-ize
everything and run docker-swarm, but it's a small step from there to k8s, and
conjure-up makes installing k8s fairly straightforward.)

Frankly, I have a family now, and fucking with fiddly settings and library
dependencies isn't fun anymore. I'd rather spend that time _doing_ stuff. k8s
lets me divorce my hobby work from the infrastructure, and I like that.

~~~
hddherman
I think you could achieve something similar by just using simple standalone
containers that you start with Podman or Docker. Once you have the OS up and
running, all you need to do is push the configuration to the server (I just
rsync systemd service files) and start the services. Persistent data is stored
elsewhere anyway, so moving a service is just a matter of copying a directory
over to another machine and ensuring that the permissions are correct.
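
For anyone who hasn't tried it, the unit files are short. A rough sketch for a
hypothetical "myblog" container (the image, ports and paths are placeholders,
not my actual setup):

    # /etc/systemd/system/myblog.service
    [Unit]
    Description=myblog container
    Wants=network-online.target
    After=network-online.target

    [Service]
    Restart=always
    # clean up any leftover container from a previous run; "-" ignores failure
    ExecStartPre=-/usr/bin/podman rm -f myblog
    ExecStart=/usr/bin/podman run --rm --name myblog \
        -p 8080:80 \
        -v /srv/myblog:/usr/share/nginx/html:ro \
        docker.io/library/nginx:stable
    ExecStop=/usr/bin/podman stop myblog

    [Install]
    WantedBy=multi-user.target

rsync that over, run systemctl daemon-reload and systemctl enable --now
myblog, and that's the whole deployment (podman generate systemd can write a
similar unit for you).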

k8s is probably great for many things, but as time goes on I really appreciate
being able to debug things effectively, and by keeping my homelab setup as
simple as possible I avoid the complexity of whatever advanced solution is out
there.

~~~
g-clef
I could, but with the same effort to set that up, I could have a full k8s
setup...it doesn't actually save me any time to do it the "simple" way.

------
rb808
I've given up running my own k8s at home a couple of times. It does seem to be
getting easier, but then upgrading and maintaining breaks me again. Plus the
other stuff, like running your own container registry. Then with the new
CentOS, podman breaks everything that worked OK in docker. I hate this stuff;
there are so many problems it's worth ignoring the whole stack.

~~~
tuananh
what do you end up using instead of k8s?

~~~
Bombthecat
I mostly use docker compose. It's a perfect fit for tinkering without being
overcomplicated or a huge RAM eater.
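
For scale, a single compose file is about all most of my stuff needs; a rough
sketch (the image, ports and volume paths are just placeholders):

    # docker-compose.yml: one service, data in a bind mount, restarts after reboot
    version: "3"
    services:
      blog:
        image: nginx:stable
        restart: unless-stopped
        ports:
          - "8080:80"
        volumes:
          - ./site:/usr/share/nginx/html:ro

docker-compose up -d and you're done.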

Also: k3s might be an option.

~~~
bradgessler
Compose is great for home use.

If you run your home server on a separate box, like a RaspPi or NUC, docker-
machine via SSH can make provisioning and updating a tad easier.
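
Roughly (the IP, user and machine name are placeholders, not anything
specific):

    # register the remote box once; the "generic" driver manages it over SSH
    docker-machine create --driver generic \
        --generic-ip-address=192.168.1.50 \
        --generic-ssh-user=ubuntu \
        homenuc

    # point the local docker / docker-compose CLI at the remote daemon
    eval $(docker-machine env homenuc)
    docker-compose up -d

After that, docker and docker-compose commands run against the remote box
instead of your laptop.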

------
sdan
I can attest to RPi I/O speed: it's horrible. Combined with the fact that I
have to build Docker containers on the RPis themselves (because of ARM), it's
more of a hassle than a cool add-on (to be clear, I'm running Docker Swarm in
a way that isn't too different from the setup of OP).

The only reason I can see for RPis in Kubernetes is if you're exclusively
using ARM everywhere and/or are running some distributed cluster among
different locations (like Chick-fil-A).

~~~
wlll
Is there an RPi alternative (i.e. similar size, form factor, etc.) that has
better IO/network speed?

I used RPis for my robot
([https://sendc.at/dl/Kjjbt3dij6T733YgWsyv3p3Vv5GPGKO1q5IEFStV...](https://sendc.at/dl/Kjjbt3dij6T733YgWsyv3p3Vv5GPGKO1q5IEFStVngrkI6EVupm8dtT),
with daughter boards for motor control) and they were pretty convenient, but I
never really considered running anything server-like on them because their
network performance is reportedly woeful.

~~~
dabeeeenster
I have this [https://www.hardkernel.com/shop/odroid-n2-with-4gbyte-
ram/](https://www.hardkernel.com/shop/odroid-n2-with-4gbyte-ram/)

It's pretty powerful for the cost and has no fan etc. Arm64.

~~~
wlll
Those look sweet, thanks!

------
tachion
What I miss from all of these tutorials is one very important piece: how to
handle network routing and DNS automation within your home network, which in a
typical scenario is handled by the ingress/cloud controller. Without an
automated (or easy enough) way of reaching the apps you're deploying there,
each of these clusters is pretty much useless for users, except maybe for
learning the basics of k8s, which is more easily done with minikube.

~~~
davestephens
A few things to do:

- Specify a LAN IP for your ingress controller so it doesn't change.

- Use ddwrt/dnsmasq to point *.k8s.myhomenetwork.local to said IP.

Once that's configured, you just set the ingress hostname on services as you
would "normally".
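
Concretely, that's one dnsmasq line plus the usual ingress rule. A sketch with
placeholder names and IPs, assuming 192.168.1.240 is the fixed ingress IP and
a networking.k8s.io/v1 cluster (older clusters use the beta Ingress API):

    # dnsmasq: anything under k8s.myhomenetwork.local resolves to the ingress IP
    address=/k8s.myhomenetwork.local/192.168.1.240

and each app just declares its hostname in its Ingress:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
    spec:
      rules:
        - host: myapp.k8s.myhomenetwork.local
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80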

~~~
vetinari
Also: do not use the .local TLD. It is reserved (RFC 6762) for mDNS/Bonjour:

> This document specifies that the DNS top-level domain ".local." is a special
> domain with special semantics, namely that any fully qualified name ending
> in ".local." is link-local, and names within this domain are meaningful only
> on the link where they originate. This is analogous to IPv4 addresses in the
> 169.254/16 prefix or IPv6 addresses in the FE80::/10 prefix, which are link-
> local and meaningful only on the link where they originate.

Microsoft used to recommend the .local TLD as a best practice for AD
deployments, and nowadays there are companies stuck with that decision. Do not
make the same mistake; unlike those companies, you probably want your zeroconf
stuff to work.

~~~
pc86
So what breaks if you use "*.[lastname].local" for your home network?

~~~
vetinari
On zeroconf-aware systems, it is still expected to be resolved via multicast;
service discovery works by looking up SRV/PTR/TXT records at
_$service._$protocol.$hostname.local.

How it behaves will depend on your specific stack. Zeroconf-aware systems
(Macs, iOS devices, Linux with Avahi, i.e. most modern distributions) will use
multicast; zeroconf-unaware ones (Windows) will use your DNS resolver. Devices
(printers, etc.) are a coin toss.
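
You can see the difference from a Linux box with Avahi installed; these
lookups go out over multicast and never touch your DNS server (the names are
just examples):

    # resolve a .local hostname via mDNS
    avahi-resolve -n myprinter.local

    # browse and resolve advertised IPP printers (_ipp._tcp)
    avahi-browse --resolve --terminate _ipp._tcp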

~~~
chupasaurus
I'd like to note that the default behavior of Avahi in Debian/Ubuntu/RH/SUSE
prevents resolving *.local via unicast DNS, to avoid this collision.

------
rtempaccount1
I think what type of k8s environment you use very much depends on what you're
looking to get out of it.

If it's experience deploying applications into containerized environments,
then microk8s and k3s seem like reasonable choices: you don't really care
about the setup of the underlying components, just that they present the k8s
API.

If you're looking for experience managing k8s clusters, then either the
distribution you're looking to run in prod, or something like kubeadm, is
perhaps a better option. kubeadm is very "vanilla" in terms of how it's
deployed, so it's quite representative of production (on-prem) deployments,
perhaps unlike k3s, which makes changes to how k8s works.

If you're looking to quickly test things in k8s, I'd recommend kind as the
easiest way to stand up and remove clusters quickly.
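
For a sense of how quick that is (the cluster name is arbitrary):

    # throwaway cluster running inside Docker, gone again a minute later
    kind create cluster --name scratch
    kubectl cluster-info --context kind-scratch
    kind delete cluster --name scratch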

And if you're looking for something to run your home services long term, I
would recommend not using Kubernetes :) (unless you have a really complex home
network which might justify adding k8s to the mix)

~~~
charlieegan3
> if you're looking for something to run your home services long term, I would
> recommend not using Kubernetes

If we disregard one’s experience with Kubernetes as a factor, are there any
other reasons you see to not use k8s at home?

~~~
vetinari
Do you really have that many computers at home that you need container
orchestration? A single machine is capable of handling most home workloads.

So outside of playing with Kubernetes for experience, why would you do that?

~~~
tuananh
i have to agree with you on this. i've been using k8s at work for over 4
years, but operating k8s for home workloads is still too much, unless it's for
tinkering/testing.

------
nojvek
I’ve spent a decent amount of time fighting k8s at work. The GCP k8s which
abstracts away a bunch of things, but still it can be a bit crazy.

Docker compose is really great but only works on a single machine. Also docker
compose makes rolling updates a PITA.

What I really want is something like Cloud Run but locally tunable.

Google Cloud Run is essentially: here is a docker image; when you get an HTTP
request, spin up a container; when there are no requests for 10 minutes, spin
it down. If there are more than 80 req/s, replicate to handle the spike.

Now I want to say: here's a bunch of Ubuntu machines with X cores, Y RAM and Z
SSD storage. Go run these images on them and auto-scale up and down.

I know k8s was meant to solve this problem but the layers of abstraction on
top of abstractions are insane.

I just want to specify my compute/storage pool of machines, my docker images,
how they should connect to each other and scale up/down. Boom! It just works.

Is there something that does this?

------
sandGorgon
You can run a Kubernetes stack using k3s on an RPi cluster within minutes, and
it runs far better than all of these.

[https://blog.alexellis.io/test-drive-k3s-on-raspberry-
pi/](https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/)

here's a live walkthrough -
[https://www.youtube.com/watch?v=DjpVtNjiXSU](https://www.youtube.com/watch?v=DjpVtNjiXSU)
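
The install is essentially the k3s quickstart: one command on the server and
one per agent (the server hostname and token are placeholders):

    # on the server node
    curl -sfL https://get.k3s.io | sh -

    # grab the join token from the server
    sudo cat /var/lib/rancher/k3s/server/node-token

    # on each agent node
    curl -sfL https://get.k3s.io | K3S_URL=https://my-k3s-server:6443 \
        K3S_TOKEN=<token-from-above> sh -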

~~~
rkachowski
The author mentions the differences between k3s and microk8s in the post and
concludes that, due to the quirks of k3s (a subset of features running in a
single binary), it's only really preferable if you want to run Rancher.

~~~
sandGorgon
That is not true. We have personally run k3s on AWS, including the cluster
autoscaler and spot instances.

Last I checked, k3s is a fully certified distro of k8s.

------
soulnothing
I recently gave up on home network clustering/Kubernetes. I've since moved to
DigitalOcean. I was working on data-heavy apps for testing. I had several SSD
ZFS pools, which was the driving reason, and a lot of RAM to work with.

My setup did work initially. I had a dedicated server acting as the MetalLB,
at about $20 a month. That was then connected through a WireGuard tunnel to my
home DMZ network, which backed onto a dual Xeon v2 workstation. The latency
was very good, under 20ms, with really good speeds; I'm lucky to have FiOS in
my area.

The Xeon workstation died, so I fell back to several ThinkPads. Those weren't
performant either. So I got several HP t610 nodes: Raspberry Pi speed, but
with SATA 3. Rook took all the CPU, and to boot they ran at 90°C constantly,
even after a repaste and fan mods. I didn't want five little space heaters
next to my desk.

After all this I ditched the home setup. I had gotten parts for a local Epyc
server after my Xeons died, but sold them due to the current situation.

In the end, I had wanted to start a series of blog posts on Kubernetes and
microservice development, to help me learn and flesh out my understanding of
Kubernetes.

I don't feel I wasted several months setting up Kube. I now know the
under-the-covers stuff, having deployed OKD, Rancher, kubespray, and kubeadm.
The initial WireGuard setup helped cement a lot of the internal networking
model; I was already mildly acquainted with it, having worked on Open vSwitch
and OpenStack before.

If you're going local, I would really recommend a Ryzen 1600AF (6c/12t Zen+,
65W) with either an ASRock Rack X470 or a basic B450 board. Both can take ECC,
and should land a little node at 500ish or so. There are also SFF PCs; the
Lenovo ThinkCentre M93 comes to mind. But at that price, $90 a node, I'd
rather move up the stack a little bit.

If you're waiting to buy local, DigitalOcean has been very well priced. If you
want to learn the internals, grab a dedicated server and set up KVM to get
familiar with them. On the upper end you can get a new 3rd-gen Ryzen dedicated
box for $90 a month. I look at it as a $90 class I take once.

I'm not saying Kube is always right. I avoided it for a long time, falling
back to Docker Swarm, but now that I've done this deep dive I feel that, at
its base, it's not much more complicated than Swarm. kubeadm is on a par with
Docker Swarm for ease of use, to me. Add in MetalLB and GitLab to help manage
the cluster, and you have a personal little cloud. It's also good to know for
future job searches.

------
zerpelin68
All of these "trap your mind in the k8s spiral arm and watch how adroit I am
at building something no one would want to troubleshoot" articles are hipster
reminders of why I only create occasional throwaways for this site.

------
maztaim
Curious how the benchmark compares when adding an SSD to the RPi 4?
[https://jamesachambers.com/raspberry-pi-4-usb-boot-config-
gu...](https://jamesachambers.com/raspberry-pi-4-usb-boot-config-guide-for-
ssd-flash-drives/)

I don't think it beats anything, but I am sure IO improves fairly
significantly.

One of the things I found to be a problem is that most container images found
on the various registries are built for x86_64. You would need to rebuild
those containers yourself on the RPi.

------
tr33house
Happy to see this. I think more orgs should be running more things on-prem or
colocated in a data center somewhere. Public clouds can be a rip-off for
larger projects.

~~~
mrweasel
There's a massive difference between running a Kubernetes setup like the one
in the article for your home lab, and attempting to run Kubernetes on-prem for
a business. I would recommend against the latter unless you're sure you can
afford it, in terms of staffing.

------
tuananh
question: if not k8s, what has everyone been using to orchestrate containers
at home? Ideally, I just want something that's easy to set up, has a low
maintenance cost, and is easy to back up and restore if something goes wrong.

i've been using k8s at work for over 4 years, but for home usage it's just too
much.

~~~
dantheman0207
Nomad

~~~
StavrosK
Can you post a short, high-level overview of how you use Nomad? What does it
do for you exactly?

