MicroK8s – Low-ops, minimal Kubernetes, for cloud, clusters, Edge and IoT (microk8s.io)
357 points by punnerud 7 days ago | 136 comments

Deploying k8s has gotten a lot easier these days -- some alternatives in this space:

- https://docs.k0sproject.io (https://github.com/k0sproject/k0s)

- https://k3s.io (https://github.com/k3s-io/k3s/)

k0s is my personal favorite and what I run; the decisions they have made align very well with how I want to run my clusters, versus k3s, which is similar but slightly different. Of course, you also can't go wrong with kubeadm[0][1] -- it was good enough to use minimally (as in, you could imagine sprinkling in a tiny bit of Ansible and maintaining a cluster easily) years ago, and it has only gotten better.

[0]: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

[1]: https://github.com/kubernetes/kubeadm


k3s is brilliant. We run production clusters on it.

The problem with k3s is that some of its architecture-level library choices are a bit outdated. Early on, there was a particular reason for that -- ARM64 (Raspberry Pi) support. But today practically everyone is on ARM -- even AWS.

For example, the network library is Flannel. Almost everyone switches to Calico for any real production work on k3s, and it is not even a packaged alternative -- you have to go do it yourself.

The biggest reason for this is a core tenet of k3s -- small size. k0s has taken the opposite approach here. 50 MB vs 150 MB is not really significant, but it opens up alternative paths that k3s is not willing to take.

In the long run, while I love k3s to bits, I feel that k0s, with its size-is-not-the-only-thing approach, is far more pragmatic and open to adoption.


Agreed on 100% of your points -- you've hit on some of the reasons I chose (and still choose) k0s. Flannel is awesome but a little too basic (my very first cluster was the venerable Flannel setup; I've also done some Canal). I found k0s's choice of Calico to be the best one -- I used to use kube-router heavily (it was and still is an amazing all-in-one tool), but some really awesome benchmarking work[0] convinced me to go with Calico.

Most of the other choices that k0s makes are right up my alley as well. I personally like that they're not trying to ride the IoT/Edge wave. Nothing wrong with those use cases, but I want to run k8s on powerful servers in the cloud, and I just want something that does its best to get out of my way (and of course, k0s goes above and beyond on that front).

> The biggest reason for this is a core tenet of k3s -- small size. k0s has taken the opposite approach here. 50 MB vs 150 MB is not really significant, but it opens up alternative paths that k3s is not willing to take.

Yup! 150 MB is nothing to me -- I waste more space than that in unused Docker container layers -- and since they don't particularly aim for IoT or edge, it's perfect for me.

k3s is great (alexellis is awesome), k0s is great (the team at mirantis is awesome) -- we're spoiled for choice these days.

It's almost criminal how easy it is to get started with k8s now (and with a relatively decent, standards-compliant setup at that!) -- it almost makes me feel like all the time I spent standing up, blowing up, and recreating clusters was wasted. Though I do wonder whether newcomers these days get as much exposure to things going wrong at the lower layers as I did.

[0]: https://itnext.io/benchmark-results-of-kubernetes-network-pl...


Actually, k3s also has a cloud startup deploying it -- Civo Cloud uses it for their managed clusters. I would say that production usage of k3s is outstripping the Raspberry Pi usage... but the philosophical underpinnings remain very RPi-centric.

Things like PROXY protocol support (which is pretty critical behind cloud load balancers), network plugin choice, etc. are going to matter a lot.


> For example, the network library is Flannel. Almost everyone switches to Calico for any real production work on k3s.

What's the tradeoff? Why not flannel for Real Work™?


You could certainly use Flannel in production (Canal = Flannel + Calico) but I like the features that Calico provides, in particular:

- network policy enforcement (a small example is sketched below)

- in-cluster (node-to-node) traffic encryption with WireGuard

- Calico does not need VXLAN (it distributes routes via BGP and does some gateway trickery[0]), so it has slightly less overhead

[0]: https://stardomsolutions.blogspot.com/2019/06/flannel-vs-cal...
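On the network policy point, here is a minimal sketch of what Calico will actually enforce (plain Flannel accepts NetworkPolicy objects but doesn't enforce them); the namespace is hypothetical:

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: default-deny-ingress
    namespace: my-app          # hypothetical namespace
  spec:
    podSelector: {}            # empty selector = every pod in the namespace
    policyTypes:
      - Ingress                # no ingress rules listed, so all ingress is denied

Apply it with `kubectl apply -f` and pods in that namespace stop accepting traffic unless another policy explicitly allows it.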


Is developing locally with one of these k8s implementations a good option? My current workflow is to develop locally with a combination of bare (non-containerized) servers and Docker containers, but all of my deployment is to a hosted k8s cluster.

If developing locally with k8s would likely be a better workflow, are any of these options better than the others for that?


The best solution I have found for developing locally on k8s is k3d [0]. It quickly deploys k3s clusters inside Docker. It comes with a few extras like adding a local Docker registry and configuring the cluster(s) to use it. It makes it super easy to set up and tear down clusters.

I usually only reach for it when I am building out a Helm chart for a project and want to test it. Otherwise docker-compose is usually enough and involves less boilerplate to get an app and a few supporting resources up and running.

One thing I have been wanting to experiment with more is using something like Tilt [1] for local development. I just have not had an app that required it yet.

[0] https://k3d.io/ [1] https://tilt.dev/
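For a concrete feel, the day-to-day loop is only a few commands (a sketch -- the registry flag's exact syntax has shifted between k3d versions):

  k3d cluster create dev --registry-create dev-registry   # k3s in Docker, plus a local image registry
  kubectl get nodes                                        # k3d merges and switches your kubeconfig for you
  k3d cluster delete dev                                   # tear the whole thing down again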


The simplest way to bring up a local k8s cluster on your machine for development is to use Kind (https://kind.sigs.k8s.io/).
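A minimal Kind loop for comparison (cluster and image names are placeholders):

  kind create cluster --name dev                   # boots a single-node cluster inside a Docker container
  kubectl cluster-info --context kind-dev
  kind load docker-image my-app:dev --name dev     # copy a locally built image into the cluster's nodes
  kind delete cluster --name dev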

The best time I've had so far was with dockertest[0] in Go. It allows you to spin up containers as part of your test suite and then test against them, so we have one Go package that defines all the containers we need regularly.

The biggest benefit is that there's no need for a docker-compose file or other resources running locally; you can just run the test cases as long as you have Docker installed.

[0]: https://github.com/ory/dockertest


We deploy with k8s but few of us develop with it. Nearly our whole department uses docker-compose to get our dependencies running and to manage our acceptance tests locally. Some people will work against our staging k8s cluster via kubectl, and others just use our build pipeline (Buildkite + Argo CD), which takes you to staging the same way it takes you into production.

I use Minikube. I run `eval $(minikube docker-env)` and push my images straight into it -- after patching imagePullPolicy to "IfNotPresent" for any resources using snapshot images, since K8s defaults to IfNotPresent unless the image ends with "snapshot", in which case it defaults to "Always"...
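Roughly, that workflow looks like this (the deployment and image names are hypothetical):

  eval $(minikube docker-env)        # point the local docker CLI at minikube's Docker daemon
  docker build -t my-app:dev .       # the image now exists inside the minikube node, no registry needed
  kubectl set image deployment/my-app my-app=my-app:dev
  kubectl patch deployment my-app -p \
    '{"spec":{"template":{"spec":{"containers":[{"name":"my-app","imagePullPolicy":"IfNotPresent"}]}}}}'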

I had a good time with Kubespray. Essentially you just need to edit the Ansible inventory and assign the appropriate roles.

Sure, it works, but upgrades are somewhat fraught (I mean, upgrading a 20-node cluster is an hour-long Ansible run -- or it was when we were using it).

We switched to RKE; it's much better.


An hour to upgrade a 20-node cluster doesn't seem unreasonable to me when you are doing a graceful upgrade that includes moving workloads between nodes. I don't know anything about RKE. It might be interesting, but it seems different enough from upstream Kubernetes that you have to learn new things -- a bit similar to OpenShift 4, where the cluster manages the underlying machines.

RKE takes minutes on the same cluster, with one-click rollback too. It's really just K8s packaged as containers.

I tried a few incarnations of self-hosted k8s a few years ago, and the biggest problem I had was persistent storage. If you are using a cloud service they will integrate k8s with whatever persistent storage they offer, but if you are self-hosting you are left on your own; it seems most people end up using something like NFS or hostPath, but that ends up being a single point of failure. Have there been any developments on this recently, aimed at people wanting to run k8s on a few Raspberry Pi nodes?

Have you tried using a CSI driver to help you do this? https://kubernetes-csi.github.io/docs/drivers.html

A brief description of what CSI is - https://kubernetes.io/blog/2019/01/15/container-storage-inte...


I've had good experiences using the Rook operator for creating a CephFS cluster. I know that you can run it on k3s, but I don't know whether Raspberry Pi nodes are sufficient. Maybe the high-RAM Raspberry Pi 4 ones.

We do this at Twilio SendGrid

I've had good experiences with Rook on k3s in production. Not on raspis though.

I'm a bit biased but Rook[0] or OpenEBS[1] are the best solutions that scale from hobbyist to enterprise IMO.

A few reasons:

- Rook is "just" managed Ceph[2], and Ceph is good enough for CERN[3]. But it does need raw disks (nothing saying these can't be loopback drives but there is a performance cost)

- OpenEBS has a lot of choices (Jiva is the simplest and is Longhorn[4] underneath, cStor is based on uZFS, Mayastor is their new thing with lots of interesting features like NVMe-oF, there's localpv-zfs which might be nice for your projects that want ZFS, and there's regular host-path provisioning as well).

Another option, which I rate slightly less, is LINSTOR (via piraeus-operator[5] or kube-linstor[6]). In my production environment I run Ceph -- it's almost certainly the best off-the-shelf option due to the features, support, and ecosystem around Ceph.

I've done some experiments with a reproducible repo (Hetzner dedicated hardware) attached as well[7]. I think the results might be somewhat scuffed, but they're maybe worth a look anyway. I also have some older experiments comparing OpenEBS Jiva (AKA Longhorn) and HostPath[8].

[0]: https://github.com/rook/rook

[1]: https://openebs.io/

[2]: https://docs.ceph.com/

[3]: https://www.youtube.com/watch?v=OopRMUYiY5E

[4]: https://longhorn.io/docs

[5]: https://github.com/piraeusdatastore/piraeus-operator

[6]: https://github.com/kvaps/kube-linstor

[7]: https://vadosware.io/post/k8s-storage-provider-benchmarks-ro...

[8]: https://vadosware.io/post/comparing-openebs-and-hostpath/
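If you want to try Rook, the Helm-based install is a short sketch like the following (release name and namespace are your choice; the CephCluster resource itself still has to be declared per the Rook docs, pointing at your raw disks):

  helm repo add rook-release https://charts.rook.io/release
  helm install rook-ceph rook-release/rook-ceph --namespace rook-ceph --create-namespace
  # then apply a CephCluster custom resource describing your nodes/disks -- see the examples in the rook/rook repo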


Distributed minio[1] maybe? Assuming you can get by with S3-like object storage.

[1] https://docs.min.io/docs/distributed-minio-quickstart-guide....


I'm using longhorn, but it's been cpu-heavy.

I really liked longhorn but the CPU usage was ultimately too high for our use case.

seaweedfs seems pretty great for a cloud storage: http://seaweedfs.github.io

Thanks! I am working on SeaweedFS. https://github.com/chrislusf/seaweedfs

There is also a SeaweedFS CSI driver: https://github.com/seaweedfs/seaweedfs-csi-driver


I guess the easiest would be Longhorn on top of k3s.

I've found Ceph is more tolerant of failures and better at staying available. Longhorn was certainly easier to set up and has lower operating requirements, but we encountered outages.

(Full disclaimer - I'm an engineer at Talos, but I believe it's pretty relevant here)

If folks are interested in this kind of K8s deployment, they might also be interested in what we're doing at Talos (https://talos.dev). We have full support for all of these same environments (we have a great community of k8s-at-home folks running with Raspberry Pis) and a bunch of tooling to make bare metal easier with Cluster API. You can also do the minikube type of thing by running Talos directly in Docker or QEMU with `talosctl`.

Talos works with an API instead of SSH/Bash, so there are some interesting ease-of-use features for operating K8s baked in, like built-in etcd backup/restore, k8s upgrades, etc.

We're also right in the middle of building out our next release that will have native Wireguard functionality and enable truly hybrid K8s clusters. This should be a big deal for edge deployments and we're super excited about it.


A few questions if I may. I administer a few thousand machines which are provisioned using Puppet, and I've had my eye on a more immutable style of administration integrating K8S.

  - How does Talos handle first getting on to the network? For example, some environments might require a static IP/gateway for example to first reach the Internet. Others might require DHCP.
  - How does Talos handle upgrades? Can it self upgrade once deployed?
  - What hardware can Talos run on? Does it work well with virtualisation?
  - To what degree can Talos dynamically configure itself? What I mean by this is if that a new disk is attached, can it partition it and start storing things on it?
  - How resilient is Talos to things like filesystem corruption? 
  - What are the minimum hardware requirements?
Please forgive my laziness but maybe other HNers will have the same questions.

Hey, thanks for the questions. I'll try to answer them in-line:

- How does Talos handle first getting on to the network? For example, some environments might require a static IP/gateway for example to first reach the Internet. Others might require DHCP.

For networking in particular, you can configure interfaces directly at boot by using kernel args.

But that being said, Talos is entirely driven by a machine config file and there are several different ways of getting Talos off the ground, be it with ISO or any of our cloud images. Generally you can bring your own pre-defined machine configs to get everything configured from the start or you can boot the ISO and configure it via our interactive installer once the machine is online.

We also have folks that make heavy use of Cluster API and thus the config generation is all handled automatically based on the providers being used.
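To make that concrete, the config-driven bootstrap looks roughly like this (a sketch -- the IPs are placeholders and flags can differ across Talos versions, so check the current docs):

  talosctl gen config my-cluster https://<control-plane-ip>:6443        # emits controlplane.yaml, worker.yaml, talosconfig
  talosctl apply-config --insecure --nodes <control-plane-ip> --file controlplane.yaml
  talosctl apply-config --insecure --nodes <worker-ip> --file worker.yaml
  talosctl --talosconfig ./talosconfig config endpoint <control-plane-ip>
  talosctl --talosconfig ./talosconfig bootstrap --nodes <control-plane-ip>   # one-time etcd bootstrap
  talosctl --talosconfig ./talosconfig kubeconfig --nodes <control-plane-ip>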

- How does Talos handle upgrades? Can it self upgrade once deployed?

Upgrades can be kicked off manually with `talosctl` or can be done automatically with our upgrade operator. We're currently in the process of revamping the upgrade operator to be smarter, however, so it's a bit in flux. As with everything in Talos, upgrades are controllable via the API.

Kubernetes upgrades can also be performed across the cluster directly with `talosctl`. We’ve tried to bake in a lot of these common operations tasks directly into the system to make it easier for everyone.

- What hardware can Talos run on? Does it work well with virtualisation?

Pretty much anything ARM64 or AMD64 will work. We have folks that run in cloud, bare metal servers, Raspberry Pis, you name it. We publish images for all of these with each release.

Talos works very well with virtualization, whether that's in the cloud or with QEMU or VMWare. We've got folks running it everywhere.

- To what degree can Talos dynamically configure itself? What I mean by this is if a new disk is attached, can it partition it and start storing things on it?

Presently, the machine configuration allows you to specify additional disks to be used for non-Talos functions, including formatting and mounting them. However, this is currently an install-time function. We will be extending this in the future to allow for dynamic provisioning utilizing the new Common Operating System Interface (COSI) spec. This is a general specification which we are actively developing both internally and in collaboration with interested parties across the Kubernetes community. You can check that out here if you have interest: https://github.com/cosi-project/community

- How resilient is Talos to things like filesystem corruption?

Like any OS, filesystem corruption can indeed occur. We use standard Linux filesystems which have internal consistency checks, but ultimately, things can go wrong. An important design goal of Talos, however, is that it is designed for distributed systems and, as such, is designed to be thrown away and replaced easily when something goes awry. We also try to make it very easy to backup the things that matter from a Kubernetes perspective like etcd.

- What are the minimum hardware requirements?

Tiny. We run completely in RAM and Talos is less than 100MB. But keep in mind that you still have to run Kubernetes, so there's some overhead there as well. You’ll have container images which need to be downloaded, both for the internal Kubernetes components and for your own applications. We're roughly the same as whatever is required for something like K3s, but probably even a bit less since we don’t require a full Linux distro to get going.


Of all the low-ops K8s distributions, k3s[0] is the best from the perspective of initial setup, maintenance, and usage on less powerful hardware.

There are now even higher-level tools such as k3OS and k3sup that further reduce the initial deployment pain.

MicroK8s prides itself on 'No APIs added or removed'. That's not necessarily a positive in my book. K3s, on the other hand, actively removes the alpha APIs to reduce binary size and memory usage, which works great if you only use stable Kubernetes primitives.

[0] https://k3s.io/
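For reference, the initial setup really is minimal -- roughly this (a sketch; the server IP and token are placeholders, and the --disable flags are just the common ones people swap out):

  # server node
  curl -sfL https://get.k3s.io | sh -s - server --disable traefik --disable servicelb
  # agent nodes, using the token from /var/lib/rancher/k3s/server/node-token on the server
  curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -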


Thanks for mentioning K3sup [0]

I used Microk8s on a client project late last year and it was really painful, but I am sure it serves a particular set of users who are very much into the Snap/Canonical ecosystem.

In contrast, K3s is very light-weight and can be run in a container via the K3d project.

If folks want to work with K8s upstream or develop patches against Kubernetes, they will probably find that KinD[1] is much quicker and easier.

Minikube has also gotten a lot of love recently and can run without a dependency on VirtualBox too.

[0] https://k3sup.dev/ [1] https://kind.sigs.k8s.io/docs/user/quick-start/


This looks interesting, what control plane do the nodes usually connect to? I'm trying to see the use case for me, where I have a main NAS in my house and a few disparate Raspberry Pis, but I'm not sure if I would run the control plane on the NAS or if I would use a hosted one somewhere else.

I've had a number of issues with k3s on very low-spec hardware (typically ARM), where it would take 25-50% of CPU just sitting idle with no pods. I stopped using it for those scenarios a year ago; I wonder if that's been fixed.


I had the same issue, and it wasn't fixed as of my last upgrade. I just let it eat some CPU: my Pi is somewhat busy anyway.

Plain k8s has a fearsome reputation as being complex to deploy, which I don't think is quite deserved. It isn't totally straightforward, but the documentation does tend to make it sound a bit worse than it actually is.

I run a couple of small clusters and my Ansible script for installing them is pretty much:

  * Set up the base system. Set up firewall. Add k8s repo. Keep back kubelet & kubeadm.
  * Install and configure docker.
  * On one node, run kubeadm init. Capture the output.
  * Install flannel networking.
  * On the other nodes, run the join command that is printed out by kubeadm init.
Running in a multi-master setup requires an extra argument to kubeadm init. There are a couple of other bits of faffing about to get metrics working, but the documentation covers that pretty clearly.
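Stripped of the Ansible wrapping, that flow is essentially the following (the join token/hash are placeholders -- kubeadm init prints the real join command for you):

  # on the first control-plane node
  kubeadm init --pod-network-cidr=10.244.0.0/16      # 10.244.0.0/16 is Flannel's default pod CIDR
  kubectl apply -f kube-flannel.yml                  # the manifest from the Flannel repo
  # on every other node, paste the join command kubeadm init printed:
  kubeadm join <api-server-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>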

I'm definitely not knocking k3s/microk8s, they're a great and quick way to experiment with Kubernetes (and so is GKE).


I remember about 5 years ago I tried to deploy it on CoreOS using the available documentation and literally couldn't get it working.

I haven't done a manual deployment since. I hope it has gotten significantly better, and I may be an idiot, but the reputation isn't fully undeserved.

The problem back then was also that this was usually the first thing you had to do to try it out. Doing a complicated deployment without knowing much about it doesn't make it any easier.


Same here. I just wanted to play with it for my toy projects and personal services, so I didn't really push a whole lot, but it just felt like there were too many moving parts to figure out. I didn't need autoscaling or most of the advanced features of k8s, so I just went back to my libvirt-based set of scripts.

I run Kubernetes on a home server, but it took me a couple of weeks of testing and trial and error to arrive at a setup I was happy with, and I already had experience with K8s in the cloud. At the time I was stuck without a work laptop, so I had time to self-educate, but normally I wouldn't have that kind of time to sink in.

Deploying a Kubernetes cluster isn't really too complex, and it doesn't even take that long. It's the long-term maintenance that concerns me.

This concerns me too. What should I be worrying about? The main maintenance problem that I have experienced so far is that automatic major version updates can break the cluster (which is why I now keep back those packages). Are there other gotchas that I'm likely to experience?

Version updates don't normally break the cluster, in my experience, but they might break things like Helm charts.

The thing that concerns me the most is managing the internal certificates and debugging networking issues.


> managing the internal certificates

I haven't yet set it up, but https://github.com/kontena/kubelet-rubber-stamp is on my list to look at.

> debugging networking issues

In this regard, I have had much more success with flannel than with calico. The BGP part of calico was relatively easy to get working, but the iptables part had issues in my set-up and I couldn't understand how to begin debugging them.


TBH, I am not that big a fan of MicroK8s. I have it deployed on a VPS and it's far from stable.

The server itself is probably overprovisioned, but I still struggle with responsiveness, logging, and ingress/service management. What's also funny is that Ubuntu's ufw firewall is not that seamless to use together with MicroK8s.

I am now thinking of moving to k3s. The only thing holding me back is that k3s doesn't use NGINX for ingress, so I'll need to change some configs.

Also, the local storage options are not that clear.


You can easily deploy the Nginx ingress controller on k3s.
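For example, with the bundled Traefik disabled (`--disable traefik` at install time), the Helm-based install of ingress-nginx is roughly (a sketch; chart values are up to you):

  helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  helm repo update
  helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace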

I moved from k3s to MicroK8s for local development. I gave up on k3s because I needed the Calico CNI and it was a pain to set up; on MicroK8s it's just `microk8s enable calico`. I also found k3s a bit too opinionated, with the default Traefik ingress and service-lb.

Traefik is probably the best Ingress out there capability-wise for now, I think. I've written a bit about it[0] before, and IMO that choice is a good one. I've even used it to do some fun external-in SSH connections[1]. I also use it to run a multi-tenant email setup (Haraka at the edge + maddy). It's not like NGINX can't run SMTP or expose other ports, but Traefik is easier to manage -- CRDs instead of a ConfigMap update.

[0]: https://vadosware.io/post/ingress-controller-considerations-...

[1]: https://vadosware.io/post/stuffing-both-ssh-and-https-on-por...
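As a sketch of the CRD style being referred to -- routing raw SMTP through Traefik with an IngressRouteTCP (this assumes Traefik's v1alpha1 CRDs are installed and an 'smtp' entryPoint is defined in its static config; the names are hypothetical):

  apiVersion: traefik.containo.us/v1alpha1
  kind: IngressRouteTCP
  metadata:
    name: smtp-edge              # hypothetical name
    namespace: mail              # hypothetical namespace
  spec:
    entryPoints:
      - smtp                     # assumes an entryPoint of this name listening on port 25
    routes:
      - match: HostSNI(`*`)      # plain TCP pass-through, no SNI-based routing
        services:
          - name: haraka         # hypothetical backend Service
            port: 25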


Maybe, but I prefer to decide for myself, and in my case I need to test different ingress controllers with our product. I had trouble getting the NGINX ingress to work on k3s. FWIW, Project Contour is also quite nice and has dynamic config reload: https://projectcontour.io/

Is Traefik difficult to initially configure? I use the default ingress-nginx controller on my home lab setup (separate from the “official”, non default & non-free nginx-ingress, absolutely terrible name choices) and it seems to be ok for smaller use cases. It’s not without its idiosyncrasies though.

At previous employment with large scale clusters (thousands of nodes) Traefik seemed to be heavily preferred by the SRE’s in my org.


I personally think it's not that bad -- the documentation is a bit overwhelming because there are so many ways to configure it (some people are running with only docker, others using the CRDs, some people are using the built-in Ingress support, some using the new Gateway stuff). As with most things, if you read and mostly digest the documentation, you won't feel lost when it comes to setting it up -- I haven't run into any corners that were too hard to figure out or inconsistent (which is even worse).

You could get Traefik working inside your existing cluster actually by just letting NGINX route to it and seeing how easy it is to use that way -- though that may be more difficult than just spinning up a cluster on a brand new machine (or locally) and feeling your way around.

I can say that Traefik's dashboard makes it much easier to debug while it's running as it gives you a fantastic amount of feedback, Prometheus built in, etc.


I think they've moved off the Traefik ingress in favor of others out of the box FWIW.

Looking forward to this being decoupled from snapd eventually. Until then, there's not the faintest hope I'd touch this when any alternative exists where snapd can be avoided.

I feel like snapd must be hindering the uptake of microK8s, surely? Canonical is pushing it hard, but snapd is a hard no.

Every time I see it mentioned, I check to see if they've ditched snapd yet. Alas, today is not that day.


Is K8S still eating so many CPU cycles while idle?

Last year I checked and every physical machine in a K8S cluster was burning CPU at 20-30% - with zero payload, just to keep itself up!

Don't you feel that this is totally unacceptable in a world with well-understood climate challenges?


It still sucks unfortunately

I thought Kubernetes is not great for environments with poor network connectivity, which is quite common when dealing with Edge and IoT scenarios. Has that changed?

Yes it's changed massively, but it's recent - only since 2019.

I am the creator of the k3sup (k3s installer) tool that was mentioned and have a fair amount of experience with K3s on Raspberry Pi too.

You might also like my video "Exploring K3s with K3sup" - https://www.youtube.com/watch?v=_1kEF-Jd9pw

https://k3sup.dev/


What has been changed to improve k8s for applications with poor connectivity between workers on the edge and the control plane in the cloud?

Other than bugs around bad connections causing hangs (the kubelet is less vulnerable to pathological failures in networking causing it to stall), nothing significant.

Kube is designed for nodes to have continuous connectivity to the control plane. If connectivity is disrupted and the machine restarts, none of the workloads will be restarted until connectivity is restored.

I.e. if you can have up to 10m of network disruption then at worst a restart / reboot will take 10m to restore the apps on that node.

Many other components (networking, storage, per node workers) will likely also have issues if they aren’t tested in those scenarios (i’ve seen some networking plugins hang or otherwise fail).

That said, there are lots of people successfully running clusters like this as long as worst case network disruption is bounded, and it’s a solvable problem for many of them.

I doubt we’ll see a significant investment in local resilience in Kubelet from the core project (because it’s a lot of work), but I do think eventually it might get addressed in the community (lots of OpenShift customers have asked for that behavior). The easier way today is run edge single node clusters, but then you have to invent new distribution and rollout models on top (ie gitops / custom controllers) instead of being able to reuse daemonsets.

We are experimenting in various ecosystem projects with patterns that would let you map a daemonset on one cluster to smaller/distributed daemonsets on other clusters (which gets you local resilience).


Your reply provides a lot of context. Thanks!

Do you mean connectivity to the outside, or inside the cluster? The examples of Kubernetes and similar things in such scenarios I've seen usually had stable connectivity between nodes. E.g. an edge scenario would be one tiny well-connected cluster per location, remote-controlled over the (bad) external link through the API.

I meant intra-cluster communication between nodes, when some nodes are on the Edge, some are inside the datacenter. The Edge may have pretty good overall connection to DC, but have to work with intermittent connectivity problems like dropping packets for several minutes, etc., without going crazy.

> I meant intra-cluster communication between nodes, when some nodes are on the Edge, some are inside the datacenter.

Don't do this. Have two K8s clusters. Even if the network were reliable you might still have issues spanning the overlay network geographically.

If you _really_ need to manage them as a unit for whatever reason, federate them (keeping in mind that federation is still not GA). But keep each control plane local.

Then setup the data flows as if K8s wasn't in the picture at all.


Yeah, spanning a cluster from DC to Edge is probably not a good idea, but also generally not what I've seen suggested.

Edge<->DC "dropping packets for several minutes"?

Where have you been suffering from this?


Real case - shops have "terminals" installed on site, various physical locations. Some of them have networking glitches 1-2 times a month on average, usually lasting a couple of minutes.

I don't want to have to restart the whole thing on each site every time it happens. I'd like a deployment/orchestration system that can work in such scenarios, showing a node as unreachable but then back online when it gets network back.


> showing a node as unreachable but then back online when it gets network back.

Isn't that exactly what happens with K8s worker nodes? They will show as "not ready" but will be back once connectivity is restored.

EDIT: Just saw that the intention is to have some nodes in a DC and some nodes on the edge, with a single K8s cluster spanning both locations and an unreliable network in between. No idea how badly the cluster would react to this.


> I thought Kubernetes is not great for environments with poor network connectivity,

No, it's ok. What you don't want to have is:

* Poor connectivity between the K8s masters and etcd. The backing store needs to be reliable or things don't work right. If it's an IoT scenario, it's possible you won't have multiple k8s master nodes anyway; if you can place etcd and the k8s master on the same machine, you are fine.

* You need a not-horrible connection between masters and workers. If connectivity gets disrupted for a long time and nodes start going NotReady, then, depending on how your cluster and workloads are configured, K8s may start shuffling things around to work around the (perceived) node failure (which is normally a very good thing). If this happens too often and for too long, it can be disruptive to your workloads. If it's sporadic, it can be a good thing to have K8s route around the failure.

So, if that is your scenario, then you will need to adjust. But keep in mind that no matter what you do, if the network is really bad, you will have to mitigate the effects regardless, Kubernetes or not. I can only really see a problem if a) the network is terrible and b) your workloads are mostly compute-bound and don't rely on the network (or they communicate in bursts). Otherwise, a network failure means you can't reach your applications anyway...
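One knob worth knowing here: per-pod tolerations for the not-ready/unreachable taints control how long K8s waits before evicting and rescheduling, which is the main thing you'd tune for flaky links. A sketch (the names and the 600-second value are just examples):

  apiVersion: v1
  kind: Pod
  metadata:
    name: edge-app                 # hypothetical
  spec:
    containers:
      - name: app
        image: my-app:dev          # hypothetical image
    tolerations:
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 600     # stay bound through 10 minutes of the node being unreachable
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 600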


MicroK8s is great if you're lazy. I've recently built a small-ish 3-node cluster hosting internal apps for a few hundred users, and pretty much the only setup I needed to do was: install it with Snap*, enable a few plugins (storage, traefik, coredns), run the join command on each node, and set up a basic load balancer using HAProxy** and keepalived***.

* I don't like Snap. Like, a lot. But unfortunately there aren't any other options at the moment.

** I have HAProxy load-balancing both the k8s API and the ingresses. Both on L4 so I can terminate TLS on the ingress controller and automatically provision Let's Encrypt certs using cert-manager[1].

*** Keepalived[2] juggles a single floating IP between all the nodes so you can just run HAProxy on the microk8s nodes instead of having dedicated external loadbalancers.

[1] https://cert-manager.io/docs/

[2] https://github.com/acassen/keepalived
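For anyone curious, the cluster part of that setup really is only a handful of commands (a sketch; addon names vary a bit between MicroK8s releases -- traefik lives in the community addons on newer ones):

  # on the first node
  sudo snap install microk8s --classic
  microk8s enable dns storage traefik
  microk8s add-node                          # prints a join command containing a one-time token
  # on each additional node, paste that join command:
  microk8s join <first-node-ip>:25000/<token>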


Is Snap the only way to install this on linux? This is a non-starter for me.


You probably meant "it looks like it" or "it doesn't look like there's another (supported) way". Really sad, and it's a "classic" snap, so you don't get any isolation benefits -- just the performance penalties and lots of mounted snap filesystems.

The MicroK8s team is actively working on a "strict" snap https://github.com/ubuntu/microk8s/issues/2053. There you will get all the isolation benefits and security enhancements. https://snapcraft.io/docs/snap-confinement

Now with a Windows installer. This could make life a lot easier in many dev environments.

Having a Raspberry Pi lying idle in a drawer somewhere, I wonder if others have installed MicroK8s or k3s on theirs.

What kind of workloads are you running on it? For whoever has OpenFaaS installed on their RPi, what type of functions are you running?


I mostly gave up on containers on Pis due to the overhead (disk space, deployment hassles with private registries, etc.). Built my own mini-Heroku and run all my services/functions there: https://github.com/piku

+1 to piku. I use it on a homelab server (not a Raspberry Pi) and just love how simple it is. It sets up a Python deploy environment for web apps that is exactly how I'd personally hand-configure the same in production (nginx, uwsgi, git). The best part of the project is that the entire thing is 1,000 lines of open source and readable Python code, so there's truly no magic whatsoever.

My blog used to run on it until I turned it into a static site, but the builder “function” now happily lives alongside a Node-RED install and a bunch of other things on a tinier VM.

This is honestly just awesome. I love how it just uses git, no web interface or complex configuration needed.

Thanks! I got fed up with massive manifests and was enamoured with Heroku's approach, so I looked at uwsgi, realised it was a damn good process supervisor, and just did it.

Can you run microk8s or k3s on a single server/device? Even if you can, seems like the wrong tool for the job with unnecessary complexity...

I run k3s on a single node. It used to be two, but I consolidated because this isn't a production use case for me. Availability isn't the point. If I have to turn the thing off for an hour every year or two to fix something, sure, fine.

The real value I get is Infra as Code, and having access to the same powerful tools that are standard everywhere else. I can drop my current (dedicated) hardware and go publish all my services on a new box in under an hour, which I did recently.

From my point of view, I already pay the complexity by virtue of k8s being The Standard. The two costs of complexity are 1) Learning the Damn Thing 2) Picking up the pieces when it breaks under its own weight. 1) I already pay regardless, and 2) I'm happy to pay for the flexibility I get in return.


Both k3s and MicroK8s support single-node and multi-node setups. Now, there's an open debate about whether running K8s locally is the right approach for container development. Kelsey Hightower and James Strachan actually shared opposite opinions on this in a recent industry report published by Canonical. https://juju.is/cloud-native-kubernetes-usage-report-2021#ku...

(Disclaimer: I work for Canonical)


How is persistent storage handled on microk8s (or k3s)?

If you want more than just local disk (a directory/path on the node the pod currently resides on), you will have to deploy your own storage provider to handle PVCs (on k3s at least). According to the MicroK8s docs, it's the same there.
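For the built-in local options, k3s ships a local-path provisioner and MicroK8s has a hostpath storage addon; a claim against them looks roughly like this (the name is hypothetical; the class is `local-path` on k3s and `microk8s-hostpath` on MicroK8s):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: data                       # hypothetical name
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: local-path     # node-local, so pods using it are effectively pinned to that node
    resources:
      requests:
        storage: 5Gi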

OpenEBS is a plug-in too which opens up lots of options

ok. thanks. Do you have any recommendations for lightweight solutions? I ran a ceph cluster a few years ago but remember it being quite a bit of work.

It hadn't occurred to me that there were multiple different Kubernetes distributions. I thought it was essentially just a single app, or rather a single collection of apps.

Are these essentially different choices available for people who wish to self host their k8s clusters?

Are there any good resources that summarise the different advantages of each choice? What is the same between all the choices, what are the differences?


Tons of choices. k8s is really just a definition of an API, and there are lots of things you can plug in at various points to implement parts of the API (or even completely extend it). Read the official docs -- they'll walk you through the design and capabilities. As far as what's available, well, it's a bit like trying to answer "what's the newest programming language?" You probably want to plug yourself into the k8s community and start looking at subreddits, SIG meeting notes, community news, etc. New things are popping up literally every day.

They're Kubernetes distributions. So think of Kubernetes like Linux, and kubeadm, OpenShift, k3s like Ubuntu, Red Hat etc.

What would be the equivalent of the kernel (the bit they all share)?

The k8s API: https://kubernetes.io/docs/concepts/overview/kubernetes-api/

Most distributions will use the stock kubelet process (what runs containers on nodes), but you don't have to--there are kubelet compatible processes to run VMs instead of containers, run webassembly code (krustlet), etc.

Everything in k8s can be swapped out and changed, the API spec is the only constant.


How does it compare against minikube? Seems to be based on Snaps vs minikube's VM-based approach. Any other major pros/cons?

From my experience of using microk8s for mostly offline development and trying minikube once:

Microk8s pros:

- bind-mount support => it is possible to mount a project including its node_modules and work on it from the host while it hot-reloads in the pod.

- The addons for dns, registry, istio, and metallb just work.

- Feels more snappy than minikube.

Microk8s cons:

- Distributed exclusively via snap => can't be easily installed on nix/nixos.


microk8s also makes it super easy to have a multi-node cluster, even with HA.

Thanks!

I used Minikube constantly for development. The best way to run it is directly on Linux, in which case it does not require a VM and is very stable. VMs on the other hand seem to cause a lot of problems--it's not especially stable on Mac OS and does not seem to survive reboots very well. (YMMV.)

I thought Minikube would not cluster multiple machines?

Yes, it seems that's not possible. You can EMULATE a multi-node cluster, but only inside one big host machine.

Minikube is, in my humble opinion, very effective for STUDYING K8s, because it is always closely aligned with K8s releases.

I do not think Minikube is a good choice for production, but hey, I could be wrong... does someone want to share any experience?


Yes, definitely not production. Using them (MicroK8s, Minikube, K3s) for local testing mostly.

We evaluated k3s vs MicroK8s and sided with k3s. MicroK8s seems a little complicated to me.


I see nothing of particular utility for IoT here. I'll call that marketing.

After shipping a number of MQTT backends for deployments of a few million devices each, I'd say the most troublesome part was getting the anycast network set up -- uplinks, routing -- with building the "messaging CDN" second to that.

It's very hard to do without having your own ASN, within which you have complete freedom.

Even in the case of a fire/burglary alarm client with 8 million shipped units, ~4M of them constantly online and all sending frequent pings to signal that they are up, I haven't seen any need for any kind of sophisticated clustering.


If anybody seriously believes that Kubernetes is good on Edge and IoT, our industry is in deep trouble. I usually like Canonical, but this is next-level bullshit.

Care to explain why? And what industry you're talking about?

Kubernetes has made efforts for nodes to work with poor network connectivity.

The node requirements aren't that big.

A lot of use cases are relatively easy to containerise.

And edge / IoT devices are getting more powerful as well.

They aren't talking about consumer IoT here as well.

It has become a meme that Kubernetes is complicated. But it solves a lot of orchestration problems that would need to be implemented in other ways.


IDK about OP, but when I think IoT, I think small single-purpose devices: a Ring doorbell, a "smart" thermostat or fire alarm, a security camera. There are at most a couple of processes running -- what is there to orchestrate on an IoT doorbell?

On the backend, where you have services ingesting, processing, and presenting all that data, sure, Kubernetes that part up -- but that doesn't seem to need a special distro of Kubernetes.


    when I think IoT, I think small single-purpose devices: a Ring doorbell, a "smart" thermostat or fire alarm, a security camera. There are at most a couple of processes running -- what is there to orchestrate on an IoT doorbell?
Exactly!

Don't worry -- as evidenced by recent posts, e.g. [1], k8s entered the trough of disillusionment in its hype cycle some time ago, though the Stockholm syndrome is particularly strong with this one, given its labyrinthine complexity.

[1]: https://news.ycombinator.com/item?id=27903720


The hype cycle is still strong - the king is naked and you are getting downvoted for saying it.

Edge is a very overloaded term that, IMO, just means a server not in a datacenter. It could be a thick edge with 8 ESX hosts, hundreds of cores, and gigabytes of memory in a shop, or a little single-node box in a closet at someone's home, or a light switch.

Technically they're all edge. Nobody thinks K8s can run on the last one, but it might work on the first two.


Canonical is just one of several players in this area. There's also k3s and k0s, for example.

I suppose it might depend on what you count as "edge", but we're using kubernetes to distribute a complex product to customers onprem. The product has multiple databases, services, transient processes, scheduled jobs, and machine learning. It needs to be able to run on a single machine or a cluster depending on customer requirements. It needs to support whatever Linux variant the customer allows. Using Kubernetes solves a lot of problems for us.


on-prem is not edge IMO. Edge is something small, far away from datacenters, close to customers with limited compute and storage capacity. Did I get this wrong?

For example, SQLite advertises itself as a "database on the edge".


"Edge" is kind of a relative term. E.g. relative to a public cloud, anything not in that cloud could be considered edge. Or, if a company's systems are in a shared datacenter somewhere, then the systems in their actual offices might be considered "edge".

Just as an example (not saying it's authoritative):

> "Edge computing is often referred to as 'on-premise.'"

   -- https://dzone.com/articles/demystifying-the-edge-vs-cloud-computing
But these days, people even refer to systems hosted on a company's own cloud account as "on-premise", so these terms get increasingly fuzzy over time.

Btw since you mention SQLite, the k3s system uses SQLite instead of the default etcd used by Kubernetes, for the reason you mention. These systems really are intended to support true edge scenarios. K3s is distributed as a single 40 MB binary, and you can run it on non-PC edge hardware.

For anyone who runs a system that involves multiple containers on a single machine, it can be worth looking at systems like k3s as an alternative to e.g. Docker Compose. There's not much downside other than some learning curve, and it gives you a wealth of capabilities that you otherwise tend to end up hacking together with scripts or whatever.


I guess that depends on your definition of "Edge". There are quite a few "hybrid" (cloud plus on-prem) use cases where running K8s on-prem can be a good option. If you check https://azure.microsoft.com/en-us/products/azure-stack/edge/... those devices can easily run a K8s cluster.

Why? In its smallest form k8s is a few go processes handling a proxy and some container scheduling/runtime logic.

Sure if you run an entire control plane on the edge you're adding more complexity... but you don't have to do that, and control planes are complex beasts by their nature.


IoT devices often don't even have the memory required for running a k8s worker process. They should run a low-cost, very small embedded system. Most of them don't even have a Linux kernel.

If you want "k8s but less" I highly recommend Nomad from Hashicorp and the Hashi-stack in general. It's so damn good.

I use MicroK8s for local testing of Kubernetes clusters on my laptop and it works pretty well. I like that I can just run microk8s reset to clear out the state so I can redeploy everything without fear of some lingering configuration left behind. I have yet to deploy it to an actual server, though I would definitely be interested if it could do some of what EKS can do (mainly creating EBS volumes and load balancers).

Are you serious? microk8s reset takes ages. It's faster to uninstall the whole thing. You can also just delete whole namespaces; that should normally be enough.

Yeah man, I'm serious. It took 5 minutes 30 seconds on my machine to run microk8s reset. This is well within the tolerance window for something I choose to run every 2 to 6 weeks. This is not part of my normal development strategy. Deleting whole namespaces is not a good idea for things like kube-system, ingress, etc. for obvious reasons.

Indeed, sometimes it can be faster to nuke the cluster but this is also unsafe. microk8s reset asks Kubernetes to delete the resources created, and that can take time.

Interesting that this is supported and maintained by Canonical (Ubuntu). Maybe it will be shipped inside Ubuntu one day.

In some sense, it already is. Run `snap install microk8s` and it's installed.
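For the record, the whole quickstart is only a few commands (the addon list is just an example set, and addon names shift a bit between releases):

  sudo snap install microk8s --classic
  sudo microk8s status --wait-ready          # wait for the node to come up
  sudo microk8s enable dns storage ingress   # pick whichever addons you need
  sudo microk8s kubectl get nodes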

It would also be nice if it could run non-containerized applications like Nomad can.

Another vote for K3s. Easy to set up on a Pi cluster.

Pity Istio doesn’t offer an ARM build


There are unofficial builds at https://github.com/querycap/istio, and it seems like it's got to a point (with Envoy now supporting arm64) that it could be upstreamed. I'll keep track of it for you!

- istio SC member


Their tutorial [1] is a bit unclear on which Pis can be used -- I have a stack of Pi Zeros not doing anything at the moment. Do you happen to know if they could handle it (just for a learning project at home)?

The docs recommend 4 GB of memory, but could I run a huge swap partition to make up for that?

[1] https://ubuntu.com/tutorials/how-to-kubernetes-cluster-on-ra...


Can it replace Docker on my local dev machine?

The classic "it depends" answer applies here, along two axes: container assembly, and container execution.

If you mean "docker" the binary used to build container images, you don't even need that right now -- there are multiple projects that will build container images without involving docker or dockerd.

If you mean "dockerd" the container management engine that one controls via "docker" to start and stop containers, then yes microk8s will help as they appear to use containerd inside the snap (just like kind and likely all such "single binary kubernetes" setups do): https://github.com/ubuntu/microk8s/blob/master/docs/build.md...


Do you think Docker will disappear, and everybody will run some kind of "single binary Kubernetes" on their machine plus a tool to build the images?

Unlikely, although I for sure don't trust dockerd for my actual clusters anymore. I still use docker, and its dind friend, to build images because the whole world uses Dockerfile and the compatibility story between those alternative tools and the sea of Dockerfiles out there is ... tricky

I would love for something with more sanity to catch on, because Dockerfile as a format has terrible DX, but "wishes horses etc etc."


Is this a minikube alternative?

It is, and it doesn't require a VM to run the K8s cluster.


