Deploying k8s has gotten a lot easier these days -- some alternatives in this space:

- https://docs.k0sproject.io (https://github.com/k0sproject/k0s)

- https://k3s.io (https://github.com/k3s-io/k3s/)

k0s is my personal favorite and what I run; the decisions they have made align very well with how I want to run my clusters, whereas k3s is similar but makes slightly different choices. Of course you also can't go wrong with kubeadm[0][1] (rough sketch below) -- it was good enough to use minimally (as in, you could imagine sprinkling a tiny bit of Ansible on top and maintaining a cluster easily) years ago, and it has only gotten better.

[0]: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

[1]: https://github.com/kubernetes/kubeadm
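
For anyone who hasn't tried it, a rough sketch of that kubeadm flow (hostnames, tokens and the pod CIDR are placeholders; you still need a container runtime on each node and a CNI afterwards):

    # on the first control-plane node
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # copy the admin kubeconfig so kubectl works
    mkdir -p $HOME/.kube && sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config

    # install a CNI (Flannel, Calico, ...), then join workers with the
    # command that `kubeadm init` prints out:
    kubeadm join <control-plane-host>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>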




k3s is brilliant. we run production clusters on it.

The problem with k3s is that the architecture-level library choices are a bit dated. Early on there was a particular reason for them - ARM64 (Raspberry Pi) support. But today practically everyone is on ARM - even AWS.

For example, the bundled network library is Flannel. Almost everyone switches to Calico for any real-world work on k3s, yet it is not even a packaged alternative - you have to go do it yourself.
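
For reference, the do-it-yourself swap usually looks roughly like this -- a sketch, not an officially packaged path, and the Calico manifest URL and k3s flags may shift between releases:

    # install k3s with its bundled Flannel (and its network policy controller) disabled
    curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none --disable-network-policy

    # then install Calico yourself (and check its pod CIDR matches k3s's,
    # which defaults to 10.42.0.0/16)
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml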

The biggest reason for this is a core tenet of k3s - small size. k0s has taken the opposite approach here. 50 MB vs 150 MB is not really significant, but it opens up alternative paths which k3s is not willing to take.

In the long run, while I love k3s to bits... I feel that k0s, with its size-is-not-the-only-thing approach, is far more pragmatic and open for adoption.


Agreed on 100% of your points -- you've hit on some of the reasons I chose (and still choose) k0s. Flannel is awesome but it's a little too basic (my very first cluster was the venerable Flannel setup, and I've also done some Canal). I found that k0s's choice of Calico is the best -- I used to use kube-router heavily (it was and still is an amazing all-in-one tool) but some really awesome benchmarking work[0] caused me to go with Calico.
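
For anyone curious, picking the CNI in k0s is just a field in the cluster config; a minimal sketch of a k0s.yaml (values illustrative, field names per the k0s docs):

    apiVersion: k0s.k0sproject.io/v1beta1
    kind: ClusterConfig
    metadata:
      name: k0s
    spec:
      network:
        provider: calico    # kuberouter and custom are the other options

and then you start the node with something like `k0s controller --config k0s.yaml`.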

Most of the other choices that k0s makes are right up my alley as well. I personally like that they're not trying to ride the IoT/edge wave. Nothing wrong with those use cases, but I want to run k8s on powerful servers in the cloud, and I just want something that does its best to get out of my way (and of course, k0s goes above and beyond on that front).

> The biggest reason for this is a core tenet of k3s - small size. k0s has taken the opposite approach here. 50 MB vs 150 MB is not really significant, but it opens up alternative paths which k3s is not willing to take.

Yup! 150 MB is nothing to me -- I waste more space in unused docker container layers -- and since they don't particularly aim for IoT or edge, it's perfect for me.

k3s is great (alexellis is awesome), k0s is great (the team at mirantis is awesome) -- we're spoiled for choice these days.

It's almost criminal how easy it is to get started with k8s these days (and with a relatively decent, standards-compliant setup at that!) -- it almost makes me feel like all the time I spent standing up, blowing up, and recreating clusters was wasted! Though I do wonder if newcomers these days get enough exposure to things going wrong at the lower layers, the way I did.

[0]: https://itnext.io/benchmark-results-of-kubernetes-network-pl...


Actually, k3s has a cloud deployment startup using it too - Civo. I would say that production usage of k3s is outstripping the Raspberry Pi usage... but the philosophical underpinnings remain very RPi-centric.

Things like proxy protocol support (which is pretty critical behind cloud load balancers), network plugin choice, etc. are going to matter a lot.
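
To make the proxy protocol point concrete: with, say, ingress-nginx behind an AWS ELB you end up wiring it on both sides yourself. A rough sketch -- the annotation and ConfigMap key are the documented ones, everything else (names, namespaces) is illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
      annotations:
        # ask the AWS (classic) ELB to speak proxy protocol to its backends
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    spec:
      type: LoadBalancer
      selector:
        app.kubernetes.io/name: ingress-nginx
      ports:
        - name: http
          port: 80
          targetPort: 80
    ---
    # ...and tell ingress-nginx to expect proxy protocol on incoming connections
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      use-proxy-protocol: "true"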


> For example the network library is Flannel. Almost everyone switches to Calico for any real work stuff on k3s.

What's the tradeoff? Why not flannel for Real Work™?


You could certainly use Flannel in production (Canal = Flannel + Calico) but I like the features that Calico provides, in particular:

- network policy enforcement

- intra-node traffic encryption with wireguard

- calico does not use VXLAN (sends routes via BGP and does some gateway trickery[0]), so it has slightly less overhead

[0]: https://stardomsolutions.blogspot.com/2019/06/flannel-vs-cal...
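
The network policy bullet is the big one in practice: Flannel on its own doesn't enforce NetworkPolicy objects at all, while Calico does. A minimal example of the kind of policy that unlocks (plain upstream NetworkPolicy; all names are made up):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-only-frontend
      namespace: my-app
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend
          ports:
            - protocol: TCP
              port: 8080

With Flannel alone that object is accepted by the API server but silently does nothing.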


Is developing locally with one of these k8s implementations a good option? My current workflow is to develop locally with a combination of bare (non-containerized) servers and Docker containers, but all of my deployment is to a hosted k8s cluster.

If developing locally with k8s would likely be a better workflow, are any of these options better than the others for that?


The best solution I have found for developing locally on k8s is k3d [0]. It quickly deploys k3s clusters inside docker, and it comes with a few extras like adding a local docker registry and configuring the cluster(s) to use it. It makes it super easy to set up and tear down clusters.
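
To give a flavour (flag syntax differs a bit between k3d versions; the cluster/registry names are made up and the registry's host port will vary):

    # one command: a k3s cluster in docker plus a local registry wired into it
    k3d cluster create dev --agents 2 --registry-create dev-registry

    # push an image to that registry (`docker ps` shows which host port it got)
    docker tag myapp:dev localhost:<registry-port>/myapp:dev
    docker push localhost:<registry-port>/myapp:dev

    # tear it all down again
    k3d cluster delete dev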

I usually only reach for it when I am building out a Helm chart for a project and want to test it. Otherwise docker-compose is usually enough and is less boilerplate to get an app and a few supporting resources up and running.

One thing I have been wanting to experiment with more is using something like Tilt [1] for local development. I just have not had an app that required it yet.

[0] https://k3d.io/

[1] https://tilt.dev/


The simplest way to bring up a local k8s cluster on your machine for development is to use Kind (https://kind.sigs.k8s.io/).
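
For completeness, the whole flow is a couple of commands (cluster and image names are made up):

    kind create cluster --name dev
    kubectl cluster-info --context kind-dev

    # load a locally built image straight into the cluster's nodes
    kind load docker-image myapp:dev --name dev

    kind delete cluster --name dev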


The best time I have had so far was with dockertest[0] in Go. It allows you to spin up containers as part of your test suite, which you can then test against. So we have one Go package that holds all the containers we need regularly.

The biggest benefit: there is no need to have a docker-compose file or other resources running locally; you can just run the test cases as long as you have Docker installed.

[0] https://github.com/ory/dockertest
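
For anyone who hasn't seen it, a rough sketch of what a dockertest-backed test looks like -- Postgres and the lib/pq driver are just an illustrative dependency, and the calls follow the ory/dockertest README, so double-check against the current API:

    package db_test

    import (
        "database/sql"
        "fmt"
        "testing"

        _ "github.com/lib/pq" // illustrative choice of postgres driver
        "github.com/ory/dockertest/v3"
    )

    func TestWithPostgres(t *testing.T) {
        // connect to the local docker daemon
        pool, err := dockertest.NewPool("")
        if err != nil {
            t.Fatalf("could not connect to docker: %v", err)
        }

        // spin up a throwaway postgres container for this test run
        resource, err := pool.Run("postgres", "13", []string{"POSTGRES_PASSWORD=secret"})
        if err != nil {
            t.Fatalf("could not start container: %v", err)
        }
        defer pool.Purge(resource) // always tear the container down again

        // retry until postgres inside the container actually accepts connections
        var db *sql.DB
        if retryErr := pool.Retry(func() error {
            dsn := fmt.Sprintf("postgres://postgres:secret@localhost:%s/postgres?sslmode=disable",
                resource.GetPort("5432/tcp"))
            var openErr error
            db, openErr = sql.Open("postgres", dsn)
            if openErr != nil {
                return openErr
            }
            return db.Ping()
        }); retryErr != nil {
            t.Fatalf("could not connect to postgres: %v", retryErr)
        }

        // ... run the real assertions against db here ...
        _ = db
    }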


We deploy with k8s but few of us develop with it. Nearly our whole department uses docker-compose to get our dependencies running and to manage our acceptance tests locally. Some people will use our staging k8s cluster via kubectl, and others just lean on our build pipeline (Buildkite + Argo CD), which takes you to staging and then on into production.


I use Minikube. I run `eval $(minikube docker-env)` and push my images straight into it - after patching imagePullPolicy to "IfNotPresent" for any resources using snapshot images, since K8s defaults to IfNotPresent unless the image ends with "snapshot", in which case it defaults to "Always"...
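
A sketch of that loop for anyone who hasn't used it (deployment/image names are made up; note that upstream Kubernetes actually keys the default pull policy off the :latest tag, so the "snapshot" rule above may come from the parent's own tooling):

    # build straight into minikube's docker daemon -- no registry or push needed
    eval $(minikube docker-env)
    docker build -t myapp:snapshot .

    # make sure the pod uses the locally built image instead of pulling
    kubectl patch deployment myapp --type=json -p \
      '[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'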


I had a good time with Kubespray. Essentially you just need to edit the Ansible inventory and assign the appropriate roles.
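
For anyone who hasn't seen it, the Kubespray flow is roughly the following (the inventory group names have shifted across versions, so treat this as a sketch):

    git clone https://github.com/kubernetes-sigs/kubespray && cd kubespray
    pip install -r requirements.txt

    # copy the sample inventory and describe your hosts / roles in it
    cp -rfp inventory/sample inventory/mycluster
    $EDITOR inventory/mycluster/hosts.yaml

    # then one long ansible run stands the cluster up (or upgrades it)
    ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml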


Sure, if it works. Upgrades are somewhat fraught though (I mean, upgrading a 20-node cluster is an hour-long Ansible run, or it was when we were using it).

We switched to rke, it’s much better.


An hour to upgrade a 20-node cluster doesn't seem unreasonable to me - when you are doing a graceful upgrade that includes moving workloads between nodes. I don't know anything about RKE. It might be interesting, but it seems different enough from upstream Kubernetes that you have to learn new things. It seems a bit similar to OpenShift 4, where the cluster manages the underlying machines.


RKE is minutes on the same cluster, and a one-click rollback too. It's just K8s packaged as containers, really.
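
For context, RKE drives everything from a single cluster.yml plus `rke up` (the addresses and SSH user below are made up):

    # cluster.yml
    nodes:
      - address: 10.0.0.10
        user: ubuntu
        role: [controlplane, etcd, worker]
      - address: 10.0.0.11
        user: ubuntu
        role: [worker]

Running `rke up` from the same directory creates (or upgrades) the cluster, and the generated kubeconfig and state files land next to the cluster.yml.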



