
K3s – Lightweight Kubernetes - kadel
https://k3s.io
======
erulabs
This is extremely welcome. Kube needs to glom on to all available
architectures to fulfill its destiny, to some extent, and the memory/cpu usage
of kubeapi et al. can be prohibitive for small setups. Having to revert to
ansible/systemd in "some cases" really weakens the story of a universal
datacenter O/S.

My hope is for more competition for development k8s (or k3s!) - minikube and
docker-for-desktop are plagued with high CPU usage problems, and OSX-wielding
developers are still doubtful. It will require a lot of work and elbow grease
to convince them. I've built a hosted platform that tries to address this, but
it's a complex education problem, which I am not well suited to solve :(

~~~
geerlingguy
Agreed. Kubernetes (installed via kubeadm) on 1 GB of RAM is a shaky
proposition at best; I've been managing a cluster on Raspberry Pis and the
master often hits swap managing three Nodes with a dozen or so Pods.

~~~
vhost-
K8s doesn’t support swap, you have to disable swap to even run kubelet.

~~~
phire
There is an override flag.

It works, but it kind of invalidates all the assumptions k8s makes about free
memory on a node.
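For reference, the override is kubelet's `--fail-swap-on` flag. A sketch of wiring it in via a systemd drop-in; the path and the `KUBELET_EXTRA_ARGS` convention are assumptions that vary by distro and kubeadm version:

```ini
# /etc/systemd/system/kubelet.service.d/90-allow-swap.conf (illustrative path)
[Service]
# Let kubelet start even though swap is enabled. This undermines the
# scheduler's memory accounting, so it is an explicit opt-in footgun.
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
```

Then `systemctl daemon-reload && systemctl restart kubelet`.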

------
sandGorgon
Oh nice - this is by the Rancher guys. Would love to compare this not with
k8s, but with Docker Swarm.

Docker Swarm on Raspberry Pi is a very common thing. So, from a performance
perspective, these are on par.

Another question is around ingress and network plugins - is it a seamless
"batteries included" experience? Because these are two of the biggest pains
in k8s to decide on and set up.

~~~
pas
Based on the README [0], quickly comparing to Swarm:

\- this has the same great K8s API that Swarm lacks. (Deployment is a first-
class citizen in kube land, but you have to make do with the service YAMLs in
the Swarm sphere.)

\- k3s lacks some in-tree plugins that Swarm might have (e.g. mounting cloud-
provider-managed block devices), but there are out-of-tree addons

\- sqlite instead of etcd3 [though etcd3 is still available], so out of the box
k3s is not HA-ready (if I interpret this [1] right, there's one API server, and
HA is something in development)

\- this seems more lightweight than a Docker setup in some sense (it uses
containerd, not the full Docker engine), but of course this needs a proper
evaluation

\- ingress plugins: good question. It should work with Traefik [2]

[0] https://github.com/rancher/k3s/blob/master/README.md#what-is-this

[1] https://github.com/rancher/k3s/blob/master/README.md#server-ha

[2] https://github.com/rancher/k3s/search?q=ingress&type=Commits
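To make the comparison concrete, the multi-node quickstart from the k3s README at the time looks roughly like this (flags and the token path are as the README documents them; verify against current docs before running):

```shell
# on the server
sudo k3s server &
# the server writes a kubeconfig to /etc/rancher/k3s/k3s.yaml and a join
# token to /var/lib/rancher/k3s/server/node-token
# on each agent, pointing at the server:
sudo k3s agent --server https://myserver:6443 --token ${NODE_TOKEN}
```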

~~~
sheeshkebab
> great K8S API, that Swarm lacks

Hmm... nothing against k8s, but its deployment API is an abomination on par
with AWS CloudFormation.

You need teams of YAML engineers to manage these things.
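For a sense of scale, even a near-minimal Deployment takes this much YAML (a generic sketch, not from any particular project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.15
          ports:
            - containerPort: 80
```

And that is before Services, Ingresses, and ConfigMaps join the pile.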

~~~
013a
https://twitter.com/kelseyhightower/status/935252923721793536

That's the exact problem Rancher itself is trying to solve, and it does a
pretty fantastic job at it.

Though suggesting it's "on par" with CloudFormation suggests that you either
know a ton about CloudFormation (anything becomes second nature if you're
skilled in it) or you don't know much about either of them. Kubernetes isn't
_that_ bad.

~~~
pcr0
It's definitely bad when you have thousands of lines of YAML configuration to
maintain.

On the other hand, a thought-out declarative language like Hashicorp's HCL is
much saner, thanks to IDE code completion/refactoring and static typing.

~~~
pas
K8s YAMLs are the baseline, and there are projects building more ergonomic
description interfaces on top. They are ugly because they are also statically
typed and very extensible.

YAML is 1-1 mappable to HCL - because both map to JSON - but the Hashicorp
interface seems nicer because it's simpler.
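The mapping is easy to see with a toy example: the same attributes in all three notations (a sketch - HCL's exact JSON encoding has subtleties around repeated blocks that are glossed over here):

```
# YAML
replicas: 2
image: "nginx:1.15"

# HCL
replicas = 2
image    = "nginx:1.15"

# JSON (the shared data model)
{ "replicas": 2, "image": "nginx:1.15" }
```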

------
zoobab
I used k3s a while ago on my laptop to do some CI/CD with gitlab and
arduino-cli to flash an ESP8266 device:

[https://fosdem.org/2019/schedule/event/hw_gitlab_ci_arduino/](https://fosdem.org/2019/schedule/event/hw_gitlab_ci_arduino/)

------
Jedi72
"Any sufficiently complicated concurrent program in another language contains
an ad hoc informally-specified bug-ridden slow implementation of half of
Erlang." \- Virding's Law

~~~
cdoxsey
Are you talking about Go, the language this is written in?

Go's concurrency features are formally specified, are not ad-hoc (they are a
major feature of the language, and the product of multiple language design
iterations) and are certainly not bug-ridden.

Or are you talking about using Kubernetes to manage workloads on machines?

How does Erlang solve running my python webapp or my SQL database on my ec2
nodes, or in the case of this project, on an IoT device?

~~~
Jedi72
I am not talking about Go. I don't care what your language is; the true
underlying reasoning for your engineering choices is independent of language.
It just so happens that these problems have been around for ~30 years, and
there has been significant progress in developing tooling to face them in the
domains within the BEAM languages' sphere of influence. Just like how Ecto,
despite being a great tool, does not free us from knowing about relational
databases. If you approach the tooling independently, though, as a true
knowledge expert does, you will see the progress made by other giants in these
fields who are not afraid of specialised tools.

~~~
johnmarcus
When I came across Erlang last year at a new company, I thought it was
interesting how many problems Erlang solved that Kubernetes also solves: self-
registering named services, processes that crash and restart automatically,
health checks, etc.

Unfortunately, to use all those features the services needed to be written in
Erlang, of course, which isn't bad in itself, but is unfortunately not the
reality in today's multi-lingual atmosphere. It speaks to how well Erlang was
designed and written: it took a lot of real-life operational concerns into
account in its design, a concept few other languages even attempt to address.

~~~
jimmy1
"Who cares that these problems are solved in Erlang/BEAM if hardly anyone is
writing end user software in Erlang/BEAM/Elixir", basically.

My number one reason for not using Erlang is lack of static typing. Dynamic
type systems are great for small teams, not so great for "enterprise-wide"
development. (I know this horse has been beaten to death here, trust me, I
have heard the arguments for/against millions of times, I remain unconvinced).

Also the answer to this question is interesting to me as well
http://erlang.org/pipermail/erlang-questions/2008-October/039261.html
(granted, this is from Oct 2008): apparently if you were to attempt to add
static type checking to Erlang, it would have to fundamentally change some of
the benefits Erlang brings, such as message passing and handling process
death. I haven't been part of the Erlang scene for a while, but would be
interested to hear a rebuttal to that.

------
tecleandor
I've been laughing to myself for some minutes, not because the project is
uninteresting (I'll be testing it next week on ARM, probably), but because
I've decided to call this "Kubernetres" ('tres' being three in Spanish)

~~~
dajohnson89
why isn’t it called k7?

~~~
dankohn1
I believe it goes:

Kubernetes -> K8s -> "kates" -> k3s

~~~
caffeineaholic
This is correct we shortened it to "kates" -> k3s (Rancher employee)

------
exabrial
> Uses only 512 MB of RAM.

One can literally run a full-fledged Java EE server in 64MB of RAM. What?

~~~
jcadam
> One can literally _ruin_ a full-fledged Java EE server in 64MB of RAM. What?

Most definitely.

~~~
exabrial
That was hilarious. Thanks

------
diehunde
The "Great for" section doesn't mention dev environments. Could this replace
stuff like minikube or microk8s?

------
buckhx
Would love this or something similar for local development. Currently using
the Docker for Mac k8s and it shreds my machine; I end up with an hour of
battery life. Had a similar experience with minikube.

~~~
xur17
I would as well. Curious if anyone has tried this for local dev.

Also curious if this can run on osx.

~~~
Ramiro
I tried it on my mac with the docker-compose from master and it works pretty
well.
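For anyone else trying this, the rough shape of it (assumes Docker for Mac; the kubeconfig filename is an assumption, so check the repo's compose file):

```shell
git clone https://github.com/rancher/k3s.git && cd k3s
docker-compose up -d
# the compose setup writes a kubeconfig into the working directory
kubectl --kubeconfig kubeconfig.yaml get nodes
```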

------
comboy
Not trying to be snarky, just a sincere question - why would you use
kubernetes for something like IoT?

~~~
outime
I attended KubeCon China last year and there was a very good example of a K8s
setup for IoT that scales pretty well. Check these links if you’re curious:

\- https://schd.ws/hosted_files/kccncchina2018english/18/ShengLiang-En.pdf

\- https://thenewstack.io/rancher-takes-kubernetes-management-to-china/

Disclaimer: not affiliated with any of the companies above. Edit: formatting.

~~~
perone
Is there a link to the talk?

------
dankohn1
Here's the recording of the webinar announcement:
[https://www.youtube.com/watch?v=5-5t672vFi4](https://www.youtube.com/watch?v=5-5t672vFi4)

------
meddlepal
Technically speaking: cool

Practically speaking: I worry this will become a parallel but subtly
different implementation of Kubernetes with its own quirks.

~~~
dankohn1
That's why CNCF is thrilled that Rancher passed our Certified Kubernetes
conformance tests for k3s prior to the public announcement.

https://landscape.cncf.io/category=certified-kubernetes-distribution,certified-kubernetes-hosted,certified-kubernetes-installer&format=card-mode&grouping=category&selected=k3s

~~~
meddlepal
Ah, interesting! Cool. Thanks.

------
detaro
Looks quite interesting for small setups; curious to read more about the
limitations (e.g. " _Added sqlite3 as the default storage mechanism. etcd3 is
still available, but not the default._ " - what availability promises can
this make?)

~~~
omeid2
In terms of etcd3 vs sqlite3, it is as reliable as most airplane systems that
depend on it.

[https://www.sqlite.org/famous.html](https://www.sqlite.org/famous.html)

I think the "high availability by redundancy" story is oversold.

~~~
philips
Alternatively, the K3s authors could have embedded a single-node etcd process
into Kubernetes using the embed package instead of introducing sqlite.

https://godoc.org/github.com/etcd-io/etcd/embed

This is something the Kubernetes community might consider as well.
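The embed package's own godoc example is about this small (a sketch with error handling trimmed; import path per the godoc above, which has since moved to go.etcd.io):

```go
package main

import (
	"log"
	"time"

	"github.com/etcd-io/etcd/embed"
)

func main() {
	cfg := embed.NewConfig()
	cfg.Dir = "default.etcd" // data directory for the single-node store
	e, err := embed.StartEtcd(cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer e.Close()
	select {
	case <-e.Server.ReadyNotify():
		log.Printf("embedded etcd is ready")
	case <-time.After(60 * time.Second):
		e.Server.Stop() // trigger a shutdown if startup hangs
		log.Printf("embedded etcd took too long to start")
	}
	log.Fatal(<-e.Err())
}
```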

~~~
smarterclayton
Yeah, OpenShift did this from the very first version. It worked pretty well.
Memory use was very reasonable from etcd 3.0 on.

------
rb808
I'm going to try this - I really hope it's good. I've spent a lot of days
trying to get k8s working on a cheap home server. With VMs and Docker
conflicting over networks, it's a nightmare. Upgrading is worse.

~~~
geerlingguy
I've tuned a Kubernetes setup to run pretty well on a Pi cluster with 1 GB RAM
per node here: [http://www.pidramble.com](http://www.pidramble.com) — there's
a configuration for testing in Vagrant/VirtualBox machines as well.

~~~
rb808
Hey that looks pretty cool. I bought an old Xeon workstation for $200 off ebay
that I think is more flexible. My own rack of blades does sound awesome. :)

------
johnmarcus
"Added sqlite3 as the default storage mechanism. etcd3 is still available, but
not the default."

I want to use and support this product for this reason alone. Etcd was always
an unnecessary complexity, added for god knows what reason. Later cluster
management solutions have abstracted etcd creation and management away
(thankfully), but it's always irksome that it is there. Thank you to the K3s
development team for taking on that challenge!

~~~
geofft
AIUI it gets you away from a single point of failure, right? Unless you have a
reliable NFS server (and non-SPOF NFS servers are rare and pricey), running
k8s on SQLite sounds like you can only have one master.

Of course that's totally fine for many k8s deployments and might even
_increase_ reliability for some use cases, but still, moving from a
distributed system to a local one is a significant change.

~~~
johnmarcus
meh. Practically speaking, we're talking about a single point of disk failure,
which certainly happens, but at a sufficiently low rate. Plus, the actual
amount of data stored is tiny; you can replicate it in seconds. Amazon has
solutions for this if it's truly a concern; I would guess Google does as well.
IMHO, the operation of etcd, and the fact that the data became unreadable if
you lost quorum, was a much higher risk factor than possible disk failure. It
was impractical to back up as well: you either have quorum or you don't. Even
without NFS, I could back up that sqlite db every 5 minutes via a cron job and
have most of my cluster state perfectly preserved.
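The cron-job idea above can be sketched with sqlite's online-backup command; `.backup` gives a consistent snapshot even mid-write. The k3s state path is an assumption, so the demo builds a throwaway database to keep the commands self-contained:

```shell
# build a stand-in for the cluster state db
DB=$(mktemp -u /tmp/stateXXXXXX)
sqlite3 "$DB" "CREATE TABLE demo(k TEXT, v TEXT); INSERT INTO demo VALUES ('cluster','state');"

# snapshot it; .backup uses sqlite's online-backup API, so it is safe
# to run while the database is being written to
sqlite3 "$DB" ".backup '$DB.bak'"
sqlite3 "$DB.bak" "SELECT v FROM demo WHERE k='cluster';"

# the cron version would be something like (path is an assumption):
#   */5 * * * * sqlite3 /var/lib/rancher/k3s/server/db/state.db ".backup /backups/state.db"
```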

~~~
geofft
Disk failure happens quite frequently for me at scale, but so do other things
like RAM going bad or network cards dying or entire mainboards just acting
weird or top-of-rack switches silently dropping packets because of memory
corruption (all of these have happened to machines I'm responsible for in the
last six months). Again I think this is a matter of scale. If you've got
enough machines that disk failure is a concern, you can also run a 9-node etcd
cluster and have a big enough pager rotation that keeping 5 of them up 99.999%
if not 100% of the time isn't a challenge. If you have less than a rack of
machines and you're not at the point where you're worried about having a SPOF
in your network switches or power supplies, running etcd is a bunch of
overhead for a problem you don't have, and you are genuinely better served by
a robust non-distributed system whose availability is the availability of
your hardware.

------
rcarmo
This is very nicely done. It annoys me a bit to have to set up a registry and
have a build cycle prior to actually deploying things, but gitkube might help
there...

~~~
jpeeler
I hadn't heard of gitkube before. These blog posts helped me compare and
contrast with similar solutions I had heard of (mainly skaffold and draft):
https://blog.hasura.io/draft-vs-gitkube-vs-helm-vs-ksonnet-vs-metaparticle-vs-skaffold-f5aa9561f948/

https://kubernetes.io/blog/2018/05/01/developing-on-kubernetes/

------
tasubotadas
If it is as easy to deploy as they say, it is bound to become the most used
implementation, just as Ubuntu became the most popular Linux distro for
servers.

I am looking forward to this.

------
pictur
Do you think Kubernetes is necessary for small businesses?

~~~
sliken
Necessary, no. But with the hardware almost free, a 3 x Raspberry Pi cluster,
switch, access point, and a small UPS would be a few $100, which allows for
some interesting possibilities.

Pretty much any business will want inventory tracking, tracking staff hours,
monitoring cameras, sensors for detecting flooding/doors left open/freezers
dying, running point of sales terminals, keeping the supply chain running
smoothly, handling customer loyalty cards, handling returns, etc.

An on site cluster can help with much of that, even if there's a hardware
failure or the network is out.

Question is, how will these businesses find the apps they need that run
turnkey on some small k8s cluster?

~~~
vinceguidry
Hardware was never the bottleneck there. The systems could cost a hundred
times that and it would still be a rounding error on the capex of even the
smallest "real" business.

The obstacle to using custom hardware and software in local biz is and has
always been, the fact that by relying on it, the one guy who knows how to
administer it all becomes the single point of failure for the whole business.

No amount of tooling can save them from that. So local biz will always be
beholden to the Oracles of the world, because smart individuals just can't be
relied on.

Businesses need to rely on other businesses. A business with a customer
service department and spare stock on hand. K3s or k8s _might_ be used in the
technology product offered by such a business, but this by itself won't change
the economics of that market.

~~~
sliken
Dunno. I've known several business owners that didn't take credit cards until
recently, and even then only did so because the newer companies have
significantly lowered the investment cost. Maybe you don't consider small
family-owned restaurants "real".

Seems like there's an opportunity for a cheap 3-node cluster, sold as a value-
add to businesses and supported as a platform that application writers could
target. It might well come with a support contract and some basic
functionality, and then users could pay for inventory, staff scheduling,
point-of-sale terminal support, integration with food delivery services, etc.

Much like how some home/small business NASs these days have support for 100s
of integrations to various online services.

~~~
peterwwillis
"Cluster" doesn't really square with "small business". Like, why have 3 nodes?
HA? They're all probably plugged into the same power run and switch and
breathing the same A/C. They're not actually redundant but they are more
complicated. So it makes no sense unless you have strangely huge/complicated
computational requirements.

~~~
sliken
Why not? Small business can lose significant money from a few hours of
downtime and support that's onsite within an hour is quite expensive.

I'm not thinking cluster for performance, just for HA.

For small business, why not 3 x Raspberry Pi to enable as much functionality
as possible without network and/or power? A cheap UPS would likely run a few
Pis for days. Chick-fil-A (3 NUCs in a k8s cluster) seems pretty proud of
their setup; why not something similar for any similar-size restaurant? 3x Pi
for smaller businesses seems like a good fit.

Oh, and a Pi cluster isn't going to need any more AC than a human, even if
it's uncomfortable. I have one in my attic and it regularly gets above 110°F
in the summer, no problems so far.

If the network is out, have the Pi fail over to WAN; this is pretty common
these days. Some consumer routers support this (insert a SIM card), and it's
fairly common for Raspberry Pis used for home security to support similar.
Handling credit card transactions over WAN is reasonably practical... even
failing over to a modem+POTS could do for an emergency.

So between a UPS (even UPS + solar + battery would be reasonable for a Pi) and
failover to WAN or modem, a 3-way cluster could help keep a business up during
power outages, storms, earthquakes, fires, and of course node failures.

Question is, can the right combination of hardware standardization, software
standardization, support, and an application store come together to enable
Chick-fil-A-like functionality at a price point acceptable to small
businesses?

~~~
peterwwillis
It's still a bad idea.

First of all, do you know what k8s is for? Bin packing. Does your small
business have a bin packing problem? (And I don't mean crates) Does your small
business even need containers at all?

Second, 3 nodes is more than you need. You only need 2 nodes for HA. There is
no universe in which two nodes in the closet of a coffee shop would go down,
but three would not.

Third, it's too complicated. It takes teams of people loads of time and money
to get it working properly, and then they have to keep supporting it, because
release cycles and changing standards, etc. Distributed systems are the most
complex and the most costly, unless you're at a huge scale, and then it _can_
be cheaper.

Fourth, it's unnecessarily expensive, because again, you don't need 3, and
it's too complicated.

Fifth, you don't need 3 nodes to have redundant network paths. A DSL line and
a cell modem are pretty easy to plug into one machine.

But sixth, the real reason this wouldn't work: small businesses do not buy HA.

> Question is can the right combination of hardware standardization, software
> standardization, support, and application store get together to enable
> chick-fil-a like functionality at a price point acceptable to small
> businesses?

Yes, it's called Windows on a Dell.

------
zoobab
Welcome to 2019 where an "embedded" system has 1GB RAM.

The problem with Go is that all those unused libraries are loaded into RAM
instead of staying on disk and being loaded when needed.

A hello world in Go (around 1.6MB) is still too large for my OpenWrt router.

~~~
znpy
> Welcome to 2019 where an "embedded" system has 1GB RAM.

Thirty years ago someone could have said the same when seeing laptop computers
and remembering when computers literally used to occupy whole rooms.

Come on, let's stop making this kind of joke; it adds nothing to the
discussion.

~~~
rkangel
The problem is that the bottom end of 1K of RAM still exists. In fact, because
the main change is that it gets cheaper every year, it's even more prevalent.
Just because there aren't lots of blog posts doesn't mean that isn't a large
proportion of the industry.

The word 'embedded' is not well defined, but if you use it on things with 1GB
of RAM, what do you call a PIC with 1K?

My personal definition of embedded (which is of course still flawed) is
anything without an MMU. A Raspberry Pi isn't embedded for instance, it's just
a small computer, just like a phone.

~~~
geofft
Then what do you call a server with a CPU+MMU for its remote management or an
iPhone with a CPU+MMU for its secure enclave? A cluster?

There is certainly a lot of technical merit in calling the five-inch thing in
my hand a distributed system of multiple computers, but in practice it's not
the common definition.

------
lostmsu
Would it work on Android? That would be interesting.

~~~
smw355
Darren mentioned on the CNCF webinar that he has plans to test it on ChromeOS,
which is similar to Android, but that it hasn't been done yet.

------
dividedbyzero
This looks great, playing around with it right now. They seem to have removed
PersistentVolumes, too. I'd like to run a MySQL database on K3S, how would I
go about making sure that my database's data survives when its pod dies?
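One low-tech answer, assuming a single-node k3s box: a plain hostPath volume needs no volume plugin at all, so pod restarts land on the same on-disk data. A sketch (paths, image, and password are illustrative only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
    - name: mysql
      image: mysql:5.7
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme
      volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumes:
    - name: data
      hostPath:
        path: /srv/mysql-data
        type: DirectoryOrCreate
```

This obviously ties the pod to one node; anything multi-node needs a real network volume.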

------
antranigv
I mean, I've built a Kubernetes-y thing for my own needs, mostly "scheduling"
LXC and FreeBSD Jails, and it needs hella less than 256MB of RAM and it's
around 20MB. Maybe I should clean up the code and publish it.

~~~
biggestlou
That's literally not Kubernetes, though. Does it support custom resource
definitions? Services? Deployments? ConfigMaps? Multiple containers on a
single host, i.e. Pods?

Your project does sound very cool, but I don't think it provides a fair basis
of comparison re: size and RAM usage.

~~~
antranigv
Very good question! Custom resource definitions -> no. Services -> yes.
Deployments -> yes. ConfigMaps -> no. Multiple containers on a single host ->
yes. I should add the most common features, I think. I also haven't integrated
it with any reverse-proxy service yet, but will try that too.

------
crawshaw
It is nice to see people working on making software smaller!

It leaves me wondering though, what is Kubernetes doing with all of these
resources? When I saw "lightweight" in the title I expected 5.12MB of RAM, not
512MB.

------
captn3m0
@mods: This should be merged with
[https://news.ycombinator.com/item?id=19257675](https://news.ycombinator.com/item?id=19257675)

------
thinkersilver
This looks great; we're planning to test it with the intention of moving our
CI/CD orchestration onto it.

We use Calico internally; is there a plan in the future to allow other SDNs?

~~~
granra
[https://github.com/rancher/k3s#flannel](https://github.com/rancher/k3s#flannel)

------
mleonard
Anyone know if this will run on a Pixelbook? (inside the Crostini linux
virtual machine in ChromeOS). Thanks

~~~
excieve
Got excited about this too, but unfortunately no, it doesn't run on Crostini.
It fails when querying netfilter.

I also tried it through docker-compose from the GH repository. It starts the
k3s server fine but fails on starting the nodes, as they require privileged
container mode, which doesn't work on Crostini at the moment.

~~~
smw355
Mentioned this elsewhere, but if you create an issue, Darren mentioned on the
CNCF webinar that they would be looking into ChromeOS soon.

~~~
excieve
There's also a feature request in ChromeOS bugtracker about adding the missing
bits for minikube, part of which seem to be similar to what k3s is using:
[https://bugs.chromium.org/p/chromium/issues/detail?id=878034](https://bugs.chromium.org/p/chromium/issues/detail?id=878034)

------
linuxdude314
I find it humorous how many people in this thread seem so willing to use this
in production.

If you can't make vanilla upstream Kubernetes work, it is very ill-advised to
think you will be met with success using a heavily modified fork that has a
fraction of the support of k8s.

~~~
oskapt
In an environment where only a fraction of support is needed, it doesn't make
sense to run a fat distro with features that you'll never use. K3s is designed
for low-power, low-resource environments that don't need things like EBS or
NFS volume support or alpha/beta API features. Imagine edge environments like
traffic management networks, wind farms, telco substations, or other places
where environmental constraints make it more reliable to run something fanless
with flash storage and 1GB of RAM. Stock Kubernetes will choke under those
circumstances, but does that mean that those environments are forbidden from
using the benefits of Kubernetes?

------
lprd
A bit off topic, but has anyone used K8s along with Proxmox?

~~~
Nux
Not a k8s user, but there's no reason it shouldn't work, from what I've read
about it. Spawn a few CoreOS or RancherOS VMs and off you go.

Some people have even used it with LXC, though it seems a bit of work, e.g.
https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c

~~~
lprd
Gonna check this out, thank you! I tried installing CoreOS a couple years ago
and I recall encountering some issues using LXC, but maybe it'll work now :)

------
mbrumlow
And we have done it! Solutions for the solutions!

------
milkers
kubes

