Hacker News
K3s – Lightweight Kubernetes (k3s.io)
458 points by kadel 27 days ago | 188 comments



This is extremely welcome. Kube needs to glom on to all available architectures to fulfill its destiny, to some extent, and the memory/CPU usage of kube-apiserver et al. can be prohibitive for small setups. Having to revert to ansible/systemd in "some cases" really weakens the story of a universal datacenter OS.

My hope is for more competition for development k8s (or k3s!) - minikube and docker-for-desktop are plagued with high-CPU-usage problems, and OSX-wielding developers are still doubtful. It will require a lot of work and elbow grease to convince them. I've built a hosted platform that tries to address this, but it's a complex education problem, which I am not well suited to solve :(


Agreed. Kubernetes (installed via kubeadm) on 1 GB of RAM is a shaky proposition at best; I've been managing a cluster on Raspberry Pis and the master often hits swap managing three Nodes with a dozen or so Pods.


K8s doesn't support swap; you have to disable swap to even run kubelet.


There is an override flag.

It works, but it kind of invalidates all the assumptions k8s makes about free memory on a node.
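For anyone looking for it: the check can be disabled either with kubelet's `--fail-swap-on=false` flag or in the kubelet config file. A minimal sketch of the latter:

```yaml
# kubelet config file (passed via --config).
# failSwapOn: false lets kubelet start on a node with swap enabled,
# at the cost of making k8s's memory accounting unreliable.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
```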


What workloads are you running? With only 1 GB, have you looked into setting resource limits for your pods?
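For anyone unfamiliar: limits are set per container in the pod spec. A minimal sketch (the names and image are made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sensor-logger            # hypothetical workload
spec:
  containers:
  - name: logger
    image: example/logger:latest # placeholder image
    resources:
      requests:                  # what the scheduler reserves on the node
        memory: "64Mi"
        cpu: "100m"
      limits:                    # hard caps; exceeding memory gets the container OOM-killed
        memory: "128Mi"
        cpu: "250m"
```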


I really want to do this as well - we've just started looking at minikube for our group, and I'd love to use k3s instead, but does anyone know if it works on OSX? I was able to get a cluster running on my laptop in under a minute, which is pretty amazing.


Have you seen or used Microk8s? Seems like it might be what you're looking for. It can be installed and removed via a snap package too.


Microk8s looks interesting as well, but it doesn't support osx as far as I can tell.


multipass + microk8s on macos https://github.com/CanonicalLtd/multipass


Oh nice - this is by the Rancher guys. Would love to compare this not with k8s, but with Docker Swarm.

Docker Swarm on Raspberry Pi is a very common thing. So, from a performance perspective, these are on par.

Another question is around the ingress and network plugin - is it a seamless "batteries included" experience? Because these are two of the biggest pains in k8s to decide on and set up.


Based on the README [0], quickly comparing to Swarm:

- this has the same great K8s API that Swarm lacks. (Deployment is a first-class citizen in kube land, but you have to make do with the service YMLs in the Swarm sphere.)

- k3s lacks some in-tree plugins that Swarm might have (e.g. mounting cloud-provider-managed block devices), but there are out-of-tree addons

- sqlite instead of etcd3 [though etcd3 is still available], so out of the box k3s is not HA-ready (if I interpret this[1] right, there's one apiserver, and HA is something in development)

- this seems more lightweight than a docker setup in some sense (it uses containerd, not full docker), but of course this needs a proper evaluation

- ingress plugins: good question. it should work with traefik [2]

[0] https://github.com/rancher/k3s/blob/master/README.md#what-is...

[1] https://github.com/rancher/k3s/blob/master/README.md#server-...

[2] https://github.com/rancher/k3s/search?q=ingress&type=Commits


> great K8S API, that Swarm lacks

Hmm... nothing against k8s, but its deployment API is an abomination on par with AWS CloudFormation.

You need teams of YAML engineers to manage these things.
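To illustrate the point, here is roughly the minimum YAML for a single-container Deployment - note the selector labels that must be repeated in the pod template (names and image are made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello                      # hypothetical app name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello                   # must match the template labels below
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example/hello:1.0   # placeholder image
        ports:
        - containerPort: 8080
```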


https://twitter.com/kelseyhightower/status/93525292372179353...

That's the exact problem Rancher itself is trying to solve, and it does a pretty fantastic job at it.

Though suggesting it's "on par" with CloudFormation suggests that either you know a ton about CloudFormation (anything becomes second nature if you're skilled in it) or you don't know much about either of them. Kubernetes isn't that bad.


It's definitely bad when you have thousands of lines of YAML configuration to maintain.

On the other hand, a thought-out declarative language like HashiCorp's HCL is much saner, thanks to IDE code completion/refactoring and static typing.


K8s YAMLs are the baseline, and there are projects that provide much more ergonomic description interfaces. They are ugly because they are also statically typed and very extensible.

YAML is 1-1 mappable to HCL - because both map to JSON - but the HashiCorp interface seems nicer because it's simpler.


Is YAML engineer a real thing? I always considered YAML to be a slightly hacky configuration DSL that I needed to be familiar with, not an actual core career competence.

For example, can I become an INI engineer?


I think that's the joke :)

Of course k8s's YAML API is just a clunky interface, because it's evolving very rapidly and it's a rather low-level thing. So basically, if you work a lot with k8s, it seems like all you do all day is copy-paste [or generate, or parse] YAMLs.


I think that's the first time I've seen anyone advocate cloudformation above anything.


I cannot read what your parent says as advocacy for CloudFormation, let alone above anything; only that something is as bad as CloudFormation.


The syntax is ugly, the features are amazing. And it's easy to abstract over ugly syntax, but it's hard to work around Swarm's missing features.


>this has the same great K8s API that Swarm lacks. (Deployment is a first-class citizen in kube land, but you have to make do with the service YMLs in the Swarm sphere.)

With all due respect, some (like me) consider that a feature of Swarm.

If k3s comes with the same API (and implicit yaml complexity) of k8s, but just reduces RAM usage...well that's not very interesting for me.

Docker Swarm is very lean - there are zillions of videos of how people are building interesting raspberry Pi stacks using it (https://youtu.be/mMpZpa7uUSk).

But for most people using Swarm - they are using it for one reason only : simplicity.


I use Docker Compose on a small, cloud-hosted VM to run all my development dependencies and stuff I need for testing (I've got RabbitMQ, Postgres and Splunk running at present). It was really simple to set up, and "just works" - and the nodes don't have to compete with a greedy K8s orchestrator for resources!
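A compose file for that kind of dev-dependency box is only a few lines. A sketch along those lines (image versions, ports, and the credential are illustrative):

```yaml
# docker-compose.yml - dev dependencies on a single VM
version: "3.7"
services:
  rabbitmq:
    image: rabbitmq:3-management   # management UI on 15672
    ports: ["5672:5672", "15672:15672"]
    restart: unless-stopped
  postgres:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: devonly   # placeholder credential
    ports: ["5432:5432"]
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped
volumes:
  pgdata:
```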


Ditto. For a single machine you don't need "orchestration" - you just need something cleaner than a bash script to start/stop and potentially restart your containers.

I set up my home server with minikube more as a learning exercise, but went back to docker-compose for running all the things I actually rely on (Plex for instance) just because it's so much simpler.


Agreed. k8s makes no sense on 1 node. But in my experience Swarm is just not enough when you have more than one node. You have to work around a few missing key features. (Like the management of deployments. With Swarm you have stacks, and you can update them, but you can't easily roll back the update, you can't easily manage/configure the update. And if the update gets stuck, it just gets stuck, it's not managed.)


The stuck issue is not a limitation of Swarm, but a bug that was fixed in 18.06 - https://github.com/moby/moby/issues/37493

Rollback has existed for a while now - https://docs.docker.com/engine/reference/commandline/service... You can actually tune the parallelism and the delay.
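Those knobs live under `deploy:` in a v3 stack file. A sketch of tuned parallelism, delay, and automatic rollback (the service name and image are made up; `rollback_config` needs compose file format 3.7 / Docker 18.06+):

```yaml
# stack file fragment for `docker stack deploy`
version: "3.7"
services:
  web:
    image: example/web:2.0        # placeholder image
    deploy:
      replicas: 6
      update_config:
        parallelism: 2            # update 2 tasks at a time
        delay: 10s                # wait between batches
        failure_action: rollback  # auto-rollback if the update fails
      rollback_config:
        parallelism: 0            # roll back all tasks at once
```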


It includes the flannel network plugin by default (although you can change this) and a basic service load balancer (technically not an ingress, but providing the same functionality). From the README:

> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
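In practice that means a plain Service of type LoadBalancer is all you declare; k3s's bundled load balancer then claims the host port rather than provisioning a cloud LB. A minimal sketch (names are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # hypothetical service
spec:
  type: LoadBalancer   # k3s's built-in LB binds host port 80 for this
  selector:
    app: web           # pods to route to
  ports:
  - port: 80
    targetPort: 8080
```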


Traefik is deployed for layer-7 ingress.
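Which should mean a standard Ingress resource routes through it with no extra setup. A minimal sketch using the extensions/v1beta1 API current at the time (host and service names are made up):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress          # hypothetical
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web   # an existing ClusterIP service
          servicePort: 8080
```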


I used k3s a while ago on my laptop to do some CI/CD with gitlab and arduino-cli to flash an ESP8266 device:

https://fosdem.org/2019/schedule/event/hw_gitlab_ci_arduino/


"Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang." - Virding's Law


Are you talking about Go, the language this is written in?

Go's concurrency features are formally specified, are not ad-hoc (they are a major feature of the language, and the product of multiple language design iterations) and are certainly not bug-ridden.

Or are you talking about using Kubernetes to manage workloads on machines?

How does Erlang solve running my python webapp or my SQL database on my ec2 nodes - or, in the case of this project, on an IoT device?


I am not talking about Go. I don't care what your language is; the true underlying reasoning for your engineering choices is independent of language. It just so happens that these problems have been around for ~30 years, and there has been significant progress in developing tooling to face them in the domains within the BEAM languages' sphere of influence. Just like how Ecto, despite being a great tool, does not free us from knowing about relational databases. If you approach the tooling independently, though, as a true knowledge expert does, you will see the progress made by other giants in these fields who are not afraid of specialised tools.


When I came across Erlang last year at a new company, I thought it was interesting how many problems Erlang solved that were also in Kubernetes. Self registering named services, processes that crash and restart automatically, health checks, etc.

Unfortunately, to use all those features the services needed to be written in Erlang, which isn't bad in itself, but unfortunately isn't the reality in today's multi-lingual atmosphere. It speaks to how thoughtfully Erlang was designed and written: it took a lot of real-life operational concerns into account in its design, a concern few other languages even attempt to address.


"Who cares that these problems are solved in Erlang/BEAM if hardly anyone is writing end user software in Erlang/BEAM/Elixir", basically.

My number one reason for not using Erlang is lack of static typing. Dynamic type systems are great for small teams, not so great for "enterprise-wide" development. (I know this horse has been beaten to death here, trust me, I have heard the arguments for/against millions of times, I remain unconvinced).

Also the answer to this question is interesting to me as well http://erlang.org/pipermail/erlang-questions/2008-October/03... -- (granted this is from Oct 2008) apparently if you were to attempt to add static type checking in Erlang, it would have to fundamentally change some of these benefits erlang brings such as message passing and handling process death. Haven't been part of the erlang scene for a while, but would be interested to hear a rebuttal to that.


I'm not afraid of specialized tools. For example I've used RabbitMQ which I believe is written in Erlang.

We run it in Kubernetes in both our CI environment and locally so we can use the same configuration for both. It runs a dozen microservices and saves us a lot of devops time.

In our CI environment it's nice for a developer to roll out a new copy of the services from a different branch for testing, and because of the autoscaler nodes are dynamically provisioned when needed, and then removed when the test environment is destroyed. That's pretty nice functionality you get basically out of the box. On EKS anyway. There are some occasional rough edges, but I think the decision to use k8s for this makes sense...

At some point we'll migrate prod too.

Kubernetes has a lineage as well... Based on the Borg project at Google.

I guess I'm not seeing why these things are being compared. Aren't they solving different problems?


I think (s)he was talking about BEAM. Using RabbitMQ makes no difference if you're not programming in Erlang. If you look at the BEAM VM you will see isolated processes, supervisors, restart strategies etc. Wikipedia[1] lists Erlang's runtime system characteristics: "Distributed, Fault-tolerant, Soft real-time, Highly available, non-stop applications, Hot swapping, where code can be changed without stopping a system." Does that ring any bells?

[1] https://en.wikipedia.org/wiki/Erlang_(programming_language)


So you're suggesting we use BEAM instead of Kubernetes?

So just rewrite all our code in Erlang, including off the shelf products we didn't write?

Then use some other technology to deploy the software, configure the machine, come up with a way to do service discovery, attach block devices to nodes, and automatically provision new machines based on resource usage?

It doesn't make sense yet it's seemingly repeated on every thread about Kubernetes.

The Erlang community should celebrate technologies like Kubernetes, Mesos, or any number of other resource-scheduling systems, because they are finally the industry taking seriously the problems addressed by Erlang.


The biggest difference is that BEAM is Erlang, which imo is what it got wrong; Kubernetes is language-neutral - it's a platform, not a language runtime.

Similar to what Envoy is when you do service mesh: you don't need to integrate libraries into your code to get service-mesh features when using Envoy as a sidecar proxy.

I have that conversation with Erlang users on every k8s topic; they seem to miss the point that decoupling features from the language/runtime is the way to go.


Not sure what point you’re making specifically. Could you elaborate?


What are "these problems"?


Erlang is a terrific platform, but it isn't very accessible to the wider population.

Timely, here's antirez on this topic when Redis was announced on HN ten years ago [0]:

> There are a lot of projects similar to Redis in Erlang actually: http://www.metabrew.com/article/anti-rdbms-a-list-of-distrib...

> Apart from speed what I like of C for this kind of projects is the self-contain-ness and the fact that most developers that may help in the development are probably able to read/write C but not Erlang.

[0] https://news.ycombinator.com/item?id=494903


IMO any program with a sufficiently high number of entities in a low level language contains an ad hoc informally-specified bug-ridden slow implementation of a garbage collector


Not exactly - a garbage collector is by definition a collector (it sweeps rather than working per-reference, and more importantly has runtime cost). I think manual memory management is a lot more similar to a poor man's ARC implementation.


A poor man's ARC implementation might not collect cycles, but it does not double free() or use-after-free().


I've been laughing to myself for some minutes - not because the project is uninteresting (I'll probably be testing it next week on ARM), but because I've decided to call this "Kubernetres" ('tres' being three in Spanish).


That is amazing branding for the Spanish / Portuguese market. Well done.


why isn’t it called k7?


I believe it goes:

Kubernetes -> K8s -> "kates" -> k3s


This is correct we shortened it to "kates" -> k3s (Rancher employee)


For some reason I thought it would be short for "kubes" (pronounced coobs)


Not to be confused with Qubes (pronounced koobs)


I'm guessing they sliced the "8" down the middle which makes a "3"


They have a "5 less than K8S" tagline. I don't know if that means 5 fewer features, 5 fewer dependencies, or just "less than half the size".


K8


> Only Uses only 512 MB of RAM.

One can literally run a full fledged Java EE server in 64mb of Ram. What?


> One can literally ruin a full fledged Java EE server in 64mb of Ram. What?

Most definitely.


That was hilarious. Thanks


The site was updated to reflect that you can run K3s on hardware with only 512MB of RAM. You'll get to use most of that for your workloads.


My thought exactly. I did a double take initially thinking they must mean 512kB.


The "Great for" section doesn't mentioned dev environment. Could this replace stuff like minikube or microk8s?


Would love this or something similar for local development. I'm currently using the Docker for Mac k8s, and it shreds my machine; I end up with an hour of battery life. Had a similar experience with minikube.


I would as well. Curious if anyone has tried this for local dev.

Also curious if this can run on osx.


I tried it on my mac with the docker-compose from master and it works pretty well.


Not trying to be snarky, just a sincere question - why would you use kubernetes for something like IoT?


I attended KubeCon China last year and there was a very good example of a K8s setup for IoT that scales pretty well. Check these links if you’re curious:

- https://schd.ws/hosted_files/kccncchina2018english/18/ShengL...

- https://thenewstack.io/rancher-takes-kubernetes-management-t...

Disclaimer: not affiliated with any of the companies above. Edit: formatting.


Is there a link to the talk?


Because I want my doors, windows, garage door, smoke alarms, security system, and any other smart thing in my house to keep working even when the IoT controller dies. I also want as much functionality as possible if my ISP (Comcast) decides on another "maintenance" window.

So that basically means local smarts and failover in the case of a hardware failure.


Same. What does it have to do with kubernetes?


Well, if you want a nice cheap, small cluster with HA, wide community support, and a container model so you have to trust the applications less - what else are you going to use? What else is going to provide a nice, easy-to-use HA Raspberry Pi setup for ~$100?


Can containers really improve security in case of IoT where there's a single application that pretty much needs access to the whole device? I think they may only increase the attack surface. High availability seems like an orthogonal problem in case of IoT. It's not like you can spawn more devices. But I acknowledge that it may be just me getting old.

Still happy to learn more arguments for using k8s in IoT (if you are already using k8s then yes, seems like k3s would make sense).


IoT clients consume the entire device, but not necessarily the IoT server. k3s supports containerd on ARMv7 and provides good isolation. With efficient code you can easily support a web portal, a lightweight database, and plenty of logic for IoT-type needs. Seems ideal for trusting multiple 3rd parties - say, one company with your thermostat, one with your cameras, and a 3rd with your lights or similar.


I don't think the suggestion for Kubernetes in IoT is to run it on the Things, but for small local servers running services that work with the devices, instead of pushing all that in the cloud.


Rancher employee here: one of the use cases we have been seeing is for stores, factories, oil wells, base stations, wind farms, etc. The apps people are trying to deploy to these now look much more like data center/microservices apps: Kafka, Redis, logstash, etc. K8s is really good for running these in a reliable and consistent way on potentially divergent hardware.


Not necessarily. I've seen some k8s on IOT industrial use cases where it made sense. Solar power stations were one example, IIRC.


I like your username.


https://medium.com/@cfatechblog/bare-metal-k8s-clustering-at...

Of all things, the Chik-fil-A tech blog gives a pretty good explanation of why you might want this.


We use it to orchestrate agents, most of which, but not all, tunnel into VPN concentrators scattered across the globe to monitor and control remote devices. Admittedly ours is in the M2M IoT space for now (Digital Cinema) -- we don't deploy our own sensor network (yet) though our agent could broker that integration as well.

After toying with k8s, we realized just how much tinker time our current docker-n-script approach takes to optimize and scale optimally. We're close to moving to k8s after almost a year of on-again/off-again dev and I'm very excited to roll it out. Not as excited as DevOps, maybe, as it's going to lift a lot of weight off their shoulders.

TL;DR: IoT means different things in different industries.


Technically speaking: cool

Practically speaking: I worry this will become a parallel but subtly different implementation of Kubernetes with its own quirks.


That's why CNCF is thrilled that Rancher passed our Certified Kubernetes conformance tests for k3s prior to the public announcement.

https://landscape.cncf.io/category=certified-kubernetes-dist...


Ah, interesting! Cool. Thanks.


> Practically speaking: I worry this will become a parallel but subtly different implementation of Kubernetes with its own quirks.

Hello, we're already there. Despite the CNCF certification process, there are many flavours of k8s, all with their own quirks. Being on the product side, it is quite painful to support.


I am aware of this problem (getambassador.io dev as well as infra engineer). The differences usually manifest at the edges rather than the internals right now. This has the potential to be an even more disruptive problem.

I agree the CNCF certification process isn't comprehensive enough.


Well, even the internals have issues.

Take Openshift as the most extreme example - it's a fork of K8s, with a lot of custom functionality like scc and routes, and also a lot of custom behavior like its admission controllers that manage pod security contexts.

It's almost like a different version of k8s - things that will run on GKE or vanilla k8s won't work on Openshift. Yet, it is CNCF compliant.


Here's the recording of the webinar announcement: https://www.youtube.com/watch?v=5-5t672vFi4


Looks quite interesting for small setups, curious to read more about the limitations (E.g. "Added sqlite3 as the default storage mechanism. etcd3 is still available, but not the default." - what availability promises can this make?)


In terms of etcd3 vs sqlite3, it is as reliable as most airplane systems that depend on it.

https://www.sqlite.org/famous.html

I think the "high availability by redundancy" story is oversold.


> I think the "high availability by redundancy" story is oversold.

More generally, an awful lot of systems that'd be fine with a SQL mirror (non-failover), some real backups on top, and a restore-from-backup event every 3-5 years, plus the occasional 1hr maintenance window, are instead made "HA" and "redundant" at 10+x the ops cost and, unless very well done, with even more down-time anyway because they're so damn complex.


Alternatively the K3s authors could have embedded a single node etcd process into Kubernetes using the embed package instead of introducing sqlite.

https://godoc.org/github.com/etcd-io/etcd/embed

This is something the Kubernetes community might consider as well.


Yeah, OpenShift did this from the very first version. It worked pretty well. Memory use was very reasonable from etcd 3.0 on.


If someone wanted to do it, the lxc/lxd team has a distributed sqlite, so you could adapt that to k3s. Though I suppose that's not a good match for the stated purpose of k3s.

https://github.com/lxc/lxd/blob/master/doc/database.md

https://github.com/CanonicalLtd/dqlite


The reliability of sqlite3 is not in question, the reliability of the system running it is. When running in any of the cloud providers you will get systems that briefly lose network connection or just go away. Sometimes with notification, sometimes without. You will have down time if you don't plan for HA. So the question is whether it is worth the complexity trade off for that period of downtime or how many 9s do you need in your SLA.


Not worried about reliability of sqlite, but the system running it, and what happens if it goes down. E.g. if it relies on a single node, but the cluster just can't make changes anymore but continues to run just fine and cleanly recovers once the main DB is back, that's probably a tradeoff that works often. If stuff starts breaking quickly, not so much.


Airplanes also contain expensive hardware. My own desire for reliability via redundancy is that commodity hardware (which is what's in most datacenters) likes to fail.


Interestingly enough, FoundationDB currently uses sqlite's storage engine for persistence.


pretty sure critical airline systems have redundancy


If it is as easy to deploy as they say, it is bound to become the most-used implementation, as Ubuntu became the most popular Linux distro for servers.

I am looking forward to this.


This is very nicely done. It annoys me a bit to have to set up a registry and have a build cycle prior to actually deploying things, but gitkube might help there...


I hadn't heard of gitkube before. These blog posts helped me compare and contrast with similar solutions I had heard of (mainly skaffold and draft): https://blog.hasura.io/draft-vs-gitkube-vs-helm-vs-ksonnet-v... https://kubernetes.io/blog/2018/05/01/developing-on-kubernet...


Do you think Kubernetes is necessary for small businesses?


Necessary, no. But with the hardware almost free, a 3 x Raspberry Pi cluster, switch, access point, and a small UPS would be a few $100, which allows for some interesting possibilities.

Pretty much any business will want inventory tracking, tracking staff hours, monitoring cameras, sensors for detecting flooding/doors left open/freezers dying, running point of sales terminals, keeping the supply chain running smoothly, handling customer loyalty cards, handling returns, etc.

An on site cluster can help with much of that, even if there's a hardware failure or the network is out.

Question is, how will these businesses find the apps they need that run turnkey on some small k8s cluster?


Hardware was never the bottleneck there. The systems could cost a hundred times that and it would still be a rounding error on the capex of even the smallest "real" business.

The obstacle to using custom hardware and software in local biz is, and has always been, the fact that by relying on it, the one guy who knows how to administer it all becomes the single point of failure for the whole business.

No amount of tooling can save them from that. So local biz will always be beholden to the Oracles of the world, because smart individuals just can't be relied on.

Businesses need to rely on other businesses - a business with a customer service department and spare stock on hand. K3s or k8s might be used in the technology product offered by such a business, but this by itself won't change the economics of that market.


Dunno. I've known several business owners that didn't take credit cards until recently, and even then only did so because the newer companies have significantly lowered the investment cost. Maybe you don't consider small family-owned restaurants "real".

Seems like there's an opportunity for a cheap 3-node cluster, sold as a value add to businesses and supported as a platform that application writers could target. They might well come with a support contract and some basic functionality, and then users could pay for inventory, staff scheduling, point-of-sale terminal support, integration with food delivery services, etc.

Much like how some home/small business NASs these days have support for 100s of integrations to various online services.


"Cluster" doesn't really square with "small business". Like, why have 3 nodes? HA? They're all probably plugged into the same power run and switch and breathing the same A/C. They're not actually redundant but they are more complicated. So it makes no sense unless you have strangely huge/complicated computational requirements.


Why not? Small business can lose significant money from a few hours of downtime and support that's onsite within an hour is quite expensive.

I'm not thinking cluster for performance, just for HA.

For small business, why not 3 x Raspberry Pi to enable as much functionality as possible without network and/or power? A cheap UPS would likely run a few Pis for days. Chick-fil-A (3 NUCs in a k8s cluster) seems pretty proud of their setup, so why not something similar for any similar-size restaurant? 3 x Pi for smaller businesses seems like a good fit.

Oh, and a Pi cluster isn't going to need any more AC than a human, even if it's uncomfortable. I have one in my attic and it regularly gets above 110 in the summer, no problems so far.

If the network is out, have the Pi fail over to WAN; this is pretty common these days. Some consumer routers support this (insert a SIM card), and it's fairly common for Raspberry Pis used for home security to support similar. Handling credit card transactions over WAN is reasonably practical... even failing over to a modem+POTS could do for an emergency.

So between UPS (even UPS + solar + battery would be reasonable for a Pi) and failover to WAN or modem, a 3-way cluster could help keep a business up during power outages, storms, earthquakes, fires, and of course node failures.

Question is can the right combination of hardware standardization, software standardization, support, and application store get together to enable chick-fil-a like functionality at a price point acceptable to small businesses?


It's still a bad idea.

First of all, do you know what k8s is for? Bin packing. Does your small business have a bin packing problem? (And I don't mean crates) Does your small business even need containers at all?

Second, 3 nodes is more than you need. You only need 2 nodes for HA. There is no universe in which two nodes in the closet of a coffee shop would go down, but three would not.

Third, it's too complicated. It takes teams of people loads of time and money to get it working properly, and then they have to keep supporting it, because release cycles and changing standards, etc. Distributed systems are the most complex and the most costly, unless you're at a huge scale, and then it can be cheaper.

Four, it's unnecessarily expensive, because again, you don't need 3, and it's too complicated.

Five, you don't need 3 nodes to have redundant network paths. A DSL line and a cell modem are pretty easy to plug into one machine.

But six, the real reason this wouldn't work: small businesses do not buy HA.

> Question is can the right combination of hardware standardization, software standardization, support, and application store get together to enable chick-fil-a like functionality at a price point acceptable to small businesses?

Yes, it's called Windows on a Dell.


Yes, this is what's exciting to me about K3s. This is a business opportunity for Consultants. There are big companies already paying good money to implement LoFi clusters at scale https://medium.com/@cfatechblog/bare-metal-k8s-clustering-at...


No, for the same reason that Martin Fowler argues you should start with a monolith first before moving to a microservices architecture.


What is currently a good alternative for a small business who needs to distribute containers on a handful of nodes? Docker swarm mode?


A single baremetal server with just docker containers. It's much easier to migrate from just containers running on nodes to an orchestration system (marathon, nomad OR k3s/k8s; there are other options) than to start with the orchestration system.


In my case it's a single bare metal server with KVM guests and ansible.

Docker is on my long term list (maybe) but the current system is fast, stable and requires little intervention so docker would have to offer a massive change not a step change to make it worth it.

I always default to things I know until the benefits are clear and it's worked out well so far.


This is the situation we're in. We're beginning to reach the limits of what can be handled on a single instance, mostly regarding memory.


If you need to distribute containers on a handful of nodes, kubernetes is great (and far better than Docker swarm imho). However, most small businesses probably don't need to have a microservice setup and can simply deploy a simple monolith while they focus on getting their business off the ground.


Thank you. I was thinking that maybe swarm mode would be simpler to set up and maintain.

Just to give more context, our use case is spawning Jupyter notebook in containers using JupyterHub and DockerSpawner so that each data scientist gets a personal ready-to-use environment to work with. With 4 data scientists we're beginning to reach the limits of the largest instance provided by our cloud provider and we're expecting to have more users soon. Actually our current cloud provider has just released a managed k8s service so that might be the way to go.
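For what it's worth, the container-per-user part is only a few lines of jupyterhub_config.py (a sketch of roughly what we use; the notebook image name is just an example):

```python
# jupyterhub_config.py -- one Docker container per data scientist
c = get_config()  # helper injected by JupyterHub when it loads this file

c.JupyterHub.spawner_class = "dockerspawner.DockerSpawner"
c.DockerSpawner.image = "jupyter/datascience-notebook"  # example image
c.DockerSpawner.remove = True  # clean up containers when servers stop
```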


Gotcha. Yeah, I would much rather use a managed k8s service over deploying and maintaining my own. GCP's GKE has been great for us so far and very reasonably priced.


Docker swarm, Consul+Nomad, AWS Fargate, Azure ACS, Google GCP, etc. Even DC/OS+Marathon is probably simpler. If you really think you need K8s, use a managed service provider like GKE, AKS, or a platform manager like Rancher. You need to pay a specialized company to build and operate it for you, or you aren't going to have a fun time.

I highly recommend leveraging AWS, Google, Azure, etc services. Especially if you're a small business, spending a little for off the shelf tools that someone else operates and supports is going to help you big time.


Kube is fine for this, but don't roll your own, use GKE.

Perhaps you'd be better served using more managed services, though?


What do you mean by more managed services?


I'm guessing Heroku, Google App Engine, AWS Beanstalk and such.


Kubernetes isn't even necessary for medium and large businesses. It's more a question of what problem you're trying to solve.

Kubernetes is only relevant if you need to scale your service quickly and/or often. If you run it on your own hardware the value is even trickier to work out, though it can still be a benefit. Whether or not it helps you save on hardware cost depends heavily on your usage.


I see it as decoupling the system administrator tasks from the software tasks. Sys admins can setup the OS, volumes, kubernetes, and monitoring processes. Application developers can do their own thing and deploy apps without having to care about the details of which mount their app needs for disks or the intricacies of whichever process runner they use.


Nope, but it solves problems you'd otherwise have to fix yourself anyway:

- Rolling update? Build it yourself, or get it with k8s
- Infrastructure as code? Cobble it together with ansible, awx or some other setup, or just use k8s
- Running different workloads on a bunch of hardware with a resource scheduler? Some trickery, or x different kinds of VMs

I like it very much, and I hope and guess that the k8s way will take over a lot. And since I'm working with k8s I really want to have it at home. k3s sounds like a very good way forward.

If you have a good working little setup, you don't need to replace it with k8s.
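The rolling-update point, for example, is just a declarative stanza on a Deployment (a sketch; the names and image are illustrative):

```yaml
# Bump the image tag and k8s replaces pods gradually, keeping
# at most 1 unavailable and 1 extra at any time.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels: {app: myapp}
  strategy:
    type: RollingUpdate
    rollingUpdate: {maxUnavailable: 1, maxSurge: 1}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
```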


It wasn't necessary before it existed, and it isn't necessary now; it's convenient though.


No, in capital letters, with exclamation marks. (We're not allowed to be visibly dramatic on HN)


Most likely not, as it adds quite a bit of complexity. It might be easier to start with simpler abstractions such as e.g. AWS ECS or similar alternatives (cloud or on-prem) if you want to use containers.


I think maybe eventually with cheaper ARM CPUs.


I'm going to try this - I really hope it's good. I've spent a lot of days trying to get k8s working on a cheap home server. With VMs and docker's conflicting networks it's a nightmare. Upgrading is worse.


I've tuned a Kubernetes setup to run pretty well on a Pi cluster with 1 GB RAM per node here: http://www.pidramble.com — there's a configuration for testing in Vagrant/VirtualBox machines as well.


Hey that looks pretty cool. I bought an old Xeon workstation for $200 off ebay that I think is more flexible. My own rack of blades does sound awesome. :)


kubespray is great for multi-vms on home server. Just install base OS, run their ansible playbook (though might need to tell it the OS type in config, it will install and configure node requirements like docker).


Welcome to 2019 where an "embedded" system has 1GB RAM.

The problem with Go is that all those unused libraries are loaded into RAM instead of staying on disk and being loaded only when needed.

A hello world in Go (around 1.6MB) is still too large for my openwrt router.


Like every process, the text segment is demand-paged, so it doesn't consume RAM unless it's actually needed. What you probably mean is that multiple copies of the same library cannot share the same physical memory pages.


Nope, they mean there's no flash memory available on the resource-constrained device to store megabytes of useless runtime. Routers and other IoT Linux devices regularly use SPI ROM chips sized 4MB to 16MB; you can't just install a 100MB Ubuntu/Arm rootfs on those.


I think the days of the small SPI flash chip are numbered.

A few hundred megabytes of flash will soon cost less than the plastic and manufacturing costs of the chip itself. At that point, manufacturers will be putting in 128-megabyte flash chips simply because 16-megabyte flash chips cost more.

The only place that argument won't apply is in microcontrollers where the flash is on the same die as the CPU.


I think you're seriously underestimating both cost sensitivity, and the degree to which chip packaging evolves.

I worked on a bit of premium consumer electronics. Retail would be about $80. That allowed an electronics BOM cost of $20. When selecting the processor, the size of the flash was a more dominant effect than RAM, clock speed or chip peripherals, because we're in the $1 vs $1.20 region.

There is a lot of software running in some very cheap electronics, all around you all the time. Everyone just forgets because it's all designed in corporations and not talked about on the internet so much.


SPI chips are unreasonably expensive per MB compared to NAND, some reasons on why: https://electronics.stackexchange.com/questions/32200/why-is...


and why would you run k3s on these ...


LXC runs fine on openwrt, and uses far less RAM than docker.

I still do not understand why you need hundreds of MBs of RAM for a container orchestrator.


> Welcome to 2019 where an "embedded" system has 1GB RAM.

Thirty years ago someone could have said the same when seeing laptop computers and remembering when computers literally used to occupy whole rooms.

Come on, let's stop making this kind of joke; it adds nothing to the discussion.


The problem is that the bottom end of 1k RAM still exists. In fact, because the main change is that it gets cheaper every year, it's even more prevalent. Just because there aren't lots of blog posts, doesn't mean that isn't a large proportion of the industry.

The word 'embedded' is not well defined, but if you use it on things with 1GB of RAM, what do you call a PIC with 1K?

My personal definition of embedded (which is of course still flawed) is anything without an MMU. A Raspberry Pi isn't embedded for instance, it's just a small computer, just like a phone.


Then what do you call a server with a CPU+MMU for its remote management or an iPhone with a CPU+MMU for its secure enclave? A cluster?

There is certainly a lot of technical merit in calling the five-inch thing in my hand a distributed system of multiple computers, but in practice it's not the common definition.


> The word 'embedded' is not well defined

> ...

> My personal definition of embedded (which is of course still flawed) is anything without an MMU

Probably "embedded" would better be defined as an electronic device capable of performing digital computations small enough to be put literally into other stuff, usually everyday things.


And nobody has suggested using it on such tiny devices. "Embedded" spanning a large range of devices isn't new. 20 years ago, mainboards for high-end x86 CPUs of the day that were made to be put into industrial devices were already called "embedded", because they're embedded into machines etc.

If you mean tiny devices, why not say "microcontroller" or something?


>The word 'embedded' is not well defined, but if you use it on things with 1GB of RAM, what do you call a PIC with 1K?

_very_ embedded


A 1K-RAM embedded device surely doesn't exist in the compute scenarios where kubernetes operates. These are very different fields.


We're still in a place where there are millions of embedded devices out there in actual real world use where these resource constraints are real.


Would you put or even need Kubernetes for those devices?


Using MicroEJ, RTOS, Zephyr, mbed and other OSes to target Cortex-M4/ESP32 devices already allows for lots of possiblities with 512KB.

Yes 1GB is huge, my Windows Phones are capable of so much with a tiny portion of it.


At some point in time, the physical realities of electrons and metal wires will kick in. Then the growth will stop and you'll start kicking your own ass about performance and pointer arithmetic and cache policy.

Maybe that point in time is now, maybe it isn't. But it is coming nonetheless, that is one thing we can be sure of.


True, but not for all developers.

For at least 50% of developers, the "physical reality" is that of gathering client requirements quickly and accurately and implementing them correctly and on budget.

Not of micro-optimizing CPU cache hits :)


You missed the point.

A day will come when CPU speeds and RAM sizes stop increasing. When that day comes you won't be able to ignore performance, because a lack of performance will mean actual, real-world dollar costs for the client, and so performance will be the #1 requirement of the client.


CPU speeds haven't changed much in 10 years, and to be honest, I would argue that performance has been getting worse and is not a priority for many/most developers.

The client doesn't seem to care, and just sees the 5-20 second webpage load times as totally normal (in my company/industry at least). Our applications are horrifically slow, but we're still the market leaders in our segment.


> The client doesn't seem to care, and just sees the 5-20 second webpage load times as totally normal

Yes, but will they tolerate 2 minute page load times? 10 minute?

At some point reality kicks in and you start to have to care about the nuts-and-bolts hardware.


Most of the time those things are caused by gross development errors, not by cache hits. Stuff like looping while sending an individual SQL query for each record.


I didn't miss it. For the vast majority of applications, which are Line Of Business (LOB) applications, performance doesn't really matter. It's good enough.

LOB applications are made to replace humans and they need to be a multiple of times faster than human operations (+ the extra benefits of automation). And the vast majority of them are. And most programmers are working on those applications.

So there's 0 incentive for what you're saying.

The real heavy lifting will be done, as usual, by a handful of people: OS devs, DB devs, game engine devs, runtime devs, etc.


Implying nobody would bat an eyelid at performance problems in LOB applications...


...and where a 40MB binary is called "small".

And it has to run on every node even if the worker nodes are just running workloads.


Go has support for plugins (OS X and Linux only currently).

However, unlike Modula-3, Mesa/Cedar or Oberon(-2)/Active/Component Pascal, plugins can only be loaded, never unloaded, and they require manual binding of symbols.

For this kind of systems work, a more feature-rich dynamism is certainly a big missing feature.


Are you saying this because the page mentions IoT, or what brought this on? To me, the page doesn't suggest that they want it to be used on many embedded devices...


"Added sqlite3 as the default storage mechanism. etcd3 is still available, but not the default."

I want to use and support this product for this reason alone. Etcd was always an unnecessary complexity, added for god knows what reason. Later cluster management solutions have (thankfully) abstracted etcd creation and management away, but it's always irksome that it is there. Thank you to the K3s development team for taking on that challenge!


As someone who used etcd during the v1 days (CoreOS + Fleet), I can only agree. I am not that familiar with etcd3, but v1 and v2 were horrible: difficult to tune and to find a working set of "timeout" parameters, picky about CPU availability, developers giving zero f*cks about bugs/docs, crappy documentation, and good luck if you lose quorum.


AIUI it gets you away from a single point of failure, right? Unless you have a reliable NFS server (and non-SPOF NFS servers are rare and pricey), running k8s on SQLite sounds like you can only have one master.

Of course that's totally fine for many k8s deployments and might even increase reliability for some use cases, but still, moving from a distributed system to a local one is a significant change.


meh. Practically speaking, we're talking about a single point of disk failure - which certainly happens but at a rate that is sufficiently low. Plus, the actual amount of data stored is tiny, you can replicate it in seconds. Amazon has solutions for this if it's truly of concern, I would guess google does as well. IMHO, the operation of etcd - and the fact the data became unreadable if you lost quorum - was a much higher risk factor than possible disk failure. It was impractical to backup as well, you either have quorum or you don't. Even without NFS, I could backup that sqlite db every 5 minutes via a cron job and have most of my cluster state perfectly preserved.
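That cron-job backup is trivial with sqlite's online backup API; a sketch in Python (the state.db path is what I'd expect from k3s's defaults, adjust as needed):

```python
import sqlite3


def backup_state(src_path, dest_path):
    """Snapshot a live sqlite database via sqlite's online backup API,
    which yields a consistent copy even while the server is writing."""
    src = sqlite3.connect(src_path)
    dest = sqlite3.connect(dest_path)
    try:
        src.backup(dest)  # copies all pages from a consistent read view
    finally:
        dest.close()
        src.close()


# Run from cron every 5 minutes, e.g.:
# backup_state("/var/lib/rancher/k3s/server/db/state.db",
#              "/backups/k3s-state.db")
```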


Disk failure happens quite frequently for me at scale, but so do other things like RAM going bad, network cards dying, entire mainboards just acting weird, or top-of-rack switches silently dropping packets because of memory corruption (all of these have happened to machines I'm responsible for in the last six months). Again, I think this is a matter of scale. If you've got enough machines that disk failure is a concern, you can also run a 9-node etcd cluster and have a big enough pager rotation that keeping 5 of them up 99.999% if not 100% of the time isn't a challenge. If you have less than a rack of machines and you're not at the point where you're worried about having a SPOF in your network switches or power supplies, running etcd is a bunch of overhead for a problem you don't have, and you are genuinely better served by a robust non-distributed system whose availability is the availability of your hardware.


Alternatively the K3s authors could have embedded a single node etcd process into Kubernetes using the embed package instead of introducing sqlite.

https://godoc.org/github.com/etcd-io/etcd/embed

This is something the Kubernetes community might consider as well.


Would it work on Android? That would be interesting.


Darren mentioned on the CNCF webinar that he has plans to test it on chromeOS, which is similar to android, but that it hasn’t been done yet.


This looks great, playing around with it right now. They seem to have removed PersistentVolumes, too. I'd like to run a MySQL database on K3S, how would I go about making sure that my database's data survives when its pod dies?
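The fallback I'm considering is a plain hostPath volume, something like this (a sketch; the image and path are just what I'd try, and it pins the pod to one node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql  # MySQL's data directory
  volumes:
  - name: data
    hostPath:
      path: /opt/mysql-data      # survives pod restarts on this node
      type: DirectoryOrCreate
```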


I mean, I've built a Kubernetes-y thing for my own needs, mostly "scheduling" LXC and FreeBSD Jails, and it needs hella less than 256MB of RAM and is around 20MB. Maybe I should clean up the code and publish it.


That's literally not Kubernetes, though. Does it support custom resource definitions? Services? Deployments? ConfigMaps? Multiple containers on a single host, i.e. Pods?

Your project does sound very cool but I don't think provides a fair basis of comparison re: size and RAM usage.


Very good question! Custom resource definitions -> no. Services -> yes. Deployments -> yes. ConfigMaps -> no. Multiple containers on a single host -> yes. I should add the most common features, I think. I also haven't integrated it with any reverse-proxy service yet, but will try that too.


The site has been updated to reflect that the nodes only need 512MB of RAM to run K3s _and Kubernetes workloads_. K3s itself doesn't consume 512MB of RAM.


@mods: This should be merged with https://news.ycombinator.com/item?id=19257675


This looks great, we're planning to test with the intention of moving our CI/CD orchestration onto this.

We use Calico internally, is there a plan in the future to allow other SDNs?



Anyone know if this will run on a Pixelbook? (inside the Crostini linux virtual machine in ChromeOS). Thanks


Got excited about this too, but unfortunately no, it doesn't run on Crostini. Fails when querying netfilter.

Also tried through docker-compose from the GH repository. It starts the k3s server fine but fails to start the nodes, as they require privileged container mode, which doesn't work on Crostini at the moment.


Mentioned this elsewhere, but if you create an issue, Darren mentioned on the CNCF webinar that they would be looking into ChromeOS soon.


There's also a feature request in ChromeOS bugtracker about adding the missing bits for minikube, part of which seem to be similar to what k3s is using: https://bugs.chromium.org/p/chromium/issues/detail?id=878034


I find it humorous how many people in this thread seem so willing to use this in production.

If you can't make vanilla upstream Kubernetes work, it is very ill-advised to think you will be met with success using a heavily modified fork that has a fraction of the support of k8s.


In an environment where only a fraction of support is needed, it doesn't make sense to run a fat distro with features that you'll never use. K3s is designed for low-power, low-resource environments that don't need things like EBS or NFS volume support or alpha/beta API features. Imagine edge environments like traffic management networks, wind farms, telco substations, or other places where environmental constraints make it more reliable to run something fanless with flash storage and 1GB of RAM. Stock Kubernetes will choke under those circumstances, but does that mean that those environments are forbidden from using the benefits of Kubernetes?


It is nice to see people working on making software smaller!

It leaves me wondering though, what is Kubernetes doing with all of these resources? When I saw "lightweight" in the title I expected 5.12MB of RAM, not 512MB.


A bit off topic, but has anyone used K8s along with Proxmox?


Not a k8s user, but no reason it shouldn't work from what I read about it. Spawn a few CoreOS or RancherOS VMs and off you go.

Some people have even used it with LXC, though it seems to take a bit of work. e.g. https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f0...


Gonna check this out, thank you! I tried installing CoreOS a couple years ago and I recall encountering some issues using LXC, but maybe it'll work now :)


Yes. I run my k8s cluster on a Proxmox host.

Any specific questions you have?


I'm mostly curious how you got it all set up? Did you install something like CoreOS as the host?


Sorry for the late answer.

Host (inside the VM) is a plain Ubuntu 18.04 + Docker 18.06 and Kubernetes 1.13. Currently I have 1 Master and 2 Workers.

I tried to use LXC (and it worked), but for some reason the same setup resulted in 50% (yes, 50%) IOWAIT with LXC compared to 4-5% in the 3 KVMs. I didn't really care to dig into the reason. The server I run it on only has 2 7200rpm SATA drives in RAID 1, so it's obviously not what you'd want for massive load or an actual production setup.

But it's basically my playpen + some private "production" servers, so with 1 user the performance doesn't really matter.


And we have done it! Solutions for the solutions!


kubes



