My hope is for more competition in development k8s (or k3s!) - minikube and docker-for-desktop are plagued with high CPU usage problems, and OSX-wielding developers are still doubtful. It will take a lot of work and elbow grease to convince them. I've built a hosted platform that tries to address this, but it's a complex education problem, which I am not well suited to solve :(
It works, but it kind of invalidates all assumptions k8s makes about free memory on a node
Docker Swarm on Raspberry Pi is a very common thing. So, from a performance perspective, these are on par.
Another question is around the ingress and network plugin - is it a seamless "batteries included" experience? Because these are two of the biggest pains in k8s to decide on and set up.
- this has the same great k8s API that Swarm lacks. (Deployment is a first-class citizen in kube land, but you have to make do with the service YAMLs in the Swarm sphere.)
- k3s lacks some in-tree plugins, that swarm might have (mount cloud provider managed block device), but there are out of tree addons
- sqlite instead of etcd3 [but etcd3 is available], so out of the box k3s is not HA-ready (if I interpret this right, there's one API server, and HA is something in development)
- this seems more lightweight than a docker setup in some sense (it uses containerd, not full docker), but of course this needs a proper evaluation
- ingress plugins: good question. It should work with Traefik
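To illustrate the point about Deployments above, here is a minimal manifest sketch (the name, labels, and image are illustrative, not from the thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 3          # desired pod count; k8s reconciles toward this
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.15   # any image; the version tag is illustrative
```

Swarm's service definitions cover similar ground, but the Deployment object is part of the same typed API as everything else in k8s, which is the point being made.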
Hmm... nothing against k8s, but its deployment API is an abomination on par with AWS CloudFormation.
You need teams of yaml engineers to manage these things.
That's the exact problem Rancher itself is trying to solve, and it does a pretty fantastic job at it.
Though suggesting it's "on par" with CloudFormation suggests that you either know a ton about CloudFormation (anything becomes second nature if you're skilled in it) or you don't know much about either of them. Kubernetes isn't that bad.
On the other hand, a thought-out declarative language like HashiCorp's HCL is much saner, thanks to IDE code completion/refactoring and static typing.
YAML is 1:1 mappable to HCL - because both map to JSON - but the HashiCorp interface seems nicer because it's simpler.
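As a toy illustration (the field names are invented for the example), the same structure in both notations:

```yaml
service:
  name: "web"
  replicas: 3
```

```hcl
service {
  name     = "web"
  replicas = 3
}
```

Both reduce to roughly the same JSON object, which is why the mapping between them is mechanical.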
For example, can I become an INI engineer?
Of course k8s's YAML API is just a clunky interface, because it's evolving very rapidly and it's a rather low-level thing. So basically, if you work a lot with k8s, it seems like all you do all day is copy-paste [or generate, or parse] YAMLs.
With all due respect, some (like me) consider that a feature of Swarm.
If k3s comes with the same API (and implicit yaml complexity) of k8s, but just reduces RAM usage...well that's not very interesting for me.
Docker Swarm is very lean - there are zillions of videos of people building interesting Raspberry Pi stacks with it (https://youtu.be/mMpZpa7uUSk).
But for most people using Swarm - they are using it for one reason only : simplicity.
I set up my home server with minikube more as a learning exercise, but went back to docker-compose for running all the things I actually rely on (Plex for instance) just because it's so much simpler.
Rollback has existed for a while now - https://docs.docker.com/engine/reference/commandline/service... . You can actually tune the parallelism and the delay.
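For example (the service name `web` and the flag values are illustrative; the flags themselves are documented `docker service update` options):

```
# Roll out an update two tasks at a time, pausing 10s between batches
docker service update \
  --update-parallelism 2 \
  --update-delay 10s \
  --image myapp:2.0 web

# If the new version misbehaves, revert to the previous spec
docker service update --rollback web
```

This is a CLI sketch, not a runnable script - it assumes an existing Swarm with a service named `web`.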
> k3s includes a basic service load balancer that uses available host ports. If you try to create a load balancer that listens on port 80, for example, it will try to find a free host in the cluster for port 80. If no port is available the load balancer will stay in Pending.
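Concretely, the kind of manifest the quote is talking about would look something like this sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80          # k3s tries to claim host port 80 on some node
      targetPort: 8080  # the container port behind it
```

If no node has port 80 free, the Service stays in Pending, per the quoted behavior.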
Go's concurrency features are formally specified, are not ad-hoc (they are a major feature of the language, and the product of multiple language design iterations) and are certainly not bug-ridden.
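As a minimal sketch of the channel-based model the comment refers to (the function and values are illustrative, not from the thread) - goroutines and channels are the language-level primitives in question:

```go
package main

import "fmt"

// sumSquares squares each input in its own goroutine and fans the
// results into a channel - the CSP style Go's concurrency is built on.
func sumSquares(nums []int) int {
	results := make(chan int, len(nums))
	for _, n := range nums {
		go func(n int) {
			results <- n * n // each goroutine sends one result
		}(n)
	}
	sum := 0
	for range nums {
		sum += <-results // receive in whatever order goroutines finish
	}
	return sum
}

func main() {
	fmt.Println(sumSquares([]int{3, 4})) // 25, regardless of goroutine order
}
```

The result is deterministic even though the scheduling is not, which is part of what makes the model tractable.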
Or are you talking about using Kubernetes to manage workloads on machines?
How does Erlang solve running my Python webapp or my SQL database on my EC2 nodes, or, in the case of this project, on an IoT device?
Unfortunately, to use all those features the services needed to be written in Erlang, which isn't bad in itself, but unfortunately isn't the reality in today's multi-lingual atmosphere. It speaks to how well Erlang was designed and written. It took a lot of real-life operational concerns into account in its design, something few other languages even attempt to address.
My number one reason for not using Erlang is lack of static typing. Dynamic type systems are great for small teams, not so great for "enterprise-wide" development. (I know this horse has been beaten to death here, trust me, I have heard the arguments for/against millions of times, I remain unconvinced).
Also, the answer to this question is interesting to me as well http://erlang.org/pipermail/erlang-questions/2008-October/03... -- (granted, this is from Oct 2008) apparently if you were to attempt to add static type checking to Erlang, it would fundamentally change some of the benefits Erlang brings, such as message passing and handling process death. I haven't been part of the Erlang scene for a while, but I'd be interested to hear a rebuttal to that.
We run it in Kubernetes in both our CI environment and locally so we can use the same configuration for both. It runs a dozen microservices and saves us a lot of devops time.
In our CI environment it's nice for a developer to roll out a new copy of the services from a different branch for testing, and because of the autoscaler nodes are dynamically provisioned when needed, and then removed when the test environment is destroyed. That's pretty nice functionality you get basically out of the box. On EKS anyway. There are some occasional rough edges, but I think the decision to use k8s for this makes sense...
At some point we'll migrate prod too.
Kubernetes has a lineage as well... it's based on the Borg project at Google.
I guess I'm not seeing why these things are being compared. Aren't they solving different problems?
So just rewrite all our code in Erlang, including off the shelf products we didn't write?
Then use some other technology to deploy the software, configure the machine, come up with a way to do service discovery, attach block devices to nodes, and automatically provision new machines based on resource usage?
It doesn't make sense yet it's seemingly repeated on every thread about Kubernetes.
The Erlang community should celebrate technologies like Kubernetes, Mesos, or any other number of resource scheduling systems because they're finally the industry taking seriously the problems addressed by Erlang.
Similar to Envoy in a service mesh: you don't need to integrate libraries into your code to get service mesh features when you use Envoy as a sidecar proxy.
I have that conversation with Erlang users on every k8s topic; they seem to miss the point that decoupling these features from the language/runtime is the way to go.
Timely - here's antirez on this topic when Redis was announced on HN ten years ago:
> There are a lot of projects similar to Redis in Erlang actually: http://www.metabrew.com/article/anti-rdbms-a-list-of-distrib...
> Apart from speed what I like of C for this kind of projects is the self-contain-ness and the fact that most developers that may help in the development are probably able to read/write C but not Erlang.
Kubernetes -> K8s -> "kates" -> k3s
One can literally run a full-fledged Java EE server in 64MB of RAM. What?
Also curious whether this can run on OSX.
Disclaimer: not affiliated with any of the companies above. Edit: formatting.
So that basically means local smarts and failover in the case of a hardware failure.
Still happy to learn more arguments for using k8s in IoT (if you are already using k8s then yes, seems like k3s would make sense).
Of all things, the Chick-fil-A tech blog gives a pretty good explanation of why you might want this.
After toying with k8s, we realized just how much tinker time our current docker-n-script approach takes to optimize and scale. We're close to moving to k8s after almost a year of on-again/off-again dev and I'm very excited to roll it out. Not as excited as DevOps, maybe, as it's going to lift a lot of weight off their shoulders.
TL;DR: IoT means different things in different industries.
Practically speaking: I worry this will become a parallel but subtly different implementation of Kubernetes with its own quirks.
Hello, we're already there. Despite the CNCF certification process, there are many flavours of k8s, all with their own quirks. Being on the product side, it is quite painful to support.
I agree the CNCF certification process isn't comprehensive enough.
Take Openshift as the most extreme example - it's a fork of K8s, with a lot of custom functionality like scc and routes, and also a lot of custom behavior like its admission controllers that manage pod security contexts.
It's almost like a different version of k8s - things that will run on GKE or vanilla k8s won't work on Openshift. Yet, it is CNCF compliant.
I think the "high availability by redundancy" story is oversold.
More generally, an awful lot of systems that'd be fine with a SQL mirror (non-failover), some real backups on top, and a restore-from-backup event every 3-5 years, plus the occasional 1hr maintenance window, are instead made "HA" and "redundant" at 10+x the ops cost and, unless very well done, with even more down-time anyway because they're so damn complex.
This is something the Kubernetes community might consider as well.
I am looking forward to this.
Pretty much any business will want inventory tracking, tracking staff hours, monitoring cameras, sensors for detecting flooding/doors left open/freezers dying, running point of sales terminals, keeping the supply chain running smoothly, handling customer loyalty cards, handling returns, etc.
An on site cluster can help with much of that, even if there's a hardware failure or the network is out.
Question is, how will these businesses find the apps they need that run turnkey on some small k8s cluster?
The obstacle to using custom hardware and software in local biz is, and has always been, the fact that by relying on it, the one guy who knows how to administer it all becomes the single point of failure for the whole business.
No amount of tooling can save them from that. So local biz will always be beholden to the Oracles of the world, because smart individuals just can't be relied on.
Businesses need to rely on other businesses. A business with a customer service department and spare stock on hand. K3s or k8s might be used in the technology product offered by such a business, but this by itself won't change the economics of that market.
Seems like there's an opportunity for a cheap 3-node cluster, sold as a value add to businesses and supported as a platform that application writers could target. It might well come with a support contract and some basic functionality, and then users could pay for inventory, staff scheduling, point-of-sale terminal support, integration with food delivery services, etc.
Much like how some home/small business NASs these days have support for 100s of integrations to various online services.
I'm not thinking cluster for performance, just for HA.
For a small business, why not 3x Raspberry Pi to enable as much functionality as possible without network and/or power? A cheap UPS would likely run a few Pis for days. Chick-fil-A (3 NUCs in a k8s cluster) seems pretty proud of their setup, so why not something similar for any similar-size restaurant? 3x Pi for smaller businesses seems like a good fit.
Oh, and a Pi cluster isn't going to need any more AC than a human, even if it's uncomfortable. I have one in my attic and it regularly gets above 110 in the summer, no problems so far.
If the network is out, have the Pi fail over to a cellular WAN; this is pretty common these days. Some consumer routers support this (insert a SIM card), and it's fairly common for Raspberry Pis used for home security to support similar. Handling credit card transactions over cellular is reasonably practical... even failing over to a modem + POTS could do in an emergency.
So between a UPS (even UPS + solar + battery would be reasonable for a Pi) and failover to cellular or a modem, a 3-way cluster could help keep a business up during power outages, storms, earthquakes, fires, and of course node failures.
Question is can the right combination of hardware standardization, software standardization, support, and application store get together to enable chick-fil-a like functionality at a price point acceptable to small businesses?
First of all, do you know what k8s is for? Bin packing. Does your small business have a bin packing problem? (And I don't mean crates) Does your small business even need containers at all?
Second, 3 nodes is more than you need. You only need 2 nodes for HA. There is no universe in which two nodes in the closet of a coffee shop would go down, but three would not.
Third, it's too complicated. It takes teams of people loads of time and money to get it working properly, and then they have to keep supporting it, because release cycles and changing standards, etc. Distributed systems are the most complex and the most costly, unless you're at a huge scale, and then it can be cheaper.
Four, it's unnecessarily expensive, because again, you don't need 3, and it's too complicated.
Five, you don't need 3 nodes to have redundant network paths. A DSL line and a cell modem are pretty easy to plug into one machine.
But six, the real reason this wouldn't work: small businesses do not buy HA.
> Question is can the right combination of hardware standardization, software standardization, support, and application store get together to enable chick-fil-a like functionality at a price point acceptable to small businesses?
Yes, it's called Windows on a Dell.
Docker is on my long-term list (maybe), but the current system is fast, stable and requires little intervention, so Docker would have to offer a step change, not an incremental one, to make it worth it.
I always default to things I know until the benefits are clear and it's worked out well so far.
Just to give more context, our use case is spawning Jupyter notebook in containers using JupyterHub and DockerSpawner so that each data scientist gets a personal ready-to-use environment to work with. With 4 data scientists we're beginning to reach the limits of the largest instance provided by our cloud provider and we're expecting to have more users soon. Actually our current cloud provider has just released a managed k8s service so that might be the way to go.
I highly recommend leveraging AWS, Google, Azure, etc services. Especially if you're a small business, spending a little for off the shelf tools that someone else operates and supports is going to help you big time.
Perhaps you'd be better served using more managed services, though?
Kubernetes is only relevant if you need to scale your service quickly and/or often. If you run it on your own hardware the value is even trickier to work out, though it can still be a benefit. Whether or not it helps you save on hardware cost is extremely dependent on your usage.
- Rolling update? Build it yourself or get it with k8s
- Infrastructure as code? Do it somehow yourself with ansible, awx or other setup OR just use k8s
- Running different workloads on a bunch of hardware with a resource scheduler? Some trickery or x different kinds of VMs, OR just use k8s
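For the rolling-update point in the list above, a sketch of the relevant Deployment fields (the values are illustrative, not a recommendation):

```yaml
# Rolling-update strategy on a Deployment
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica down during the update
      maxSurge: 1         # at most one extra replica above the desired count
```

With Ansible or scripts you'd have to implement this drain-one/start-one dance yourself; here it's a few declarative fields.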
I like it very much, and I hope (and guess) that the k8s way will take over a lot. And since I'm working with k8s I really want to have it at home. k3s sounds like a very good way forward.
If you have a good working little setup, you don't need to replace it with k8s.
The problem with Go is that all those unused libraries are loaded into RAM instead of staying on disk and being loaded when needed.
A hello world in Go (around 1.6MB) is still too large for my OpenWrt router.
A few hundred megabytes of flash will soon cost less than the plastic and manufacturing costs of the chip itself. At that point, manufacturers will be putting in 128-megabyte flash chips simply because 16-megabyte flash chips cost more.
The only place that argument won't apply is in microcontrollers where the flash is on the same die as the CPU.
I worked on a bit of premium consumer electronics. Retail would be about $80, which allowed an electronics BOM cost of $20. When selecting the processor, the size of the flash was a more dominant factor than RAM, clock speed or chip peripherals, because we're in the $1 vs $1.20 region.
There is a lot of software running in some very cheap electronics, all around you all the time. Everyone just forgets because it's all designed in corporations and not talked about on the internet so much.
I still do not understand why you need hundreds of MBs of RAM for a container orchestrator.
Thirty years ago someone could have said the same when seeing laptop computers and remembering when computers literally used to occupy whole rooms.
Come on, let's stop making this kind of joke; it adds nothing to the discussion.
The word 'embedded' is not well defined, but if you use it on things with 1GB of RAM, what do you call a PIC with 1K?
My personal definition of embedded (which is of course still flawed) is anything without an MMU. A Raspberry Pi isn't embedded for instance, it's just a small computer, just like a phone.
There is certainly a lot of technical merit in calling the five-inch thing in my hand a distributed system of multiple computers, but in practice it's not the common definition.
> My personal definition of embedded (which is of course still flawed) is anything without an MMU
Probably "embedded" would be better defined as an electronic device capable of performing digital computations that is small enough to be put literally into other stuff, usually everyday things.
If you mean tiny devices, why not say "microcontroller" or something?
Yes 1GB is huge, my Windows Phones are capable of so much with a tiny portion of it.
Maybe that point in time is now, maybe it isn't. But it is coming nonetheless, that is one thing we can be sure of.
For at least 50% of developers, the "physical reality" is that of gathering client requirements quickly and accurately and implementing them correctly and on budget.
Not of micro-optimizing CPU cache hits :)
A day will come when CPU speeds and RAM sizes stop increasing. When that day comes you won't be able to ignore performance, because a lack of performance will mean actual, real-world dollar costs for the client, and so performance will become the client's #1 requirement.
The client doesn't seem to care, and just sees the 5-20 second webpage load times as totally normal (at my company/industry at least). Our applications are horrifically slow, but we're still the market leaders in our segment.
Yes, but will they tolerate 2 minute page load times? 10 minute?
At some point reality kicks in and you start to have to care about the nuts-and-bolts hardware.
LOB applications are made to replace humans, and they need to be several times faster than human operations (+ the extra benefits of automation). And the vast majority of them are. And most programmers are working on those applications.
So there's 0 incentive for what you're saying.
The real heavy lifting will be done, as usual, by a handful of people: OS devs, DB devs, game engine devs, runtime devs, etc.
And it has to run on every node even if the worker nodes are just running workloads.
However, unlike Modula-3, Mesa/Cedar or Oberon(-2)/Active/Component Pascal, they are load-only instead of being unloadable, and they require manual binding of symbols.
For this kind of systems work, a more feature-rich dynamism is certainly a big missing feature.
I want to use and support this product for this reason alone. Etcd was always an unnecessary complexity, added for god knows what reason. Later cluster management solutions have abstracted etcd creation and management away (thankfully), but it's always irksome that it's there. Thank you to the k3s development team for taking on that challenge!
Of course that's totally fine for many k8s deployments and might even increase reliability for some use cases, but still, moving from a distributed system to a local one is a significant change.
Your project does sound very cool but I don't think provides a fair basis of comparison re: size and RAM usage.
We use Calico internally, is there a plan in the future to allow other SDNs?
Also tried it through docker-compose from the GH repository. It starts the k3s server fine but fails to start the nodes, as they require privileged container mode, which doesn't work on Crostini at the moment.
If you can't make Vanilla upstream Kubernetes work, it is very ill advised to think you will be met with success using a heavily modified fork that has a fraction of the support of K8S.
It leaves me wondering though, what is Kubernetes doing with all of these resources? When I saw "lightweight" in the title I expected 5.12MB of RAM, not 512MB.
Some people have even used it with LXC, though it seems a bit of work, e.g. https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f0...
Any specific questions you have?
Host (inside the VM) is a plain Ubuntu 18.04 + Docker 18.06 and Kubernetes 1.13.
Currently I have 1 Master and 2 Workers.
I tried to use LXC (and it worked), but for some reason the same setup resulted in 50% (yes, 50%) IOWAIT with LXC, compared to 4-5% in the 3 KVMs. I didn't really care to look too far into the reason.
The server I run it on only has 2 7200rpm SATA drives in RAID 1, so it's obviously not the best thing you'd want for massive load or an actual production setup.
But it's basically my playpen + some private "production" servers, so with 1 user the performance doesn't really matter.