All of the features removed are based on two basic ideas:
1) If there are X ways to do something, I choose the best one (obviously subjective).
2) The features are not commonly used or really shouldn't be in k8s to begin with.
The end goal of what I'm working on is basically to use k8s code as a library to do orchestration. Step one was to reduce the footprint of k8s (I'm 80% there with this project); step two is to completely rework the state management such that you no longer need to persist state in k8s. That is a much larger goal. I've fooled around a lot with the persistence layer in k8s (this project is running on sqlite3), so I'm basically doing a lot of work figuring out what state I can throw away. The theory is that all desired state comes from your yaml files, actual state is what really exists, and everything else in k8s should be purely derived and unimportant.
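To make that concrete, here is a rough sketch of the distinction using plain kubectl (the names and files are just placeholders, not anything specific to this project):

    # Desired state: nothing but the yaml you apply
    kubectl apply -f nginx-deployment.yaml

    # Actual state: what the cluster says really exists right now
    kubectl get pods -l app=nginx -o wide

    # Everything in between (ReplicaSets, status fields, events) is derived
    # and, in theory, should be reconstructable from the two above
    kubectl get replicasets -l app=nginx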
I've set up DC/OS and played with Nomad, and wrote this post about my frustration dealing with orchestration systems:
That seems overstated, frankly. What is so complex about, for example, GKE that renders it unfit for smaller organizations? If you mean "caution you against hosting your own clusters" I could probably get on board with that.
That's not to say I don't think Kubernetes will survive. I do think it will survive, and it will be around for a while, because it's a good fit for enterprise right now. Not a good cloud provider business, not good for small dev shops, but good for enterprise.
And this makes it large, slow to start, difficult to work with, and provides a bunch of obsolete features you might mistakenly start using, that turn out not to be as well designed as everyone thought, but that can't be fixed for reasons of backwards compatibility?
In 10 years, people could well look at Kubernetes the same way.
Though I’m still running into a hundred gotchas a month :P
I don't work with K8S or even Docker on a day-to-day basis, but to me the ecosystem looks rather good. There are several competing orchestration and containerization solutions from different vendors, and things tend to be standardised so they work together:
Just yesterday there was this post on HN: https://news.ycombinator.com/item?id=18050781
People experiment, dissect, keep some old things, hack new things. K8S is still in active development, and new initiatives like Istio and Knative are built on top of it. It looks like a healthy, innovative ecosystem to me. I guess it depends on your point of view and interests.
No IPv6 support in it at all, and the company seems to have moved resources internally away from Swarm development.
Doesn't bode well for its future. Expecting it'll be taken out to the back shed (-bang-) in a while.
With no actual announcement(s) then occurring.
The commit activity graph on InfraKit - which people are supposed to use now that Docker-Machine has been deprecated - seems pretty clear:
SwarmKit is the same:
Clearly these aren't where resources have been placed.
Actions are not lining up with the words given. :(
This stack doesn't offer all of the same functionality as the Kubernetes ecosystem, but it has the benefit of being vastly simpler and composed of relatively small, independent parts.
We did examine Kubernetes, but for various reasons we need to manage our own cluster and the engineering overhead was too high for our team to take on.
Just wanted to bring your attention to a YouTube channel from Rancher Labs, where "Learning Rancher" videos are posted regularly. But the content is all the same: a boring 10-minute intro to every video, after which the developers go through the same stuff, painfully slowly. Is it meaningful to produce this kind of content over and over again? IMO there are already way better Rancher tutorials by Indian dudes on YouTube.
I've started with k8s thanks to Rancher, and I presume that the vast majority of Rancher users are doing the same. Rancher without k8s does not make sense, and it seems to me that k8s stuff is actively avoided in docs and tutorials. Maybe it would be better to make a series of tutorials on Rancher+k8s, and go through concrete topics like networking and CI?
Just dead weight otherwise. You can always add it later if people really need it.
True, Swagger docs can be helpful, but rubbish, ignored Swagger docs with missing features or undocumented side effects are more confusing than having none at all. Just talk.
You have to write your documentation in a real human language. Maybe I just don't get the point of swagger.
Currently the idea with k8s is that you just destroy anything that has broken state and rebuild it somewhere else. While this is the easiest solution to the problem, I think that containers should not always be treated like disposable objects.
I think in many ways, Solaris Zones (and FreeBSD Jails to a lesser extent) did this right. Every administrative command in Solaris is zone-aware and you can manage your zones like you manage your host. On Linux, containers are completely opaque root filesystems which contain who-knows-what and are neither manageable nor introspectable by the host. Everyone is cargo-culting and creating hacks around this fundamentally opaque system to try to make it less opaque (or just giving up and making it easy to purge them).
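To give a rough idea of what zone-aware means in practice, this is the kind of thing you can do from the global zone on Solaris (sketched from memory, so take the exact invocations as approximate):

    # Enumerate every zone and its state, from the host
    zoneadm list -cv

    # Per-zone process and resource usage, also from the host
    prstat -Z

    # Log in to a zone and administer it like any other machine
    zlogin myzone

On Linux there's no equivalently uniform tooling across container runtimes, which is the gap I'm getting at.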
I'm working on some improvements to the OCI image-spec that _might_ help with introspection of images, but runtime introspection is going to be quite hard (needs runtime agreement).
I get that it's no longer sexy to work on container runtimes, now that everyone is losing their mind over orchestrators (and I imagine soon they'll be losing their mind over the next big thing). But I think that improving these fundamental blocks is pretty interesting and might result in better systems.
you mean, as in "VM"? :)
For instance, in your post I didn't really get your point, because at one point you seem to favor containers being a single entity from the perspective of the OS so they can be easily managed. This is what I understand you called "transparent to the OS". This is opposed to the container root fs that you call "opaque", supposedly because it cannot be easily mounted from the host OS, etc.
As the implementor of container runtimes you know that there's no single container entity, but rather an agglomeration of isolation techniques such as namespaces, cgroups, etc., plus some kind of overlay fs, that make up an actual container. For me that's not very transparent. VMs, on the other hand, are transparent in that respect (in a similar way to Solaris Zones and Jails), as they are actual entities that can be seen and managed on the host OS.
> This is opposed to the container root fs that you call "opaque", supposedly because it cannot be easily mounted from the host OS, etc.
Well, you can mount it and you can manually fiddle with it using 'nsenter', but that's not really ideal. Why (from the perspective of a user) can't you run a package upgrade across your containers? Why can't you get the free memory across all containers? Free disk space? And so on. This information is _available_, but it's not cohesive for _users_.
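To illustrate "available but not cohesive": even something as simple as free disk space per container ends up being stitched together by hand, roughly like this (just a sketch, assuming you're root and have util-linux's nsenter):

    # Enter each running container's mount and pid namespaces and ask df.
    # The data is all there -- there's just no single host-level command for it.
    for id in $(docker ps -q); do
        pid=$(docker inspect -f '{{.State.Pid}}' "$id")
        echo "== $id =="
        nsenter -t "$pid" -m -p df -h /
    done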
> VMs on the other hand are in that respect – in a similar way Solaris zones and Jails are
I agree that Solaris Zones and FreeBSD Jails are transparent (in fact that level of tooling transparency is what I wish we had on Linux). I don't agree that VMs are transparent in the same way -- they are an opaque DRAM blob from the perspective of the host kernel. You can dtrace zones/jails, you cannot dtrace a VM (and get meaningful results about what the guest kernel is doing) AFAIK.
It makes it difficult to reason about containers, as well. For instance do you see confinement technologies such as AppArmor or SELinux belonging to the container or are they slapped on top of containers? Where do you draw the boundary? And does it make sense to talk about "containers" when it's such a vague concept. Also, there are now "VM containers" such as Kata.
Well at some point we have to be able to talk about the widely-used and glued-together namespaces+cgroups+... concept. "Containers" is a perfectly fine thing to discuss (especially since the history of namespaces was definitely based in more "fully fledged" container technologies like Xen).
> Also, there are now "VM containers" such as Kata.
... which are just VMs that can interact with container tools. I'm pretty sure that most people would not call these "containers" from an OS point of view because they do not share any of the primary properties that containers (or Jails/Zones) have -- the host kernel actually knows what the container processes are doing.
My biggest issue with the current state is that the common tools (e.g. Docker, k8s) are pretty terrible. Docker overcomplicates a great many things, which is one reason for the many bugs. And k8s ... well, let's not even go there.
I don't think much more is needed than just `run-container /usr/containers/cool-container` and presto, a container with the files in that directory. No need for buggy/slow platform-dependent FS abstraction layers that change every year because they never quite work right in a bazillion edge cases. This is how e.g. FreeBSD jails work.
This won't prevent implementing tools like `docker pull`; at its simplest it could be `curl ... | tar xf - -C /usr/containers/cool-container`. Running multiple containers based off a single image? `cp -r`.
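Spelled out, the whole "image management" story I have in mind is just the following (run-container being the hypothetical tool from above, and the URL made up):

    # "pull": fetch and unpack an image with stock tools
    mkdir -p /usr/containers/cool-container
    curl -sL https://example.com/cool-container.tar.gz |
        tar xzf - -C /usr/containers/cool-container

    # "clone": multiple containers from one image
    cp -r /usr/containers/cool-container /usr/containers/cool-container-2

    # "run": hand a directory to the hypothetical runtime
    run-container /usr/containers/cool-container-2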
Maybe I'm just an old Unix guy, but I think that composition from generic tools is a much better approach for a lot of this stuff. It's easier, less platform-dependent, has less code (meaning fewer bugs), etc.
I don't think it has to be; I mean, the current Linux/Docker situation is kind of that, but that's just a single implementation.
> most containers are bundles of software (multiple binaries)
Yeah, exactly. Containers aren't just about security, but also about convenience. Something like "docker pull someimage" can be a handy cross-platform way to run stuff.
I think "docker pull" is the reason for Docker's popularity. You can build a self-contained container containing Ruby, all your Ruby on Rails stuff and JS stuff, and whatnot, and then send that to any server and just "docker run" it. Getting some of this dynamic stuff to run can be tricky and error-prone: need to get the correct Ruby version, Rails version, muck about with bundle, etc.
Want to add a new app-server or migrate to somewhere else? Need to spend a day setting everything up again and hope you didn't make a mistake that will bite you later :-(
A self-contained container is a lot easier, even with good OS package management or automation tools like Puppet.
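The whole workflow is basically four commands, which is hard to beat (the image name and registry here are placeholders):

    # On the build machine: bake Ruby, Rails, gems, and JS assets into one image
    docker build -t registry.example.com/myapp:1.0 .
    docker push registry.example.com/myapp:1.0

    # On any new app server: no Ruby/bundler setup, just pull and run
    docker pull registry.example.com/myapp:1.0
    docker run -d -p 3000:3000 registry.example.com/myapp:1.0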
Many people also use Docker as "Vagrant" (e.g. to run dev versions or tests and whatnot). Again, "docker pull" is convenient here, especially when there are certain dependencies/requirements, or for running testing against different versions of some dependency, etc.
Again, I don't think Docker is very good as such, but I do think there are some ideas worth keeping, both for security and plain convenience (it's not often those two go hand in hand!).
The site also has some "playgrounds" for experimentation once you know what you're doing and apparently can be used to create one's own courses, e.g. for training people at your company, etc.
List of all the courses:
Because of this, I'm forced to either use another orchestrator (Docker Swarm is lightweight enough to work on a single node or a small cluster), or pay for machines I don't really need.
What is the current thinking on this issue at Rancher?
If you have many such projects, perhaps something like Saltstack or Ansible, both of which have support for Docker container management built in.
I already use Ansible to set up machines and automate deployments.
I used systemd unit files until I switched to Docker. I don't use them anymore since dockerd supervises the container itself.
I use Docker Compose to describe the set of services composing my app and it's great.
But in production, Docker Compose is unable to provide zero-downtime deployments, and this is why I'm looking at Docker Swarm or Kubernetes.
systemd unit files are not very useful for Docker containers, and they don't solve the zero-downtime deployment issue.
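From what I've read so far, the Swarm route would look roughly like this; I haven't battle-tested it, and the image names are placeholders:

    # One-time: turn the machine into a (single-node) swarm and create the service
    docker swarm init
    docker service create --name web --replicas 3 -p 80:8080 myapp:1.0

    # Deploy: Swarm replaces containers one at a time instead of all at once
    docker service update \
        --image myapp:1.1 \
        --update-parallelism 1 \
        --update-delay 10s \
        web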
Are you saying that the overhead on the worker nodes is about 10% for the kubelet, containerd, and the logging daemons?
GKE documentation says 25% of the first 4 GB of memory is reserved for GKE (so 1 GB on a 4 GB node):
Also, ignoring a feature seems far cheaper than forking a project and giving up updates to the features you do want?
e.g.: feature A is desired, feature B is not, and a new upstream refactor touches both A and B. How do you stay up to date?
Lately I have been playing with Kubernetes and now Docker Swarm. I must say that I too find Kubernetes too heavy (feature- and concept-wise) for the smaller-scale things I am up to: I have to read up on too much, learn too many new concepts, and write too many config files.
Docker Swarm is a lot more appropriate for scaling those side projects up a step. However, I have concerns about the longevity of the project: will Docker Swarm still exist in 5 years?
Thanks for the explanation.
Maybe if you accept a bit of oscillation under stress, and damp that oscillation a lot, it would be ok.
I assume the goal is to make k3s into something light where you can deploy small projects instead of using something like swarm/dokku.
He has also removed etcd and replaced it with sqlite, which seems like a nice way to reduce resource requirements.
Are you saying your project is named "kates"? Why not name it something like "Khios", a Greek island, which still lets you keep 'k3s' as your abbreviation?