The article complains that kubernetes requires installation but Swarm just runs a container. True, but kubernetes runs atop docker, not inside it (although the kubernetes components do run within docker). I think this is a better design, because it makes migrating off of docker easier.
The article complains that one must know everything in advance. That's not quite true: one is able to add nodes. But yes, they don't add themselves. That's also okay with me, since kubernetes in general seems to take an intentional approach.
The fact that kubernetes avoids using docker commands is a plus for us: we declare what we want to do, and kubernetes does it, rather than us imperatively instructing docker what to do.
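As a hedged sketch of what that declarative style looks like (the name, labels, and image below are made up), you describe desired state and let kubernetes converge on it:

```yaml
# Hypothetical replication controller: declare "three nginx replicas"
# instead of imperatively starting each container with docker commands.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.9
        ports:
        - containerPort: 80
```

Submit it with `kubectl create -f`, and keeping three replicas running becomes kubernetes' job, not yours.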
Contrary to the article, we certainly don't have multiple definitions of anything. We don't have any Docker Compose definitions; we use kubernetes throughout, whether it's single-node running on a local laptop or multi-node in integration, staging and production. Thus, 'in other words, once you adopt Docker, Docker Compose or Docker CLI are unavoidable,' is pure and total nonsense.
In our experience, kubernetes just does things right. The service abstraction, and service discovery, are exactly what we need. The way that replication controllers work is great. The idea of pods of containers, all of which can communicate on the loopback interface, is extremely useful. It's certainly not perfect (c.f. installation, supra, and there're plenty of other warts), but it's going in the right direction.
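To make the pod point concrete: containers in a pod share a network namespace, so a sidecar really can reach the main container on 127.0.0.1. A minimal sketch (container names and images are hypothetical):

```yaml
# Hypothetical two-container pod: the sidecar polls the app over
# loopback because both containers share the pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: app
    image: nginx:1.9
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    command: ["sh", "-c",
      "while true; do wget -qO- http://127.0.0.1:80/ >/dev/null; sleep 10; done"]
```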
As an aside, given my experience with docker itself, I'm not eager to adopt any more software from Docker; I'm looking forward to when kubernetes supports rkt or some similar container format.
I have deployed with Docker Compose and AWS ECS. Kubernetes solves a lot of the pain points I discovered when putting containers into production with Docker Compose or AWS ECS. A concept of a pod that has its own IP address? Awesome. Replication controllers that use pod definitions as templates? Very nice. Resiliency where -- if the kubelet on minions or the Kubernetes master / API server goes down -- the cluster keeps functioning? Great! Add-ons that use the K8S definitions? Cool!
In re: setup, we'd really, really like it to be better. And we're getting there, but we really appreciate the feedback. If I might hijack the top comment - we are open source, and would love to hear issues/feedback:
- Github: https://github.com/kubernetes/kubernetes/issues
- Slack: https://kubernetes.slack.com/messages/kubernetes-users/
> With Docker we were supposed not to have installation instructions (aside from a few docker run arguments). We were supposed to run containers. Swarm fulfils that promise and Kubernetes doesn’t.
We run Kubernetes on CoreOS and every piece runs in a container. On the master, the api-server, controller and scheduler all run in docker containers. On the minions, the proxy and kubelet both run in containers.
> Another thing I dislike about Kubernetes is its need to know things in advance, before the setup. You need to tell it the addresses of all your nodes, which role each of them has, how many minions there are in the cluster and so on.
The master does not need to know these things. Minions do need to know the address of the master, but they can self-register with the master and join the cluster at any time.
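As a sketch of that self-registration (the flag names below are assumptions based on kubelets of roughly this era, and may differ by version -- treat this as illustrative, not a recipe):

```
# Hypothetical minion bootstrap: point the kubelet at the master and
# let it register itself; no cluster-wide inventory needed up front.
kubelet \
  --api-servers=https://<master-ip>:6443 \
  --register-node=true
```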
With that being said, it did take two of our senior devs over a week to get a cluster we were happy with. I'm a fan of docker compose, and hope swarm becomes a good competitor to Kubernetes. I don't think it's there yet though.
I don't think the direct complaint was that Kubernetes didn't "run in containers" but that it isn't as easy to set up as "just running some containers".
"With Docker we were supposed not to have installation instructions (aside from a few docker run arguments)". He was saying that the Kubernetes setup didn't fit the ease of setup touted by Docker.
But I think the author then translated that to mean that Kubernetes is also difficult to use. It's much easier to use than it is to install.
On the flip side, when I started seriously using Docker, I quickly ran into its limitations. Docker is easy to use... as long as you stay within the playground. When I started using Docker Compose seriously, I quickly ran into problems it couldn't handle. This was about 12 months ago. I think the current networking and docker swarm have expanded that playground... but they don't bridge the gap into getting this out as reliable and resilient infrastructure.
Docker seems to go for solutions that are general and flexible whereas kubernetes is very opinionated.
I'm not so sure that Kubernetes is being opinionated. They sit higher in the stack, and yet, it is built as a collection of building blocks.
Chances are, what Docker Swarm will turn into is reinventing Kubernetes.
Setting up and configuring a Kubernetes cluster by hand does have a steep learning curve. I've spent more time trying to get Kubernetes up and running than I have actually using it.
Actually using Kubernetes is fairly straightforward once you get it up and running. It solves many of the pain points with seriously using containers in a development workflow. Though I ended up writing a Ruby framework to bridge the gap -- but that's the point: Kubernetes exposes a set of primitives that I can build on top of. So I'm not entirely sure where the "opinionated" bit comes from.
Our goal is to make the deployment bits production ready for 16.04, and I'm sure $your_favorite_os_vendor is probably doing the same thing.
For staging, I ended up using CoreOS. I didn't want to use any of the existing scripts to deploy k8s on ubuntu, and I didn't want to write my own Chef recipes. I was a bit intimidated by CoreOS at first, but it has been a great experience ... except when it wasn't :-D.
That seems unfair, since as far as I can tell there are no "swarm equivalents" for most of kubernetes' features.
Does swarm have any of the features that kubernetes does for scaling different containers or doing rolling upgrades?
It looks like the swarm scheduler does not yet restart containers if a host goes down.
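For comparison, the kubernetes side of those two features is a pair of kubectl commands (the resource names here are hypothetical):

```
# Replace the pods behind the "frontend" replication controller
# one at a time with a new image (a rolling upgrade):
kubectl rolling-update frontend --image=example/frontend:v2

# Scale the replicated set up or down:
kubectl scale rc frontend --replicas=5
```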
I'd recommend anyone trying to install Kubernetes check out: https://get.k8s.io/
Or the single-node, Docker-based instructions:
Or kubernetes on OS X:
Or of course the "as a service" versions like Google Container Engine, CoreOS Tectonic, etc.
We're definitely also working on simplifying the install instructions.
I'd also like to expand a little on some of the features that differentiate Kubernetes from Swarm, namely:
* Replicated sets of containers
* Rolling update from one version of code to the next
* AutoScaling (Kubernetes 1.1, in Release Candidate now)
* HTTP load balancing for autoscaling
* Load balancing for sets of objects (e.g. Frontend)
* Service discovery of those replicated sets (Swarm has discovery for individual containers, but no concept of Service)
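To make the Service point concrete, a hedged sketch (the names and ports are made up): a Service load-balances across whatever pods match a label selector, so a replicated set gets one stable name regardless of which controller created the pods:

```yaml
# Hypothetical Service: any pod labeled app=frontend becomes a backend.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
```

Other pods then reach the whole set via the stable name `frontend`, which is the Service-level discovery that individual-container discovery in Swarm doesn't give you.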
I would also refer people to the discussion in https://github.com/docker/compose/issues/1899#issuecomment-1... as well as https://github.com/docker/docker/pull/8859 on why we feel that the Node-based Docker API is problematic for a cluster level API.
To expand on what Brendan wrote, Kubernetes does a lot more than just run containers. It provides container-centric infrastructure and a platform for building robust automation.
In a VM-centric world, one wouldn't just use raw VMs in production, but would also use managed groups, load balancing, autoscaling, DNS, Spinnaker, etc. If one manually pins specific containers to specific hosts, IaaS APIs and tools can still be used directly. When dynamically scheduling containers, that doesn't work.
Of course, if you are using AWS, I believe their container stuff is based on Compose.
So, if you are currently using docker-compose, it's pretty easy to start it up on ECS.
This isn't an entirely fair exploration of the downsides of Docker Swarm compared to Kubernetes. I agree a lot of the wind is taken out of Kubernetes' sails by Docker's new features, but certainly not all of it.
This deserves a longer write-up than I can give it right now, since so few articles on the web really delve into the differences between these systems beyond saying who maintains them. But for its attempt alone, I'm pleased to see this article.
One of the biggest opportunities for improvement we hear about is just making sure getting it going is easy - there's lots of work in flight to make this better, most recently here: https://github.com/kubernetes/kubernetes/pull/16077
We would really appreciate your thoughts on this - it's an extremely active area on the project.
Unfortunately it doesn't seem like an easy solution exists yet, but once it does I'm all in with Swarm.
Or you could just use kubernetes, which does them (mostly) right, right out of the box…
The way it works is that services are exposed as hostnames and environment variables; connections to those hosts on certain ports are load-balanced to the actual backing services, transparently. It's really nice, and very Twelve-Factor-like.
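Concretely, for a Service named, say, redis-master, each container gets variables following the {NAME}_SERVICE_HOST / {NAME}_SERVICE_PORT pattern (the service name and values below are made up for illustration):

```shell
# Simulating the environment kubernetes would inject; in a real pod
# these variables are set for you from the Service definition.
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
echo "redis-master at ${REDIS_MASTER_SERVICE_HOST}:${REDIS_MASTER_SERVICE_PORT}"
```

A client just reads those variables (or resolves the service's DNS name) instead of hard-coding backend addresses.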