Docker Clustering Tools Compared: Kubernetes vs. Docker Swarm (technologyconversations.com)
82 points by vfarcic 767 days ago | 25 comments



Kubernetes is currently a pain to install, granted. Indeed, building a real-life (as opposed to toy) kubernetes cluster is pretty miserable. But other than that, it's a really nice system.

The article complains that kubernetes requires installation but Swarm just runs a container. True, but kubernetes runs atop docker, not inside it (although the kubernetes components do run within docker). I think this is a better design, because it makes migrating off of docker easier.

The article complains that one must know everything in advance. That's not quite true: one is able to add nodes. But yes, they don't add themselves. That's also okay with me, since kubernetes in general seems to take an intentional approach.

The fact that kubernetes avoids using docker commands is a plus for us: we declare what we want to do, and kubernetes does it, rather than us imperatively instructing docker what to do.
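
Roughly, the difference looks like this (a minimal sketch; the name and image are placeholders):

    # Imperative (Docker): tell the engine exactly what to run, right now
    #   docker run -d --name web -p 8080:80 nginx
    #
    # Declarative (Kubernetes): describe the desired state and hand it over
    #   kubectl create -f web-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80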

Contrary to the article, we certainly don't have multiple definitions of anything. We don't have any Docker Compose definitions; we use kubernetes throughout, whether it's single-node running on a local laptop or multi-node in integration, staging and production. Thus, 'in other words, once you adopt Docker, Docker Compose or Docker CLI are unavoidable,' is pure and total nonsense.

In our experience, kubernetes just does things right. The service abstraction, and service discovery, are exactly what we need. The way that replication controllers work is great. The idea of pods of containers, all of which can communicate on the loopback interface, is extremely useful. It's certainly not perfect (cf. installation, supra, and there are plenty of other warts), but it's going in the right direction.
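
The pod idea in particular is hard to appreciate until you use it: two containers sharing a network namespace and talking over 127.0.0.1 (a sketch with placeholder names and images):

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-cache
    spec:
      containers:
      - name: app
        image: example/app          # placeholder; reaches redis at 127.0.0.1:6379
        ports:
        - containerPort: 8080
      - name: cache
        image: redis
        ports:
        - containerPort: 6379       # shares the pod's loopback with "app"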

As an aside, given my experience with docker itself, I'm not eager to adopt any more software from Docker; I'm looking forward to when kubernetes supports rkt or some similar container format.


I agree with zeveb -- that has been my experience with Kubernetes. It is a pain to install. It is a really nice system once it is up and running. The abstractions are at the right level.

I have deployed with Docker Compose and AWS ECS. Kubernetes solves a lot of the pain points I discovered when putting containers into production with Docker Compose or AWS ECS. A concept of a pod that has its own IP address? Awesome. Replication controllers that use pod definitions as templates? Very nice. Resiliency, where the cluster keeps functioning even if the kubelet on a minion or the Kubernetes master / API server goes down? Great! Add-ons that use the K8S definitions? Cool!
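
For anyone who hasn't seen it, a replication controller really is just an ordinary pod definition wrapped in a desired replica count (a sketch; names and image are illustrative):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: frontend
    spec:
      replicas: 3                   # desired count; the controller reconciles toward it
      selector:
        app: frontend
      template:                     # a plain pod spec used as the template
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: frontend
            image: example/frontend # placeholder image
            ports:
            - containerPort: 80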


Disclaimer: I work at Google on Kubernetes.

In re: setup, we'd really really like to be better. And we're getting there, but we really appreciate the feedback. If I might hijack the top comment - we are open source, and would love to hear issues/feedback:

- Github: https://github.com/kubernetes/kubernetes/issues
- Slack: https://kubernetes.slack.com/messages/kubernetes-users/

Thank you!


At my company, we've set up our own Kubernetes cluster and moved our production applications over to it over the past couple of months. While I definitely agree that there is a steep learning curve for getting Kubernetes set up and configured, there are several factually inaccurate things in this article.

> With Docker we were supposed not to have installation instructions (aside from a few docker run arguments). We were supposed to run containers. Swarm fulfils that promise and Kubernetes doesn’t.

We run Kubernetes on CoreOS and every piece runs in a container. On the master, the api-server, controller and scheduler all run in docker containers. On the minions, the proxy and kubelet both run in containers.

> Another thing I dislike about Kubernetes is its need to know things in advance, before the setup. You need to tell it the addresses of all your nodes, which role each of them has, how many minions there are in the cluster and so on.

The master does not need to know these things. Minions do need to know the address of the master, but they can self-register with the master and join the cluster at any time.
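
In practice that means a new minion just needs a kubelet pointed at the master (flag names here are from the Kubernetes releases of that era and may have changed since):

    # on the new node; the master needs no prior knowledge of it
    kubelet \
      --api-servers=http://<master-ip>:8080 \
      --register-node=true
    # a moment later it appears in:  kubectl get nodes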

With that being said, it did take two of our senior devs over a week to get a cluster we were happy with. I'm a fan of docker compose, and hope swarm becomes a good competitor to Kubernetes. I don't think it's there yet though.


> We run Kubernetes on CoreOS and every piece runs in a container. On the master, the api-server, controller and scheduler all run in docker containers. On the minions, the proxy and kubelet both run in containers.

I don't think the direct complaint was that Kubernetes didn't "run in containers" but that it isn't as easy to set up as "just running some containers".

"With Docker we were supposed not to have installation instructions (aside from a few docker run arguments)". He was saying that the Kubernetes setup didn't fit the ease of setup touted by Docker.


It's a fair point -- Kubernetes is a pain to install.

But I think the author then translated that to mean that Kubernetes is also difficult to use. It's much easier to use than it is to install.

On the flip side, when I started seriously using Docker, I quickly ran into its limitations. Docker is easy to use... as long as you stay within the playground. When I started using Docker Compose seriously, I quickly ran into problems it couldn't handle. This was about 12 months ago. I think the current networking and Docker Swarm have expanded that playground ... but they don't bridge the gap to getting this out as reliable and resilient infrastructure.


I have a feeling Docker will get there eventually. It seems they are moving slower because they are being very careful and deliberate with what they introduce.

Docker seems to go for solutions that are general and flexible whereas kubernetes is very opinionated.


I agree that Docker is careful about where they are going.

I'm not so sure that Kubernetes is being opinionated. It sits higher in the stack, and yet it is built as a collection of building blocks.

Chances are, Docker Swarm will end up reinventing Kubernetes.


Kubernetes itself doesn't have a steep learning curve.

Setting up and configuring a Kubernetes cluster by hand does have a steep learning curve. I have spent more time trying to get Kubernetes up and running than I have actually using it.

Actually using Kubernetes is fairly straightforward (once you get it up and running). It solves many of the pain points of seriously using containers in a development workflow. I ended up writing a Ruby framework to bridge the gap -- but that's the point: Kubernetes exposes a set of primitives that I can build on top of. So I'm not entirely sure where the "opinionated" bit comes from.


We're working on making Kubernetes easily deployable on Ubuntu, we just had a session on this today if you want to check out the state of things: https://www.youtube.com/watch?v=aj76OeBjxpk

Our goal is to make the deployment bits production ready for 16.04, and I'm sure $your_favorite_os_vendor is probably doing the same thing.


That's great to hear. I initially installed Kubernetes via the hyperkube containers on my dev Ubuntu box ... but I have found that some key pieces really should be built in (flanneld, early docker). Of course, with Docker 1.9 out, you guys probably want to support that too :-D

For staging, I ended up using CoreOS. I didn't want to use any of the existing scripts to deploy k8s on ubuntu, and I didn't want to write my own Chef recipes. I was a bit intimidated by CoreOS at first, but it has been a great experience ... except when it wasn't :-D.


> Kubernetes requires you to learn its CLI and configurations. You cannot use docker-compose.yml definitions you created earlier. You’ll have to create Kubernetes equivalents. You cannot use Docker CLI commands you learned before. You’ll have to learn Kubernetes CLI and, likely, make sure that the whole organization learns it as well.

That seems unfair: as far as I can tell, there are no "swarm equivalents" for most of Kubernetes' features.

Does swarm have any of the features that kubernetes does for scaling different containers or doing rolling upgrades?
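
For reference, on the Kubernetes side those are one-liners (assuming a replication controller named "frontend"; the image tag is a placeholder):

    # scale the replication controller up or down
    kubectl scale rc frontend --replicas=5

    # rolling upgrade, replacing pods one at a time
    kubectl rolling-update frontend --image=example/frontend:v2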

It looks like [1] the swarm scheduler does not yet restart containers if a host goes down.

[1] https://github.com/docker/swarm/issues/599


NB: I'm one of the founders of the Kubernetes project

I'd recommend anyone trying to install Kubernetes check out: https://get.k8s.io/

Or the single node Docker based instructions:

https://github.com/kubernetes/kubernetes/blob/master/docs/ge...

Or boot2k8s: https://github.com/skippbox/boot2k8s

Or kubernetes on OS X: https://github.com/rimusz/coreos-osx-gui-kubernetes-cluster

Or of course the "as a service" versions like Google Container Engine, CoreOS Tectonic, etc.

We're definitely also working on simplifying the install instructions.

I'd also like to expand a little on some of the features that differentiate Kubernetes from Swarm, namely:

   * Secrets
   * Replicated sets of containers
   * Rolling update from one version of code to the next
   * AutoScaling (Kubernetes 1.1, in Release Candidate now)
   * HTTP load balancing for autoscaling  
   * Load balancing for sets of objects (e.g. Frontend)
   * Service discovery of those replicated sets (Swarm has discovery for individual containers, but no concept of a Service; a sketch of the Service piece is below)

In general, our goal is to build a system that makes distributed system construction easier.
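
The service discovery and load balancing bullets, concretely, are about as small as they could be (a sketch; "app: frontend" is assumed to be the label a replication controller puts on its pods):

    apiVersion: v1
    kind: Service
    metadata:
      name: frontend
    spec:
      selector:
        app: frontend     # matches all pods carrying this label
      ports:
      - port: 80
        targetPort: 80
    # clients in the cluster use the stable name "frontend";
    # connections are spread across every matching pod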

I would also refer people to the discussion in https://github.com/docker/compose/issues/1899#issuecomment-1... as well as https://github.com/docker/docker/pull/8859 on why we feel that the Node-based Docker API is problematic for a cluster level API.


I also work on Kubernetes.

To expand on what Brendan wrote, Kubernetes does a lot more than just run containers. It provides container-centric infrastructure and a platform for building robust automation.

https://github.com/kubernetes/kubernetes/blob/master/docs/wh...

http://www.slideshare.net/BrianGrant11/wso2con-us-2015-kuber...

In a VM-centric world, one wouldn't just use raw VMs in production, but would also use managed groups, load balancing, autoscaling, DNS, Spinnaker, etc. If one manually pins specific containers to specific hosts, IaaS APIs and tools can still be used directly. When dynamically scheduling containers, that doesn't work.


One big advantage of Kubernetes is that Google Container Engine is simply hosted Kubernetes, meaning that if you are on Google Cloud, Kubernetes is already installed and all of that is taken care of for you.

Of course if you are using AWS I believe their container stuff is based on compose.


AWS ECS is entirely proprietary. It's a whole different beast.


Well, they might have meant that they support (most of) the Compose configuration file format.

So if you are currently using docker-compose, it's pretty easy to start it up on ECS.


It's easy to change the config file, but that is very different from actually swapping over to ECS.


I work for Giant Swarm, a German startup. We are preparing to release our less opinionated container stack for public availability. Features include bare metal provisioning, flexible service discovery, user/org management tools, a PaaS'ish interface for devs, and a nice CLI. We're now deploying this solution on-prem for large customers who are looking to run container technologies at scale, including Docker. If you are in operations and are interested, drop me a line @giantswarm.io. My username is the same as here! If you are a developer, we have an alpha test going on and I'm happy to invite you to it.


> "The negative side of that story is that if there is something you’d like Swarm to do and that something is not part of the Docker API, you’re in for a disappointment."

That isn't an entirely fair exploration of the downsides of Docker Swarm compared to Kubernetes. I agree a lot of the wind is taken out of Kubernetes' sails by Docker's new features, but certainly not all of it.

This deserves a longer write-up than I can give it right now, since so few articles on the web really delve into the differences between these systems beyond saying who maintains them. But for its attempt, I'm pleased to see this article.


Disclaimer: I work at Google on Kubernetes.

One of the biggest opportunities for improvement we hear about is just making sure getting it going is easy - there's lots of work in flight to make this better, most recently here: https://github.com/kubernetes/kubernetes/pull/16077

We would really appreciate your thoughts on this - it's an extremely active area on the project.


My biggest beef with Kubernetes, as a prospective user, is the contents of the "AWS under the hood" document. We use AWS, and seeing a bunch of caveats in that document is very off-putting.


I have recently come to the same conclusion. Right now I am trying to figure out the best way to do service discovery and load balancing. It would be really nice if I could just specify some swarm constraints and create a load balancer serving the containers that match those constraints.

Unfortunately it doesn't seem like an easy solution exists yet, but once it does I'm all in with Swarm.
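
The constraint half does already exist in classic Swarm (a sketch; the label and image are made up); it's the "create a load balancer over the matching containers" half that you still have to wire up yourself:

    # classic Swarm scheduling constraint via an environment variable
    docker run -d -e constraint:storage==ssd --name web example/web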


> Right now I am trying to figure out the best way to do service discovery and load balancing.

Or you could just use kubernetes, which does them (mostly) right, right out of the box…

The way it works is that services are exposed as hostnames and environment variables; connections to those hosts on certain ports are load-balanced to the actual backing services, transparently. It's really nice, and very Twelve-Factor-like.
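
Concretely, from inside any pod (assuming the cluster DNS add-on is running and a service named "frontend" exists; the env-var names follow Kubernetes' <SERVICE>_SERVICE_* convention):

    # by name, via cluster DNS
    curl http://frontend/

    # or via the injected environment variables
    echo "$FRONTEND_SERVICE_HOST:$FRONTEND_SERVICE_PORT"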


Articles like this that give zero context for the arguments are annoying. A bit from the author describing who they are, what kind of projects they are building, the size of the company, the (rough) size of the code base, and the number of engineers would help immensely in evaluating their arguments.



