
What is the difference between Compose, Swarm, Tutum and Kubernetes? To me looks like you can use each of them to compose a set of containers to run an app.



The docker command is designed for working with one container at a time.

When working with multiple images, coordinating ports, volumes, environment variables, links, and other things gets very troublesome very quickly as you get into using a mish-mash of deployment and management scripts.

Compose aims to solve this problem by declaratively defining ports, volumes, links, and other things.
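To illustrate, a minimal docker-compose.yml along those lines might look like this (the image names, ports, and paths are just placeholders):

```yaml
web:
  build: .
  ports:
    - "8000:8000"      # host:container port mapping
  links:
    - db               # injects connection details for the db container
  volumes:
    - .:/code          # mount the source tree into the container
db:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=example
```

Running `docker-compose up` against that file starts both containers with the declared wiring, instead of a pile of hand-maintained `docker run` flags.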

Compose does allow you to scale up and down easily but it doesn't do auto-scaling, load balancing, or automated crash recovery -- this is where Swarm comes in.

Kubernetes does what both Compose and Swarm do, but in a single product.

Both Swarm and Kubernetes are designed to accommodate provisioning of resources across multiple hosts and automatically deciding which containers go where.

Compose, Swarm, and Kubernetes are all things you can install yourself.

Tutum is far bigger in scope than Kubernetes and the others do; suffice it to say that it's more of a PaaS than anything else.

Someone please correct me if I'm wrong, I'm not very familiar with Swarm, Kubernetes, or Tutum.


Thanks, that was helpful.


Swarm is a service that sits between your docker cli and your docker engines. It makes it as if you are talking to one docker engine from the cli when in fact you are talking to many. This makes it easier to manage docker engines across multiple hosts.

Compose is a tool that issues commands to docker engines and will e.g. spin up containers and link them together in the right order. It makes rote docker commands a little less painful. It can talk to a single engine or, apparently, many engines via Swarm.

When it comes to providing a production "service" based on containers, you need to be able to add and remove docker engines, to, for example, deploy new code via rolling update. Google Container Engine (GKE) and Amazon ECS marry docker concepts with front-to-back implementations of hosted infrastructure like server instances and network load balancers. Over-simplified, each has an agent that runs on a docker engine and does work similar to Compose and Swarm against AWS and GCE. Google's orchestration system is called Kubernetes.


I won't comment on Kubernetes because I'm not qualified to do so.

Compose: Multi-container orchestration. You define an application stack via a compose file, including links between containers (e.g. a web front end links to a database). When you run `docker-compose up` on that file, Compose stands up the containers in the right order to deal with dependencies.

Swarm: Represents a cluster of Docker Hosts as a single entity. Will orchestrate the placement of containers on the hosts based on different criteria and constraints. You interact with the swarm the same way you would any single docker host. You can plug in different discovery services (Consul, Etcd, Zookeeper) as well as different schedulers (Mesos, K8s, etc).

Tutum: SaaS-based graphical front end for Docker Machine, Swarm, and Compose (although it's Stackfiles, not Compose files, with Tutum). The stuff described above is handled via a web GUI.

You didn't ask but Machine is automated provisioning of Docker hosts across different platforms (AWS, GCE, Digital Ocean, vSphere, Fusion, etc).


If tutum is primarily a gui on top of a bunch of open source products, it doesn't sound like much of a business plan.


Ansible Tower is a gui on top of Ansible.

Docker Hub is a gui on top of the Docker registry.

GitHub is a gui on top of git.

And so on.


It worked well enough to be acquired by Docker :)


There was an article posted on HN not too long ago about this.

https://news.ycombinator.com/item?id=10438273


Swarm and Kubernetes are definitely competitors.

Swarm is a container manager that automatically starts and stops containers in a cluster using a scheduling algorithm. It implements the Docker API, so it actually acts as a facade that aggregates all the hosts in the pool. So you talk to it just like you would with a single-host Docker install, but when you tell Swarm to start a given container, it will schedule it somewhere in the cluster. Asking Swarm to list the running instances, for example, would list everything running on all the machines.

Kubernetes is also a container manager. The biggest difference is perhaps that it abstracts containers into a few high-level concepts — it's not tightly coupled with Docker, and apparently Google plans to support other backends — that map more directly to how applications are deployed in practice. For example, it comes with first-class support for exposing containers as "services" which it can then route traffic to. Kubernetes has a good design, but for various reasons the design feels overly complicated, which is not helped by some of the terminology they've invented (like replication controllers, which aren't programs, but a kind of declaration), nor by its somewhat enterprisy documentation.
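As a sketch of those concepts, here's roughly what a replication controller (the declaration that N copies of a pod should be running) plus a service look like in the Kubernetes v1 API; all the names here are made up:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3              # declaration: keep 3 pods running at all times
  selector:
    app: web
  template:                # pod template stamped out for each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web               # route traffic to any pod carrying this label
  ports:
  - port: 80
```

Note that the replication controller never runs anything itself; Kubernetes continuously reconciles the cluster toward the declared state.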

Kubernetes is also complicated by the fact that every pod must be allocated a public (or at least routable) IP. If you're in a private data center that already has a DHCP server set up, that's a non-issue, but in this day and age, most people probably will need an overlay network. While there are tons of such solutions — Open vSwitch (aka OVS), GRE tunnels, IPsec meshes, OpenVPN, Tinc, Flannel (formerly Rudder), VXLAN, L2TP, etc. — none of them can be called simple. Of course, plain Docker doesn't solve this in any satisfactory way, either, but at least you can be productive with Docker without jumping into the deep end like Kubernetes forces you to do.

Docker Networking is a stab at solving the issue by creating an overlay network through VXLAN, which gives you a Layer 2 overlay network. VXLAN has historically been problematic because it required multicast UDP, something few cloud providers implement, so I didn't think VXLAN was a mature contender; but apparently the kernel has supported unicast VXLAN (which cloud providers do support) since at least 2013. If so, that's probably the simplest overlay solution of all the aforementioned.

As for Compose, it's a small tool that can start a bunch of Docker containers listed in a YAML file. It's unrelated to Swarm, but can work with it. It was designed for development and testing, to make it easy to get a multi-container app running; there's no "master" daemon that does any provisioning or anything like that. You just use the "compose" tool with that one config file, and it will start all the containers mentioned in the file. While its usefulness is limited right now (for example, you can't ensure that two containers run on the same host, unlike Kubernetes with its pods), the Docker guys are working on making it more mature for production use.


> If so, that's probably the simplest overlay solution of all the aforementioned

(I work on Weave)

Weave Net also lets you create a Docker overlay network using VXLAN, without insisting that you configure a distributed KV store (etcd, consul, etc.). So I would argue Weave Net is the simplest :-)

More detail here: http://blog.weave.works/2015/11/03/docker-networking-1-9-wea...


FWIW, it's possible to specify Swarm filters using Compose which Swarm can use to know it should colocate containers on the same node ("affinity"): https://github.com/docker/swarm/tree/master/scheduler/filter...
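Concretely, with classic Swarm you can express colocation in a compose file through an affinity expression passed as an environment variable, which the Swarm scheduler interprets as a filter; the container names here are hypothetical:

```yaml
web:
  image: nginx
  environment:
    - "affinity:container==db"   # schedule this container on the same node as db
db:
  image: postgres
```

Swarm will place `web` on whichever node ends up running `db`, failing the scheduling request if no such placement is possible.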


I think it's worth mentioning that setting up networking in Kubernetes is largely a problem for deployers — not so much k8s users.

There are a variety of hosted, virtualized and bare metal solutions available.

The CoreOS folks have some nice all-in-one VMs if you want to experiment.

Google's hosted Container Engine is about as simple as it gets — and very inexpensive (I have been playing with it for a few weeks and have spent about $20).



