
Announcing Docker 1.9: Production-Ready Swarm and Multi-Host Networking - ah3rz
http://blog.docker.com/2015/11/docker-1-9-production-ready-swarm-multi-host-networking/
======
silversmith
On the topic of Docker and multi-container, multi-machine orchestration... Is
there a comprehensive "Docker deployment for dummies" guide out there? For
example, let's say I have a couple of web applications with their Dockerfiles
ready, plus a database and a Redis instance on the software side, and then a
couple of server instances for it all to run on. Where do I go from there?
What's the best process to package everything up and get it to run on those
servers? To deliver updates to those applications, preferably in a
zero-downtime manner? I have a vague notion that my CI should be building the
images and pushing them to something called a Docker registry. But how are
those secured? Is that a paid service? And what happens then? How do servers
know to fetch and run the new version?

~~~
jschorr
This space is still (fairly) new, so the general answer seems to be that there
are multiple solutions to each problem, some that work well with others and
some that do not.

For orchestration, offhand the most active projects seem to be Kubernetes [1],
Swarm [2], Deis [3] and Mesos [4]. Kubernetes is built primarily by Google,
Swarm by Docker and Deis by EngineYard, with each team having experience in
different areas (orchestration, containers and full-tier solutions,
respectively).

[1] [http://kubernetes.io/](http://kubernetes.io/)
[2] [https://docs.docker.com/swarm/](https://docs.docker.com/swarm/)
[3] [https://github.com/deis/deis](https://github.com/deis/deis)
[4] [http://mesos.apache.org/](http://mesos.apache.org/)

Kubernetes, Swarm and Mesos handle the orchestration portions only, while Deis
is a more feature-complete solution that handles the CI and registry portions
as well.

Delivering updates to these solutions, and doing so with zero downtime, is
still very early as well. Kubernetes has a rolling update mechanism, but it
can still (occasionally) result in downtime if not set up correctly. Deis
handles updates via git-push and will ensure that new containers are in place
before the old ones are taken out of service. As for Swarm, my personal
knowledge of its rolling updates is limited, so I'll leave that for someone
else to fill in.

For building and delivering images, there are likewise multiple solutions. The
common approach is to use a Docker-compatible registry such as Quay [5]
(Disclaimer: I'm a lead engineer on the Quay team) or the Docker Hub [6]. In
addition to supporting simple image pushes, both registries also support
building images in response to GitHub or Bitbucket pushes, so they can be used
as an integrated CI of sorts. Both services charge for private repositories.
Docker also has an open-source registry [7] which can be run on your own
hardware or a cloud provider.

Registries are secured by running under HTTPS at all times (unless explicitly
overridden in Docker via a daemon flag), and by having user credentials for
pushing and (if necessary) pulling images. Registries typically offer support
for organizations and teams as well, to allow for finer-grained permissions.
Finally, some registries (such as Quay) offer robot credentials or named
tokens for pulls that occur on production machines, as an alternative to using
a password.

[5] [https://quay.io](https://quay.io)
[6] [https://hub.docker.com/](https://hub.docker.com/)
[7] [https://github.com/docker/distribution/blob/master/docs/deploying.md](https://github.com/docker/distribution/blob/master/docs/deploying.md)
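As a concrete sketch of that flow (the image, organization and token names
below are made up for illustration), a CI job might push and a production host
might pull like this:

```shell
# CI: tag the freshly built image and push it to the registry
docker tag myapp quay.io/myorg/myapp:v1.2.3
docker login -u "myorg+deploy" -p "$ROBOT_TOKEN" quay.io
docker push quay.io/myorg/myapp:v1.2.3

# Production host: pull with the same (robot) credentials
docker pull quay.io/myorg/myapp:v1.2.3
```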

In terms of how servers know when updates are available, it all depends on
which orchestration system is being used. For Kubernetes, we at CoreOS have
been experimenting with a small service called krud [8] which reacts to a Quay
(or Docker Hub) image-push webhook and automatically calls Kubernetes to
perform a rolling update. Other orchestration systems have their own means and
methods for propagating the fact that the image to deploy has changed.

[8] [https://github.com/coreos/krud](https://github.com/coreos/krud)
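As a rough sketch of the idea (this is not krud's actual code, and the payload
field names are assumptions modeled loosely on Quay/Docker Hub-style
webhooks), the core of such a hook just turns a push notification into a
rolling-update call:

```python
# Hypothetical webhook handler core: map a registry push notification
# to a `kubectl rolling-update` invocation.
def rolling_update_cmd(payload, controller="myapp"):
    repo = payload["repository"]
    # Fall back to "latest" when the webhook carries no tag list.
    tag = (payload.get("updated_tags") or ["latest"])[0]
    return ["kubectl", "rolling-update", controller,
            "--image", "%s:%s" % (repo, tag)]

cmd = rolling_update_cmd(
    {"repository": "quay.io/myorg/myapp", "updated_tags": ["v2"]})
print(" ".join(cmd))
# → kubectl rolling-update myapp --image quay.io/myorg/myapp:v2
```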

Hope this information helps! (and if I forgot anything, I apologize)

~~~
thedevopsguy
The Docker ecosystem is hard to follow. As you've just mentioned, there are
multiple solutions to each problem. Docker-based solutions for
orchestration (Swarm), storage (v1.9) and networking (v1.9) overlap with the
offerings from Kubernetes, Mesos, Flocker and a whole bunch of others.

It's hard to know whether to wait for Docker to provide a solution or to use
something that already has momentum. Take networking, for example. Solutions
have been bandied about for the last year or so, and only now do we have
something that's production ready. Do I rip out what I already have for
something that is Docker-native, or do I continue with the community-based
solution?

Storage (data locality) also follows a similar path. Kubernetes provides a way
of making network-based storage devices available to your containers. But now,
with the announcement of Docker v1.9, do I go with their native solution or
something that has been around for ~6 months longer?

I've been working with these technologies for the past year and it has not
been easy building something that is stable with a reasonable amount of
future-proofing baked in.

~~~
robhaswell
My advice would be to think hard about your requirements and pick something
which meets them. Don't fret about the "best" solution - you and your team
have more important problems to solve. If something works for you then you
have made the right choice. All the solutions you would pick today will still
be around tomorrow.

------
ninkendo
How is the multi-host networking implemented? Is there a dependency on a
service discovery system? What service discovery system? Or are they using
portable IP addresses? How are those implemented? Overlay networks? BGP? Or is
it doing some crazy IPTables rules with NAT?

Will it work with existing service discovery systems? What happens if a
container I'm linked to goes down and comes up on another host? Do I get
transparently reconnected?

There's so much involved in the abstraction they're making that I suspect it's
probably an unwieldy beast of an implementation that leaks abstractions
everywhere. I'd love to be proven wrong, but their lack of details should make
any developer nervous to bet their product on Docker.

~~~
trout
This link is basically what Socketplane was working on when they got acquired:
[https://github.com/docker/docker/issues/8951](https://github.com/docker/docker/issues/8951)

Basically, they integrated OVS APIs into Docker so it could use more mature
networking code as well as VXLAN forwarding. VXLAN is basically IP
encapsulation (with a 24-bit network ID) that the networking industry has
standardized on. It more or less allows for L2 over L3 links. I like to think
of it as the next Spanning Tree.
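For the curious, the encapsulation header itself is tiny. A minimal sketch of
the VXLAN header layout (per RFC 7348, with its 24-bit network identifier) in
Python:

```python
import struct

def vxlan_header(vni):
    # VXLAN header (RFC 7348): 8 bytes total. A flags byte (0x08 means
    # "VNI present"), 3 reserved bytes, the 24-bit VXLAN Network
    # Identifier (VNI), and one final reserved byte.
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def vni_of(header):
    # Recover the VNI from the second 4-byte word.
    return struct.unpack("!II", header)[1] >> 8

hdr = vxlan_header(5000)
assert len(hdr) == 8 and vni_of(hdr) == 5000
```

This header rides inside a UDP datagram prepended to the original Ethernet
frame, which is how an L2 segment ends up carried over L3 links.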

So the unwieldy part is the weight OVS brings as well as the VXLAN
encapsulation in software - both of which have momentum towards being more
lightweight.

~~~
ghayes
Could you explain this to someone not as familiar with networking?

------
dcosson
I don't quite understand the swarm & compose workflow for production. I'd
rather use a declarative language to specify what the systems look like,
potentially with auto-scaling, health checks to replace containers if they go
down, etc. I don't want to run one-off commands to launch containers based on
local instead of centrally stored configuration, run one-off commands to
launch the underlying hosts and to scale to more instances (which then isn't
persisted anywhere), etc.

I feel like I'm just not understanding the "docker approved" approach. Which
is surprising because docker itself is so great.

The networking stuff seems interesting though, I'm very curious how the rest
of the ecosystem will evolve to take advantage of it or not.

~~~
olalonde
Swarm & compose are pretty low level and not sufficient for production
deployments in my experience. In production, you usually need things like
logging/monitoring, versioning/rollbacks, scaling, configuration/user
management, load balancing, etc. which you either have to setup yourself or
get through a PaaS. I personally recommend Deis because it's Docker/CoreOS
based (lightweight) and its developers are very active and responsive. I have
to admit though I haven't evaluated all alternatives.
[https://github.com/deis/deis](https://github.com/deis/deis)

~~~
dalyons
If you don't mind using AWS, check out Empire. It's an open-source, 12-factor
compatible PaaS built on top of Amazon's robust ECS container scheduler.
Rollbacks, versioning, load balancing and scaling are included.

[https://github.com/remind101/empire](https://github.com/remind101/empire)

------
merb
Docker is moving more and more in a direction I don't like.

EDIT, the missing content: I mean it's currently mostly for big users. There
isn't much for "small" users. The big things like Kubernetes etc. are really
hard to configure and maintain. It's easier to maintain ansible / puppet /
chef scripts than a real "docker" environment. Even looking at Deis, Flynn and
OpenShift, it's not just "run this, upgrade with this".

After you set up the whole thing, you need to create huge buildpack scripts or
Dockerfiles or Kubernetes configs or whatever. You just needed process
isolation; now you're building an infrastructure on top of an infrastructure.

~~~
ed_blackburn
Certainly. There's an excellent opportunity there if someone is willing to
execute. Would you (the community) pay for a product? Or support an open
source project through consultancy? Or are we going to sit and wait for a
large engineering organisation to build it internally and open source?

~~~
merb
I would pay, yes. However, it would need to be an excellent product, and I
don't think any product could, in the near term, replace an existing
ansible/puppet/etc. workflow which consists of a bunch of lines (less than
1,000 for multiple projects).

As said, the only gain from Docker would be process isolation, so it should be
really, really simple and useful on low-end hardware (as the other solutions
already are), and getting process isolation with cgroups isn't too hard on
newer kernels (# systemctl set-property httpd.service CPUShares=500
MemoryLimit=500M).

So what the product needs to have:

    - process isolation
    - easy configuration
    - configure the OS/software and update it easily
    - nothing more than a bunch of lines per project (no Dockerfile fiddling)
    - binary / git rollouts

~~~
ownagefool
Installing Kubernetes isn't actually as difficult as you've made it out to be.
You could get it running with less effort than drafting a comparable workflow
in Puppet, and you'll get things such as health checking and failover of your
apps mostly for free.

If you're heavily invested in Puppet, using Puppet is going to be easier
because you know it. You can happily use Docker with Puppet: stop Puppet from
installing $APP and instead use it to docker pull && docker run $APP.

This means the logic for building your application has obviously moved to the
Dockerfile. You cannot currently get rid of this logic, only hide it behind
abstractions. I prefer it living in the app's repo, as it's a nice separation
of concerns, but you obviously would prefer it to be magic, which you can
have, at the price of versatility.

If you are able to write code to build and deploy apps, then moving over to
Docker should be pretty trivial for you. However, I actually see Docker as a
replaceable part, whilst Kubernetes might actually be here to stay.

------
jonesetc
I've been watching these 5 RCs for a while now because of some great additions:

- Build-time variables:
[https://docs.docker.com/engine/reference/builder/#arg](https://docs.docker.com/engine/reference/builder/#arg)

- The docker volume subcommand: finally we have a sane way to clean up
leftover volumes, which are left behind by default.
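For example (a sketch, assuming the 1.9 CLI), cleaning up dangling volumes
becomes:

```shell
# List volumes no longer referenced by any container...
docker volume ls -f dangling=true
# ...and remove them in one go
docker volume rm $(docker volume ls -q -f dangling=true)
```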

~~~
russell_h
That volume subcommand is critical to any description of Docker as "production
ready". Docker defaults to leaving volumes "dangling" on the filesystem and
there was previously no supported way to remove them. Really glad to see this
make it in.

------
stephenitis
Congrats to the Docker team on 1.9 and Swarm 1.0! From the ClusterHQ team.

Thanks for the shout-out to Flocker in the docker volume example.

    $ docker volume create -d flocker --name=myvolume
    $ docker run -v myvolume:/data busybox sh -c "echo hello > /data/file.txt"
    $ docker run -v myvolume:/data busybox sh -c "cat /data/file.txt"

On a related note, we've been having a great time seeing volumes get mounted
and remounted with Swarm 1.0/Flocker.

------
tonyhb
"We’ve been scale testing [swarm] to 1,000 nodes and 30,000 containers and it
keeps on scheduling containers in less than half a second."

That's really, really exciting. Been looking forward to the GA release of
Swarm for quite some time, glad to see it here.

------
twelvenmonkeys
Freakin' awesome. Congrats to the Docker team on this new release! Looking
forward to testing out the multi-host networking.

On a side note, is there any authentication / security for public-ip docker
multi-networking?

------
zwischenzug
I've put together this asciinema demo to show some of 1.9's new features:

[https://github.com/ianmiell/shutit-docker-1_9/blob/master/README.md](https://github.com/ianmiell/shutit-docker-1_9/blob/master/README.md)

feedback welcome @ianmiell

------
willyyr
Great to see this. I've been missing Docker Compose on Windows. Having played
around with Swarm for some time now, I was hoping for it to become production
ready. Does anyone know if Swarm can now reschedule failed containers onto
another host? I couldn't find this detail in the blog post.

------
amalag
I am really trying to figure out the ecosystem. I did some stuff with a single
server but now as we need to move it to multiple servers we have Rancher,
Weave, and so many others (kubernetes?). And now docker has integrated
multihost networking so I am really not sure how to proceed.

~~~
jacques_chester
I'm biased: use a full PaaS. I like Cloud Foundry because I've worked on it.
OpenShift is another alternative.

~~~
DannoHung
Pretty sure OpenShift v3 is built on top of Kubernetes.

~~~
jacques_chester
And Cloud Foundry can run Docker containers.

The point is that both are _platforms_. Application developers shouldn't
really need to care about the internals of a PaaS, for the same reason that I
don't really care about the internals of the Linux kernel.

------
sytse
What is the difference between Compose, Swarm, Tutum and Kubernetes? To me it
looks like you can use each of them to compose a set of containers to run an
app.

~~~
lobster_johnson
Swarm and Kubernetes are definitely competitors.

Swarm is a container manager that automatically starts and stops containers in
a cluster using a scheduling algorithm. It implements the Docker API, so it
actually acts as a facade that aggregates all the hosts in the pool. So you
talk to it just like you would with a single-host Docker install, but when you
tell Swarm to start a given container, it will schedule it somewhere in the
cluster. Asking Swarm to list the running instances, for example, would list
everything running on all the machines.
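For example (a sketch; the manager address is hypothetical), you point the
ordinary docker client at the Swarm manager and use it exactly as you would a
single engine:

```shell
# Talk to the Swarm manager instead of a single engine
export DOCKER_HOST=tcp://swarm-manager.example.com:4000

docker run -d redis   # Swarm schedules this somewhere in the cluster
docker ps             # lists containers across every host in the pool
```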

Kubernetes is also a container manager. The biggest difference is perhaps that
it abstracts containers into a few high-level concepts — it's not tightly
coupled with Docker and apparently Google plans to support other backends —
that map more directly to how applications are deployed in practice. For
example, it comes with first-class support for exposing containers as
"services" which it can then route traffic to. Kubernetes has a good design,
but for various reasons the design feels overly complicated, which is not
helped by some of the terminology they've invented (like replication
controllers, which aren't programs, but a kind of declaration), nor by its
somewhat enterprisey documentation.

Kubernetes is also complicated by the fact that every pod must be allocated a
public (or at least routable) IP. If you're in a private data center that
already has a DHCP server set up, that's a non-issue, but in this day and age,
most people probably will need an overlay network. While there are tons of
such solutions — Open vSwitch (aka OVS), GRE tunnels, IPsec meshes, OpenVPN,
Tinc, Flannel (formerly Rudder), VXLAN, L2TP, etc. — none of them can be
called _simple_. Of course, plain Docker doesn't solve this in any
satisfactory way, either, but at least you can be productive with Docker
without jumping into the deep end like Kubernetes forces you to do.

Docker Networking is a stab at solving the issue by creating an overlay
network through VXLAN, which gives you a Layer 2 overlay network. VXLAN has
historically been problematic because it has required multicast UDP, something
few cloud providers implement, so I didn't consider VXLAN a mature contender;
but apparently the kernel has supported unicast (which cloud providers do
support) since at least 2013. If so, that's probably the simplest overlay
solution of all the aforementioned.

As for Compose, it's a small tool that can start a bunch of Docker containers
listed in a YAML file. It's unrelated to Swarm, but can work with it. It was
designed for development and testing, to make it easy to get a multi-container
app running; there's no "master" daemon that does any provisioning or anything
like that. You just use the "compose" tool with that one config file, and it
will start all the containers mentioned in the file. While its usefulness is
limited right now (for example, you can't ensure that two containers run on
the same host, unlike Kubernetes with its pods), the Docker guys are working
on making it more mature for production use.
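To make that concrete, a minimal (hypothetical) docker-compose.yml for a web
app with a Redis backend might look like the following; `docker-compose up`
then starts both containers:

```yaml
web:
  build: .
  ports:
    - "8080:8080"
  links:
    - redis
redis:
  image: redis
```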

~~~
bboreham
> If so, that's probably the simplest overlay solution of all the
> aforementioned

(I work on Weave)

Weave Net also lets you create a Docker overlay network using VXLAN, without
insisting that you configure a distributed KV store (etcd, consul, etc.). So I
would argue Weave Net is the simplest :-)

More detail here: [http://blog.weave.works/2015/11/03/docker-networking-1-9-weave-plugin/](http://blog.weave.works/2015/11/03/docker-networking-1-9-weave-plugin/)

------
davexunit
Can it create unprivileged containers yet?

~~~
boucher
The experimental build has support for user namespaces.

------
sshykes
I am curious (and afraid to look): How much breakage and backwards-
incompatible API changes are there this time 'round?

~~~
bfirsh
Here's a changelog for the API:

[https://docs.docker.com/engine/reference/api/docker_remote_a...](https://docs.docker.com/engine/reference/api/docker_remote_api/)

The API is backwards compatible, so if you don't want any of the new features,
there is no breakage! E.g. if you're using Docker 1.8 (API version 1.20),
prefix your API calls with /v1.20/ and you're safe to upgrade to Docker 1.9.
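For instance (a sketch, assuming the daemon listens on the default Unix
socket), a version-pinned call looks like:

```shell
# This keeps returning the same shape of response after upgrading to 1.9
curl --unix-socket /var/run/docker.sock http://localhost/v1.20/containers/json
```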

------
m_mueller
What's the relation between Docker Swarm and Hipache? Is Hipache discontinued?
Is Swarm built on it? Is it compatible?

Also, is it possible to make Swarm answer the same client IP with the same
backend server during a 'session'? This is very important for database
applications, where the sync after each write may take some time and you don't
want the UI to show different states from different servers until the system
has settled. Hipache AFAIK doesn't offer this, which IMO is its biggest
downside.

~~~
cpuguy83
Hipache is a reverse proxy; Swarm is for managing/scheduling a cluster of
Docker engines.

~~~
m_mueller
Well, when I think of a cluster I think of load balancing and failover - for
which you can use a reverse proxy such as Hipache. So I don't quite understand
how this is not related. The way I understand Swarm, it could be just a higher
level abstraction built on top of that?

~~~
cpuguy83
Load balancing and failover are generally a requirement in clustered
environments, but they are really different jobs. LBs "schedule" connection
requests; cluster schedulers schedule the things those connections are going
to (and probably the LB itself), make sure they are up and running, and so on.

------
tachion
Congratulations on new release! Sadly, still without FreeBSD Jails support,
that was sort-of promised around 1.0...

~~~
cpuguy83
I'm not sure how complete the support is, as I don't run BSD, but I know it's
being worked on and it at least compiles on BSD. There's actually quite a bit
of BSD-only code in already.

------
TheMakeA
Does anyone know if Docker Toolbox supports NFS (or something faster than VBox
Shared Folders) yet?

~~~
picozeta
Couldn't you use ssh and build something with sshfs?

~~~
picozeta
May I ask why this was downvoted? Isn't the idea of VBox's shared folders to
provide convenient data exchange with the host OS? If one puts ssh into the
Docker container, the host can mount a container's directory into its tree.

------
ossreality
[on-topic, but somewhat identifying comment removed] I didn't realize I was
actually properly unhellbanned. I prefer the echo. :/

------
eungyu
This completes Docker.

------
AdmiralACK
Docker is a solution looking for a problem.

~~~
Perceptes
Can you provide details about why you think that?

