
Announcing Docker Machine, Swarm, and Compose for Orchestrating Distributed Apps - KenCochrane
http://blog.docker.com/2014/12/announcing-docker-machine-swarm-and-compose-for-orchestrating-distributed-apps
======
shykes
Hi all. A few clarifications.

\- The meme that we are adding more and more features into the docker binary
is unfounded. Please, please, I ask that before repeating it you do your
homework and ask for actual examples. For example 1.4 is coming out next week:
it has 500+ commits and basically no new features. It's all bugfixes,
refactoring and a general focus on quality. That's going to be the trend from
now on.

\- Swarm and Machine are separate binaries. They are not part of the core
docker runtime. You can use them in a completely orthogonal way.

\- Swarm follows the "batteries included but removable" principle. We are not
doing all things to all people! There is a default scheduling backend but we
want to make it swappable. In fact we had Mesosphere on stage today as part of
a partnership to make Mesos a first-class backend.

\- there is an ongoing proposal to merge compose into the docker binary. I
want to let the design discussion play out, but at the moment I'm leaning
towards keeping it separate. Now's the time to comment if you care about this
- that's how open design works :)

Yes, our blog post is buzzwordy and enterprise-sounding. I am torn on this. On
the one hand, it helps make the project credible in IT departments, which
associate that kind of language with seriousness. We may find that strange,
but if it helps with the adoption of Docker, then it benefits every Docker
user and that's ok with me. On the other hand, it is definitely not popular on
HN and has the opposite connotation of dorky pencil-holder suit douchiness.
Being from that tribe I share that instinctive reaction. But I also realize
it's mostly psychological. I care less about the specific choice of words than
the substance. And the substance here is that we launched a lot of new stuff
today, and put a lot of effort in keeping the focus on a small, reliable
runtime, composable tools which do one thing well, pluggability, open APIs,
and playing nice with the ecosystem. Basically everything the community has
been worrying about recently.

~~~
ridruejo
I think the concern from third-parties is not so much about bloat or whether
everything is bundled in the same binary but that by defining "the way" of
doing things, you are taking the oxygen out of other ecosystem partners that
have alternate approaches. You are making it pluggable, but by having an
official implementation it becomes the de facto standard.

By defining a standard way of orchestrating you make life easier for users
that can deploy with confidence the same environment across multiple clouds,
etc. BUT the tradeoff is that you alienate another set of parties (providers
are commoditized, ecosystem partners offering alternate orchestration tools
feel marginalized). Secondly, there are concerns about whether your approach
is better than others, and it is too early to tell.

I am curious to see how all this plays out, lots of things at stake here and a
very thick fog of war :)

~~~
nl
Anyone who has used Docker for more than a trivial app sees the need for
orchestration. If the requirement is that obvious then it clearly needs to be
part of the core.

~~~
23david
Many, many solutions have been meeting that need just fine, and many new ones
are still in the works. Adding a new solution will not help things.

~~~
nl
Yeah it will.

At the moment I have to choose between multiple competing solutions.
Competition is good when there are strong differentiating features.

In the "Orchestrate Docker" field, the requirements are so clear after your
first project that it isn't at all clear what benefit competition, or
solutions outside the core Docker product suite, provides.

------
themgt
I think the community really ought to take a good minute to consider, beyond
technical reasons, whether it really makes sense to so tightly tie the future
of computing to a single for-profit company's quickly enlarging platform.

Someone below compared this to systemd, but it's really more like your entire
containerization operating system. And since you run everything via
containers, it effectively is your operating system/platform.

So, clearly they (and CoreOS, etc.) will want to monetize their container
operating system/platforms. But is it really a good idea to build the entire
industry's concept and implementation of containers themselves on the back of
a single company's implementation, when we know a healthy ecosystem would see
a number of companies with competing implementations of container OS with
varying degrees of compatibility, and hopefully, eventually, open standards?

I really am beginning to see the CoreOS guys point here - if Docker could have
just stuck to running containers and doing that awesome, there would have been
space for other companies to build out the ecosystem around that shared
interoperable container format. But if Docker is now set on tightly bundling a
toolchain for the container operating system around their format, suddenly it
looks a lot more like they took a Microsoft embrace-extend-extinguish approach
to LXC.

And thus the need for Rocket.

~~~
hosay123
> it looks a lot more like they took a Microsoft embrace-extend-extinguish
> approach to LXC

That's overstating things slightly, since it appears no Docker employee has
ever contributed a single line of code to the LXC implementation itself (that
would be hard work, rather than bikeshedding userspace tools, after all). The
kernel tree contains only one reference to a Docker bug, fixed by a Red Hat
employee, while the LXC tree contains a single reference to a Docker bug,
fixed by a Canonical employee who looks to be one of the LXC maintainers.

The reality is that replacing Docker isn't all that hard, as we're now seeing
with Rocket. The kernel interfaces aren't tied to it (hell, it's clear from
the Git logs that the LXC devs are barely even aware of it).

edit: It's simpler to look at Docker and Rocket for what they both are:
commercial plays built on the work of others. That work hasn't magically
disappeared somewhere, it still exists and is in use (albeit by a slightly
less PR/buzz-aware side of the community).

A vendor-independent implementation of containers already exists, as it did
long before Docker did. If people are serious about seeing the "container
ecosystem" flourish, they should be contributing cash, employment offers, or
employee time to the real people and projects doing the work (as opposed to
publishing upset blog posts and writing fashionable Go code).

~~~
coldtea
> _it appears no Docker employee has ever contributed a single line of code to
> the LXC implementation itself (that would be hard work, rather than
> bikeshedding userspace tools, after all)_

You keep using this word, bikeshedding. I don't think it means what you think
it means.

[http://en.wiktionary.org/wiki/bikeshedding](http://en.wiktionary.org/wiki/bikeshedding)

[http://www.urbandictionary.com/define.php?term=bikeshedding](http://www.urbandictionary.com/define.php?term=bikeshedding)

TL;DR: it's about discussing/debating trivial details instead of the
important characteristics of a system, not about working on easy stuff vs.
harder problems.

~~~
kordless
> while neglecting the design of the power plant itself

I think what hosay123 is trying to say is that LXC is the power plant and the
tooling around it is Docker, which runs in userspace. Note I'm not trying to
say that, I'm just saying things.

~~~
coldtea
That's what I got from his comment too.

But the first difference (with bikeshedding) is that bikeshedding is not
"doing the tooling around the power plant instead of the plant itself" but
spending time discussing and designing some insignificant detail instead of
the power plant.

(That is: bikeshedding is getting lost in DEBATING the easy and
inconsequential details of an implementation.)

Docker is neither "lost in discussion" (they're building things, and a lot,
and fast), nor a trivial detail (that would be e.g. LXC code's tabs vs spaces
convention etc).

------
chuhnk
I'm a little worried about the fragmentation occurring in the container world
right now. I felt like in the beginning I could rely on Docker being focused
on containers, really making that a stable building block, and utilise tools
around it provided by industry leaders. Now Docker have thrown their own hat
in the ring, creating a monopoly for themselves. Do you choose Docker and
their whole ecosystem? Do you pick something else off the shelf? How about
Amazon's ECS container service, or CoreOS with their array of tools?

I don't feel like I can depend on any of these things, so I stick with the
absolute bare minimum of what will build me a container. Which of these
technologies will stay? Which will go? What will change as time passes? What
will be deprecated?

In all honesty, with Kubernetes talking about supporting Rocket and probably
any other container technology that crops up in the next few years, I'm
leaning towards using that as the point of stability which I can deploy
anywhere and know that I get the exact same API. With Google, the leader in
cluster management, writing open-source orchestration technology, I think
that's where I'll keep my focus.

~~~
SEJeff
My bet is on Red Hat's Project Atomic, which uses Kubernetes under the hood
and will eventually support Mesos for scheduling (via Kubernetes).

~~~
chuhnk
I think what Redhat is doing with Atomic and OpenShift is really great. They
are building some awesome stuff around Kubernetes however I feel like that is
going to be entirely geared towards the enterprise space. Much like OpenStack
it's very complicated and the barrier to entry is still quite high. When I
looked at the docs I immediately had to go look at reference info for all this
new terminology they had introduced. They'd basically ignored what had come
out of industry usage and naming. But in saying that, I do hope they cause
massive shifts in the enterprise game.

I have quite a bit of experience in the microservice and cluster management
space and have started to prototype something much more accessible to the
masses. I'll know within the space of 6-8 weeks whether it's actually going to
work or not but nonetheless we need people who understand and use these
technologies on a day to day basis in the general tech space.

~~~
jacques_chester
The next generation of OpenShift is largely hypothetical at the moment.

Cloud Foundry already has a container-based PaaS and it's only getting better.
And it's not tied to a single vendor.

Disclaimer: I'm biased as hell, I work on CF for my dayjob.

~~~
fatherlinnux
The current version of OpenShift (2.X) is essentially Linux container based
(SELinux, cgroups, kernel namespaces). OpenShift 2.X has been running in
production for years now (openshift.com). The reason something like Docker
wasn't used is because nothing existed, so Red Hat had to invent something.
From an app perspective it's fairly good for what it does.

~~~
jacques_chester
Thanks for the correction, I was wrong. I do note that Warden uses a lot of
the same primitives (as do several projects building on the facilities that
arose or were repurposed for Linux-VServer, OpenVZ, LXC etc etc).

------
endymi0n
Wonderful. We just containerized all of our apps and are in the process of
choosing our approach for running and deploying them in a cluster.

Now what?

Flynn? Deis? Kubernetes? Mesos? Shipyard? Pure Fig instead? CoreOS, Serf,
Maestro? Rather stay on AWS with Elastic Beanstalk or the new docker service?

Welcome to the party, Swarm and Compose. By now we are not even sure anymore
if Docker itself is still the way to go, now that Rocket and LXD have arrived.
I don't even have the time to compare all these options, let alone take a
deeper look at the architectural considerations.

What to decide by? Company backing? Whether it's good or bad? GitHub stars?
Deis for announcing its 1.0, even though it's based on pre-1.0 components, or
Flynn for being honest that they're still in beta?

Honestly, I've rarely been as tired of new technologies as I am right now. I
could just as well roll a die. If you have a good and reasonable choice for
me, let me know (I'm actually serious)

~~~
borjaburgos
I'd invite you to check out [http://tutum.co](http://tutum.co), but then
again, I'm one of the cofounders, so I'm 100% biased. Take my invitation with
a grain of salt. Feedback welcome. Cheers!

~~~
thomasfromcdnjs
Was reading this earlier and saw your comment. A few hours later, while
Googling, you came up number one in the results =D Giving it a shot now.

------
scanr
The 2 examples in Docker Swarm were Redis and MySQL.

From the announcement: "Docker Swarm provides high-availability and failover.
Docker Swarm continuously health-checks the Docker daemon’s hosts and, should
one suffer an outage, automatically rebalances by moving and re-starting the
Docker containers from the failed host to a new one."

Does anyone know how they'll handle the data? Both Redis and MySQL have
various ways to deal with high availability e.g. Redis Sentinel, MySQL master
/ slave or MySQL multi master with Galera.

~~~
HorizonXP
I'd like to see a reasonable answer to this, because up until now I've been
using data-only containers to mount directories into these app containers
(i.e. Redis and PostgreSQL). The failover handling has been horrendous for
me, because there's no easy way to migrate the data across machines unless
you set up multiple hot slaves or something. And this happens often with
CoreOS updates on the alpha channel.

Ultimately, I gave up and created a separate Ubuntu VM to run as an NFS
server. Every CoreOS instance mounts it, and then my data-only containers now
map back to the NFS mount. This way, when CoreOS moves the Redis or PostgreSQL
containers, it has the data available to it.

It's not my favourite setup, but it's worked well enough this past week that
I haven't had to manually correct things while on vacation.

I'm hopeful that someone smarter/more experienced can share a better solution.

~~~
rcoder
Mounting your database storage volume via NFS seems like a surefire way to
cause yourself pain down the road. You might want to review the following (old
but still relevant) article to understand some of the pitfalls:

[http://www.time-travellers.org/shane/papers/NFS_considered_harmful.html](http://www.time-travellers.org/shane/papers/NFS_considered_harmful.html)

The tl;dr basically boils down to the fact that PostgreSQL and MySQL (or
really any good database engine running on *NIX systems) make very strong
assumptions about the POSIX-ness of their underlying filesystem: flock() is
fast, sync() calls actually mean data has hit disk, etc.

Docker/CoreOS/etc. aren't a replacement for a good SAN or other reliable
storage. If you value your data I'd suggest keeping your core database(s) on
dedicated machines/VMs (ideally SSD-equipped and UPS-backed). If managing
those is too much work, consider a managed cloud database; DynamoDB and RDS
can stand in for Redis and Postgres, respectively.

~~~
rckclmbr
So much this. RDS is a great solution to a problem that hasn't otherwise been
solved yet. Basically zero maintenance.

~~~
HorizonXP
My immediate problem is that my software is running on a dedicated server
hosted on-site; I have Internet access, but everything is hosted and run on a
single massive VMWare ESXi server. I don't have the benefit of cloud-based
services like RDS. I could modify my architecture to utilize that instead, and
that's something I've thought about doing.

As it stands, the VM server is UPS-backed, but does not run on SSDs. There is
no SAN. If I were to fix the existing implementation, I would: a) add a
secondary VM server as a redundant backup, and b) add a SAN.

However, I don't think I can justify the capital expenditures for that. So
what I'll likely do is replace the NFS server with a dedicated PostgreSQL
server (VM), and perhaps start thinking about moving the majority of the
infrastructure out of the building and into the cloud to take advantage of
things like RDS. The latter is even more important for scalability as we add
more customers.

------
geerlingguy
I think this sheds a little more light on the reasons CoreOS decided to start
building the Rocket container runtime[1], and not tie its destiny to being
paired with Docker.

[1] [https://coreos.com/blog/rocket/](https://coreos.com/blog/rocket/)

------
bfirsh
GitHub repo for Machine:
[https://github.com/docker/machine](https://github.com/docker/machine)

GitHub repo for Swarm:
[https://github.com/docker/swarm](https://github.com/docker/swarm)

Compose is still being designed in the open. If you want to have a say about
how it works, check out the proposal:
[https://github.com/docker/docker/issues/9459](https://github.com/docker/docker/issues/9459)

------
slifin
Talking about Docker as we are

Does anyone agree it's still too hard for dumb-dumb developers like me? I'm
on Windows (boo, hiss), so in the past I've tried to use boot2docker, but you
can't just point your webserver container at a place on your local file
system and say "serve that, please".

You have to bring in some crazy storage file container which will serve it
all via Samba or something, and then you need to figure out linking those
containers together, and then how the hell do you tell a web server "hey you,
document root is over here on another container"?

At this point I'm usually like "fuck it, we'll use some bad-idea .exe web
stack" and develop as normal. I like the idea of containers: quicker and
smaller than VMs, nice file system history going on. But in practice it isn't
easy enough, in my opinion.

~~~
cdoxsey
Completely agree. You can do something like this:

1\. Statically compile your application so that it can run standalone. For
Ruby, Python, or Java that means including the interpreter and any dynamic
libraries. Hard at first, but once you have a build setup it's pretty
straightforward

2\. Bundle your application with its assets in a zip file or tarball

3\. Make it so that your application has a well defined set of resources it
uses that are isolated from other applications. For example with a database
you might have a `your-app\data` folder where it stores the actual database.
Also make sure you are careful with ports

4\. Pass around configuration via environment variables that you hand to your
application (e.g. DATABASE_HOST=127.0.0.1)
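Step 4 can be sketched in shell. `DATABASE_HOST` is just the illustrative variable name from the list above, and the fallback default is an assumption for local development:

```shell
#!/bin/sh
# Sketch of step 4: hand configuration to the application via environment
# variables, with a fallback default for local development.
# DATABASE_HOST is an illustrative name, not a required convention.
database_host() {
  # Use DATABASE_HOST if set and non-empty, otherwise default to 127.0.0.1.
  echo "${DATABASE_HOST:-127.0.0.1}"
}

DATABASE_HOST=db.internal
echo "$(database_host)"   # prints db.internal
unset DATABASE_HOST
echo "$(database_host)"   # prints 127.0.0.1
```

The same pattern works for ports, data directories (step 3), and anything else that differs between hosts.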

With a setup like that you can build a sensible stack that runs anywhere. You
don't need containers:

1\. You don't need the protection: just don't write processes which clobber
other processes. It's not that hard.

2\. The reuse mechanism seems cool, but it comes with baggage. Which version
of ubuntu are you starting with? Does it include the latest updates to shared
libraries? If it does how do you know that your application will still run
down the road? And if it doesn't, how are you keeping on top of security
updates?

3\. Containers, as envisioned by docker, are way overkill. Why do you need an
entire ubuntu to run a simple web app?

~~~
Gigablah
In regard to (3), you don't... people have made functional containers with
the scratch or busybox images that are less than 10MB in size.
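For instance, with a statically linked binary a minimal image needs nothing else at all. This is a hypothetical sketch (the binary name `myapp` is made up), not a recipe from the thread:

```dockerfile
# "scratch" is a completely empty base image, so the final image is
# roughly the size of the binary itself; often well under 10MB.
# The binary must be statically linked, since there is no libc inside.
FROM scratch
ADD myapp /myapp
CMD ["/myapp"]
```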

------
fndrplayer13
Maybe I'm just thick in the head, but one of the things that continues to
disappoint me about Docker is the size of the binaries. Wouldn't it be good
if we could build the container a single time, and then ship that top-level
changeset around? For example, if I build a 200MB binary on top of
`ubuntu:latest`, I would like to be able to just ship that 200MB around,
instead of 200MB + ubuntu:latest (another ~167MB?). If you colocate many
services on a single machine (say 10-12), the network cost of grabbing those
tarballs makes Docker less appealing.

edit: Also, it's inefficient to build this Dockerfile every single time on
every single host, which is why I'm talking about shipping tars. You could
have 30 hosts with these 12 containers running on each one.

Any plans on dealing with something like this in the future?
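One existing workaround for shipping tars is `docker save`/`docker load`, which move images as plain tarballs without a registry. Note the caveat: it exports all layers, base included, so it doesn't solve the "ship only the top changeset" complaint. Image and host names below are illustrative, and a running Docker daemon is needed on both ends:

```shell
# Export an image (all layers) to a tarball, copy it to another host,
# and load it there. "myapp" and "host1" are made-up names.
docker save myapp:latest > myapp.tar
scp myapp.tar host1:/tmp/myapp.tar
ssh host1 'docker load < /tmp/myapp.tar'
```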

~~~
xnxn
The idea is that you build the image and push it to a registry. The service
hosts then only pull the layers they don't already have.

In practice, running a private registry is a pain (last I checked the official
Docker image for it crashed on startup). I like what Rocket is doing here with
filesets and plain old URLs.

~~~
nickstinemates
Specifically about filesets: take a look at the docker import command.

It will take an arbitrary rootfs (tar file) and turn it into a docker image.
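For example (the image name and rootfs path are illustrative, and a running Docker daemon is required):

```shell
# Stream an arbitrary root filesystem tarball into Docker as a
# single-layer image named "myimage". Both names are made up here.
tar -C /path/to/rootfs -c . | docker import - myimage

# docker import can also fetch the tarball from a URL directly.
docker import http://example.com/rootfs.tar myimage
```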

~~~
bmurphy1976
I've been down this path, far down this path.

Docker can import arbitrary layers from a tar file, either via the command
line or the API. The problem is that there is no official way of getting a
set of arbitrary layers back out.

That might not seem like a big deal, but when your image has something heavy
like mono or java and pushes upwards of a gig or more running on a relatively
puny cloud instance with poor I/O that adds up.

If you want to have a much more efficient workflow, you have to roll this
yourself like we did by going direct to the file system (at least for export).
This is a messy pain in the ass and I would not expect most people to do it.

I would be very happy if Docker stopped trying to shove the registry down our
throats and gave us a model where we could substitute our own push/pull code
that better utilized our existing infrastructure.

This is a case where Docker feels more monolithic than it needs to be and I
would be happier if it was broken up into a set of smaller more independent
tools (e.g. docker, docker-push, and docker-pull).

~~~
nickstinemates
Would it surprise you to know we completely agree, in theory, that push and
pull should not rely on (or use at all) the registry if a user so chooses?
But what we do care about is that when you docker pull or docker push, you
have a set of assumptions that are always true.

Leading questions, I know, but I couldn't agree with you more and I know I'm
not alone.

If you don't know the truth: open source works a lot like other software
projects. You gather feedback in as many different forums as possible, make a
guess at how to solve it, work with your developer communities and vested
parties to come up with a solution, iterate a bunch of times, and hopefully
get something into users' hands that doesn't completely suck.

So the question is: who's going to work on what, and in what order? The
feedback is important, and with enough of it this would move higher up the
list. But someone has to make it happen, or make a proposal on how it would
work. That would be truly welcome. Let me know how I personally, and we
collectively, can help.

~~~
bmurphy1976
Trust me, I know how it works. I had a pull request to do at least one part of
this but it was never accepted. I'm not bitter about it, I don't care, it
wasn't right for Docker at the time. We had to get something done to meet our
commitments so we hacked it and moved on. Agreement in "theory" doesn't help
when you have real commitments and limited resources.

> So the question is - Who's going to work on what and in what order? The
> feedback is important and with enough it would get higher on the list. But
> someone has to make it happen. Or make a proposal on how it would work. That
> would be truly welcome. Let me know how I personally, and we collectively,
> can help.

The community clearly desires a more stable and pluggable core, doesn't like
the registry workflow, and desires a daemonless mode. If the discussion over
the last few days hasn't woken the Docker team up to that fact, then I don't
know what to say.

The new features are big and flashy, so they get the big announcements;
however, you should not be surprised that when a new announcement is dropped,
the community collectively responds "what about x, y, and z?"

I think a Roadmap to 2.0 is what is missing and would give the community the
confidence that is currently lacking.

------
gtaylor
I'm excited to see Docker continue to progress so quickly, but I'll admit to
being more and more confused over how many components and services you have to
contend with now. I'm sure I could sort out all of these names if I spent more
time playing, but it's getting a little confusing to me.

There's a lot to be said for making something only do one thing and doing it
well, but it starts getting tough to keep track of when you've got a bunch of
somethings.

~~~
thaJeztah
> confused over how many components and services you have to contend with now

You don't have to. It really depends on what you're trying to do. Using Docker
alone, you'll be able to build and run containers, link them together to build
a "stack" etc.

If you want to make building a stack (a group of containers that together form
your application) easier, you can use an orchestration tool to automate this,
for example, Fig, Crane, or now Compose. Or, create a bash script to do this;
it's up to you.
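As an illustration of the "stack in a file" idea, here is a minimal definition in Fig's format (which the Compose proposal builds on); the service names, image, and ports are made-up examples:

```yaml
# fig.yml: two containers that together form one application "stack".
web:
  build: .         # build the app image from the Dockerfile in this directory
  links:
    - db           # make the "db" container reachable from "web"
  ports:
    - "8000:8000"  # expose the app on the host
db:
  image: postgres  # off-the-shelf database image
```

Running `fig up` would then start both containers and wire them together.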

If you want to build a cluster (run your containers distributed over several
servers), you _can_ do that with docker alone, but it will get hard to manage.
You can build a tool for that (making use of the Docker API), or use an
existing tool, like Flocker, Shipyard or now Swarm.

So if all you need is running a few containers on a single host, Docker alone
may be enough for you, in that case you can safely ignore the other stuff for
now.

~~~
gtaylor
> So if all you need is running a few containers on a single host, Docker
> alone may be enough for you, in that case you can safely ignore the other
> stuff for now.

I should have clarified on that earlier, but I'm interested in Docker for the
more clustered approach. I don't really want to re-invent any wheels if I
can't help it, so I'd be using existing tech.

Just in your reply, you mentioned:

* Fig
* Crane
* Compose
* Flocker
* Shipyard
* Swarm

Whew. That's kind of what I'm chafing on. I could learn all of this, but it's
a lot harder to casually understand how it all would potentially fit together.

~~~
IanCal
> Whew. That's kind of what I'm chafing on. I could learn all of this, but
> it's a lot harder to casually understand how it all would potentially fit
> together.

This is part of what the things mentioned in the announcement should help
with. There's a clear need for clustering and linking, and so lots of people
have built different tools to solve it. That's grand, but it's incredibly
confusing coming into it. Particularly since tools vary from production ready
to beta to "not actually implemented yet".

The aim here is to still allow people to come up with all kinds of different
approaches but to have a set of components that work and are supported by
docker. So that you can just get it and it works, and _then_ start swapping
bits in and out for the new hotness or some ultra-performant component/etc.

~~~
gtaylor
To clarify, I'm not asking for a monolithic stack. I like the modularity. It's
just getting confusing to those that don't look at this stuff every day. It's
more of an ergonomic issue than an approach issue.

There could be some simple solutions to this, mostly centering around
education and the promotion of the most popular components. I don't have
answers, here, but figured I'd share that it's getting a little confusing in
case someone had ideas.

~~~
IanCal
I think what I said came across wrong, sorry.

I fully agree, I find the whole ecosystem really confusing. Every time I look
at it there's some new combination of things, but then one turns out to be
alpha/beta ...

What I hope is that the production of these new docker 'blessed' tools will
allow me to go to their homepage and see a series of "getting started"
tutorials and easily installable tools so I can at least get going.

------
23david
These aren't new projects... just rebranded versions of half-baked feature
proposals that I thought were still being reviewed/discussed. I guess
somewhere a decision was made to move forward regardless of community
concerns?

Baking these features into Docker is the beginning of the end of Docker's
Enterprise story. Moving forward with these proposals guarantees the rise of
Rocket and other Enterprise focused containers. Docker is forking its own
community here.

~~~
bfirsh
Both Machine and Swarm are not baked into Docker – they are separate projects
and binaries:

[https://github.com/docker/machine](https://github.com/docker/machine)

[https://github.com/docker/swarm](https://github.com/docker/swarm)

Compose is still in the design proposal stage. We want to hear whether you
think it should be built into Docker or not:
[https://github.com/docker/docker/issues/9459](https://github.com/docker/docker/issues/9459)

~~~
23david
Docker Machine looks to be a revision/rebrand of your docker hosts proposal
made in the last 1-2 days or so. Very confusing.

The proposal discussion:
[https://github.com/docker/docker/issues/8681](https://github.com/docker/docker/issues/8681)

Rename "docker hosts" to "docker machines":
[https://github.com/bfirsh/docker/commit/e6abec4033f48d1cad31...](https://github.com/bfirsh/docker/commit/e6abec4033f48d1cad31380f3c94da137b64ae74)

2 days ago, from you:

    
    
      I have now rebased the host management branch on top of #8265 and squashed it:
      https://github.com/bfirsh/docker/compare/host-management
      Any pull requests should now be based on top of that. The driver interface hasn't changed, so it should be a trivial matter to rebase any existing pull requests. The main thing which has changed is that drivers are expected to set up identity auth for communication with the host. See this commit for an example of how to do so.
      The old branch is here for reference.
      Full update and preview builds coming soon.
    

1 day ago, a message from tianon, core Docker maintainer:

    
    
      Has there been any progress on splitting the actual driver implementations out of the core binary?
    

And now this. Color me confused.

~~~
mynameisvlad
The proposal was made in October, no? The only thing that happened 2 days ago
was renaming it to "docker machines".

So unless I misread the thread, your timeline is a bit off.

------
saryant
I'm excited to watch this battle between CoreOS and Docker heat up. I recently
took a CoreOS/Docker-based system into production on AWS and there are
definitely still some missing pieces. Swarm appears to be a slightly
higher-level version of fleetd. Compose is something CoreOS is missing,
though.

~~~
lclarkmichalek
Compose seems to come straight out of Kubernetes' pod design, which the
CoreOS people have quite a stake in.

------
23david
They sound like a closed-source vendor at this point. I'm surprised to see an
open-source project mention "ecosystem partners":

    
    
      Each one is implemented with a “batteries included, but
      removable” approach which, thanks to our orchestration 
      APIs, means they may be swapped-out for alternative
      implementations from ecosystem partners designed for 
      particular use cases.
    

So if I have a startup working on an orchestration solution, what is the
process to become an approved 'ecosystem partner'? Do I need to sign an NDA
and pay for an approval process to get my stuff merged in?

~~~
potto
Why would you have to sign an NDA to create an open source ecosystem around
an open source project with an Apache 2.0 license?

~~~
23david
Not sure. I've never heard of an official partnership program either, so I'm
interested to know the details of how it works. It could be innocent, but
something about it smells like an OSS shakedown to me.

Just 'cause it's open-source doesn't mean there aren't any politics involved
in what gets merged or not.

An example:

lmctfy support was contributed by the Google GCE team a long, long time ago;
I attended a meetup where the GCE team submitted the pull request right
there... it was never merged. It languished for months without any public
review comments from Docker maintainers.

I'd never seen anything like this before. Here we have Google engineers
integrating their work with Docker on their own time and being completely
ignored. Embarrassing is a nice way to put it.

There may have been outside discussions and real issues that made merging a
_bad_ idea, but as an OSS project I expect those discussions to happen in the
pull request, not in some business meeting. I'm sure any technical issues
would have been addressed if there had been any. I'm also sure that the GCE
team would have been more than happy to maintain their driver. Politics and
open-source are a happy mix.

sources:

    https://github.com/docker/docker/pull/4891
    https://github.com/docker/docker/issues/4874

~~~
nickstinemates
> There may have been outside discussions and real issues that made merging a
> bad idea

Yeah, and Eric Brewer got on stage at DockerCon in June and stated that
pretty publicly. So your instinct was right.

~~~
23david
Part of the problem here could definitely be communication. For whatever
reason, it's been incredibly hard to follow what is going on with the Docker
project. Keeping info in one place would definitely help.

I'm not able to find any details that support what you're saying. It seems
that in June Eric Brewer was still publicly asking for libcontainer to merge
in LMCTFY support.

I definitely could be wrong, but looking at this screenshot from Eric's June
talk it looks like they were still trying to get it merged:

[https://pbs.twimg.com/media/B4DKb51CYAA687s.jpg:large](https://pbs.twimg.com/media/B4DKb51CYAA687s.jpg:large)

In June, Eric Brewer posted this:

    We’ve released an open-source tool called cAdvisor that
    enables fine-grain statistics on resource usage for containers.
    It tracks both instantaneous and historical stats for a wide
    variety of resources, handles nested containers, and supports
    both LMCTFY and Docker’s libcontainer. It’s written in Go with
    the hope that we can move some of these tools into libcontainer
    directly if people find them useful (as we have).

[http://googlecloudplatform.blogspot.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html](http://googlecloudplatform.blogspot.com/2014/06/an-update-on-container-support-on-google-cloud-platform.html)

------
skrebbel
Can anyone who understands this better than I do explain whether this replaces
Fig, and if so, which parts of it?

~~~
shingler
Yes, and all of it. It's largely the same thing, in Docker core.

That's just Compose, though. The other two are completely different beasts.
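
For anyone who hasn't used Fig: its config format, which the Compose proposal
builds on, is a small YAML file describing each container and how they link
together. A minimal two-service sketch (the classic web + redis example from
Fig's docs) looks roughly like this:

```yaml
# fig.yml - one entry per container
web:
  build: .            # build the image from the Dockerfile in this directory
  ports:
    - "8000:8000"     # host:container port mapping
  links:
    - redis           # makes the redis container reachable as "redis"
redis:
  image: redis        # pull a prebuilt image instead of building one
```

With Fig today you'd bring the whole stack up with `fig up`; the proposal is
essentially about where that workflow should live relative to the docker
binary.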

~~~
bfirsh
It's also worth pointing out that Compose is a proposal for a new feature to
Docker. If you don't think the design is right, make your voice heard. :)

[https://github.com/docker/docker/issues/9459](https://github.com/docker/docker/issues/9459)

------
dingdingdang
With this amount of buzzwords needed to install and run an app, I see a bright
future for Go and its "all compiled in one, web server included, ready to go"
executable structure. I mean sure: you get automation and repeatability of
installs, but at what cost? You have to maintain all the buzzword hoops that
your app needs to be wrapped in - which amounts to a full new job in a
medium-sized software company. And you still need the sysadmin to actually
make the servers work.

------
lclarkmichalek
Pretty amazing. Is there anything the docker binary doesn't do now? The
comparisons with systemd (I like systemd) are becoming more apt.

~~~
cpuguy83
swarm and machine are separate binaries. Compose is still a proposal.

EDIT -- link to compose proposal
[https://github.com/docker/docker/issues/9459](https://github.com/docker/docker/issues/9459)

------
preillyme
Mesos 0.20.0 adds support for launching tasks that contain Docker images,
with a subset of Docker options supported; we plan on adding more in the
future.

Users can either launch a Docker image as a Task or as an Executor. The
Docker Containerizer translates Task/Executor Launch and Destroy calls into
Docker CLI commands.
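
To make the "translates Launch calls into Docker CLI commands" part concrete,
here is a rough illustrative sketch (not the actual Mesos source; the function
and field names are made up for illustration) of how a task's resource
requests and image might map onto a `docker run` invocation:

```python
def docker_run_command(task_id, image, cpus, mem_mb, command):
    """Build the argv for a `docker run` call from task parameters.

    Hypothetical sketch: fractional CPUs become relative CPU shares,
    and the container is named after the task so a later Destroy call
    can find and stop it.
    """
    return [
        "docker", "run",
        "--cpu-shares", str(int(cpus * 1024)),  # 1.0 CPU ~ 1024 shares
        "--memory", "%dm" % mem_mb,             # memory limit in MB
        "--name", "mesos-%s" % task_id,         # stable handle for Destroy
        image,
    ] + command.split()
```

For example, `docker_run_command("task-1", "redis:2.8", 0.5, 256,
"redis-server")` yields a `docker run` argv with `--cpu-shares 512` and
`--memory 256m`; Destroy then reduces to `docker stop`/`docker rm` on the
named container.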

------
RemoteWorker
I haven't been reading Docker related news lately. Is there anything I should
know if I already have my own working continuous deployment system made with
Ansible, Jenkins and Docker? For example, it seems like I don't need Docker
Machine if I already have my own Ansible recipes for provisioning.

~~~
nickstinemates
If it's already working and you're happy, awesome. Machine is just a
convenient way to provision new Docker hosts, in a separate project/binary.

If that need is already filled by Ansible for you today, fantastic!

------
deeviant
Begun, these docker wars have.

------
dmritard96
This is all cool, but I really am excited for better ARM support one day...

~~~
alexandros
The Raspberry Pi installation you get at Resin.io uses Docker on ARM and it
works great! We also cross-compile containers in the cloud. Standalone Docker
for ARM is now available in several Linux distro repos; Arch Linux is one I
am aware of.

What other parts would you like to see?

~~~
dmritard96
have used the resin.io docker images on arch on my pi. What I would love is a
raspbian version, as it's just a super popular distro on raspi. I know it's a
bit of gridlock right now. It seems that it's going into the jessie version
of Debian, but the raspbian release of jessie wasn't out the last time I
checked (a couple weeks ago). I did try out lessbian - awesome name - on
raspi, and I'm not recalling specifically why it didn't work out for me. I
had some issue with the filesystem dependencies not being there, and getting
them was a pretty large undertaking (so much so that it would have been just
as easy to do it on raspbian). Would love to contrib myself, but right now
I'm swamped and don't really have much expertise in either docker or arm. At
least my experience should bring some clarity/insight into what I
encountered.

~~~
alexandros
Hey there. When we did Docker-on-ARM, we asked people to push their rpi-
compatible containers to the Docker index with the rpi- prefix. Since then
there have been many containers published, including several flavours of rpi-
raspbian. There's nothing to stop you from packaging up your favourite distro
and posting it on the Docker index.

[https://registry.hub.docker.com/search?q=rpi-](https://registry.hub.docker.com/search?q=rpi-)

~~~
dmritard96
totally, I think the painful part is prepping the underlying os with the
filesystem dependencies as it requires modding the kernel? I could be way off
though

