

Docker extensions: the new Docker plugins model - mrmrcoleman
https://clusterhq.com/blog/docker-extensions/

======
fundies
Be mindful of incentives and you can see into the future.

Docker wants to grow (a direction that's out of their hands, as they have a
board to answer to). They must do this while playing a delicate balancing
act: tightly controlling the project's direction toward where they want it to
go, while at the same time maintaining the goodwill of its community. Part of
this is defensive enterprisey blog posts, part of it is singing their praises
at their conferences, and part of it is reining in fractures in their platform
back under their own umbrella with a plugin model. I'm not even judging Docker
for doing these things; as a funded company they almost _have_ to do these
things.

CoreOS wants to ride the same wave, but they know that Docker is encroaching
on their turf and will increasingly do so (see above). That's why they're
deliberately breaking away with their own format. CoreOS hopes that they can
lock away enough of an audience behind their platform before Docker can do the
same. They basically _had_ to do this to survive, because CoreOS, being
composed of modular components, can't make money when everyone takes those
modular components and simply runs them under the umbrella of Docker's growing
platform.

And this whole API wrapping thing? It's a power play. Whoever controls the UI
holds a captive audience -- literally -- whether that's Docker, Core, or
someone else. This is basically war over your servers, though I'm sure nobody
involved will commit the PR suicide of admitting it. You are encouraged to
engage in healthy skepticism and wonder if this comment itself is written by
one of the players.

I have no idea who (if anyone) is in the right here, but I encourage everyone
reliant on these platforms to see what's going on beneath the waves. Be
mindful of the incentives these companies have, and make your technology
choices accordingly.

~~~
kelseyhightower
Disclosure: CoreOS employee.

The application container specification (ACI) and the initial implementation,
Rocket, are here to provide balance to the container ecosystem. If you review
the draft spec ([http://bit.ly/1s4Om8p](http://bit.ly/1s4Om8p)), you can see
that the proposed specification is designed to remove the need for a "vendor"
when it comes to building, distributing, and running containers. As an
example, everything in the spec can be accomplished using a handful of Linux
command-line utilities such as gpg, tar, and systemd-nspawn -- you could use
lxc or libcontainer as well.

When it comes to distributing container images, the ACI spec only requires
hosting ACI images (tarballs) behind an HTTP web server, and it leverages DNS
instead of custom registry APIs. The ACI spec skips the app-store model in
favor of simplicity -- think RPM and YUM.
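To make the RPM/YUM analogy concrete, here's a minimal sketch of what that
name-to-URL mapping might look like. The URL template is an assumption for
illustration only, not the spec's normative discovery rules:

```python
# Illustrative only: mapping a DNS-based image name straight to a
# guessable tarball URL on the publisher's own web server, RPM/YUM
# style. This URL template is an assumption, not the spec's normative
# discovery rules.

def aci_url(name, version, osname="linux", arch="amd64"):
    """Map a name like 'example.com/reduce-worker' to a plain HTTPS URL,
    with no registry API in between -- DNS picks the host."""
    domain, _, path = name.partition("/")
    return "https://{0}/{1}-{2}-{3}-{4}.aci".format(
        domain, path, version, osname, arch)

print(aci_url("example.com/reduce-worker", "1.0.0"))
# https://example.com/reduce-worker-1.0.0-linux-amd64.aci
```

The point is that any plain web server can serve images; there is no
app-store middleman in the path.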

docker vs rocket?

Don't really care about that right now. There's nothing wrong with Docker
providing value, and yes, Docker provides tons of value. Rocket is just our
(CoreOS) attempt at providing value in a slightly different way. IMO the only
people who should "own" the container spec are the people using it, and right
now I believe that is the main driver behind our (CoreOS) proposing the ACI
specification.

I know I'm biased by association, but trust me, I want using containers to be
a simple thing. In reality using containers should be about as exciting as:

$ yum install app; service app start

~~~
jzxcv
(disclosure: another CoreOS employee)

> When it comes to distributing container images the ACI spec only requires
> hosting ACI images (tarballs) behind a HTTP webserver

Just to clarify a little: the spec [1] actually only stipulates that the
_metadata_ for image discovery is provided over HTTPS. The actual ACI images
(i.e. tarballs) can be distributed using any desired mechanism. For example,
the meta discovery service can provide HDFS and BitTorrent addresses, and then
a client can choose how to retrieve the payload. This is one of the parts of
the spec I'm personally most excited about, so I would welcome feedback and/or
implementations!

[1] [https://github.com/coreos/rocket/blob/master/app-container/SPEC.md#meta-discovery](https://github.com/coreos/rocket/blob/master/app-container/SPEC.md#meta-discovery)
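A hypothetical sketch of the client side of that: the discovery document
lists several addresses for the same payload, and the client picks a
transport it supports. The scheme strings and preference order below are
assumptions for illustration, not the spec's actual format:

```python
# Hypothetical client-side sketch: the meta discovery endpoint (served
# over HTTPS) hands back several addresses for the same ACI payload,
# and the client chooses whichever transport it supports. The scheme
# strings and preference order are assumptions for illustration.

PREFERRED = ("https", "bittorrent", "hdfs")

def pick_mirror(candidates, preferred=PREFERRED):
    """Return the first candidate whose URL scheme appears in the
    client's preference order; raise if none is usable."""
    for scheme in preferred:
        for url in candidates:
            if url.startswith(scheme + "://"):
                return url
    raise ValueError("no usable transport among: {0!r}".format(candidates))

mirrors = [
    "hdfs://cluster.internal/acis/app-1.0.0.aci",
    "bittorrent://tracker.example.com/app-1.0.0",
    "https://example.com/acis/app-1.0.0.aci",
]
print(pick_mirror(mirrors))  # https://example.com/acis/app-1.0.0.aci
```

Only the metadata fetch is pinned to HTTPS; the payload itself can come
over whatever transport both sides happen to support.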

------
lewq
Luke here, the author of this post. I just wanted to say that Solomon did a
great job pulling this meeting together, and that this meeting is an example
of a pattern that Docker is using to get community feedback in gradually
increasing circles (finally resulting in a proposal on GitHub). IMO, it's a
good balance between completely open design and a sane forum for discussion!

~~~
lewq
And huge thanks to Alexis for proposing the meeting in the first place and
inviting me along. ;)

------
TheDong
The model sounds like it _might_ be inspired by git. If it truly is, that
would be wonderful. (Example git extension:
[http://blogs.atlassian.com/2013/04/extending-git/](http://blogs.atlassian.com/2013/04/extending-git/) )

The idea of supplying a convenient set of low-level primitives and then
letting you compose them, in any language, and become a first-class citizen
(at least in CLI exposure) is wonderful.

I could see a world where docker plugins simply work by "docker XYZ" doing a
"docker run XYZ" and bind-mounting the docker.sock in as well as providing a
few additional convenience sockets / functions / potentially container links.

Making docker plugins also docker containers sounds really appealing to me and
if anyone has more information on the actual details being considered I'd love
to hear.

~~~
lewq
The current thinking from the meeting (correct me if I'm wrong) is that Docker
extensions are late-bound named Docker containers; each extension defines a
set of "drivers", which have privileged communication with the Docker daemon,
and which speak libchan (which is msgpack over HTTP/2) to interface with
well-defined "internal" (but public) APIs in the Docker daemon.

Each driver can bind to sets of internal APIs, which allows it to insert
blocking hooks -- e.g. before "network create" in the networking API, or
before "volume create" for volumes.

In terms of extending the CLI, there was some mention of standardizing on JSON
Schema for specifying the APIs, such that CLI user interfaces could be auto-
generated.
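As a rough illustration of that idea (the schema below is invented for the
example, not anything agreed at the meeting), flag parsing can be generated
mechanically from a schema's property list:

```python
# Rough sketch of the JSON Schema idea: if each driver API is described
# by a schema, CLI flags can be generated mechanically instead of being
# hand-written. The schema here is invented for illustration.

import argparse

NETWORK_CREATE_SCHEMA = {
    "title": "network-create",
    "type": "object",
    "properties": {
        "name":   {"type": "string",  "description": "network name"},
        "subnet": {"type": "string",  "description": "CIDR subnet"},
        "ipv6":   {"type": "boolean", "description": "enable IPv6"},
    },
}

def cli_from_schema(schema):
    """Build an argparse parser whose options mirror the schema's
    properties -- strings become --flag VALUE, booleans become --flag."""
    parser = argparse.ArgumentParser(prog=schema["title"])
    for prop, spec in schema["properties"].items():
        flag = "--" + prop
        if spec["type"] == "boolean":
            parser.add_argument(flag, action="store_true",
                                help=spec.get("description"))
        else:
            parser.add_argument(flag, help=spec.get("description"))
    return parser

args = cli_from_schema(NETWORK_CREATE_SCHEMA).parse_args(
    ["--name", "front-tier", "--subnet", "10.0.0.0/24", "--ipv6"])
print(args.name, args.subnet, args.ipv6)  # front-tier 10.0.0.0/24 True
```

The same schema could drive help text, validation, and even a web UI, which
is presumably the attraction of standardizing on it.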

A lot of detail here remains to be thrashed out, but yes, the git model is
definitely one to look at. We also talked about modelling this on other
successful plugin models, such as the one that ships with Eclipse.

------
lclarkmichalek
I wonder why CoreOS wasn't in that meeting. They provide a ton of stuff that
could be seen as a Docker plugin (flannel being the most obvious, IMO).

~~~
shykes
CoreOS got a dedicated meeting the previous week. I pitched all this to them
in person and seemed to get a cautiously optimistic response, with agreement
to follow up soon. The next day, they announced Rocket without warning.

I'm happy to spend as much time as needed with everyone to build a great
ecosystem together. But if someone is not interested, they're not interested.
I can't force them.

~~~
lclarkmichalek
Ooh, so that's why you're so pissed. Ah well.

~~~
shykes
Doesn't excuse my initial knee-jerk reaction... Anyway, let's move on and
focus on building cool stuff now :)

------
antocv
Ah ffs, again another tool, another system reinventing/adding support for the
whole application stack from networking to "business logic".

Docker is starting to smell like Java Enterprise Edition, but non-Java, and
coming from Linux.

It was supposed to contain an application; now there are special networking
needs to support an "application" that is made to run "in" Docker, and special
software/orchestration to facilitate what is a middleware business-domain
problem.

Are we going to see books soon with titles such as "Programming Your E2E
Business App on the Docker Platform in 24 Hours" or "Your Social Media
Networking Platform Up and Running In and With Docker"?

~~~
fundies
The cynic in me thinks that this is a highly desired end state for Docker, and
that engineering such a situation is precisely the plan.

It would certainly be profitable for them.

