
Docker, CoreOS, Google, Microsoft, Amazon to Develop Common Container Standard - yurisagalov
http://techcrunch.com/2015/06/22/docker-coreos-google-microsoft-amazon-and-others-agree-to-develop-common-container-standard/
======
joslin01
I am a big believer in containerization technology from a practical
standpoint. It has allowed me to create repositories that act as services.
Database, search, API, admin, etc.: each is its own service. I do not
have to configure any servers this way; instead, I declare what the system
ought to be and Docker makes it happen. I don't even have to configure init
scripts because a proper Dockerfile will contain a start mechanism usually in
the form of any other executable: `docker run your/api --port 80 --host
host.company.com`.
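
A Dockerfile set up this way (contents hypothetical; a Node service is assumed here) can be quite small:

```dockerfile
# Hypothetical api image. ENTRYPOINT plays the role of an init script:
# any arguments passed to `docker run` are appended to it as flags.
FROM node:0.12
COPY . /app
WORKDIR /app
RUN npm install --production
ENTRYPOINT ["node", "server.js"]
```

With this, `docker run your/api --port 80 --host host.company.com` executes `node server.js --port 80 --host host.company.com` inside the container, exactly like any other executable.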

The only thing that matters then between services is their bindings, which
gives you the ability to use any programming language for any service.
Deployment with ECS has been going well so far for me. My flow:

1.) Push code to GitHub

2.) GitHub tells Docker, which builds a private image

3.) Docker tells my build server

4.) Build server tells ECS to update the given service

5.) ECS pulls from DockerHub, stops service, starts service
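
Steps 3 and 4 can be sketched as a small handler on the build server. The repo names, service names, and the `aws` call in the comment are assumptions about this particular setup, not part of the original flow:

```shell
# Hypothetical build-server handler for a Docker Hub push webhook.
# It extracts the repository name from the JSON payload and maps it
# to the ECS service that should be redeployed.
service_for_payload() {
  repo=$(echo "$1" | sed -n 's/.*"repo_name" *: *"\([^"]*\)".*/\1/p')
  case "$repo" in
    your/api)   echo api-service ;;
    your/admin) echo admin-service ;;
    *) echo "no service mapped for repo: $repo" >&2; return 1 ;;
  esac
}

# Step 4 would then be roughly:
#   aws ecs update-service --cluster prod \
#       --service "$(service_for_payload "$BODY")" --force-new-deployment
```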

The only thing missing is that DockerHub doesn't tell my build server what tag
it just built! It builds tags like dev / staging for the given branches, but
doesn't relay that info over its webhook. There's a ticket open about this
already and I'm sure they'll get to it soon.

Nevertheless, I'm able to administer any system -- things like Elasticsearch,
db, api -- from code on a branch. This is powerful to me because I have to
administer environments for everything. Rather than do all this work with
Puppet, Chef, or even Ansible, I can just _declare_ what the systems ought to
be and cluster them within code branches.

With ECS coming into the picture, you're encouraged to forget you even have
physical boxes in the first place. If you think of the power at your
fingertips that results from this development workflow, I believe it's a
no-brainer for everyone to jump on board and get this as good as it can be.
It's going to be a huge boon to the software community and enable more
sharing of _services_.

~~~
jfoster
What happens if there is a security flaw somewhere in your software stack?
(kernel, web server, etc.)

Do you end up having to update multiple containers with the same patch?

Context: I've looked at Docker, but not used any containerization yet.

~~~
geggam
Security? How do you find out what libraries / versions are in a container?
... you are pulling the curtain, don't look back there :)

~~~
jfoster
Is that really the state of things? It is what I concluded after looking into
this a bit for Docker, but it seems incredible to me that so many companies
are jumping into this idea of containerization without any good & available
solutions for this problem.

One potential solution that came to mind was that if there was a standard way
of deploying an application into containers, and Google/Amazon/Microsoft
provided auto-updating containers, the maintenance of a secure container would
be in the hands of companies who (hopefully) have the resources necessary to
keep the entire stack up-to-date.

~~~
digi_owl
They seem to be betting the farm on containerization containing (heh)
whatever security issues come up.

This is in the sense, I guess, that if they have a security flaw in their PHP
that gives disk access, all the attacker will see is the content of the PHP
container, as the database will be on the next container over.

Then again, containerization seems to have come alongside devops, where the
mantra seems to be "update early, update often, to hell with stable branches".

~~~
jfoster
I've heard of that approach (breaches being limited to a container), but I
don't think it makes sense.

If a security flaw exists in one container due to the stack not being updated,
isn't there a pretty good chance that it also exists in the other containers?

Also, for any given container, there probably still is a way for an attacker
to do immense amounts of damage. With the database container you can steal
customer data. With the PHP container you can remotely instruct the database
to do whatever you want, or just point the code at your own database.

~~~
u02sgb
Depends what the security flaw is. If it's (as discussed above) a PHP error,
it's unlikely to be in your database container.

------
mpdehaan2
Hmm, interesting.

I'm unclear on what value this adds in the end.

_Yes_, container images would become portable between systems, but if you
hide the underlying system under enough abstraction layers, what makes me
choose between CoreOS, Docker, or the future thing? What's the value
difference?

Containers are useful if you have the build systems in source control, but if
you don't, you don't know how to rebuild them or what is in them - they become
dangerous in that case. They become scary "golden images".

Dockerfiles already made it very easy to regenerate things -- and I think,
interface-wise, that's one of the more compelling wins. If there were other
systems, it's still likely they would have different provisioners.

It seems the (excuse me for the buzzword) value add then quickly moves to
the people providing management software for Docker, rather than to Docker,
and Docker becomes more or less a subcommittee of a standards body.

I'm sure that's _NOT_ true, but it's confusing to me why they wouldn't want
to seek differentiation, and what this means for valuation purposes.

~~~
istvan__
The value added here is a non-fragmented container ecosystem. This would be
huge, at least to me.

~~~
mpdehaan2
I'm asking more about business/product value, given Docker's investment
levels (congrats on this) and that other companies are seemingly doing
container management better and investing more labor there. This seems to
leave Docker with (A) DockerHub or (B) a foundation as the for-profit
options. Both might be totally valid, but it's unclear.

Sorry for crossing streams.

That standardization makes it easier for the orchestration companies and
clouds is obvious. I'm just legitimately curious what this means for Docker,
Inc. and the business model, since it seems to be seeding the lower end, and
they haven't invested in the upper end as much -- Docker itself not being a
tremendously large amount of plumbing, and all OSS, it's easy to replicate. So
what they have is basically support and the leadership of that community.

As of right now, this reads like I'll be able to use everything on
Mesos/CoreOS/ECS and just swap out a backend, so it's unclear why I would
want to pick things from Docker Inc. It's like I get pluggable tooling where
all the frontends can speak to backends and the image format is the same --
so it seems differentiation would have to happen at the _top_, in the
tooling, which is weird seeing that efforts have gone into the bottom end and
other companies have done a lot on the top.

Perhaps there's some messaging to address this. Perhaps there's enough
funding that this isn't a concern even for the next five years. I don't know,
but I'm curious. It's useful to know this to tell where container-land is
going, and it's an uncertain time in which to pick management orchestration
software for running Docker clouds. (We can probably guess ECS is going to be
around; the roadmaps of others are subject to speculation.)

Mostly because I find the evolution of tools in this space interesting.

~~~
shykes
If you're interested in Docker's opinion on this topic, I recommend that you
watch today's keynote. That's where we introduced runC and where we explain
why.

[http://www.ustream.tv/recorded/64499822](http://www.ustream.tv/recorded/64499822)

A few highlights:

00:28:00 keynote intro

00:42:48 Docker growth stats

00:52:54 I take our positioning from the top

01:01:22 I talk about what you call "the top of the tooling"

01:46:13 I talk about the plumbing (including runC announcement)

01:12:25 I talk about open standards (including open container project
announcement)

------
deathhand
This is great news and a shift in the way business is traditionally done. If
containers were a thing 20 years ago, there would be fierce vendor lock-in
and patent lawsuits flying everywhere. People would choose which cloud
platform to deploy on based upon which tools they prefer.

Docker has fundamentally changed the way they think about how they fit in
the tech ecosystem. Instead of selling a set of containers that only work
with their tools, they've opened up the platform, strengthening their
position as the go-to solution for management. Prudent move on their part. It
limits their potential market cap but solidifies them as an entrenched member
for the foreseeable future.

~~~
_hadrian
These are great times! Companies learnt that alone/closed/private doesn't
drive innovation... You can see in the latest news many standards being made
by companies working together... Working with Docker for my dev environment
has been fantastic (for me) -- really fast and easy to start/stop/modify
different setups.

~~~
deathhand
Companies don't drive innovation. Innovation is just the side product of the
pursuit of profit. They fill a 'need' in society and get rewarded. If this
just so happens to be innovative then so be it.

------
gtirloni
We're not even near the peak yet.

[https://en.wikipedia.org/wiki/Hype_cycle#/media/File:Hype-
Cy...](https://en.wikipedia.org/wiki/Hype_cycle#/media/File:Hype-Cycle-
General.png)

~~~
bduerst
Depends. Gartner doesn't put "Container Computing" on their hype curve, but I
doubt it's at the level of autonomous vehicles.

[http://na2.www.gartner.com/imagesrv/newsroom/images/HC_ET_20...](http://na2.www.gartner.com/imagesrv/newsroom/images/HC_ET_2014.jpg;pv4cc7877f7de80268)

------
vezzy-fnord
I was going to ask why IBM weren't in, but read on to see that it's a general
Linux Foundation collaboration, and so naturally they're part of it.

So I guess we're going to have libcontainer support for AIX Workload
Partitions and OS/400 LPARs? It's gonna be interesting to see just how big the
Docker libs become.

~~~
nickstinemates
Stay tuned for the rest of DockerCon :)

------
bobsky
The proof is in the pudding. Overall this is very positive for the ecosystem
as a whole, and I'm glad to see them all come together. But I thought a big
selling point of a standard is that it's written down; currently the spec
returns a 404 on GitHub [1], so there seem to be a lot of unknowns about
what's actually being proposed.

It's confusing why the App Container (appc) spec, which is written down [2]
and has maintainers from Red Hat, Twitter, Google, Apcera, and CoreOS [3], is
not being promoted -- what does the new OCP standard offer that isn't in the
appc spec?

[1]
[https://github.com/opencontainers/specs](https://github.com/opencontainers/specs)
[2] [https://github.com/appc/spec](https://github.com/appc/spec) [3]
[http://www.infoq.com/news/2015/05/appc-spec-gains-
support](http://www.infoq.com/news/2015/05/appc-spec-gains-support)

~~~
cwyers
I'm guessing because Docker didn't participate in App Container, so this is a
compromise to bring Docker on board.

~~~
shykes
To clarify, the Open Container Project was started by Docker with the help of
the Linux Foundation. Then other vendors were invited (including AppC
maintainers). We did this because there was a clear demand for transforming a
de-facto standard (the Docker format) into a proper standard (OCF), and for
opening the governance of our runC implementation.

Since AppC is a completely different format from what Docker uses, starting
from that would have defeated the purpose. However, it made a lot of sense to
invite the _people_ behind AppC to join, so that we could all build a better
spec and implementation together, instead of arguing over technical details
that don't matter.

------
jakejake
Serious question. We have a master DB and a slave, two memcache servers and 3
webservers behind a load balancer. We're not a public-facing company and so
have no reason to be building for "web scale" or whatever, we're well within
capacity.

Deploying new code (happens weekly) is as simple as clicking one deploy
button in our version control system (which does a "git pull" on the web
servers). DB changes (which are very rare, once or twice a year) we run
manually. The cache servers never change. All of the servers run automated
security updates on the OS. Otherwise we upgrade non-essential packages every
few months.

Is there a way that using Docker could make things better for us? I'm feeling
the "you should be using Docker" pressure coming at me from every angle. Our
deployment is certainly not very sexy, but it is simple and doesn't take a
major amount of effort. Is there a use case for a company like mine?

~~~
rattray
My guess is probably not, you're doing just fine.

If your server configurations change for some reason in the future (sounds
unlikely), it might be easier to start from scratch with Docker than to
manually adjust the servers; but that all depends on your situation.

------
ape4
Lots of cooks. I hope it's not a huge mess.

~~~
panarky
What is Goldman Sachs doing in the kitchen?

~~~
nickstinemates
They're a huge user and have written a lot of code.

------
TheMagicHorsey
What does this mean for vendors like VMWare that want VMs to be the unit of
deployment that developers interface with?

Seems to me that VMWare's VM management technology is still needed, but the
clock is now running on how long it will be before their part of the stack is
irrelevant, as all the smarts move into the container-management layer.

~~~
xahrepap
I'm at DockerCon and VMWare has a booth. They demoed some cool stuff, like
using Docker as the interface to some of their products. It wasn't pure
containers, but they had non-Linux "containers" running (DOS was their demo):
powered by VMWare, but the interface was Docker. It even supported commits,
pulls, and pushes. That will be a good "polyfill" for platforms that aren't
supported by Docker (yet). It seems to me that VMWare is looking for ways to
stay relevant long term.

~~~
travem
Here's the video of the VMware demo of running MSDOS using docker -
[http://youtu.be/NrsVbnn-znk](http://youtu.be/NrsVbnn-znk)

------
vacri
If Docker's heavily screwed-up tag system becomes locked into a standard, I
may as well slit my wrists now.

~~~
colordrops
I agree that the tagging system needs some work. I'm curious as to what you
think is wrong with it. Could you share your thoughts?

~~~
vacri
Everything gets shoehorned into the one tag: originating repo, image name,
version. So the next time you pull "x/y:latest", the current holder of that
tag loses _everything_. You see an image in the list with nothing listed --
no repo and no image name -- when it should just lose the 'latest' tag. If I
have multiple images on a machine, I now can't tell which are the old images
from a particular repo (well... I can guess by image size), and that's not
great if I want to roll back. There is no reason for an image to
automatically lose a tag describing where it came from.

It also means you have to tag twice, using the full tag, if you want build
numbers: this build is "x/y:latest" and "x/y:v1.2.3", when you could just do
"latest" and "v1.2.3". Similarly, when you pull an image, it should pull all
tags associated with that layer, so you pull 'latest' and it also brings the
tag 'v1.2.3'. This seemed to be the case with Docker v1.5, but it seems
inconsistent in 1.6. I haven't had time to nail down that suspicion, though.

There are other bits that could do with polish (like being able to do multiple
tags at once, rather than push afresh for each one), but the main problem is
the single tag field. Given the amount of metadata they already store for an
image, this seems a strange behaviour to suffer.
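
The double-tagging can at least be scripted. A sketch (image name hypothetical) that prints one `docker tag` plus one `docker push` per desired tag, since a push only carries a single tag:

```shell
# Print the commands needed to publish one image under several tags;
# the first tag is treated as the one the image already carries.
emit_tag_cmds() {
  image=$1; shift
  primary="$image:$1"
  for t in "$@"; do
    [ "$image:$t" = "$primary" ] || echo "docker tag $primary $image:$t"
    echo "docker push $image:$t"
  done
}

emit_tag_cmds x/y latest v1.2.3
# prints:
#   docker push x/y:latest
#   docker tag x/y:latest x/y:v1.2.3
#   docker push x/y:v1.2.3
```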

------
castell
What's next? Develop a common executable standard? Develop a common UI
standard?

------
olalonde
Weird to see Microsoft in that list. On a related note, will this new
container standard support non-Linux kernels? Would be nice to be able to run
containers directly on OS X without having to go through the boot2docker VM.

~~~
nickstinemates
Not so weird.

We have been working together for a very long time, and publicly announced in
October last week that Docker was coming to Windows Server.

~~~
contingencies
October last week, eh! That was about the time I deleted my last Windows VM,
too.

~~~
nickstinemates
How does a demo today work? :)

------
abritishguy
I hope from this we get a really well engineered standard and not some silly
mess.

------
rbanffy
Would anyone care to explain how Microsoft fits in this picture?

~~~
aduric
There are a large number of web applications running in IIS under the .NET
platform:
[http://news.netcraft.com/archives/2014/04/02/april-2014-web-...](http://news.netcraft.com/archives/2014/04/02/april-2014-web-
server-survey.html)

Also, Microsoft has made some announcements recently to get on the
containerization bandwagon:
[http://azure.microsoft.com/blog/2015/04/08/microsoft-
unveils...](http://azure.microsoft.com/blog/2015/04/08/microsoft-unveils-new-
container-technologies-for-the-next-generation-cloud/)

~~~
rbanffy
Yes, but Docker is an application distribution format that essentially
bundles, with the app, the parts of the server it needs to run, so it can run
within a container on the host machine. Didn't Microsoft solve this
application bundling with .NET before?

------
erikpukinskis
If I was the NSA I would be drooling at this opportunity.

------
nlake44
Containers in themselves offer little additional value over virtual machines.
Don't switch over just because of the hype. Evaluate it yourself! (I
personally LOVE Vagrant!)

------
justignore
Still true: [https://xkcd.com/927/](https://xkcd.com/927/)

~~~
bhhaskin
I have been seeing this posted all over HN as of late. Although it is funny,
it doesn't really add anything to the conversation...

~~~
falcolas
It points out, in the typical form of humor, that we're going to take N
standards (container specs, in this case), try to shove them together, and
end up with N+1 standards. And this is relevant in a space with at least 4
competing standards (and one dominant one).

In the short term, what Docker uses will matter more than the output of any
such committee. In the long term, well, it depends on the fate of
Docker-the-company.

~~~
jsmthrowaway
> In the short term, what Docker uses will matter more than the output of
> any such committee.

How is that true? I'm already excited to kill dockerd just to run containers.

~~~
falcolas
Many container services currently support starting Docker images, and with all
of the money spent on marketing, Docker also owns the container ecosystem
mindshare. It's their game to lose.

