

Docker 0.8: Quality, new builder features, btrfs, OSX support - asb
http://blog.docker.io/2014/02/docker-0-8-quality-new-builder-features-btrfs-storage-osx-support/

======
shykes
Just to clarify on the OSX support: obviously we did not magically get Darwin
to support linux containers. But we put together the easiest possible way to
run Linux containers on a Mac without depending on another machine.

We do this by combining (1) docker in "client mode", which connects to (2) a
super-lightweight linux VM using boot2docker.

The details are on
[http://docs.docker.io/en/latest/installation/mac/](http://docs.docker.io/en/latest/installation/mac/)
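
For anyone curious, "client mode" in practice looks something like this (a
sketch; 4243 was the daemon's default API port at the time, and boot2docker
forwards it to localhost):

    $ # point the docker client on the mac at the boot2docker VM
    $ export DOCKER_HOST=tcp://127.0.0.1:4243
    $ docker version
    $ docker run -i -t ubuntu /bin/echo hello from a linux container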

~~~
oellegaard
Too bad it depends on VirtualBox - had a lot of kernel panics when using it,
so I decided to stick with VMware fusion.

~~~
darkarmani
Are you sure it isn't caused by the mid-2010 MacBook video card bug? I had it
re-exposed when I upgraded past 10.6, and that laptop now reboots 8 times a
day, mostly from Mac Mail or when the power drops and it switches video cards.

~~~
xentronium
FWIW, I had my 2010 mbp motherboard warranty-replaced and the crashes were
gone.

~~~
dunham
Mine started failing a few months outside the replacement period (3 years). I
was due for a new work laptop, so I just replaced the machine.

------
joeshaw
Was the OS X binary built without cgo? I can't seem to access images in
private HTTPS registries:

    $ docker login https://registry.example.com
    2014/02/05 14:36:20 Invalid Registry endpoint: Get https://registry.example.com/v1/_ping: x509: failed to load system roots and no roots provided

The hostname in question has a valid SSL certificate. I encountered a similar
problem in the past with Go built from homebrew[1][2]. Has anyone else seen
this?

[1]
[https://github.com/Homebrew/homebrew/pull/17758](https://github.com/Homebrew/homebrew/pull/17758)
[2]
[https://code.google.com/p/go/issues/detail?id=4791](https://code.google.com/p/go/issues/detail?id=4791)

Update: Filed a bug against docker, others are having the same issue.
[https://github.com/dotcloud/docker/issues/3946](https://github.com/dotcloud/docker/issues/3946)
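
(For background: on Darwin, Go's crypto/x509 needs cgo to load the system
roots from the keychain, so a client cross-compiled with CGO_ENABLED=0 ships
with no roots at all. A sketch of a local rebuild, assuming the client's main
package lived under ./docker in the repo at the time:)

    $ # fetch the source into GOPATH, then rebuild natively with cgo
    $ go get -d github.com/dotcloud/docker
    $ cd $GOPATH/src/github.com/dotcloud/docker
    $ CGO_ENABLED=1 go build -o docker-osx ./docker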

~~~
crosbymichael
Thanks for the report. We are working on a fix ASAP and will post updates on
[https://github.com/dotcloud/docker/issues/3683](https://github.com/dotcloud/docker/issues/3683).

~~~
jschorr
In the meantime, if you need to get it working right now, you can build your
own binary as outlined here:
[http://blog.devtable.com/2014/01/using-docker-on-osx-with-private.html](http://blog.devtable.com/2014/01/using-docker-on-osx-with-private.html)

We've confirmed the instructions still work with Docker 0.8 (make sure to
change the checkout branch though :))

------
mattbessey
Glad to hear OS X has official support. I jumped into Docker for the first
time last week and have a burning unresolved question for those using
boot2docker.

What is your development workflow? I am working on a Rails app, so my instinct
is to have a shared folder between OS X and boot2docker, but AFAIK this is not
supported, as boot2docker doesn't ship the VirtualBox Guest Additions.

~~~
shykes
Hi Matt, you are not alone :)

It turns out that shared folders are not a sustainable solution (independently
of whether boot2docker supports them), so the best practices are converging
towards this:

1) While developing, your dev environment (including the source code and
method for fetching it) should live in a container. This container could be as
simple as a shell box with git and ssh installed, where you keep a terminal
open and run your unit tests etc.

2) To access your source code on your host machine (e.g. for editing on your
mac), export it from your container over a network filesystem: samba, nfs or
9p are popular examples. Then mount that from your mac. Samba can be natively
mounted with "command-K"; NFS and 9p require macfuse.

3) When building the final container for integration tests, staging and
production, go through the full Dockerfile + 'docker build' process. 'docker
build' on your mac will transparently upload the source over the docker remote
API as needed.

There are several advantages to exporting the source from the container to the
host, instead of the other way around:

- It's less infrastructure-specific. If you move from virtualbox to vmware,
or get a Linux laptop and run docker straight on the metal, your
storage/shared folders configuration doesn't change: all you need is a network
connection to the container.

- Network filesystems are more reliable than shared folders + bind-mount. For
example they can handle different permissions and ownership on both ends - a
very common problem with shared folders is "oops the container creates files
as root but I don't have root on my mac", or "apache complains that the
permissions are all wrong because virtualbox shared folders threw up on me".

That said, we need to take that design insight and turn it into a polished
user experience - hopefully in Docker 0.9 this will all be much more seamless!
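
A minimal sketch of step 2, assuming a hypothetical image (my/dev-samba) that
runs smbd and exports a /src share (flags were single-dash in docker 0.x):

    $ # start the dev container and publish the SMB port
    $ docker run -d -name devbox -p 445:445 my/dev-samba
    $ # on the mac: command-K with smb://<vm-ip>/src, or from a terminal
    $ # (substitute the boot2docker VM's address for <vm-ip>):
    $ mkdir -p ~/devsrc && mount_smbfs //guest@<vm-ip>/src ~/devsrc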

~~~
bstar77
Thanks for taking the time to write this. I've hit a major wall in figuring
out the best workflow for this exact scenario. Good to finally hear an
official suggestion on the matter. I've been depending on shared directories,
so I'll definitely be experimenting with network filesystems.

As Docker evolves, it would be great to have some kind of official resource to
get suggestions for optimal workflows as new features become available (the
weekly docker email is my best resource right now). Searching the internet for
info has been a huge chore as most of the resources (including the ones hosted
by docker.io) are woefully out of date.

~~~
shykes
> _As Docker evolves, it would be great to have some kind of official resource
> to get suggestions for optimal workflows as new features become available_

Yes! We are trying to figure this out. Our current avenue for this is to
dedicate a new section in the docs to use cases and best practices.

As you point out, our docs (and written content in general) are often
inaccurate. We need to fix this. Hopefully in the coming weeks you will start
seeing notable improvements in these areas.

Thanks for bearing with us!

------
stesch
Is Docker a good way to add security on a server with a few different
websites? Separating the sites from each other and running nginx as a proxy in
front of them?

What's the overhead?

~~~
_wmd
Unfortunately Docker prevents hosting environments from employing some of the
most potent security mitigations added to Linux recently.

You cannot treat a docker container like a virtual machine – code running in
the container has almost unfettered access to the parent kernel, and the
millions of lines of often-buggy C that involves. For example with the right
kernel configuration, this approach leaves the parent machine vulnerable to
the recent x86_32 vulnerability
([http://seclists.org/oss-sec/2014/q1/187](http://seclists.org/oss-sec/2014/q1/187))
and many similar bugs in its class.

The algorithms in the running kernel are far more exposed too - instead of
managing a single process+virtual network+memory area, all the child's
resources are represented concretely in the host kernel, including its
filesystem. For example, this vastly increases the likelihood that a child
could trigger an unpatched DoS in the host, e.g. the directory hashing attacks
that have affected nearly every filesystem implementation at some point
(including btrfs as recently as 2012).

The containers code in Linux is also so new that trivial security bugs are
being found in it all the time – particularly in sysfs and procfs. I don't
have a link right now, though LWN wrote about one a few weeks back.

While virtual machines are no security panacea, they diverge in what classes
of bugs they can be affected by. Recent QEMU/libvirt supports running under
seccomp, ensuring that even if the VM emulator is compromised, the host
kernel's exposure remains drastically limited. Unlike QEMU, you simply can't
apply seccomp to a container without massively reducing its usefulness, or
using a seccomp policy so liberal that it becomes impotent.
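
For concreteness, that QEMU sandbox is a one-flag affair (a sketch; the disk
image is hypothetical, and -sandbox requires QEMU >= 1.2 built with seccomp
support):

    $ # run the guest with QEMU's seccomp sandbox enabled
    $ qemu-system-x86_64 -sandbox on -m 512 -hda guest.img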

You could use seccomp with Docker by nesting it within a VM, but at that point
Docker loses most of its value (and could be trivially replaced by a shell
script with a cute UI).

Finally when a bug is discovered and needs to be patched, or a machine needs
to be taken out of service, there is presently no easy way to live-migrate a
container to another machine. The most recent attempt (out of I think 3 or 4
now) to add this ability to Linux appears to have stalled completely.

As a neat system for managing dev environments locally, it sounds great. As a
boundary between mutually untrusted pieces of code, there are far better
solutions, especially when the material difference in approaches amounts to a
few seconds of your life at best, and somewhere south of 100mb in RAM.

~~~
shykes
> _Unfortunately Docker prevents hosting environments from employing some of
> the most potent security mitigations added to Linux recently._

You list various facts that are mostly correct, but your conclusion is wrong.
Docker absolutely _does not_ reduce the range of security mitigations
available to you.

Your mistake is to present docker as an alternative to those security
mitigations. It's not an alternative - it presents you with a sane default
which can get you pretty far (definitely further than you are implying). When
the default does not fit your needs, you can fit Docker in a security
apparatus that does.

The current default used by docker is basically pivot_root + namespaces +
cgroups + capdrop, via the lxc scripts and a sane locked down configuration.
Combined with a few extra measures like, say, apparmor confinement, dropping
privileges inside the container with `docker run -u`, and healthy monitoring,
you get an environment that is production-worthy for a large class of payloads
out there. It's basically how Dotcloud, Heroku and almost every public "paas"
service out there works. It's definitely not a good environment for _all_
payloads - but like I said, it is definitely more robust than you imply.
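
(Dropping privileges, for example, is a one-flag sketch; the image and user
are hypothetical, and flags were single-dash in docker 0.x:)

    $ # run the payload as an unprivileged user instead of root
    $ docker run -d -u www-data -p 8080:80 my/webapp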

So your first mistake is to dismiss the fact that linux containers are an
acceptable sandboxing mechanism for many payloads out there.

Your second mistake is to assume that if your payloads need something other
than linux containers, you can't use Docker. Specifically:

> _You cannot treat a docker container like a virtual machine – code running
> in the container has almost unfettered access to the parent kernel, and the
> millions of lines of often-buggy C that involves. For example with the right
> kernel configuration, this approach leaves the parent machine vulnerable to
> the recent x86_32 vulnerability
> ([http://seclists.org/oss-sec/2014/q1/187](http://seclists.org/oss-sec/2014/q1/187))
> and many similar bugs in its class._

> _The containers code in Linux is also so new that trivial security bugs are
> being found in it all the time – particularly in sysfs and procfs. I don't
> have a link right now, though LWN wrote about one a few weeks back._

> _While virtual machines are no security panacea, they diverge in what
> classes of bugs they can be affected by. Recent QEMU/libvirt supports
> running under seccomp, ensuring that even if the VM emulator is compromised,
> the host kernel's exposure remains drastically limited. Unlike QEMU, you
> simply can't apply seccomp to a container without massively reducing its
> usefulness, or using a seccomp policy so liberal that it becomes impotent._

Of course you're right, sometimes a container is not enough for sandboxing and
you need a VM. Sometimes even a VM is not enough and you need physical
machines. That's fine. Just install docker on all of the above, and map
containers to the underlying machines in a way that is consistent with your
security policy. Problem solved.

> _You could use seccomp with Docker by nesting it within a VM, but at that
> point Docker loses most of its value (and could be trivially replaced by a
> shell script with a cute UI)._

That's your judgement to make, but I'm going to go out on a limb and say that
you haven't actually used Docker that much :) Docker is commonly used in
combination with VMs for security, so at least _some_ people find it useful.

> Finally when a bug is discovered and needs to be patched, or a machine needs
> to be taken out of service, there is presently no easy way to live-migrate a
> container to another machine. The most recent attempt (out of I think 3 or 4
> now) to add this ability to Linux appears to have stalled completely.

In my opinion live migration is a nice-to-have. Sure, for some payloads it is
critically needed, and no doubt the day linux containers support full
migration those payloads will become more portable. But in practice a very
large number of payloads don't need it, because they have built-in redundancy
and failover at the service level. So an individual node can be brought down
for maintenance without affecting the service as a whole. Live migration also
has other issues; for example it doesn't work well beyond the boundaries of
your shared storage infrastructure. Good luck implementing live migration
across multiple geographical regions! Service-level redundancy has been
established as an ops best practice, so over time the number of payloads which
depend on live migration will diminish.

> _As a neat system for managing dev environments locally, it sounds great. As
> a boundary between mutually untrusted pieces of code, there are far better
> solutions, especially when the material difference in approaches amounts to
> a few seconds of your life at best, and somewhere south of 100mb in RAM._

To summarize: docker is primarily a system for managing and distributing
repeatable execution environments, _from development to production_ (and not
just for development as you imply). It does not implement any security
features by itself, but allows you to use your preferred isolation method
(namespaces, hypervisor or good old physical separation) without losing the
benefits of repeatable execution environments and a unified management API.

~~~
_wmd
Look, I'm really glad that you're excited about docker, but name-dropping
companies that run the risk of exposing their machines does not magically
invalidate the specific examples I gave. In fact I've really no idea what
purpose your reply was hoping to serve.

In the default configuration (and according to all docs I've seen), regardless
of some imagined rosy future, _today_ docker is a wrapper around Linux
containers, and Linux containers _today_ are a very poor general purpose
security solution, especially for the kind of person who needs to ask the
question in the first place (see also: the comment I was originally replying
to).

~~~
shykes
> _name-dropping companies that run the risk of exposing their machines does
> not magically invalidate the specific examples I gave_

You're right. But what it does is provide anecdotal evidence that your views
are not shared by a large and growing number of experienced engineers.

> _In fact I've really no idea what purpose your reply was hoping to serve._

It's pretty simple: you made an incorrect statement, I'm offering a detailed
argument explaining why.

> _In the default configuration (and according to all docs I've seen) [...]
> today docker is a wrapper around Linux containers_

Yes.

> _[...] regardless of some imagined rosy future [...]_

I only described things that are possible today, with current versions of
Docker. No imagined rosy future involved :)

> _and Linux containers today are a very poor general purpose security
> solution_

I guess it really depends on your definition of "general purpose", so you
could make a compelling argument either way.

But it doesn't matter because if you don't trust containers for security, you
can just install Docker on a bunch of machines and make sure to deploy
mutually untrusted containers on separate machines. Lots of people do this
today and it works just fine.

In other words, Docker _can_ be used for deployment and distribution without
reducing your options for security. Respectfully, this directly contradicts
your original comment.

~~~
comex
> But it doesn't matter because if you don't trust containers for security,
> you can just install Docker on a bunch of machines and make sure to deploy
> mutually untrusted containers on separate machines. Lots of people do this
> today and it works just fine.

> In other words, Docker can be used for deployment and distribution without
> reducing your options for security. Respectfully, this directly contradicts
> your original comment.

If I understand services like Heroku correctly, they give customers standard
access to run arbitrary code inside a container as a standard user. Therefore,
I expect it would be standard and unavoidable to have many different
customers' applications running on the same machine, leading to the ability to
exploit vulnerabilities similar to the recent x32 one. If they instead used a
VM for each application, they would have to pierce the VM implementation,
potentially plus seccomp in some cases, which is the mitigation the parent was
referring to. The choice to use Docker instead of VMs limits the security
options available.

~~~
nl
>> In other words, Docker can be used for deployment and distribution without
reducing your options for security. Respectfully, this directly contradicts
your original comment.

>If they instead used a VM for each application, they would have to pierce the
VM implementation, potentially plus seccomp in some cases, which is the
mitigation the parent was referring to. The choice to use Docker instead of
VMs limits the security options available.

The parent is suggesting you can use Docker as a supplement to _any_
additional security measure one might choose (to quote: "Docker is commonly
used in combination with VMs for security, so at least some people find it
useful").

In your example, a person would run Docker on top of the VM, and gain "a
system for managing and distributing repeatable execution environments".

------
optymizer
So what's the solution for 'root inside a docker container is root on the
host'?

We'd like to ship a set of utilities as a docker container, but unless the
sysadmin gives everyone 'sudo' privileges on the server (unlikely and
insecure), they can't run the container and its utilities.

Any advice?

~~~
shykes
Look for an update on this in 0.9 :)

Future versions of the Docker API will natively support _scoping_. This means
that each API client will see a different subset of the Docker engine
depending on the credentials and origin of the connection. This will be
implemented in combination with _introspection_, which allows any container
to open a connection to the Docker engine which started it.

When you combine _scoping_ and _introspection_, you get really cool
scenarios. For example, let's say your utility is called "dockermeister". Each
individual user could deploy his own copy of dockermeister, in a separate
container. Each dockermeister container would in turn connect to Docker (via
_introspection_ ), destroy all existing containers, and create 10 fresh redis
containers (for reasons unknown). Because each dockermeister container is
_scoped_, it can only remove containers that are its children (i.e. that were
created from the same container at an earlier time). So they cannot affect
each other. Likewise, the 10 new redis containers will only be visible to that
particular user, and not pollute the namespace of the other users.

Of course scoping works at arbitrary depth levels... so you could have
containers starting containers starting containers. Containers all the way
down :)
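
(In the meantime, the standard answer today is the docker group that owns the
daemon's unix socket. Note that membership is effectively root-equivalent, so
it relocates the problem rather than solving it. A sketch, with a placeholder
username:)

    $ # let a user talk to the docker daemon without sudo
    $ sudo groupadd docker
    $ sudo usermod -aG docker alice   # 'alice' is a placeholder
    $ # the user must log in again for the new group to apply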

------
davidcelis
Awesome that OSX support is now official, but is there any benefit to using
this process as opposed to using docker-osx?
[https://github.com/noplay/docker-osx](https://github.com/noplay/docker-osx)

The official installation process seems more complicated, and I don't really
see an advantage.

------
ak217
I'm curious about how the focus on multiple, ABI-incompatible platforms will
affect the pace and momentum of Docker development. So far, Docker has
benefitted a lot from the focus on amd64 userland on Linux.

~~~
derefr
Personally, when I read "OSX support", I thought that meant that there would
now be containers with Darwin-ABI binaries inside them. So on Linux, you'd use
cgroups for Linux-ABI binaries and a VM for Darwin-ABI, just as on OSX you use
a VM for Linux-ABI (and presumably would use the OSX sandbox API for Darwin-
ABI containers).

This "native sandboxing for own-ABI if available, VM if not, and VM for
everything else" approach would extend to any other platform as well, I'd
think (Windows, for example). I'm surprised that this _isn't_ where Docker is
going, at least for development and testing of containers.

(Though another alternative, probably more performant for production, would be
something like having versions of CoreOS for each platform--CoreOS/Linux,
CoreOS/Darwin, CoreOS/NT, and so on--so you'd have a cluster of machines with
various ABIs, where any container you want to run gets shipped off to a
machine in the cluster with the right ABI for it.)

~~~
FooBarWidget
Going that way would dilute Docker's value. Docker's promise is that you can
build a container and it will always work; it won't mysteriously break in
production or give you installation headaches. To do that, your development
environment has to be as close to the production environment as possible.
Having a totally different ABI doesn't help with that goal.

~~~
shykes
Our priority in the short term is definitely to focus on the Linux ABI and
making it available on as many physical machines as possible. This is the
reasoning behind our current OSX support, and behind the support for more
platforms coming soon.

Longer term we do need to support multiple ABIs, if only because a lot of
people want to use Docker on x86-32 and ARM. Having ELF binaries built on
Linux isn't of much help if they're built for another arch :) So at the very
least we will need to support 3 ABIs in the near future.

The good news is that it can be done in a way which doesn't hurt the
repeatability of Docker's execution environment. Think of it this way: every
container has certain requirements to run. At the very least it needs a
certain range of kernels and archs (and yes it's possible, although uncommon,
for a binary to support multiple archs). It may also require a network
interface to bind a socket on a certain TCP port. It may require certain
syscalls to be authorized. It may require the ability to create a tun/tap
device. And so on.

Docker's job is to offer a portable abstraction for these requirements, so
that the container can list what it needs on the one hand, the host can list
what it offers on the other, and docker can match them in the middle. If the
requirements listed by a given container aren't met ("I need CAP_SYS_ADMIN on
a 3.8 linux kernel and an ARM processor!") then docker fails with a clear
error message. If they _are_ met, the container is executed and _must
always be repeatable_.

TLDR: ABI requirements are just one kind of requirement. Docker can handle
multiple requirements without breaking the repeatability of its execution
environment.

~~~
mwcampbell
Your reply primarily addresses architecture support and the implications for
pre-built payloads. But I think a more important concern is the fragmentation
that would result if Docker attempted to natively support other operating
systems. Consider that practically every Dockerfile starts with a Linux
distro, and includes commands specific to that distro (e.g. installing
packages with apt). Everybody assumes that the payload is Linux-based, and it
all just works. How would it work if Docker also supported FreeBSD jails,
Illumos zones, or whatever other options are up for consideration? Would the
public registry of Docker images now be fragmented along OS lines? Or would
Docker try to automagically smooth over the fragmentation by firing up VMs
when the host and container operating systems don't match? In the latter case,
would every Docker installation then require a working hypervisor?

Considering that the overwhelming majority of Unix servers are running Linux,
I think it's better to say that Docker is Linux-based, end of discussion.

~~~
FooBarWidget
I think what he's saying is that although Docker will support the Linux ABI
on other architectures, it will stay with Linux.

------
steeve
Note that you need to be running boot2docker 0.5.2 [1] (with docker 0.8) for
docker to work properly on OS X.

[1]
[https://github.com/steeve/boot2docker/releases/tag/v0.5.2](https://github.com/steeve/boot2docker/releases/tag/v0.5.2)
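
The upgrade is a tear-down and re-init of the VM (a sketch, assuming the
boot2docker management script's stop/delete/init/up subcommands; anything
stored inside the VM is lost):

    $ ./boot2docker stop
    $ ./boot2docker delete
    $ # install the v0.5.2 script and ISO from the release linked above
    $ ./boot2docker init
    $ ./boot2docker up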

------
newman314
IMO, it would be fantastic if there was something like Docker for Windows.
Imagine being able to bundle up games in individual containers and easily
being able to move them from machine to machine as you upgrade. Same thing
applies for other Windows apps.

~~~
wslh
Docker is to Unix almost as Microsoft App-V/VMware ThinApp/Symantec Workspace
Virtualization products are to Windows.

Not exactly the same, but close.

~~~
newman314
Precisely, it's not the same. There's still a whole bunch of fiddling going on.

I'll stick to my statement. I want something Docker-like for Windows so that I
can easily move things from one machine to another.

~~~
wslh
It seems like they are working on it: "Microsoft Corporation: Patent Issued
for Extensible Application Virtualization Subsystems"
[http://www.4-traders.com/MICROSOFT-CORPORATION-4835/news/Microsoft-Corporation--Patent-Issued-for-Extensible-Application-Virtualization-Subsystems-17940954/](http://www.4-traders.com/MICROSOFT-CORPORATION-4835/news/Microsoft-Corporation--Patent-Issued-for-Extensible-Application-Virtualization-Subsystems-17940954/)

------
arc_of_descent
I just tried the docker interactive tutorial. It was fun, but I still don't
get the point of using docker. Just been hearing a lot about it, and it's
getting too much buzz.

------
sbt
I am interested in the BTRFS support in particular, it is clear that
performance in FS is key. However, what I like the most about Docker is the
ability to use layers and diff them. In effect, I want version control for
images, because it allows me to not run extra provisioning tools for the
images (just rely on simply Zookeeper stuff for app config). Whatever gives me
'vcs' for images in the most performant way, wins in my book.
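
For what it's worth, a couple of those 'vcs' primitives already exist (a
sketch; the image name and container id are placeholders):

    $ docker history me/myapp     # list the layers an image is built from
    $ docker diff <container-id>  # filesystem changes relative to the image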

------
midas007
It's confusing why btrfs support was prioritized ahead of zfs, considering
zfs's superior architecture and ops capabilities. Is docker (formerly
dotcloud) going to start withholding capabilities as licensed features?

Edit: prelim zfs driver work is here
[https://github.com/gurjeet/docker/tree/zfs_driver](https://github.com/gurjeet/docker/tree/zfs_driver)

~~~
tacticus
What superior architecture does zfs have?

~~~
rch
My understanding is that btrfs is still in catchup mode for the foreseeable
future, but might _eventually_ cover the distance.

Has btrfs jumped ahead of zfs in ways I haven't heard about?

Edit - this is my first search result:

[http://rudd-o.com/linux-and-free-software/ways-in-which-zfs-is-better-than-btrfs](http://rudd-o.com/linux-and-free-software/ways-in-which-zfs-is-better-than-btrfs)

~~~
midas007
I'd be biased to agree, since that's Manuel's blog, someone I used to work
with. I've supported 24x7 and 9x5 ops where downtime was unacceptable. zfs
makes it a whole lot easier to perform upgrades, know that data and metadata
are solid, and send snapshots around.

~~~
rch
> ZFS uses atomic writes and barriers

This about settles the question for me, assuming that the implication that
btrfs performs otherwise holds true.

~~~
tacticus
Barriers are also used in btrfs.

And from what I can tell, clone operations are also handled atomically.

Though I kinda wonder what exactly is meant by atomic writes.

~~~
rch
Thanks for clearing that up.

I might have to dig into it a bit more.

------
1qaz2wsx3edc
It's not explicitly said, but are they following Semantic Versioning?
[http://semver.org/](http://semver.org/)

~~~
skywhopper
Not exactly. The article says the first number is for major lifecycle events,
i.e. 1.0 means "production ready". They'll be releasing monthly and the second
number will be the release increment. The third number will be for patches and
fixes.

So to me that doesn't fit in with the Semantic Versioning contract. I think
the product is too young yet to use a version scheme that assumes relative API
stability.

~~~
stdbrouw
Well, to be fair, this is why even with semver 0.x means "anything goes". It's
only from 1.x onwards that major version increments should be used for
backwards-incompatible changes.

------
morgante
Awesome release, especially since I can now use Docker on OS X without having
to boot up a full Ubuntu VM through Vagrant.

If anyone else is using Boxen, I packaged up a quick Puppet module to get up
and running with Docker on OS X:
[https://github.com/morgante/puppet-docker](https://github.com/morgante/puppet-docker)

------
andrewcooke
why is osx support necessary on the path to 1.0? i'd rather have a simple,
small, 1.0 release i can trust than all these "bells and whistles".

~~~
shykes
It's not necessary and we didn't go out of our way to get it. It just happened
"for free" as a result of writing portable code, a clean client-server
architecture, and the appearance of the boot2docker project in the community.

------
brunoqc
Anyone using Docker on 32-bits?

~~~
jamtur01
Docker does not (yet) support 32-bit architectures.

~~~
brunoqc
I think I read that you just have to disable a condition somewhere and build
your own base image or something like that.

------
jokoon
Can someone explain what this is? And what's the purpose?

~~~
jamtur01
Try here: [http://www.docker.io/learn_more/](http://www.docker.io/learn_more/)

~~~
jokoon
nope, I still don't get it.

> Docker is an open-source engine that automates the deployment of any
> application as a lightweight, portable, self-sufficient container that will
> run virtually anywhere.

Like an internet browser for executables? I don't understand how this can be
useful...

~~~
McGlockenshire
Containers are like virtual machines, only without the hardware emulation.

See [http://en.wikipedia.org/wiki/LXC](http://en.wikipedia.org/wiki/LXC)

See [http://docs.docker.io/en/latest/faq/#how-do-containers-compare-to-virtual-machines](http://docs.docker.io/en/latest/faq/#how-do-containers-compare-to-virtual-machines)

Docker is a mechanism to bundle an application together inside a container
(think VM instance) in a way that makes it easier to distribute.
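
Concretely, the day-to-day loop looks something like this (a sketch; image
names are placeholders):

    $ docker pull ubuntu                  # fetch a base image
    $ docker run -i -t ubuntu /bin/bash   # start an interactive container
    $ docker commit <container-id> me/myapp   # snapshot it as a new image
    $ docker push me/myapp                # publish it to the registry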

