
Docker - Way better than a VM - jonny_eh
https://github.com/dotcloud/docker/
======
sharms
For those that have thought about running something similar in production, I
have a few questions:

1\. Shared kernel - in this model all of the containers have a shared kernel,
so any activity or even tuning (ie io scheduler) would impact all of the
containers right?

2\. Patching - when you need to apply a kernel security patch, all containers
would need to agree on a change window unless you were using something like
KSplice?

3\. User space / kernel space dependencies - if we imagine even 5 years down
the road, will containers using for example Red Hat Enterprise Linux 5 apps be
containerized but broken? Ie the hosting provider will likely want to stay
ahead of the curve and upgrade their kernel, but the app teams may not be as
progressive, so when these upgrades occur the apps would break

~~~
cdjk
Those are problems, but they're problems with all container/jail-based
virtualization platforms like OpenVZ, FreeBSD jails, and Solaris containers.

The first question is a valid point. But you also get benefits from not having
the overhead of N different kernels running. This is easiest to see when
looking at VPS providers - a 512MB OpenVZ vps means you have 512MB of memory
for your application to use. Yeah, kernel overhead isn't that much, especially
if you're running a few high-resource instances, but it can help if you have
lots of low memory instances. There's lots of discussion online about OpenVZ
vs. xen/kvm vps hosts if you're curious.

As for patching - OpenVZ at least makes it very easy to do live migration of
instances between servers (barring some weirdness if you have NFS mounts in
the guest), although it appears that lxc (and therefore docker) can't do that
[1]. In any event, it shouldn't be hard to shut down the guest, migrate it,
and restart it - especially if you're using shared storage of some sort.
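
A cold migration of that sort is only a few commands. Here's a sketch assuming
OpenVZ's `vzctl`, a hypothetical container ID and destination host, and a `/vz`
that already sits on shared storage (all assumptions, not from the post):

```shell
# Hypothetical sketch: cold-migrate OpenVZ container 101 to another host.
# Assumes /vz (the container's root) is on shared storage both hosts can see.
CTID=101
DEST=host2.example.com

vzctl stop $CTID                    # cleanly shut down the guest
vzctl set $CTID --onboot no --save  # keep it from restarting on this host

# Nothing to copy with shared storage; just start it on the destination.
ssh $DEST "vzctl start $CTID"
```

(The container's config file also has to be visible to the destination host,
e.g. via the same shared storage.)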

As for your third point - backward compatibility with RHEL/Centos is generally
quite good (since that's kind of the point of RHEL). At work we're currently
on Centos 5, and our migration strategy to Centos 6 is probably going to be to
install Centos 6 OpenVZ hosts, then move the guests and worry about upgrading
the guests later. Forward-compatibility is an issue, but I don't think there's
an easy solution to that.

[1] [http://en.wikipedia.org/wiki/Operating_system-level_virtualization](http://en.wikipedia.org/wiki/Operating_system-level_virtualization)

~~~
zobzu
KSM makes the overhead pretty low, memory-wise.

------
sciurus
Duplicate of <https://news.ycombinator.com/item?id=5408002> and
<https://news.ycombinator.com/item?id=5445387>

~~~
nmcfarl
Not a dup of those - their URLs and text content are different.

However, those links are useful, as the conversation is on topic even if the
source material is different.

------
gexla
Right now, finding VPS providers which work well with Docker is a bit like
walking through a mine field.

Docker currently recommends kernel version 3.8 or greater. This means that if
you prefer Ubuntu, then you need 13.04 or the ability to upgrade the kernel.

It also currently requires AUFS, which means that you need the AUFS kernel
module installed. So even if you have a supported kernel, you still might need
the ability to modify it. They are working on supporting alternate
implementations such as Btrfs, though.

EC2 is a great option right now and it's what I'm using.

I agree with another comment mentioning this is the future. However, I wonder
how long it will be before something like "Erlang On Xen" becomes more
widespread, cutting out the OS completely.

ETA: I love watching this project, it has really taken off and the maintainers
have been making fast progress. It seems that as soon as I run into a show
stopping problem, it's fixed the next day. It's a bit inspiring and makes me
look at the progress I have made on my own projects. ;)

~~~
23david
What show-stopping problems have you been encountering? Also, which VPS
providers have you had luck with so far?

~~~
gexla
The only provider I have been using so far is EC2. I tried Joyent (you can
recompile your own kernel, but it's more of a PITA than I'm willing to deal
with) and DigitalOcean (no 3.8 kernel for Ubuntu.)

The show-stopping problems were early in development. The major one is that I
ran into the kernel issues early on, and they then added the recommendation to
use only 3.8 or higher. So that's not a fix, but it addressed my problems. I
was also having problems running Docker in stand-alone mode per their own
docs; they have since removed this, and daemon mode works great. I don't
remember what the others were.

------
peter_l_downs
This is fantastic, I've been wondering how DotCloud made everything work so
well. Can't wait to try this for deploying a Django application I work on —
hopefully it will solve my dependency headaches.

One nitpick — not a big fan of the recommended installation method (curl
get.docker.io | sh -x). Is it really that hard to ask people to download and
run the script themselves?

~~~
primitur
Drop the sh, and inspect the script before you run it. It just checks a few
dependencies and system items, and then retrieves the docker binary, and puts
it in /usr/local/bin. All of this can be done by hand by any Linux sysadmin ..
And of course, it wouldn't take much to make a .deb for it either.
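
Concretely, the pipe-to-shell step splits into download, review, run - a
sketch using the installer URL from the post:

```shell
# Fetch the installer to a file instead of piping it straight into sh.
curl -o install-docker.sh http://get.docker.io
less install-docker.sh    # read what it actually does first
sh install-docker.sh      # run it only once you're satisfied
```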

However, I would like to discuss the docker design a little more in detail, on
the basis of its ease of use. First of all, I too do not like to have random
stuff piped into my shell, so I went looking for the Docker sources. It was
darned easy to build from sources, and quick too. At the end, I had a single
binary.

And the cool thing about this binary is that it's both the server and the
client in the same package! So - the sysadmin of your Linux machine can (and
should, manually, for now) build from sources, install in a local/ or ~/bin,
and add the daemon to start up as needed.

Then, anyone else on the machine - not needing su rights - can run docker
images, and so on.

This isolation, simplicity of install, and .. frankly .. rocking feature set
.. is a beautiful thing. Can it be that golang is the secret sauce here? I
say: yes.

~~~
shykes
Docker author here. Yes, the ability to produce a single binary that "just
works" is one of the reasons we chose Golang. The operational simplicity is
hard to beat, and you don't have to convince python hackers to install gems,
java hackers to mess with virtualenv, and so on.

For the record, another reason we chose Golang is because it was _not python_.
Docker is a rewrite of a 2 year old python codebase which has seen a lot of
action in production. We wanted the operational learnings but not the
spaghetti code. So we made it impossible to cut corners by changing
languages. No copy-pasting!

~~~
primitur
Well, I for one was quite surprised that a) docker compiled so rapidly on my
system, and b) it's a very sublime binary. I guess I'm learning another reason
why golang ought to get more of my free-time attention, so that's enough HN
for me - I'm off to spend the afternoon reading your code. Cheers to you and
what I'm about to learn from docker! :)

------
JoeKM
Any recommended guides to getting a Django/RoR development environment set up
with Docker? How does networking work with Docker? Currently I'm using
Vagrant, the guest VM is sharing /Vagrant to the host (how does this work with
Docker?), and the host has made Vagrant projects available over my LAN for my
Macbook/Windows machines with IDEs. I also get the feeling Docker is meant to
be complementary with VMs and not exclusive.

Also does it make sense using Chef/Puppet with Docker?

------
ballard
Docker is awesome.

It lets you use a Linode or AWS instance as a bunch of NATed containers. This
makes it way easier to install just one thing in one container and not mess up
the other ones. This is where configuration management is going.
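
For illustration, that one-service-per-container setup might look like this (a
sketch: `base` is docker's stock image of the time, and the service paths are
assumptions about what's installed in it):

```shell
# Each service lives in its own container, so breaking or upgrading
# one doesn't touch the others.
docker run -d base /usr/bin/redis-server             # cache tier
docker run -d base /usr/sbin/nginx -g 'daemon off;'  # web tier
docker ps                                            # see what's running
```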

Plug: CloudVolumes is awesome for Win apps.

------
jamesaguilar
My main question is whether this provides the "hard" isolation types (mostly
I/O, and GPU). It sounds really cool.

Now that I'm looking at it, I don't think this provides any kind of isolation.
That's not what it's for. It's for distributing packaged programs.

~~~
karlmdavis
> Docker relies on a different sandboxing method known as containerization.
> ... Most modern operating system kernels now support the primitives
> necessary for containerization, including Linux with openvz, vserver and
> more recently lxc, Solaris with zones and FreeBSD with Jails.

I'd guess it's (at most) as secure as the underlying OS containerization
support.

~~~
jamesaguilar
I was asking about how the os controls access and allocation of shared
resources.

~~~
buster
Since docker uses cgroups, and cgroups can limit CPU, memory, and I/O, I guess
this blog post by the docker guys gives a good overview:
[http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c](http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c)

------
namuol
Way better than a VM, but only covers a subset of VM use cases.

~~~
jared314
It also covers some of the use cases for packages. So it's halfway between a
package and a VM.

~~~
namuol
Exactly. That is actually a bigger problem to solve, IMO, so suggesting that
it's mostly a VM alternative is doing it a disservice.

~~~
shykes
One of my biggest challenges has been finding the right way to describe
Docker, because it borrows properties from multiple categories: virtualization
tools, package managers, build tools, process managers, and even operating
systems :)

People have told me they use Docker as a "vmware alternative", a "make
alternative", a "rpm alternative", a "vagrant alternative" and even a "tomcat
alternative". But people also use Docker _together_ with all of these tools!

In that way Docker reminds me of Redis: depending on what you want to do, you
could use it as a replacement for memcache, rabbitmq, couchdb, mysql or even
zeromq. But you could also use it _together_ with all of these tools. Over
time we're getting more comfortable with the fact that Redis might just be a
tool of its own - useful in its own unique way, in combination with whatever
other tool is best for the job at hand.

... but none of that matters if nobody uses it, and to get people to use
software you need catchy titles like "better than VMs" :)

------
ksec
Isn't this .. old news?

Why keep posting this every two months or so? It would be better if there were
improvements, a major site using it for a rollout, etc.

~~~
shykes
We (the docker authors) had nothing to do with this particular posting :)

That said there have been a lot of improvements - see the changelog here
<https://github.com/dotcloud/docker/blob/master/CHANGELOG.md>

------
MrQuincle
It looks really nice. It would be great if you wrote more specifics on
<https://github.com/dotcloud/docker>: what "size" can we expect, what kinds of
elements are part of the container, and what "interpreters" currently exist
for the portable format you describe. But also meta-level information, such as
the size of the team, why Go was chosen, and in what situations NOT to use
docker. I've been following docker.io for a while for the purpose of cloud
services for robotics, so I think I'm a nice example of a person who would
like to be convinced to use your solution. :-)

------
zx2c4
Depending on containers for security is absolute insanity. Kernel
vulnerabilities are common. Own the kernel, break out of the docker
containers, and the whole concept is rendered useless.

VMs are a much safer bet (though not perfect either).

~~~
lholden
Containers are nice for putting some distance between the compromised app and
the OS. Especially useful when combined with a system for mandatory access
controls (SELinux, Smack, etc). If you can attempt to limit the exposure of a
compromise... you should!

Having said that... If someone has enough determination, they will manage to
compromise your system regardless of how 'secure' it has been made. :)

I generally like to use containers in _addition_ to a virtual machine. I do
find it a bit shocking when I see a company offering up containers as an
alternative to a VM though. I suppose it's a compromise some companies are
willing to make for the additional performance.

~~~
zx2c4
> Containers are nice for putting some distance between the compromised app
> and the OS.

Some distance. But nowadays, when you can own the kernel, that distance
shrinks to zero.

I just saw these a few seconds ago: <http://grsecurity.net/~spender/logs2.txt>
<https://twitter.com/grsecurity/status/335963659337601024>

~~~
shykes
The real value of containers is not security (although lxc does include robust
security mechanisms), it's the streamlining of application deployment.

If you don't trust lxc to sandbox untrusted code, don't! Just deploy 1
production container per VM, or even per physical machine. But maybe you don't
mind running multiple containers on your local dev vm or on your staging
machine - I know I don't.

What containers give you is a constant payload which can be moved across very
different hardware setups, from the most shared to the most isolated.

~~~
zx2c4
> it's the streamlining of application deployment.

I always liked the idea of nicely integrating with the environment and
utilizing the features of package managers, the file system, users, and all
the rest of the niceties we have at our disposal. "I don't know how to
organize a bunch of things together!" seems like a silly reason to
containerize every component into a separate root fs.

But on the other hand, I can imagine this work flow does have some merit, and
some folks save a lot of time and energy and potential headaches just popping
things in containers.

~~~
shykes
> "I don't know how to organize a bunch of things together!" seems like a
> silly reason to containerize every component into a separate root fs.

One good reason to separate every component is that it facilitates moving them
to separate machines down the road, or scaling them separately.

Another good reason is that it reduces the out-of-band requirements of the
components. "all the niceties" you have at your disposal may very well be
specific to your distro, or your site setup. By contrast, docker containers
only require docker, which is a single binary file. A developer needs to know
his component will run anywhere, not just on your particular setup.
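
That "constant payload" idea reduces to a couple of commands - a sketch with a
hypothetical image name:

```shell
# The same image name resolves to the same bits on every docker host,
# whether that's a laptop VM or a production machine.
docker pull myuser/myapp            # fetch the prebuilt container image
docker run myuser/myapp ./start.sh  # run the identical payload anywhere
```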

------
amalag
A more relevant comparison is with Juju and the Heroku buildpacks supported by
Stackato.

~~~
jcastro
Juju person here. Docker would be a nice enhancement for Juju. Currently we
use LXC in our older version (.7 in python) for a local provider to deploy
containers on your local laptop. The idea is being thrown around to instead
use Docker containers in _every_ provider like EC2 and OpenStack. This would
enable you to just move stuff around transparently, which is awesome.

The Juju team is keeping a close eye on Docker and it'll likely (but I can't
promise) be in 13.10.

------
marcamillion
This looks very interesting.

I would love to be able to setup a workflow on my Windows machine that allows
me to be able to do Rails dev on the Windows machine, in as close to a native
way as possible - but with using much of the same workflow I use on my MBP.

If I setup a container on my Windows machine, I would still have to SSH into
some virtual environment to be able to run my rails app, right?

Ideally, I would love to be able to just go to my localhost in my browser and
see my app - will this help me be able to do that, rather than going to a
browser within a VM or some 'contained environment'?

~~~
rurounijones
Docker looks like it is *nix only. It CAN be run on Windows and Mac... inside
a VM, which kind of eliminates all the advantages they talk about.

An option is Vagrant (Which Docker uses on the above OSes) + chef / puppet.

It uses VMs but works well enough for me and both configuration engines have
widespread support. <http://docs-v1.vagrantup.com/v1/docs/getting-started/>

~~~
shykes
Running docker inside a VM doesn't eliminate all the advantages. For example, you
can test a full stack of components (frontend, database, memcache, background
workers...) on a single VM, instead of deploying 1 vm per component, which
gets really heavy really fast.

Another advantage is that docker on a VM is still docker: the container
running on your local VM is 100% identical, byte-for-byte, to the container
you will run on an octo-core box with half a tera of ram.

------
undoware
I'm about to install and answer my own question, but what is the difference
between this and, say, supermin + lxc? Or just supermin.

------
xorgar831
Is there any way to migrate running containers to another physical machine if
you need to do maintenance on the underlying system?

------
lee
Is this similar to RedHat's mock environment?

The advantage is that you're not limiting yourself to a RedHat distro. But
conceptually is it similar?

------
defied
If only something like this existed for Windows.

~~~
conductor
VMWare ThinApp

Enigma VirtualBox

Molebox

Thinstall Virtualization Suite

There are more...

~~~
fungi
anything opensource?

------
undoware
I really like the way the dependency format is basically a monad, but they
don't call it a monad so devs won't get scared :)

------
grigio
How is it possible to limit the resources (RAM, storage, CPU) and get resource
usage stats?

Thanks

~~~
shykes
You can limit memory and cpu usage with "docker run -m" and "docker run -c",
respectively.

Docker relies on cgroups for resource limits and accounting. So anything you
can do with cgroups, you can do with docker.
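
A sketch of both halves - the limits and the stats - assuming the lxc backend
docker used at the time; the cgroup mount point and container path are
assumptions that vary by distro and version:

```shell
# Cap a container at ~512MB of RAM (-m takes bytes) and give it a
# relative CPU share of 512.
docker run -m 536870912 -c 512 base /bin/sh -c "my-workload"

# Usage stats come straight from the cgroup filesystem.
cat /sys/fs/cgroup/memory/lxc/<container-id>/memory.usage_in_bytes
cat /sys/fs/cgroup/cpuacct/lxc/<container-id>/cpuacct.usage
```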

------
Axsuul
Will this replace vagrant someday?

------
vonskippy
"Way better than a VM"

Why?

~~~
plorkyeran
Crazily enough if you read the linked page you'll find out!

~~~
VLM
Probably a very snarky reference to the fact that if you're uninterested in
the four big features of docker - which are implied to be the only reasons for
virtualization, and aren't even accurate anyway - then it's not better than
VMs; it might even be worse.

