
Ask HN: What is the actual purpose of Docker? - someguy1233
I'm hearing about Docker every other day, but when I look into it, I don't understand the purpose of it.

I run many websites/applications that need isolation from each other on a single server, but I just use the pretty-standard OpenVZ containers to deal with that (yes, I know I could use KVM servers instead, but I haven't run into any issues with VZ so far).

What's the difference between Docker and normal virtualization technology (OpenVZ/KVM)? Are there any good examples of when and where to use Docker over something like OpenVZ?
======
tinco
> What's the difference between Docker and normal virtualization technology
> (OpenVZ/KVM)? Are there any good examples of when and where to use Docker
> over something like OpenVZ?

Docker is exactly like OpenVZ. It became popular because it took what OpenVZ
offers as its Application Templates feature and made it much more user
friendly.

So users of Docker, instead of following this guide:
[https://openvz.org/Application_Templates](https://openvz.org/Application_Templates)

they write a Dockerfile, which in a simple case might be:

    FROM nginx
    COPY index.html /usr/share/nginx/html

So there's no fussing with finding a VE somewhere, downloading it, customizing
it, installing stuff manually, then stopping the container and tarring it up;
Docker does all of that for you when you run `docker build`.

Then you can push your nice website container to the public registry, ssh to
your machine and pull it from the registry. Of course you can have your own
private registry (we do) so you can have proprietary docker containers that
run your apps/sites.
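That workflow can be sketched in a few commands (image and registry names here are hypothetical placeholders):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t mysite .

# Tag and push it to a registry (public, or your own private one)
docker tag mysite registry.example.com/mysite
docker push registry.example.com/mysite

# On the server: pull it and run it
docker pull registry.example.com/mysite
docker run -d -p 80:80 registry.example.com/mysite
```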

From my perspective, the answer to your question would be: Always prefer
Docker over OpenVZ, they are the same technology but Docker is easier to use.

But I've never really invested in OpenVZ so maybe there's some feature that
Docker doesn't have.

~~~
segmondy
Docker and OpenVZ are not the same. Docker is single application focus. OpenVZ
provides the entire OS in a container. OpenVZ has support for live migration.

~~~
klibertp
> OpenVZ provides the entire OS in a container.

Docker does that, too. Actually, I'm running docker containers as a fast and
easy replacement for VirtualBox VMs.

~~~
unclesaamm
Perhaps you could do this if you're on Linux, and booting into separate OS's
that run off the Linux kernel. Or if you're using boot2docker. However, in
neither case does Docker itself provide the kernel in the container.

~~~
tinco
What do you mean by "Docker does not provide the kernel itself"? I did not
mean Docker and OpenVZ are similar technologies; they are exactly the same
technology, just with different toolsets. You can only run Docker on Linux
(boot2docker simply runs a Linux VM), and you use the kernel of the host OS.

------
hueving
It serves as an amazing excuse to re-invent the wheel at your own workplace.
It's a hot technology, and if you're not using it, it's because you're inept.
Rip out all of the stable things that everyone knew how to use and slap
containers in there! If it's not working, it's because you're not using enough
containers.

No security patching story at your workplace? No problem, containers don't
have one either! If someone has shipped a container that embedded a vulnerable
library, you better hope you can get a hold of them for a rebuild or you have
to pull apart the image yourself. It's the static linking of the 21st century!

~~~
jordigh
I want to downvote the first paragraph but upvote the second one.

Doesn't Docker also encourage problems like SSH private key reuse? I am sure
that there are mitigations, but it's sad to need ways to prevent risky
behavior that the software itself makes easy.

~~~
davexunit
>I want to downvote the first paragraph but upvote the second one.

I had the very same feeling. Containers _are_ very useful, but the Docker
suite of tools just don't have a very good security story.

~~~
justizin
They have the same security story as other linux systems.

~~~
davexunit
That's... just not true.

~~~
justizin
You have an SSL vulnerability, you need to patch the docker image, just like
you'd have to patch a linux system.

Now you say something of substance!

~~~
andrewguenther
I think the problem here is that people seem to assume that "application
isolation" is synonymous with "security isolation." Your statement is true,
the vulnerabilities are the same, but people don't seem to get that there is
no "security story" for containers in the first place. That isn't their job.

~~~
iguyking
Isn't one of the claims that if you patch the main OS with a new base image
(without changing the libraries, just patching like you normally would), then
with the Dockerfile you can set the application up again in a matter of
minutes?
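That claim can be sketched as follows, assuming a hypothetical app with its own Dockerfile built on a stock base image:

```shell
# Fetch the latest (patched) base image, then rebuild the app on top of it
docker pull debian:latest
docker build --no-cache -t myapp .

# Replace the running container with one from the rebuilt image
docker stop myapp-prod
docker rm myapp-prod
docker run -d --name myapp-prod myapp
```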

------
KaiserPro
docker and openVZ aim to do the same thing.

docker is a glorified chroot and cgroup wrapper.

There is also a library of prebuilt docker images (think of it as a tar of a
chroot) and a library of automated build instructions.

The library is the most compelling part of docker. everything else is
basically a question of preference.
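A rough illustration of the "glorified chroot and cgroup wrapper" point, using the underlying tools directly (paths and names are hypothetical; this is a sketch of the mechanism, not a docker replacement):

```shell
# A container is roughly: a chroot into an image filesystem,
# run inside fresh kernel namespaces...
unshare --mount --uts --ipc --net --pid --fork \
    chroot /var/lib/myimage-rootfs /bin/sh

# ...with resource limits applied via cgroups, e.g. capping CPU time:
mkdir /sys/fs/cgroup/cpu/mycontainer
echo 50000 > /sys/fs/cgroup/cpu/mycontainer/cpu.cfs_quota_us
```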

You will hear a lot about build once, deploy anywhere. whilst true in theory,
your mileage will vary.

what docker is currently good for:

o micro-services that talk on a messaging queue

o supporting a dev environment

o build system hosts

However, if you wish to assign IP addresses to each service, docker is not
really mature enough for that. Yes, it's possible, but not very nice. You're
better off looking at KVM or VMware.

There is also no easy hot migration, so there is no real solution for HA
clustering of non-HA images. (Once again possible, but not without a lot of
heavy lifting; VMware provides it with a couple of clicks.)

Basically docker is an attempt at creating a traditional Unix mainframe system
(not that this was the intention): a large lump of processors and storage
controlled by a single CPU scheduler.

However, true HA clustering isn't easy. Fleet et al. force the application to
deal with hardware failures, whereas VMware and KVM handle it in the
hypervisor.

~~~
ageofwant
> docker and openVZ aim to do the same thing.

docker is a process container not a system container.

> docker is a glorified chroot and cgroup wrapper.

that is fairly immaterial; suffice to say that the underlying Linux core tech
that enables docker has matured enough lately to make a tool like docker
possible. I've built many containers and I never thought about them in terms
of the underlying tech.

> There is also a library of prebuilt docker images (think of it as a tar of a
> chroot)

yes

> and a library of automated build instructions

more accurate to say there is a well-defined DSL for defining containers.

> You will hear a lot about build once, deploy anywhere. whilst true in
> theory, your mileage will vary.

have to agree, this is oversold as most of the config lives in attached
volumes and needs to be managed outside of the container.

> However if you wish to assign ip addresses to each service, docker is not
> really mature enough for that. Yes its possible, but not very nice. You're
> better off looking at KVM or vmware.

Have to disagree here, primarily because each service should live in its own
container; docker is a process container, not a system container. Assemble a
system out of several containers, don't mash it all up into one - most people
don't seem to get this about docker.

> There is also no easy hot migration. So there is no real solution for HA
> clustering of non-HA images. (once again possible, but not without lots of
> lifting, Vmware provides it with a couple of clicks.)

None is required. Containers are ephemeral and generally don't need to be
migrated, they are simply destroyed and started where needed. Requiring 'hot
migration' in the docker universe generally means you are doing it wrong. Not
to say that there is no place for that.

As a final note, all my docker hosts are kvm vm's.

~~~
KaiserPro
_edit_ this sounds like I'm being petty, I apologise, I'm just typing fast.

> docker is a process container not a system container.

Valid. However, the difference between a Docker image and an OpenVZ image is
the inclusion of an init system.

> Have to disagree here, primarily because each service should live in each
> own container, docker is a process container, not a system container.
> Assemble a system out of several containers, don't mash it all up into one -
> most people don't seem to get this about docker.

I understand your point.

I much prefer each service having an IP that is registered in DNS. This means
that I can hit up service.datacenter.company.com and get a valid service
(using well-tested DNS load balancing and health checks to remove or re-order
individual nodes).

It's wonderfully transparent and doesn't require special custom service
discovery in both the client and the service. Because, like etcd, it has the
concept of scope, you can find local instances trivially. Using DHCP you can
say "connect to servicename" and let dhcpd set your scope for you.

> None is required. Containers are ephemeral and generally don't need to be
> migrated, they are simply destroyed and started where needed. Requiring 'hot
> migration' in the docker universe generally means you are doing it wrong.
> Not to say that there is no place for that.

Here I have to disagree with you. For front-end type applications, ones that
hold no state, you are correct.

However, for anything that requires shared state or data, it's a bad thing.
Take your standard database cluster ([no]SQL or whatever) of 5 machines. You
are running at 80% capacity, and one of your hosts is starting to get
overloaded. You can kill a node and start up a warm node on a fresh machine.

However now you are running at 100% capacity, and you now need to take some
bandwidth to bring up a node to get back to 80%. Running extra machines for
the purpose of allowing CPU load balancing aggrieves me.

I'm not advocating writing apps that cannot be restarted gracefully. I'm also
not arguing against ephemeral containers; it's more a case of easy load
balancing and disaster migration. Hot migration means that software is
genuinely decoupled from the hardware.

~~~
hosh
> However the difference between docker image and openVZ images is the
> inclusion of an init system.

No, it isn't. Most people don't use an init system with Docker images.
However, one of the top-10 popular images uses one -- the Phusion Passenger
base images. They make a pretty compelling argument for why you should.

None of these arguments are relevant in the big picture. Where Docker shines
is the package management, not the virtualization. As a package management
system, it is brilliant -- though incomplete. The package management could be
fully content-addressable, at which point we'll have something even more
brilliant than it is now. But it isn't, and I doubt anyone will try it
until after this core concept gets adopted into the mainstream.

Ten years ago, in 2005, I heard these same types of arguments about cloud
providers, the Xen hypervisor, and the AWS API. I saw old mainframe folks
rolling their eyes, saying the technology is old and this is hyped up. Of
course it's hyped up; but unless you can look past the hype and your contempt,
you won't see what's really there. No one is really arguing about cloud
technology now, and the hold-outs are outnumbered by the majority.

------
grhmc
For me, it is the ultimate in the idea in Continuous Delivery of "build once."
I can be very confident that the docker image I build in the first stage of my
pipeline will operate correctly in production. This is because that identical
image was used for unit tests, to integration and functional testing, to the
staging environment and finally production. Nothing differs except
configuration.

This is the core problem that Docker solves, and in such a way that developers
can do most of the dependency wrangling for me. I don't even mind Java anymore,
because the CLASSPATHs can be figured out once, documented in the Dockerfile
in a repeatable, programmatic fashion, and then ignored.

In my opinion the rest of it is gravy. Nice tasty gravy, but I don't care so
much about the rest at the moment.

__Edit:__ As danesparza points out, nobody had mentioned immutable
architecture. This is what we do at Clarify.io. See also:
[https://news.ycombinator.com/item?id=9845255](https://news.ycombinator.com/item?id=9845255)

~~~
odiroot
How do you handle different configurations then? Especially if you need to
provide N values (or structured data).

Also, how do you manage your containers in production?

~~~
falcolas
Configuration files, made available to containers as a read-only mount via the
volume flag. No external network or service dependencies that way.

I'm not terribly fond of using environment variables for configuration,
personally. That method requires either a startup shim or development work to
make your program aware of the variables, and your container manager has to
have access to all configuration values for the services it starts up.
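A sketch of that setup (paths and image tag are hypothetical):

```shell
# Mount a host config directory into the container read-only;
# no environment variables or external config service needed.
docker run -d \
    -v /etc/myapp/config:/app/config:ro \
    myapp:1.2.3
```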

------
shawnee_
Docker is a cute little tool that gives people who aren't that great at Linux
the illusion that they know what they're doing. Throw in the use of some
"Container" semantics and people become convinced it's _that_ easy (and
secure) to abstract away the containers from the kernel.

But it's not, at least in my experience; not to mention that, as of now,
anything running Docker in production (probably a bad idea) is wide open to
the OpenSSL security flaw in versions 1.0.1 and 1.0.2, despite knowledge of
this issue being out there for at least a few days.

Docker's currently "open" issue on github:
[https://github.com/docker/compose/issues/1601](https://github.com/docker/compose/issues/1601)

Other references:
[https://mta.openssl.org/pipermail/openssl-announce/2015-July/000037.html](https://mta.openssl.org/pipermail/openssl-announce/2015-July/000037.html)
[http://blog.valbonne-consulting.com/2015/04/14/as-a-goat-im-skeptical-of-dockers-hype/](http://blog.valbonne-consulting.com/2015/04/14/as-a-goat-im-skeptical-of-dockers-hype/)

~~~
javajosh
_> Docker is a cute little tool that gives people who aren't that great at
Linux the illusion that they know what they're doing._

Well, that's what I personally _hoped_. Then you run into problems, distro
specific problems, and find yourself unable to deal with it without actually
becoming great at linux under a deadline. Docker can actually introduce
tremendous complexity at both the Linux and application level because you have
to understand how an image was prepared in order to use it, configure it, etc.
(Of course, a big part of the problem is that there's no way that I know of to
interactively examine the filesystem of an image without actually running the
image, and accessing the FS from the tools that the image itself runs. This
_has_ to be some sort of enormous oversight either on my part or on Docker's).

~~~
yebyen
I'm sure this is not the answer you are looking for, but you can 'docker
export' a container to a tar file and examine your image file that way.

(1) You're exporting a container, not an image, so if you wanted to export
your image, deploy it to a container first. Run echo or some other noop if you
need to.

(2) This is similar to how git operates. You wouldn't want to examine your git
commits interactively (assuming that means the ability to change them in
place). Well, if you did, git has --amend, but no such thing exists in Docker.

An image with a given id is supposed to be permanent and unchanging,
containers change and can be re-committed, but images don't change. They just
have children.

It can get hairy when you reach the image layer limit, because using up the
last allowed image layer means you can't deploy to a container anymore. So how
do you export the image? 'docker save' -- but 'docker save' exports the image
and all of its parent layers separately, so you need to flatten the result
yourself.

I once wrote this horrible script[1] whose only purpose was unrolling this
mess, since the latest image had the important state that I wanted in it, but
I needed the whole image -- so, untar them all in reverse order and then you
have the latest everything in a single directory that represents your image
filesystem.

The horror of this script leads me to believe this is an oversight as well,
but a wise docker guru probably once said "your first mistake was keeping any
state in your container at all."

[1]:
[https://raw.githubusercontent.com/yebyen/urbinit/del/doit.sh](https://raw.githubusercontent.com/yebyen/urbinit/del/doit.sh)
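For reference, the two verbs being discussed look roughly like this (container and image names are hypothetical):

```shell
# 'docker export' flattens a *container*'s filesystem into one tar;
# run a no-op first to materialize the image as a container.
docker run --name tmp myimage true
docker export tmp > flat.tar

# 'docker save' dumps an *image* with all its parent layers kept separate:
docker save myimage > layered.tar
```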

~~~
fragmede
Given stupid hacks, like "Run echo or some other noop if you need to" to go
from an image to a container, and 'docker commit' to go back from a container
to an image, the distinction between a docker image and docker container seems
a bit academic and a bit of poor UX rather than anything else.

~~~
yebyen
Not really: containers are disposable, and images (at least tags) are somewhat
less disposable. Containers are singular, malleable units and represent a
process's running state; images are atomic, composable and inert -- basically
packages.

You wouldn't say that the difference between a live database and the binaries
compiled from its source code is academic, would you?

I agree that it would make more sense if you could dump the image to a flat
file with a single verb. I also think docker needs an interface to stop a pull
in progress that has stalled or is no longer needed. These are academic
concerns, you can submit a pull request.

------
alextgordon
1\. Stateless servers. Put your code and configuration in git repos, then
mount them as volumes in your docker container. The absolute star feature of
docker is being able to mount a _file_ from the host into the container.

You can tear down the host server, then recreate it with not much more than a
`git clone` and `docker run`.
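That recreation step can be sketched like this (repo URL, paths and image name are hypothetical):

```shell
# Recreate a torn-down host: code and config come back from git,
# the runtime comes from the image.
git clone https://example.com/myapp-config.git /srv/myapp
docker run -d \
    -v /srv/myapp/app.conf:/etc/myapp/app.conf \
    -v /srv/myapp/code:/app \
    myapp-image
```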

2\. Precise test environment. I can mirror my entire production environment
onto my laptop. No internet connection required! You can be on a train, on a
plane, on the beach, in a log cabin in the woods, and have a complete testing
environment available.

Docker is _not_ a security technology. You still need to run each service on a
separate host kernel, if you want them to be properly isolated.

~~~
davexunit
>The absolute star feature of docker is being able to mount a file from the
host to the container.

This is a simple bind-mount and isn't special at all.

    mount("/foo", "/container/foo", NULL, MS_BIND, NULL);

Also, virtual machines have had things like 9p that allow the same thing.

~~~
alextgordon
I don't think there is enough RAM in my laptop to run five VMs simultaneously
:)

~~~
davexunit
Yeah, containers are much slimmer.

------
danesparza
I'm stunned that nobody has brought up the idea of 'immutable architecture' --
the idea that you create an image and deploy it, and then there is no change
of state after it's deployed. If you want a change to that environment, you
create a new image and deploy that instead.

Docker gives you the ability to version your architecture and 'roll back' to a
previous version of a container.

~~~
hueving
Nobody is mentioning it because VMs already did this for more than a decade.

~~~
hosh
This isn't true.

The way VMs handle this doesn't carry the same semantics as the way you can
with Docker. There's a finer-grain composability with Docker that is much more
awkward with VMs.

Docker may not be as great as a virtualization tool as VMs -- security
concerns, complexity, etc. -- but it is a much better package management tool.

------
zwischenzug
Some key points:

- Docker is nothing new - it's a packaging of pre-existing technologies
(cgroups, namespaces, AUFS) into a single place

- Docker has traction, ecosystem, community and support from big vendors

- Docker is _very_ fast and lightweight compared to VMs in terms of
provisioning, memory usage, cpu usage and disk space

- Docker abstracts applications, not machines, which is good enough for many
purposes

Some of these make a big difference in some contexts. I went to a talk where
someone argued that Docker was 'just a packaging tool'. A sound argument, but
packaging is a big deal!

Another common saw is "I can do this with a VM". Well, yes you can, but try
spinning up 100 VMs in a minute and see how your MacBook Air performs.

~~~
arenaninja
Docker is indeed fast and lightweight. It's amazing how much CPU power is
freed up by not running a full-on VM in VirtualBox. That said, I'm wary of
running it in production.

~~~
zwischenzug
Why? Security?

~~~
arenaninja
Yep. I don't understand the security implications well enough to guard against
them. As much as I like cutting edge tech I prefer to not actually cut myself
with it!

~~~
zwischenzug
PM if you want to talk more.

------
akshaykarle
Docker is mainly an app packaging mechanism of sorts. Just as you would build
jars, wars, rpms, etc., you create docker images for your applications. The
advantage you get is that you can package all your dependencies in the
container, thereby making your application self-contained; and using the tools
provided by docker in combination with swarm, compose, etc., it makes
deploying and scaling your apps easier.

OpenVZ, LXC, Solaris zones and BSD jails, on the other hand, mainly run a
complete OS, and their focus is quite different from packaging your
applications and deployments.

You can also have a look at this blog post, which explains the differences in
more detail:
[http://blog.risingstack.com/operating-system-containers-vs-application-containers/](http://blog.risingstack.com/operating-system-containers-vs-application-containers/)

------
jacques_chester
Docker uses the same kernel primitives as other container systems. But it tied
together cgroups, namespaces and stackable filesystems into a simple cohesive
model.

Add in image registries and a decent CLI and the developer ergonomics are
outstanding.

Technologies only attract buzz when they're accessible to mainstream
developers on a mainstream platform. The web didn't matter until it was on
Windows. Virtualization was irrelevant until it reached x86, containerization
was irrelevant until it reached Linux.

Disclaimer: I work for a company, Pivotal, which has a more-than-passing
interest in containers. I did a presentation on the history which you might
find interesting: [http://livestre.am/54NLn](http://livestre.am/54NLn)

~~~
bosse
As an ops guy, I would also mention the benefits of the Dockerfile and docker-
compose.yml, which can serve as clear sources of information about how the
system is built, and which in most circumstances will build the same system in
dev as in prod. By changing a docker tag in the configuration management, I
can roll out a new version quite conveniently to staging and eventually to
production.

The potential minimalism of a container is also an important concept to
mention, with fast startup times and fewer services that could potentially be
vulnerable.
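The tag-bump rollout described above can be sketched like this (file contents, image name and version numbers are hypothetical):

```shell
# Roll out a new version by changing only the image tag in the
# compose file held in configuration management...
sed -i 's|myapp:1.4.1|myapp:1.4.2|' docker-compose.yml

# ...then re-up the service; compose pulls and swaps the container.
docker-compose up -d web
```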

~~~
patsplat
Agreed.

Application runtime dependencies are a common source of communication
breakdowns between development and infrastructure teams. Making the
application container a maintained build file in the project improves this
communication.

docker provides:

* a standard format for building container images (the Dockerfile)

* a series of optimizations for working with images and containers (immutable architecture etc).

* a community of pre-built images

* a marketplace of hosting providers

All at the cost of linux only, which is ok for many shops.

------
sudioStudio64
I think the main thing is to provide an abstraction for applications so that
they aren't tightly coupled to the operating system of the server that's
hosting them. That's a big deal.

Some people have mentioned security...patching in particular. Containers won't
help if you don't have patching down. At the very least it lets you patch in
the lab and easily promote the entire application into production.

I think that the security arguments are a canard. By making it easier and
faster to deploy you should be able to patch app dependencies as well. I, for
one, would automate the download and install of the latest version of all libs
in a container as part of the app build process. Hell, build them all from
source.

IT departments need to be able to easily move applications around instead of
the crazy build docs that have been thrown over the wall for years.

------
jtwebman
It's a tool to make over-engineering every project even easier! All joking
aside, it is a good tool for some teams to make sure the exact same code is
running in production that was tested. I don't think it is for everyone, and
it can make things much more complicated than they need to be. I also don't
think everything needs to be in a Docker container.

------
hmans
Docker is the industry-accepted standard to run web applications as root.

~~~
davexunit
It's unfortunate that Docker _still_ doesn't use user namespaces.

~~~
justincormack
No: they are new and have had many security issues. Just run your containers
as a non-root user; you can use capabilities if you like.
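Concretely, that suggestion looks something like this (image name hypothetical; a sketch, not a full hardening guide):

```shell
# Run as an unprivileged user, drop all capabilities,
# and add back only what the service actually needs:
docker run -d \
    --user 1000:1000 \
    --cap-drop ALL \
    --cap-add NET_BIND_SERVICE \
    myservice
```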

~~~
davexunit
But certain namespaces cannot be created without CAP_SYS_ADMIN. Sure, you can
drop privileges later, but a privilege escalation exploit in the container
gives the attacker root access outside of that container, too. Sure, user
namespaces have had issues, but they seem a hell of a lot safer than no
isolation at all. Furthermore, user namespaces allow unprivileged users to
create containers, too, which is particularly exciting.

------
csardi
I like this presentation, as it shows what Docker really is, and also how to
use Docker without Docker: [https://chimeracoder.github.io/docker-without-
docker/#1](https://chimeracoder.github.io/docker-without-docker/#1)

------
mariocesar
The most common pro is "build once, deploy everywhere". Even if that's
possible, I always feel that pushing a 500 MB tar image to the production
servers is more of an annoyance than a help. Yes, you can set up your own
registry, but maintaining the service, securing it, adding user permissions
and maybe using a proper backend like S3 is an extra annoying layer and
another component that could fail.

If the docker tool had something like `docker serve` that started its own
local registry, that would be more than great.
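For what it's worth, the official registry image gets fairly close to that wish (image names hypothetical; a sketch):

```shell
# Run a throwaway local registry and push an image to it:
docker run -d -p 5000:5000 --name registry registry
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage
```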

For this case, switching to Go was a great solution: building the binary is
everything you need.

As for docker being helpful for development: definitely yes. I switched to
postgres, elasticsearch and redis containers instead of installing them on my
computer; it's easy to flush and restart, and having different versions of
services is also more manageable.

------
spaceisballer
I know you have some other questions that I am not qualified to answer, but I
recalled seeing a similar question asked not that long ago.

[https://news.ycombinator.com/item?id=9805056](https://news.ycombinator.com/item?id=9805056)

------
dschiptsov
To create a buzzword to attract investors' money. It is professional brand
management at work.

------
hosh
You're coming at this from the wrong direction, namely virtualization.

What differentiates Docker is not virtualization, so much as package
management. Docker is a package management tool that happens to allow you to
execute the content of the package with some sort of isolation.

Further, when you look at it from that angle, you start seeing its flaws as
well as its potential. It's no accident that Rocket and the Open Container
Project are arising to standardize the container format. Other, less-well-known
efforts include distributing the containers themselves over a p2p system such
as IPFS.

~~~
pekk
Somehow this wasn't well explained, leading to a really persistent
misunderstanding that comes up any time Docker is mentioned; you can see in
this very thread someone claiming to be an expert with Linux and saying that
Docker is no good because it doesn't sufficiently abstract away from the
kernel, as if that were its purpose. There is always someone on hand to claim
that Docker is nothing more than cgroups, again, as if the packaging part of
this didn't even exist.

~~~
hosh
Fair enough!

I ran through the same thing too. I used to work for Opscode. I joined them
because I like the idea of "infrastructure-as-code." I remember when Docker
came around, I was scratching my head. There was a part of me that thought it
has something, and another part that was thinking, why would anyone want to
use this? Wouldn't this set us back to the time when infrastructure is not
code? I couldn't put my finger on it. And what's really funny is that the
"container" metaphor explains this well -- and I had spent time reading up on
the history of physical, intermodal containers and how they changed our global
economy to boot. The primary point of intermodal containers isn't that they
isolate one merchant's goods from another's; it is that there is a standard
size for containers that can be stacked in predictable ways, and moved from
ship to train to truck quickly and efficiently. You are no longer loading and
unloading pallets and individual goods; you are moving containers around.
Package management. A lot of logistics companies at the time didn't get this
either.

Most of the literature out there explains Docker as virtualization, or some
confused mish-mash of "lightweight virtualization", or "being able to move
containers from one machine to another." They pretty much circle around the
central point of package management without nailing that jelly to the wall.

~~~
nickstinemates
For what it's worth, we use this metaphor a lot, along with the same wording
in pretty much every pitch we do, both public and private.

What I find interesting about Docker is that different people get excited
about different aspects of it.

One of the major reasons I love working at the company - I get to watch them
have the same feeling I did over 2 years ago: the feeling that Docker can help
with something they find painful in their daily work.

~~~
hosh
Thanks for sharing. I remember the pictures of intermodal containers for
explaining this.

Sadly, I also see writeups that focus too much on the virtualization aspect.
The journalists are searching for something to compare it to, so Docker gets
compared to other virtualization and resource isolation tools.

Growing pains, I suppose?

~~~
nickstinemates
The media often looks for conflict; the easiest target is Docker vs. VMWare.
But as you correctly point out, we don't really see it that way.

------
Kiro
For me I don't understand the purpose at all. I have a few node.js and PHP
services. Why do I need isolation and have them in containers? If I want an
identical environment when developing I can use Vagrant.

~~~
SEJeff
FYI: Vagrant can use docker, rendering your argument invalid :)

[http://docs.vagrantup.com/v2/provisioning/docker.html](http://docs.vagrantup.com/v2/provisioning/docker.html)

Docker is about running isolated environments in reproducible ways. I get a
container working just so on my desktop, ship it to an internal registry,
where it gets pulled to run on dev and qa. It works identically to how it
works on my desktop, then I ship it to production. One image that works the
same on all environments. That is what docker was for, developer productivity.

------
pjc50
The description on HN the other day of Docker as a souped-up static linking
system is the most interesting one.

------
tobbyb
OpenVZ or LXC give you OS containers like KVM or VMware gives you virtual
machines. Unlike OpenVZ, LXC does not need a custom kernel and is supported
in the mainline Linux kernel, paving the way for widespread adoption.

Docker took the LXC OS container template as a base, modified the container
init to run a single app, builds the OS file system from layers (aufs or
overlayfs), and disables storage persistence. That is the app container.
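
Each instruction in a Dockerfile becomes one of those file-system layers; a
sketch of a single-app image (names illustrative):

    FROM ubuntu:14.04                                # base OS layer
    RUN apt-get update && apt-get install -y nginx   # adds a layer
    COPY site/ /usr/share/nginx/html                 # adds another layer
    CMD ["nginx", "-g", "daemon off;"]               # the one foreground app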

This is an opinionated use of containers that adds significant complexity; it
is more a way to deploy app instances in a PaaS-centric scenario.

A lot of the confusion around containers stems from the absence of informed
discussion of the merits and demerits of this approach, and of the
understanding that there are easy-to-use OS containers like LXC, perfectly
usable by end users the way VMs are, and then app containers that do a few
more things on top of them.

You don't need to adopt Docker to get the benefits of containers; you adopt
Docker to get the benefits of Docker, and often this distinction is not made.

A lot of users whose first introduction to containers is Docker tend to
conflate Docker with containers, and thanks to some 'inaccurate' messaging
from the Docker ecosystem think LXC is 'low level' or 'difficult' to use. Why
would anyone try LXC if they think that? But those who do will be pleasantly
surprised by how simple and straightforward it is.
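
For comparison, creating and entering a plain OS container with the stock LXC
tools looks roughly like this (the container name and distro/release are
illustrative):

    lxc-create -t download -n web1 -- -d ubuntu -r trusty -a amd64
    lxc-start -n web1
    lxc-attach -n web1    # a shell inside a full OS container, like a lightweight VM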

For those who want to understand containers, without too much fuss, we have
tried to provide a short overview in a single page in the link below.

[https://www.flockport.com/containers-minus-the-
hype](https://www.flockport.com/containers-minus-the-hype)

Disclosure - I run flockport.com, which provides an app store based on LXC
containers and tons of tutorials and guides on containers, which can hopefully
promote more informed discussion.

~~~
GeertJohan
Docker no longer uses LXC as its default execution environment. They created
their own, called libcontainer¹. With the new opencontainers movement, the
package has been moved to runc².

[1]
[https://github.com/docker/libcontainer](https://github.com/docker/libcontainer)

[2]
[https://github.com/opencontainers/runc](https://github.com/opencontainers/runc)

~~~
tobbyb
That's why I used the word 'took': Docker used LXC as a base till version 0.9,
until it got enough traction, at which point it basically recreated LXC with
libcontainer.

But that was not the point. The point is that you have always had perfectly
usable end-user containers from the LXC project, even before Docker. Then a
VC-funded company, Docker, bases itself on LXC, markets itself aggressively,
and suddenly a lot of users think LXC is 'low level' or 'difficult to use'?
This messaging is coming from the Docker ecosystem, and the result is the user
confusion we see on most container threads here.

Informed discussion means people know what OS containers are, what value they
deliver, and what Docker adds on top of OS containers so there is less
confusion and FUD, and users can make informed decisions without a monoculture
being pushed by aggressive marketing.

But that discussion cannot happen if you are in a hurry to 'own' the container
story and cannot clearly acknowledge that alternatives exist and what value
exactly you are adding on top. I see people struggling with single-app
containers, layers, and lack of storage persistence when they are simply
looking to run a container as a lightweight VM.

The 'open container movement' is yet one more attempt to 'own' container
technology and perpetuate the conflation of Docker with containers. How can an
'open container movement' exclude the LXC project, which is responsible for
the development of much of the container technology available today? It should
ideally be called 'Open App Container', because there is a huge difference
between app containers and OS containers. OS containers provide capabilities
and deployment flexibility that app containers simply cannot, because they are
a restrictive use case of OS containers. Container technology as a whole
cannot be reduced to a single PaaS-centric use case.

------
theknarf
Docker is a way to create immutable infrastructure, which is a key component
for a) having software work the same in test and prod (hint: DevOps) and b)
creating servers that can scale both vertically and horizontally.

I think that's the best way I can summarise what Docker _is_.

------
mbrock
I don't know much about virtualization technology, but Docker is nice for me
because it's an accessible, well-known, and rather easy way to make
applications straightforward to run.

Where I've worked in the past, setting up a new development or production
environment has been difficult and relied on half-documented steps, semi-
maintained shell scripts, and so on. With a simple setup of a Dockerfile and a
Makefile, projects can be booted by installing one program (Docker) and
running "make".
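
A sketch of that kind of setup (the project and image names are hypothetical):

    # Makefile
    IMAGE = mycompany/myproject
    
    build:
    	docker build -t $(IMAGE) .
    
    run: build
    	docker run --rm -it $(IMAGE)

With a Dockerfile next to it, "make run" is the entire onboarding procedure.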

You could do that with other tools as well, but Docker, and even more so the
emerging "standards" for container specification, seems like an excellent
starting point.

------
bfirsh
This explains the difference between Docker and normal virtualization
technology:
[https://www.docker.com/whatisdocker](https://www.docker.com/whatisdocker)

------
corradio
Might be interesting for you: [https://medium.com/using-artificial-
intelligence-to-make-tec...](https://medium.com/using-artificial-intelligence-
to-make-technology/engineering-a-fast-feedback-infrastructure-6f6f132e5807)

------
johnminter
I think one useful purpose was described by Prof. Mine Cetinkaya-Rundel of
Duke at the recent UseR conference. She teaches an introductory statistics
class for non-majors. Docker lets her spin up individual virtual machines for
each student with all the packages they need for the class without all the
sys-admin headaches of getting all the software on everybody's systems. You
can see her slides and evaluation of the alternatives here:

[https://github.com/mine-cetinkaya-
rundel/useR-2015/blob/mast...](https://github.com/mine-cetinkaya-
rundel/useR-2015/blob/master/r_studio_docker.pdf)

------
lgunsch
Simply put, Docker is _operating system virtualization_:

[https://en.wikipedia.org/wiki/Operating-system-
level_virtual...](https://en.wikipedia.org/wiki/Operating-system-
level_virtualization)

Edit: formatting.

------
somberi
A meta critique after reading 139 comments: I had the same question as the
parent, and from the ensuing conversations I assume that either Docker is so
thin-layered (not in a bad way) that it is open to many interpretations, or it
is so thin-layered (in a trivial way) that one does not need to get all worked
up adopting it if one is comfortable using other VM options out there (like
OpenVZ, for example).

------
theneb
I find Docker quite good for integration tests where you need to test against
a third-party piece of software. Lots of images exist on the Hub for this.

------
justincormack
OpenVZ is not upstream in the kernel; the container stuff that got merged is
what Docker uses. Docker has much wider adoption than OpenVZ does now.

~~~
jdoss
> Docker has much wider adoption than OpenVZ does now.

I don't think your statement is true at this point in time. OpenVZ is used by
a ton of companies in the hosting industry and by large companies such as
Groupon and smaller ones like TravisCI [1]. I wouldn't claim that Docker has
wider adoption than OpenVZ at this point in time. Maybe in five years it will.
OpenVZ and commercial VZ have been doing full OS containers since the early
2000s, and it has the production track record to do very well in many server
applications. I wouldn't hesitate to use it over Docker in production for my
future projects.

[1]: [http://changelog.travis-ci.com/post/45177235333/builds-
now-r...](http://changelog.travis-ci.com/post/45177235333/builds-now-running-
on-openvz)

~~~
justincormack
Travis moved to Docker after that [1]. And the "hosting industry" is not what
it used to be since the cloud.

[1] [http://blog.travis-ci.com/2014-12-17-faster-builds-with-
cont...](http://blog.travis-ci.com/2014-12-17-faster-builds-with-container-
based-infrastructure/)

~~~
jdoss
Very cool on the Docker move by Travis. I still think Docker has a long way to
go to overtake OpenVZ. Docker is gaining steam, but its adoption rate isn't
wider than OpenVZ's. Not yet.

I agree that the hosting industry isn't what it used to be. Most of the larger
hosting providers are not keeping up with the current trends and deployment
methods, but that is mostly because they do not need to change. Most people
who are buying commodity hosting don't have a team of developers and
operations guys to use all the cool new cloud methods like Docker.

------
xaduha
Someone (Darren Shepherd?) compared Docker to Ajax. It's not a technological
breakthrough, it's another kind of breakthrough.

I think it was here [1], but deleted now.

[1] [http://ibuildthecloud.tumblr.com/post/63895248725/docker-
is-...](http://ibuildthecloud.tumblr.com/post/63895248725/docker-is-lxcs-ajax)

------
tfn
I went ahead and blogged an answer here: [http://blog.tfnico.com/2015/07/the-
sweet-spot-of-docker.html](http://blog.tfnico.com/2015/07/the-sweet-spot-of-
docker.html)

TL;DR: It's better for deploying applications and running them than using
home-made scripts.

------
atsaloli
See [http://stackoverflow.com/questions/29304951/difference-
betwe...](http://stackoverflow.com/questions/29304951/difference-between-
docker-and-openvz)

------
programminggeek
It exists to create jobs in devops.

------
kolyshkin
[Disclaimer: I am the guy who was running OpenVZ since the very beginning, and
if you hate OpenVZ name/logo, I am the one to blame. Also, take everything I
say with a grain of salt -- although I know, use, like and develop for Docker,
my expertise is mostly within OpenVZ domain, and my point of view is skewed
towards OpenVZ]

Technologically, both OpenVZ and Docker are similar, i.e. they are containers
-- isolated userspace instances relying on Linux kernel features such as
namespaces. [Shameless plug: most of the namespaces functionality is there
because of OpenVZ engineers' upstreaming work]. Both Docker and OpenVZ have
tools to set up and run containers. This is where the similarities end.

The differences are:

1 system containers vs application containers

OpenVZ containers are very much like VMs, except for the fact they are not VMs
but containers, i.e. all containers on a host are running on top of one single
kernel. Each OpenVZ container has everything (init, sshd, syslogd etc.) except
the kernel (which is shared).

Docker containers are application containers, meaning Docker only runs a
single app inside (i.e. a web server, a SQL server etc).
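
The difference shows up directly in the tooling; roughly (the container ID and
template name are illustrative):

    # OpenVZ: a full system container with its own init, sshd, syslogd ...
    vzctl create 101 --ostemplate centos-6-x86_64
    vzctl start 101
    vzctl enter 101
    
    # Docker: one application process per container
    docker run -d nginx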

2 Custom kernel vs vanilla kernel

OpenVZ currently comes with its own kernel. Ten years ago there were very few
container features in the upstream kernel, so OpenVZ had to provide its own
kernel, patched for container support. That support includes namespaces,
resource management mechanisms (CPU scheduler, I/O scheduler, User
Beancounters, two-level disk quota etc), virtualization of /proc and /sys, and
live migration. Over ten years of work by OpenVZ kernel devs and other
interested parties (such as Google and IBM), a lot of this functionality is
now available in the upstream Linux kernel. That opened the way for other
container orchestration tools to exist -- including Docker, LXC, LXD, CoreOS
etc. While there are many small things missing, the last big thing --
checkpointing and live migration -- was also recently implemented upstream;
see the CRIU project (a subproject of OpenVZ, so another shameless plug -- it
is OpenVZ who brought live migration to Docker). Still, OpenVZ comes with its
own custom kernel, partly to retain backward compatibility, partly because
some features are still missing from the upstream kernel. Nowadays that kernel
is optional but still highly recommended.

Docker, on the other hand, runs on top of a recent upstream kernel, i.e. it
does not need a custom one.

3 Scope

Docker has a broader scope than OpenVZ. OpenVZ just provides you with a way to
run secure, isolated containers, manage them, tinker with resources, live
migrate, snapshot, etc. But most of OpenVZ's functionality is in the kernel.

Docker has some other things in store, such as Docker Hub -- a global
repository of Docker images, Docker Swarm -- a clustering mechanism to work
with a pool of Docker servers, etc.

4 Commercial stuff

OpenVZ is the base for a commercial solution called Virtuozzo, which is not
available for free but adds some more features, such as a cluster filesystem
for containers, rebootless kernel upgrades, more/better tools, better
container density etc. With Docker there's no such thing. I am not saying it's
good or bad, just stating the difference.

This is probably it. Now, it's not that OpenVZ and Docker are opposed to each
other; in fact, we work together on a few things:

1\. OpenVZ developers are authors of CRIU, P.Haul, and CRIU integration code
in Docker's libcontainer. This is the software that enables checkpoint/restore
support for Docker.

2\. Docker containers can run inside OpenVZ containers
([https://openvz.org/Docker_inside_CT](https://openvz.org/Docker_inside_CT))

3\. OpenVZ devs are authors of libct, a C library to manage containers, a
proposed replacement for or addition to Docker's libcontainer. When using
libct, you can use the enhanced OpenVZ kernel for Docker containers.

There's more to come, stay tuned.

------
droidztix
reading while eating popcorn ( ͡° ͜ʖ ͡°)

