
Docker Part One: Containers - cpeterso
http://www.theregister.co.uk/2014/11/28/docker_part_1_the_history_of_docker/
======
xyby
I have read a lot about Docker now. Here and elsewhere. I have even been to
Docker talks in my town. Still I do not get what the use case is.

Every time I ask that, I get the answer "When you want to run applications
separated from each other in their own environment". Yeah, I get that. But
what is the real-world use case for that? Who wants to do that? Now I will get
answers like "Everybody who wants to run applications in containers" :) Yeah
ok. But who? Do bloggers need Docker? People who run online shops? Would
Y Combinator benefit from running Hacker News inside a Docker container?

~~~
drinchev
Imagine you have 20 services that need to run with your website (5 database
daemons, 5 Node.js apps, 5 nginx instances, 2 DNS servers, 2 SMTP mail
daemons, 1 git repository).

Your infrastructure consists of 5+ virtual servers that host your services.
Each virtual server might be database only, or db + dns, or web + dns, or ... you
get the point.

What Docker allows you to do is isolate all those 20 services in separate
containers, so that if any of them gets hacked, your attacker ends up
compromising only that service, without any access to the main server.

Docker also allows you to version-control the configuration and the
container for each service, so that if you need to reinstall a
service (redeploy your app) it is less of a hassle, and since it is
containerized you will not be compromising your other services while doing
that.

etc.
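
A layout like the one described above is often sketched as a Compose-style
file; the service names and images below are hypothetical, purely to
illustrate the one-service-per-container idea:

```yaml
# Hypothetical docker-compose.yml: each service runs in its own
# container, so compromising one does not grant access to the others.
web:
  image: nginx:1.7
  ports:
    - "80:80"
app:
  image: example/node-app   # hypothetical application image
  command: node server.js
db:
  image: postgres:9.3
  environment:
    POSTGRES_PASSWORD: example
```

Each entry becomes its own container with its own filesystem and process
space, while the version-controlled file itself is the "configuration you can
redeploy from" that the comment describes.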

~~~
davidw
So you have 20 containers each running their own version of the OS? Don't you
have to spend a lot of time keeping all of those up to date with
OS/library/app updates?

It seems like the kind of thing that makes sense if you have a big deployment
anyway, so you might as well hive stuff off, but makes much less sense if
you're a "small" shop.

~~~
msie
No. The containers share the host's OS kernel.

~~~
xorcist
Can you make changes in the underlying OS image and have those changes
directly visible in the application images, without restarting said
applications?

------
SwellJoe
I gasped at this assertion: "Virtuozzo, by the way, is still the best
container system available. Unfortunately, it isn't open source, and that
means you have to pay if you want to use it. Virtuozzo, by the way, has the
distinction of being the only halfway decent container solution to (currently)
run natively on Windows."

I don't even know how to process a comment like that about Virtuozzo. The
fragility of Virtuozzo and OpenVZ is infamous. We get more support queries
about our software running on OpenVZ and Virtuozzo containers than from all
other virtualization types combined (our stuff runs on a few hundred thousand
hosting servers, so we have a really good sample size).

For years, whenever we get a mysterious "this thing stops for no reason" or
"this error doesn't make any sense" or "sometimes it works and sometimes it
doesn't" support request, literally our _first_ clarifying question is, "Is
this an OpenVZ or Virtuozzo system?" It is, frankly, broken by design, with
regard to memory allocation.

~~~
patrickg_zill
I don't know when they changed from "user_beancounters" to "VSwap", but VSwap
fixes all these problems. It was also a problem with FreeSWITCH (the VoIP
server software, like Asterisk). Under VSwap-enabled OpenVZ there is not a
single problem.

~~~
SwellJoe
I wish they would tell their customers. We have not stopped seeing these types
of problem reports. And I've heard numerous times in the past that the problem
was solved, so I'm hesitant to believe that it is finally fixed now.

------
wmf
I don't think this is a very good article in general and especially not for HN
where Docker has been covered in detail already. (The history of the LXC vs.
libcontainer split is pretty far inside baseball and not even that interesting
IMO.)

Speaking of Docker, I did find some interesting repos today:
[https://github.com/socketplane/socketplane](https://github.com/socketplane/socketplane)
and [https://github.com/ibuildthecloud/only-docker](https://github.com/ibuildthecloud/only-docker)

~~~
ldlework
These are definitely exciting times for container networking!

------
cgb_
One of the things I've found interesting about rapidly popular open-source
solutions in the last 1-2 years is how quickly venture capital funding comes
in and drives the direction of future development.

Docker has a whole ecosystem of bolt-on GitHub projects that fill gaps the
Docker project hasn't had time to fill themselves, because they are too
busy organising conferences, withholding features until the keynote of said
conference, and working out how to spend the millions of dollars on future
development.

Even the gap-filler GitHub projects are getting VC funding (see
[http://weaveblog.com/2014/12/03/announcing-weaveworks-and-our-series-a-funding/](http://weaveblog.com/2014/12/03/announcing-weaveworks-and-our-series-a-funding/)).
Weave will either become the standard networking stack, or be made
irrelevant by an in-tree, featureful implementation.

So Docker is on a tightrope, trying to innovate whilst avoiding stepping on
the toes of their VC-funded 'partners', so their response was a simple core
with a plugin framework for partners to integrate into. Anyone else picturing
fragmentation in 6-12 months' time?

------
markdown
I reached part 3 and still don't know a thing about Docker other than its
history and how it's the best thing since sliced bread, and it's amazing that
it is what it is because it's amazing.

Does part 4 finally deliver on the promise in the title?

~~~
WestCoastJustin
Bit of a personal plug, but I've put together a screencast about Docker, and
people seem to find it helpful as an overview @
[https://sysadmincasts.com/episodes/31-introduction-to-docker](https://sysadmincasts.com/episodes/31-introduction-to-docker)

------
polysics
I have a more general question for you: I am a senior developer who is
starting to figure out devops because I need to deploy two different products
we built. I have limited Chef and Ansible experience and pretty good Unix
skills. Should I start out with Docker or am I missing some crucial pieces I
should go through first? The use case is "media processing services" in one
case, and a sports betting game in the other.

~~~
vacri
I used puppet (masterless) in a small company, and in my new company I'm using
Ansible and Docker. Puppet and Chef are great, but they're overkill unless you
have large infrastructure. If there's only one of you doing 'ops', you don't
have large enough infra (in my opinion). Ansible is easy to configure and
learn, and doesn't wrap you in dependency hell like puppet can.

I would avoid Docker until you have a use-case that Docker solves for you. Use
your preferred distro's packaging technique. It can be a bit of a pain (.deb
in particular) to start with, but once you're familiar, it's portable and
manageable with your OS's native tools. Things 'just work', and don't need an
agent to be installed and kept up-to-date. If something goes wrong, it's in
your OS's standard logs, not some strange arcane location. And most
importantly, you have a bigger pool of interneteers to help learn from and
troubleshoot with. I can't complain about the Docker IRC channel, good
experience there, but there's still fewer of them.

If you are a .debber and want your own repo, check out Aptly, which makes life
much easier than previous repo software on offer.
[http://www.aptly.info/](http://www.aptly.info/)

TL;DR: Docker is 'another _thing_ to learn', and it's not trivial. Put it off
until it solves a problem for you; there's plenty of other stuff to get up to
speed with.

------
crooksey
I'm sorry, but if you cannot see the benefits of running programs inside a
container, then you have been reading the wrong article. I have been using
FreeBSD jails for years, and now Docker on Linux deployments as well.

Let's say you have a server that runs a mail server, database server, web
server (proxy) and application server.

If they all run without containers, and one service gets compromised and a
root exploit is found, that's it: game over.

If you have a service that starts eating up memory, with proper configuration
it can't overload the root server. Basically, each jail/container can only see
itself, and any exploits cannot affect the host system or other
jails/containers (when configured correctly).
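
The memory point above maps directly onto the resource limits containers
expose. A hypothetical compose-style fragment (the image name is invented)
capping one service:

```yaml
# Hypothetical fragment: cap a container's memory so a runaway
# service cannot starve the host or its neighbouring containers.
app:
  image: example/app   # hypothetical image
  mem_limit: 512m      # the container is constrained before it hurts the host
```

The same cap can be set ad hoc with `docker run -m 512m ...`.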

It also allows for easy expansion: when one jailed/containered service gets
too large for the server, just move it to another server quickly and easily.

It also allows for speedy deployment: with Docker you can bundle everything on
your laptop, create an image, then ship it straight to an external host like
EC2 or Google Cloud (for example). With the addition of pre-built containers
for Django/Rails/Postgres/MySQL etc., it creates a ready working environment
for developers who might not be too hot on configuring systems. The "shipping"
ability of Docker is a by-product of the container, which is another added
benefit. There are loads more features than what I have stated here; this is
just a very brief summary.
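
The "bundle on your laptop, ship it" workflow boils down to declaring the
environment in a Dockerfile, building an image, and running that same image on
the remote host. A minimal hypothetical sketch for a Django-style app (paths
and commands invented for illustration):

```dockerfile
# Hypothetical Dockerfile: everything the app needs is declared here,
# so the image built on a laptop runs unchanged on EC2/Google Cloud.
FROM python:2.7
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

`docker build -t example/app .` produces the image; pushing it to a registry
and pulling it on the remote host yields the same environment everywhere.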

~~~
mrweasel
> if one service gets compromised, and a root exploit is found, that's it game
> over.

For root exploits, isn't Docker toast as well? I haven't followed Docker in
much detail, but does Docker actually promise that commands run as root will
be contained?

~~~
xorcist
There is a certain level of isolation for root even inside containers, but in
the case of a privilege-escalation exploit you would most probably achieve
"real" root even from inside a container.

------
mardiros
Can someone explain why "If you use Puppet, chances are you love Docker
..." (from Part 2)?

