
Docker will change everything - sathishmanohar
http://sathishmanohar.com/articles/docker-will-change-everything/
======
stox
Yes, it will change everything...back to what it once was. You will be billed
by CPU time, I/O time, network consumption, etc., just as we were on the old
mainframes. Hopefully, this will provide the feedback loop necessary to reward
efficient programming once again.

~~~
samstave
BOFH is dead. Long live BOFH.

~~~
entendre
Thanks, Sid Dabster

------
rdtsc
> automates the deployment of any application as a lightweight, portable,
> self-sufficient container that will run virtually anywhere.

Bullshit. How do I run it on RHEL 5? Because DoD systems are chock full of
those. What about Windows? There I have to install VirtualBox and Vagrant,
then run Docker inside the virtualized Linux environment, and then run Docker
containers in that.

So "virtually anywhere" means on a handful of systems.

Sometimes a technology's biggest "fans" are its biggest enemies without even
realizing it.

~~~
kulkarnic
That is unnecessarily harsh. Docker legitimately represents a new idea, and,
some claim, a better idea. Yes, it won't run on RHEL 5; but Java probably did
not run on DEC mainframes either (back when its selling point was "write once,
run anywhere").

The question seems to be this: does this container model represent a step
forward? I think it does. (And is it hyped a bit too much? Probably :-)

~~~
seiji
"new idea" is a huge stretch. More accurate: "new idea to the mass hacker nerd
HN market."

Solaris has had—for years—container-based virtualization with local virtual
networking and de-duplicating filesystem snapshots you can tar up and send
across machines.

It's new in that they made a nice abstraction on top of the Linux clone of
Solaris containers, then marketed the crap out of it, so everybody became aware
of non-VM compartmentalization and fine-grained resource allocation.

In short: still awesome, but not "they invented it" new.

~~~
FooBarWidget
That's what I thought too. It's just FreeBSD jails brought to Linux, finally.

But I was wrong. Play with it for a while, and you will realize that Docker is
not just about jails on Linux. Docker is about making portable,
self-contained apps easy. It's about making sharing those containers easy.
It's about making using those containers easy.

The core technology consists of a jail, yes. But once you've worked with the
Docker registry and the Dockerfile (and its wonderful caching mechanism), and
experienced just how easy it is to get something up and running, you will see
that Docker > jails. Docker is much more than that.
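
For illustration, a minimal sketch of the kind of Dockerfile being described;
the base image tag, package, and paths here are invented. Each instruction
becomes a cached layer, so a rebuild after an app-only change reuses every
layer above the ADD line:

    # base image and packages: cached, re-run only when these lines change
    FROM ubuntu:12.04
    RUN apt-get update && apt-get install -y ruby

    # app code goes last, so a code change invalidates only this layer
    ADD . /srv/app
    CMD ["/srv/app/start.sh"]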

------
andrewfong
Can someone comment on what I would use Docker for as opposed to what I would
use Chef / Puppet / Ansible for?

~~~
rdtsc
Docker is a very lightweight VM. Or a much nicer chroot. However you want to
look at it.

The underlying technology is LXC. It is very cool stuff.

But the top article is full of bullshit. It doesn't run "virtually anywhere":
it runs on the few systems that support LXC, or you have to build a VM that
does support LXC so you can host your Docker containers inside that VM.

How does that "Yo Dawg" meme go...?

~~~
jpetazzo
"A few systems that support LXC", that's actually most modern Linux systems
now. Anything with a 3.8 kernel is game.

~~~
rdtsc
> that's actually most modern Linux systems now. Anything with a 3.8 kernel is
> fair game.

So, Linux systems that started shipping just this year?

Well, even that is not quite true, because LXC support was backported to RHEL
6/CentOS 6, which is very cool.

Not all people live dangerously and install the very latest release on their
servers.

Ubuntu LTS (which is a common server platform) also doesn't natively support
LXC well; it has an older kernel. So the instructions for installing Docker
are to install a new kernel. A new, custom-compiled kernel on a production
system. Is that still Ubuntu 12.04 LTS? Some devops people will argue it
isn't; some will say it's fine.

~~~
krschultz
Skate where the puck is going. Is it a comprehensive solution today? No. But I
would rather see Docker double down on making the project great for those who
have bought in than spend their resources building compatibility for people who
have not. It's the difference between asking your customers what features
they would find most useful and asking people who aren't your customers what
features they think you need to build before they'll buy. The latter set of
feedback is almost always useless.

~~~
rdtsc
> No. But I would rather see Docker double down on making the project great
> for those who have bought in than spend their resources building
> compatibility for people who have not.

I would rather see that too. So we agree there. If you read my comment, it was
about making ridiculous statements about compatibility, not that I want or
need more compatibility.

On a more general level it is about being dishonest, and about how ardent fans
are enemies in disguise. They make ridiculous statements about the product, and
then the creator of the product ends up having to manage the PR.

------
pjungwir
Docker sounds very interesting, but I have a ton of questions. I'm very
familiar with Chef and Vagrant, and passingly familiar with FreeBSD jails, but
I'm having trouble wrapping my head around Docker. Here are some questions:

- Why use ENTRYPOINT rather than init.d scripts?

- When I'm building the Dockerfile, how do I do something like `vi
/etc/apache2/sites-available/my-app`? In other words, how do I drop a whole
config file onto the image, the way I'd use Chef files/templates? (See the
sketch after this list.)

- Is it possible to parameterize a Dockerfile, so people can use my image but
adjust the nginx timeout?

- Do Dockerfile commands need to be idempotent? For example, will this cause
problems?: `RUN echo "deb http://archive.ubuntu.com/ubuntu precise main
universe" > /etc/apt/sources.list`

- If I'm using Docker to host Rails-on-unicorn, do I still use Capistrano to
push a new deploy, or do I spin down the whole Docker container and launch
another with my latest code? If the latter, are there any existing tools for
that?
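
For the config-file question, a hedged sketch: in a Dockerfile, ADD copies a
file from the build context onto the image, which is roughly what a Chef
file/template resource does. The file name and paths below are invented:

    # assumes a my-app.conf kept next to the Dockerfile in your repo
    FROM ubuntu:12.04
    RUN apt-get update && apt-get install -y apache2

    # ADD drops the version-controlled config file onto the image
    ADD my-app.conf /etc/apache2/sites-available/my-app

As for parameterizing, one era-appropriate route is baking defaults in with
ENV and overriding them at run time with `docker run -e`, rather than at build
time.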

~~~
calgaryeng
If you get the answers to these questions, could you please let me know???
calgaryeng[at]hotmail.com

------
snird
He explains the problem well, but I have yet to see someone who explains the
Docker solution, in more detail, well.

~~~
prodigal_erik
It could have been clearer that the author is advocating containers as an
alternative to virtual machines for isolating your apps from other users on
shared servers. For some reason a lot of other people are advocating
containers to isolate your apps from each other on your own server (or VM);
there's very little reason to do that unless you have a bunch of poorly
managed apps with incompatible dependencies which didn't come from your .rpm or
.deb library.

------
xacaxulu
IT WILL CHANGE EVERYTHING!!! Including orbits of small moons around planets we
haven't discovered yet.

------
josh-wrale
IMHO: On Docker, I'd say it will change the rhetorical "everything" for
stateless tiers. Stateful tiers will resist this change. I'm not yet
comfortable upgrading to a new version of MySQL on a database using Docker in
application-container mode, that is, without some package-management-like
scripts (e.g. to upgrade the dataset). For stateful tiers, full LXC containers
are changing everything, at least until full-container support arrives in
Docker (because, duh, it uses LXC).

~~~
shykes
> _at least until full-container support arrives in Docker_

If by "full container" you mean booting a full /sbin/init in a container,
Docker supports that. Just `docker run /sbin/init` :)

> _I'm not yet comfortable upgrading to a new version of MySQL on a database
> using Docker_

Check out volume sharing (`docker run -volumes-from`); it allows you to share
the same data volumes between several containers. You can use that primitive
to upgrade a database, as well as to run short-lived jobs which affect the
dataset: migrations, backups, etc.
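
A rough sketch of that pattern, using the single-dash flags of the era; the
image and container names are invented:

    # a container that owns the data volume; it can exit, the volume persists
    docker run -v /var/lib/mysql -name mysql_data ubuntu true

    # the database container mounts the same volume
    docker run -d -volumes-from mysql_data -name db my/mysql-image

    # a short-lived job (a backup here) sees the exact same data
    docker run -volumes-from mysql_data ubuntu \
        tar czf /tmp/mysql-backup.tgz /var/lib/mysql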

~~~
josh-wrale
Thank you shykes.

I didn't know about the volume sharing. It sounds cool, but I'll need to think
more about how such a workflow might work in production. My initial thought is
that such a database upgrade would be considerably more difficult to
coordinate than say a yum/apt package upgrade. This is unfortunate but it's
not entirely a deal breaker (with proper DevOps, anyway).

The full container concept is only partially implemented it seems, per the
issues linked below. Parity with LXC for a more full VM-like experience would
definitely motivate me to adopt Docker quicker in more places (e.g. stateful
tiers).

Any chance these issues could be given high priority?

[https://github.com/dotcloud/docker/issues/1960](https://github.com/dotcloud/docker/issues/1960)
[https://github.com/dotcloud/docker/issues/2276](https://github.com/dotcloud/docker/issues/2276)

Thanks, Joshua

------
guelo
"self-sufficient container that will run virtually anywhere"

Anywhere that is Linux.

~~~
shykes
Check out boot2docker: http://github.com/steeve/boot2docker

It's a 25 MB VM which boots into a working Docker installation in 5 seconds. I
use it on my Mac with VirtualBox, and connect to it from the native Docker
client.

That way I can run unmodified Docker containers almost anywhere :)
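
Connecting the native client to a daemon inside a VM is just a matter of
pointing it at the daemon's TCP socket. A sketch, assuming the VM's address
below and the pre-1.0 default port of 4243:

    # talk to the Docker daemon running inside the boot2docker VM
    docker -H tcp://192.168.59.103:4243 ps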

------
vpsserver
> lightweight, portable, self-sufficient container that will run virtually
> anywhere.

It doesn't work on typical cheap VPS servers from http://lowendbox.com/
because they use OpenVZ.

~~~
Joeboy
That's where I found the cheap (£5 per month, from NECS) VPS I run my Docker
instances on. Maybe it is atypical.

~~~
supergauntlet
It'll certainly work with a cheap Xen or KVM instance but those are usually
more expensive.

------
tomrod
I'd love to understand what use cases Docker is meant for. I read a lot of
comments where people talk about outsourcing functions their laptop would
normally run, but I'm not sure of the comparative advantages of that method.

I wonder a few things:

1. Could tighter security be implemented by outsourcing certain functions to
Docker -- the base computation could allow only certain routines to run, for
example, disallowing any trojan activity? (Similar to blocking ports, but at
an I/O or computing level.)

2. How is this tech beneficial to the consumer market? As the consumer market
goes, so goes the business market (as we saw with Microsoft/Apple/iOS
adoption -- consumers demanded it, then wanted to use it in business
situations).

3. Is the tech Docker is using proprietary, trade secret, or protected by
patent? In our current environment tech firms face terrible regulatory
uncertainty due to crappy patents. Can a viable competitor to Docker exist?

~~~
nickstinemates
> Use cases

Continuous Integration / Delivery / Deployment, embedded systems,
software/service distribution, PaaS, SaaS... A shorter list would be the use
cases Docker is _not_ a candidate for.

> Beneficial to consumers

Efficient Ops allows for innovation to reach consumers faster and more often.

> Proprietary?

Nope.

Docker is open source and open design. Want to change the direction of the
project? Open a pull request and get 2 core maintainers to agree. Want to
change those rules? Open a pull request to change project governance and get 2
core maintainers to agree.

~~~
tomrod
> Efficient Ops allows for innovation to reach consumers faster and more
> often.

So I have an IRC channel where I can contact the developers at all times, and
rollout of upgrades is automated? That's good, I guess, but hardly
groundbreaking.

If I'm Joe C. ComputerUser with Windows XP running on a circa-2007 Thinkpad,
what is Docker and why do I need it?

Good to hear it's open source.

~~~
nickstinemates
If you're Joe C. ComputerUser with Windows XP, you're likely consuming
services that are powered by Docker. That's the 'consumer benefit'.

Docker isn't Excel or Word or some 15-year-old game. It's server-oriented
software/technology, and thus, from a usage standpoint, it targets that
community.

~~~
tomrod
So if things aren't being changed for the masses, how will "everything
change"?

I mean, isn't this the economic equivalent of "if a tree falls in the woods
and no one is around, does it make a sound?" couched in technological lace?

------
fotcorn
Why not just create an AWS/DigitalOcean/Rackspace instance for every service?
When I need different versions of the same software, I use another instance or
something like Python virtualenvs.

~~~
runako
Because your utilization on each of those service instances will be lower,
which means that you will be paying for unused resources. The more instances
you have, the larger the waste.

~~~
falcolas
Not really - overcommitting VM resources lets you get similar sharing.

What Docker really buys you over individual instances is lower resource
overhead devoted to OS operations.

~~~
ithkuil
Sure, it's just that usually you don't control the instance sizes of <insert
your favourite cloud provider here>.

It's useful to be able to subslice your provisioned resources. That said, you
can do it with plain LXC; no need for Docker.

Docker is just a nice way to package components and make them run together in
a predictable way. That's the aspect I miss most from the discussion around
Docker.

------
CmonDev
"an open source project to pack, ship and run any application as a lightweight
container" \- I am so excited! How exactly do I ship and run my XNA/Mono game
on Android, WP, iOS?

------
jamra
How do you scale with Docker? I get the idea of how containers work. I get the
idea of how it shares resources between like containers. That's all very cool.
My question is: how do I scale a Docker container? It's not well covered in the
tutorial. Do I just add more containers? What happens when I want to scale
across separate servers? I don't see any configurability of the items inside
the containers.

~~~
nickstinemates
> Do I just add more containers?

Yep.

> Separate servers?

Today, manual. Docker host clustering is a feature that still needs to be
worked out.

> Configurability of items inside of containers?

The `link` primitive in the 0.7 release is the key to this one. While
primitive currently, it's already extremely powerful and will only get better.
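
Roughly what linking looks like in the syntax of that release; the container
and image names are invented, and the injected variable name is from memory,
so treat this as a sketch:

    # start a database container with a well-known name
    docker run -d -name db my/postgres-image

    # link it into the app container under the alias "db"; docker injects
    # env vars such as DB_PORT_5432_TCP_ADDR into the "web" container
    docker run -d -link db:db -name web my/webapp-image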

~~~
jamra
Thank you.

~~~
nickstinemates
Anytime :) This is what I do, so please don't hesitate to ask.

------
siliconc0w
I still think there needs to be a bridge between SCM tools and Docker. Docker
allows you to easily share an environment, but there is still value in
documenting how that environment was built - preferably via a flexible
language (chef/puppet/ansible) instead of just sequential bash commands.

Is there already a project out there that does docker-based continuous
integration?

~~~
shykes
Docker does this.

* With _docker build_ you can automate the building of a docker container from source. Simply add a _Dockerfile_ to your source to specify exactly how to assemble it, step by step. This can be done with the high-level build or configuration tool of your choice.

* With _docker history_ you can point to any docker container and get a full record of exactly how it was built. (A short sketch of both commands follows this list.)

* We are adding image signing to allow verification of the origin of any particular container image.
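
For illustration, the first two commands above; `myapp` is an invented tag:

    # build an image from the Dockerfile in the current directory
    docker build -t myapp .

    # list every layer of the image and the instruction that produced it
    docker history myapp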

~~~
AjithAntony
Do you literally mean from source, or from a base image?

If I understand correctly, this is only true down to the base image. All the
Docker docs talk about basing images on well-known images from the index. I'd
very much like to know how "ubuntu" is prepared, for example.

~~~
shykes
Yes, I literally mean from source.

The source, in turn, can either 1) specify a base image to start from (keeping
in mind the base image can recursively be itself built from source), or 2)
start from scratch and contain all the starting bits.

Of course there is always a "turtle at the bottom". That problem exists
regardless of Docker - for example, when you download binary packages from an
apt mirror, each package has been built from source - but in which build
environment exactly, with which build dependencies, etc.? Docker simply makes
this question more explicit by forcing you to ask it at every step of the way
- _what is the base?_ - instead of sweeping it under the rug, which we do
every day when we install binary packages.

What we're doing for the docker standard library is exposing exactly what the
turtles at the bottom are:
https://github.com/dotcloud/stackbrew/tree/master/library
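
One common way such a bottom turtle gets made, as a sketch: build a root
filesystem with debootstrap and feed the tarball to `docker import`. The
suite, mirror, and image name are illustrative:

    # assemble a minimal Ubuntu 12.04 root filesystem on the host
    sudo debootstrap precise ./rootfs http://archive.ubuntu.com/ubuntu

    # tar it up and import it as a brand-new base image
    sudo tar -C ./rootfs -c . | docker import - my/ubuntu-base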

------
Kiro
> But what happens when there is another rails application that uses the same
> stack. you need to setup another instance identical to the previous one to
> run the second application in production.

Why not just run them on the same "instance"? What does "instance" even refer
to?

~~~
sathishmanohar
Multiple applications can be run on same "instance" using some configuration,
but for a elastic setup which requires to spin up and down instances according
to demand, Its not a viable option. Since, it complicates things too much,
which is why I haven't mentioned that in the article.

------
steeve
Hey folks, if you want to try out Docker easily, you can use
https://github.com/steeve/boot2docker

It's a 23 MB ISO that boots in 6 seconds and lets you get started right away.

------
mey
FreeBSD jail isn't dead yet.

------
andyl
Question: if I were to deploy a Rails app using Docker, can I put the whole
stack in a single container (ruby/postgres/redis/sidekiq/etc.), or do I need a
separate container for each process?

~~~
jamtur01
You can do both. Docker containers by default are designed to run a single
process, but you can use a process manager like supervisord, or a script, to
run many too.
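
A rough sketch of the supervisord route, with invented program names; the `-n`
flag keeps supervisord in the foreground so it can serve as the container's
single entry process:

    # Dockerfile excerpt
    RUN apt-get update && apt-get install -y supervisor
    ADD app.conf /etc/supervisor/conf.d/app.conf
    CMD ["/usr/bin/supervisord", "-n"]

and the corresponding supervisor config:

    ; app.conf: one [program] section per long-running process
    [program:unicorn]
    command=/srv/app/bin/unicorn -c /srv/app/config/unicorn.rb
    [program:redis]
    command=/usr/bin/redis-server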

~~~
FooBarWidget
It's nonsense that Docker containers are designed to run a single process.
There's absolutely nothing that encourages having a single process.

In fact, I think you _should_ run multiple processes, because running just a
single process is _wrong_. It leaves you without many kinds of important
system services, such as syslog. Take a look at
https://github.com/phusion/baseimage-docker for a full explanation.

This is also the reason why I wrote baseimage-docker: to make it easy for
other Docker users to get things right. I highly encourage people to use
baseimage-docker.

~~~
pjungwir
I'll look at your Github repo, but is there any documentation on running
multiple processes per container? The tutorials on docker.io all include just
one process. And having separate ENTRYPOINT and USER directives suggests that
if you want to run several things at once, they must all run as the same user.

~~~
FooBarWidget
My Github repo _is_ the documentation. It teaches you how to run multiple
processes by having them managed through runit. It even provides a script that
makes it easy to run a process as a different user.

> And having separate ENTRYPOINT and USER directives suggests that if you want
> to run several things at once, they must all run as the same user.

It does not. That's not how the Unix process model works.

When you boot the Linux kernel, it also wants you to specify a single entry
point. The kernel starts that process as root. Yet your Linux system clearly
has multiple processes, running as different users. Why? It's because the
entry point is the init system, which spawns other processes. You can do the
same thing inside Docker.

It's exactly the same as how your host system works! Nothing has changed!
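
A minimal sketch of what that looks like with runit, as baseimage-docker does
it; the service name, user, and binary path are invented. runit's runsvdir
scans /etc/service and keeps each service's `run` script alive:

    #!/bin/sh
    # /etc/service/myapp/run -- runit starts and supervises this script
    # chpst drops privileges, so each service can run as its own user
    exec chpst -u appuser /srv/app/bin/server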

~~~
pjungwir
Okay, thank you for putting all that together. Running everything out of
runit/etc is about what I figured I'd do if I wanted multiple services in one
container, so it's great someone has already done all the work for me. :-)

------
auggierose
Nope, Docker is not going to change everything. I could tell you what is, but
you won't believe me.

~~~
DannoHung
I'll bite, what _will_ change everything?

~~~
jetblackio
Haven't you heard? WolframAlpha's new language!!!
http://venturebeat.com/2013/11/29/sentient-code-an-inside-look-at-stephen-wolframs-utterly-new-insanely-ambitious-computational-paradigm/

