
Is Docker ready for production? - EtienneK
https://t37.net/is-docker-ready-for-production-feedbacks-of-a-2-weeks-hands-on.html?
======
contingencies
This appears to be a good, honest and multi-faceted review against real-world
requirements. It is a shame so many replies are "it works for me and I have
_n_ instances, so you must be wrong" instead of truly substantive. The author
is bringing up points of concern, not rubbishing the project. FWIW, as I've
frequently shared, my own summary of the area is at
[http://stani.sh/walter/pfcts/](http://stani.sh/walter/pfcts/), and a general
architectural response to the same sorts of concerns is at
[http://stani.sh/walter/pfcts/original/](http://stani.sh/walter/pfcts/original/).

------
freshflowers
My general impression of Docker is that for most of us, it adds complexity
with very little upside. Only when you have an already complex orchestration
does Docker help you reduce that complexity.

I'm expecting that to shift as both Docker and the ecosystem of tools and
services around it mature, to the point where in a few years' time Docker may
even be advantageous for the simplest setups.

I'm really excited about Docker, but personally I'm not expecting to have any
use for it in production for another two years.

~~~
robeastham
Check out [http://deis.io/](http://deis.io/) and follow the Heroku buildpack
instructions to take some of the headache out of all the DevOps questions that
Dockerfiles etc. might pose if you are just a humble web developer. Deis runs
on CoreOS, an excellent base OS that requires you to use Docker. CoreOS also
upgrades automatically in the background.

Deis is a fully open-source PaaS inspired by Heroku. You can dip your toe in
the water by spinning up a Deis cluster and then just following the Heroku-
inspired/compatible workflow described in the Deis docs. When you are ready to
experiment, Deis lets you use Dockerfiles instead of a Heroku-inspired
workflow.

Once you have your Deis cluster up you might even want to sidestep it entirely
and run Docker containers directly on your CoreOS cluster.

Deis can be used with Vagrant and VirtualBox, and it can also deploy to
multiple clouds (EC2, Rackspace, DigitalOcean, bare metal; e.g. Kimsufi
should be possible: [http://bit.ly/1t2PPXB](http://bit.ly/1t2PPXB)). I've been
playing around with it and it looks great so far.

~~~
anentropic
I keep looking at Deis for these reasons... but the getting-started guides in
the docs, e.g. for DigitalOcean, ask you to provision 3 nodes of at least 2GB.
That's $60/month when I only need one small node to run a few basic websites.

It seems too heavyweight a solution (a Deis cluster has its own instances of
PostgreSQL, Redis, a Docker image registry, etc.) for the 'humble web
developer' scenario.

Plus it's a bit worrying to use Heroku buildpacks on something that's not
Heroku... you're going to be relying on the Heroku docs and then having to
filter out details that aren't relevant or are incompatible.

~~~
gabrtv
Deis maintainer here. We understand Deis can be a bit heavy for those who
don't want a multi-node HA setup. For the 'humble web developer' we strongly
recommend Dokku. In fact, we're now sponsoring that project:
[http://deis.io/deis-sponsors-dokku/](http://deis.io/deis-sponsors-dokku/)

------
caw
My coworker and I have been fighting with Docker for the last few weeks like
the OP, so maybe HN can help us here. Right now we use Chef to provision the
hosts and we run services normally. Now we're trying to introduce Docker into
our test environment, so we don't need to replicate a multi-node database
cluster out of machines. In the process, it seems like we're having to repeat
most of our Chef recipes to create the config files, and put them in the
appropriate locations to let Docker mount them as a shared volume.

Are we doing it wrong? Should everything be Docker and all of our Chef recipes
should be written for Docker? Or is this right and just a natural pain of
having both bare metal processes and Dockerized processes? Unlike the article,
so far everything is 1 process or application per container, not a full OS.

~~~
eigenrick
The thing about Chef is that it tries to simplify the process of configuring
and installing many applications that might run on a server.

Docker pretty much removes the need for such complexity. Don't think of it
as a provisioning tool like Chef. Don't think of it as a VM. Think of it as
an isolated filesystem and process environment that eliminates the complexity
of supporting multiple, disparate application stacks.

The story for persistent storage with Docker is less compelling, because now
you have shared resources between containers that should be isolated. That
said, we run Cassandra clusters in Docker and have no trouble.

Also, if you want to re-use the work you've done with Chef, there are Chef
plugins for Docker which make it easy to provision Docker images with Chef
(chef-solo, I think).

~~~
caw
I follow that I'm not provisioning a full node, and parts of Chef
installations are definitely unneeded, but where I'm stuck is I still have a
config file that I need to control.

In the case of Cassandra I'd have certain cassandra.yml tweaks I need to make,
possibly different for beta/staging/prod, and docker-cassandra needs that
config file. I also need to spawn the containers with the right setup, such
that the right ports get exposed and shared volumes get mounted so the configs
get read in. The spawning is a slightly different ops problem (do I want a
homogeneous cluster where I just spawn containers somewhere as needed, or do I
want certain containers on certain machine types?), but it seems like
something that would fall in the domain of a provisioning tool.

Should I just be building the configs into the container and rebuild the
container if they need to change? The isolation of the process seems at odds
with my desire to have a semi-dynamic, centrally managed configuration.

I'm also considering that Chef itself is overkill for managing Docker
containers and that there are other tools better suited to managing Docker-
based infrastructure, but I'm not sure what they are.

~~~
grosskur
Take a look at confd, a single statically-linked executable that can expand
config file templates using environment variables:

[https://github.com/kelseyhightower/confd](https://github.com/kelseyhightower/confd)

Basically, you add confd into your Docker image and execute it at runtime to
do just-in-time config file generation. Here's an example for nginx:

[https://github.com/grosskur/nginx-confd-dockerfile](https://github.com/grosskur/nginx-confd-dockerfile)

You can create separate environment variable files for beta/staging/prod and
pass --env-file to "docker run".

This lets you use the same Docker image across all your environments and avoid
the operational complexity of mounting config files in Docker volumes.

------
dedene
Has anyone come up with a way to speed up the "bundle install" step in a
Docker build? The smallest change will cause this step to completely rerun,
which takes a long time for a Ruby application with lots of gem dependencies.

One approach might be to base the final Docker image on another Docker image,
which has a snapshot of all Rubygem dependencies at a certain point. In the
depending image, the 'bundle install' will then do an incremental update and
the Docker build will go a lot faster.

But I was wondering how other people are solving this?

~~~
imdsm
Makes sense to have two docker repositories (images).

In one you install deps, and in the other, which relies on the first, you
install your app. If you change your deps, then you rebuild the first.

I'm not a Ruby dev, but this seems like it should be quite simple?

~~~
IanCal
You can achieve that by letting the caching take care of it for you; there's
no need for two images there.

I think OP wants to not have to reinstall _every_ dependency again when they
update _just one_. For that I think you'd have to use multiple Gemfiles, one
for slow things that rarely change and the other for anything else new.
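A sketch of that split-Gemfile idea (file names like `Gemfile.stable` are
invented here, and this assumes the official ruby base image; the point is
that Docker's build cache only re-runs a RUN step when a file ADDed above it
changes):

```dockerfile
# Hypothetical layout: heavy, rarely-changing gems in their own Gemfile
# so their layer stays cached across ordinary application changes.
FROM ruby:2.1
WORKDIR /app

# Slow-to-build, stable gems -- this layer is almost always a cache hit.
ADD Gemfile.stable /app/
RUN BUNDLE_GEMFILE=Gemfile.stable bundle install

# Fast-moving gems -- only this layer rebuilds when they change.
ADD Gemfile /app/
ADD Gemfile.lock /app/
RUN bundle install

# Application code last, so code edits never invalidate the gem layers.
ADD . /app
```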

------
vidarh
Personally, I think it shows that this is a "first look" at Docker. Much of
this is better than the post indicates.

> The final image is 570MB big. I could not shrink it more unless I remove the
> whole Python and Perl stack. Since both are necessary for many system
> dependencies, starting with apt-get, this was not possible. I still need a
> way I can improve or upgrade my container.

?!? The article starts by pointing out they use immutable servers and
blue/green deployment. In that context, you will not improve or upgrade the
container: you build a new one. And if you want to cut build dependencies from
the final container: do the build in one container, install the build
artefacts to a volume, and use the contents of that volume to build a
container without the build dependencies.

It'd be great to get "built in" support for this, but it's not hard to do.
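The two-step build described above could look roughly like this (image and
path names are invented, and Docker had no built-in support for this at the
time, so the two commands in the comment are the glue):

```dockerfile
# Runtime Dockerfile for step 2 of a hypothetical two-container build:
#
#   1. docker run -v "$PWD/artifacts:/out" myapp-build   # compiles into /out
#   2. docker build -t myapp-runtime .                   # uses this file
#
# Compilers, Puppet modules and ssh keys live only in myapp-build and
# never reach the image that ships.
FROM debian:wheezy

# Only the finished build artefacts come in:
ADD artifacts/ /opt/myapp/

EXPOSE 8080
CMD ["/opt/myapp/bin/server"]
```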

> There’s no easy way logging with Docker.

The standard way of logging with Docker is to log to standard out, which gets
captured and is accessible via "docker logs". If he'd not dismissed systemd
out of hand, he'd also easily have gotten it fed into journald, with the
option of having it relayed to a remote or local syslog as per his
preferences.

> Let’s put it this way: as a way of provisioning a container, Dockerfile is a
> joke.

We don't need more complex provisioning tools. We have plenty of provisioning
tools. Ultimately Dockerfiles needs to be able to specify what should be
copied into the image. Everything else you can do with your standard/preferred
build tools. There's no reason for Dockerfiles to try to become yet another
fully featured provisioning tool.

> Forget your classic monitoring (unless you want to pull your hair with
> network bridges). Everything you’ll be able to monitor within the container
> are ports. That because you run the old school nrpe inside your host, so you
> won’t be able to check you actually have 8 workers running inside your
> container.

This is just flat-out wrong. Anything running on the host can see the
processes running in the container. With the right cgroup/namespace
manipulation (via nsenter etc.) it can also see the mounted volumes or
network namespace of a container, and so you can still monitor whatever you
like.
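As a toy illustration of the point that container workers are just host
processes: a host-side check can count them and verify liveness directly.
Background sleeps stand in for the article's "8 workers" here; a real check
would match on the worker command line instead:

```shell
# Spawn eight stand-in "workers" and remember their PIDs, the way a
# host-side monitor could collect them from ps/pgrep for a real service.
pids=""
for i in 1 2 3 4 5 6 7 8; do
    sleep 60 &
    pids="$pids $!"
done

# Liveness check: kill -0 probes a process without signalling it.
workers=0
for p in $pids; do
    kill -0 "$p" 2>/dev/null && workers=$((workers + 1))
done
echo "$workers" > worker_count.txt
echo "running workers: $workers"

if [ "$workers" -ge 8 ]; then
    echo "WORKERS OK"
else
    echo "WORKERS CRITICAL"
fi

# Clean up the stand-in workers.
kill $pids 2>/dev/null
```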

> Making your application Docker compliant requires you to rethink the way it
> works.

Making your application take _advantage_ of Docker, rather than treating
Docker containers as sort-of VMs with less isolation, requires you to rethink
the way it works. It's not something you need to do in one go - you can
"break apart" a larger app environment piece by piece.

> Then the tag nightmare begins. If I update my application and add new deps,
> I’ll have to update container #2. Unfortunately, how will I know I have to
> do that?

Uh. How does he know he has to update the machine images he deploys his
applications to today? Personally I use make - tracking build dependencies is
what it is for.
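The make approach could be sketched like this (target, directory and image
names are all invented; the stamp files record the last successful build of
each image):

```makefile
# Hypothetical sketch: let make decide when each image needs rebuilding.
# "make app-image.stamp" rebuilds the deps image first if, and only if,
# the Gemfiles or its Dockerfile changed.
deps-image.stamp: deps/Dockerfile deps/Gemfile deps/Gemfile.lock
	docker build -t myapp-deps deps/
	touch $@

app-image.stamp: deps-image.stamp app/Dockerfile $(wildcard app/src/*)
	docker build -t myapp app/
	touch $@
```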

~~~
kitsune_
The main point he makes is valid, however:

> Porting your application to Docker increases complexity. Really.

I think the main problem of Docker is that it's sold as an 'easy solution' by
many bloggers who only deal with it superficially and then move on to the next
big thing. There are a lot of gotchas with docker containers and the creation
of clean docker images that are not immediately clear when you start out. A
lot of your standard Linux know-how is not applicable.

edit: Also, there are obvious security issues that are not immediately clear
to most beginners, most certainly not from the tutorials.

One of my favorites: If you provision your database container with environment
variables to create a dba user, and then link your db container to your app
container, voilà, your app container will now most certainly have the dba
login and password inside its environment variables:
[https://github.com/docker/docker/issues/5169](https://github.com/docker/docker/issues/5169)

~~~
curun1r
The reason people call it easy is because it makes a lot of things that are
traditionally hard very, very easy. One need only have written Chef scripts
for any considerable amount of time to appreciate just how much easier it is
to write a Dockerfile. And the things that CoreOS/fleet, Kubernetes and
(hopefully) EC2 Container Service do aren't just difficult to do without
something like Docker, they're basically impossible. And as much as I like our
DevOps teams, the fact that Docker has basically made meetings with them a
thing of the past is a truly wonderful thing.

That's why it's so frustrating to see developers making superficial forays
into Docker and then declare it to be too complex. Yes, the simple and largely
irrelevant stuff does get a bit more complex, and you have to do some learning
(and re-learning) before you use it for a production workload. But that's a
trade-off a lot of us are willing to make so that the crazy-hard stuff gets
significantly easier.

We're developers. Our tools should not be optimized for first use.

~~~
GrinningFool
"That's why it's so frustrating to see developers making superficial forays
into Docker and then declare it to be too complex."

That goes both ways. It's equally frustrating to see developers making
superficial forays into it and declaring it to be the magic bullet that makes
everything simple.

The basic fact is that building and deploying complex software, managing
dependencies, handling discovery - these are all complex things. There is no
solution that makes it simple because it is inherently not a simple process.

Instead, we can only shuffle the complexity to places we're more comfortable
in managing. For some use cases that's a dockerfile. For others, it's chef
cookbooks [or other CM solution]. For yet others, it's both.

------
Roritharr
Thank you! I was sceptical of Docker for a while just based on a bad gut
feeling, but now that you've shown a few of the rough spots I think I might
devote some time to it.

~~~
imdsm
What you wrote: Thank you! I was sceptical of Docker for a while just based on
a bad gut feeling, but now that you've shown a few of the rough spots I think
I might devote some time to it.

What I read: Thank you! I've been looking for an excuse not to like docker for
a while and this gives me great ammunition for my argument against it.

~~~
darklajid
Reading comprehension question (not a native speaker):

I read the GP as 'I was skeptical, but looking at the list of rough points in
the article might make me give it a try myself' - as a kind of 'well, if
_those_ are the bad parts, it might actually be interesting after all'.

Failure on my side?

~~~
Roritharr
That's exactly what I meant. I'm wary of things that people shout about from
the rooftops, so some criticism is necessary to evaluate them better. If those
things are the main problems, I think it might be worth my time.

------
lelf
Alternatively, if you aren't Linux-dependent and don't mind that not everyone
is writing blog posts about it, BSD's jails are a lightweight virtualisation
mechanism that has been production-ready for decades. (And ZFS pools and
snapshots are yours too, along with that.)

~~~
twic
Jails are great. But - and apologies if I'm teaching my grandmother to suck
eggs here - they're the equivalent of Linux's containers, as implemented in
LXC or libcontainer or whatever.

Docker is a layer on top of that - it's what prepares the file contents of the
jails and looks after them while they're running. I think Docker can even
manage FreeBSD jails, although I'm not certain about that.

It's not the production-readiness of Linux containers that is in question
here; they're fine, although nowhere near as mature as jails. The doubt is
about docker, the layer on top. If you wanted to make a comparison to jails,
it would be to whatever the equivalent of Docker is in the jail ecosystem. I'm
not sure what that is; either there isn't one, or it's an ad-hoc pile of site-
local shell scripts.

~~~
feld
I'm not sure what the benefits of Docker are over using Ansible or Salt to
build and start your Jail. It can automate the entire process, and it's very
repeatable.

~~~
twic
Ah, now Docker and Ansible / Salt / legacy configuration management tools are
also at slightly different places in the stack. Configuration management
tools, as you say, give you a nice repeatable way of automatically building up
a machine. Docker lets you take a machine you've built up and run lots of
copies of it. I believe a typical way to use Docker is to have it run Ansible
etc to build the contents of a container, and then use the result as an image.

I think the point of doing it that way rather than just running Ansible etc on
all your containers is that it means you're running N copies of one master
image, rather than running N images which you hope are the same because
they've been configured the same way. This doesn't seem like a colossal win to
me, but some people seem to like it.

Why it's Docker that runs Ansible etc., rather than the other way round, I
don't know. It's a bit like how rpmbuild runs the build tools that make the
contents of RPMs, I suppose. But I really hate rpmbuild, so I don't find that
analogy very encouraging.

As something of an aside, I get the impression that Docker is most popular
with people, or organisations, or people in organisations where for whatever
reason developers don't have a lot of control over the configuration of the
machines their software runs on. That could be traditional, inflexible, siloed
organisations, or small, flexible organisations which just don't have an
existing investment in infrastructure automation or the resources to make one.
In those situations, Docker gives developers a way to control the environment
around their software without having to configure machines. This strikes me as
a workaround for a problem rather than a solution to one, though.

~~~
feld
But I can do that with jails. I can build a master jail and launch a bunch of
jails from it... or distribute it to other machines... extra easy if I use ZFS
snapshots. Handle the distribution/deployment and maybe any extra
configuration with your Salt/Ansible/whatever and you're fine.

What I see as a problem here for many people is that they don't want to do
this all themselves, and maybe Docker offers conveniences out of the box. I
get that. Someone needs to work on the jails tooling to make it more
approachable so people aren't reinventing the wheel.

------
kimi
Two weeks? We have been running Docker in production for over a year and have
thousands of separate instances. No big issues so far. All that is said is
true (or at least points to an area of concern); it's just a matter of
weighing pros and cons.

------
mikepurvis
What's the present best practice for defining an application as a set of
multiple inter-linked containers? I know of at least Shipyard and Panamax,
and don't especially care for either of them.

Are there others? Is anyone clearly winning in this space?

As it stands, it seems far easier to just create a monolithic container with a
bunch of running processes and supervisord than to break up the pieces and
then have a more complicated deployment.

~~~
splawn
We are using fig. We're just starting to get our feet wet with Docker, so I
don't know if it's considered "best practice". It seems to meet our needs so
far.

[http://www.fig.sh/](http://www.fig.sh/)

------
IanCal
A lot of parts of this don't really make much sense:

> As I’m looking for a way to build a container without having it host the
> whole build environment (such as Puppet modules, ssh private keys to the Git
> repository, etc)

Why would you need to check out something from git within the container? That
sounds like an unusual setup. Normally you'd check out your repo, then run a
build from within it.

> A Dockerfile to build a container with a basic Ruby stack.

> A Dockerfile from #1 to build the deps: checkout the application on Github,
> install the packages, run bundle install, then remove the application. I’ll
> be able to share this container with both applications. Big win!

> A Dockerfile from #2 to download the application from Github, then setup
> everything. So my dependencies are already installed, and it goes fast every
> time I don’t need to update them.

Shouldn't this just be an application with a Dockerfile like this:

    ADD Gemfile /app/
    ADD Gemfile.lock /app/
    WORKDIR /app
    RUN bundle install
    ADD . /app

Check out repo.

Run docker build.

That way it's all cached, and you don't repeatedly build the dependencies. Has
the author not seen Docker's build caching?

------
drothlis
> If I want to keep my containers separated, I can’t have them communicate
> with a UNIX socket, unless I create a shared volume. Once again it’s a no go
> for me.

What do you mean by "shared volume" -- docker's "-v" to bind-mount the socket
into the container? How else would you like to expose the socket to the
container? Why is it a no go?

~~~
vidarh
I don't get that one either. It seems like he objects to it on the basis that
it reduces isolation, but so will exposing ports via TCP/IP, so I don't get
what he thinks he gains by avoiding volumes in situations like that.
Especially since you can easily enough share just the socket file and nothing
else.

------
Rafert
> The minimal Ubuntu 12.02 image is 627MB. Add your own application layer, and
> your container will most likely weight more than 1GB.

I've read that using Debian (or if possible Busybox) helps in this area. See
[http://container-solutions.com/2014/11/6-dockerfile-tips-official-images/](http://container-solutions.com/2014/11/6-dockerfile-tips-official-images/)
and
[http://jonathan.bergknoff.com/journal/building-good-docker-images](http://jonathan.bergknoff.com/journal/building-good-docker-images)
for tips.

> Every time your application is updated, check the Gemfile md5 and see if
> it’s different from the latest build

Wouldn't you need to check Gemfile.lock? With optimistic/pessimistic version
constraints you could get newer gems without updating the Gemfile.

~~~
calineczka
> Wouldn't you need to check Gemfile.lock?

Exactly. It should be Gemfile.lock that is checked, not Gemfile.
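The check the article describes could be sketched in shell like this (the
stamp file name is invented, the stand-in lockfile just makes the sketch
self-contained, and the actual docker build is elided):

```shell
# Hypothetical sketch: rebuild the dependency image only when Gemfile.lock
# changes, by comparing its md5 against the one recorded at the last build.
lockfile="Gemfile.lock"

# Stand-in lockfile so the sketch runs on its own:
printf 'GEM\n  specs:\n    rails (4.1.6)\n' > "$lockfile"

new_sum=$(md5sum "$lockfile" | cut -d' ' -f1)
old_sum=$(cat .deps-image.md5 2>/dev/null || echo none)

if [ "$new_sum" != "$old_sum" ]; then
    echo "Gemfile.lock changed - rebuilding deps image"
    # docker build -t myapp-deps .   (elided)
    echo "$new_sum" > .deps-image.md5
else
    echo "deps image up to date"
fi
```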

------
ptype
This has been submitted previously. Seems like a ? has been added to the end
of the URL, which prevents it from being flagged as a duplicate. See
[https://news.ycombinator.com/item?id=8408291](https://news.ycombinator.com/item?id=8408291)

~~~
Xylakant
Adding a query parameter is the accepted way to resubmit a link. It also
seems to be generally accepted to resubmit if the original submission did not
produce a fruitful discussion. Articles sometimes drop off the homepage very
fast even though they're interesting, for example if they're submitted during
US night time or if some other, controversial topic dominates the day. I saw
the original submission and I like the new submission here.

~~~
peterwwillis
This is the third or fourth time this story has been posted
([https://news.ycombinator.com/item?id=8527213](https://news.ycombinator.com/item?id=8527213))
and the post was originally written on 09 September 2014.

~~~
Xylakant
Well, this time it gained traction, and I gained insight from the discussion.
I'm fine with that.

------
kimi
Two weeks? We have been running Docker in production for over a year and have
thousands of separate instances. No big issues so far. All that is said is
true; it's just a matter of weighing pros and cons.

~~~
ubersol
I would be really curious to see how you are actually managing these thousands
of separate instances. Do you have some kind of management GUI for this? Are
you managing them through the command line? If so, are you running into issues
where managing them takes up most of your time and adds more complexity to
your environment? I honestly want to see a good example of how you manage your
Docker containers when you have thousands of them instead of, say, 10.

------
72deluxe
I was surprised to see complaints about building libsqlite3 and ImageMagick,
presenting them as tricky. It isn't, is it? This is a Linux guy, right?

~~~
twic
The complaint was specifically about building a fully static version, I think.
I have no idea how to do that, but I imagine it's not as simple as ./configure
&& make.

~~~
Sanddancer
Both sqlite and ImageMagick will build static versions of their libraries by
default with ./configure && make.

