
Docker in Production: A retort - crymer11
http://patrobinson.github.io/2016/11/05/docker-in-production/
======
carapace
I've never used Docker, or containers, but I read about things like "Breaking
changes and regressions ... a well documented problem with Docker" and "Can’t
clean old images ... a well known issue" and it just seems to me like a crazy
thing to try to use and depend on this thing/company. Bluntly put, they seem
like children.

So nevermind a retort, what I would like to see is a sane, sensible "business
value" cost/benefit, pros v. cons breakdown of just what the heck you're
actually gaining (and losing) using Docker vs. some other
architecture/methodology. Because absent that it's all just hype and kool-aid
drinking in my opinion.

What would help with the above is if people would document what they are doing
with Docker _that works_ , because either they are hurting but not realizing
it, or the author of the article is just "doing it wrong" and whining about it
in public. What is really going on with Docker, et al.!?

~~~
emeraldd
I can tell you one place I've found them very valuable: development systems.
The longer I've been in the field, the more I've learned that polluting my dev
box with globally installed tools makes life painful down the road. Docker
provides a nice way (especially now that they have docker exec) to spin up
environments, or sets of environments, with tools/stacks and not touch the
host. It's lighter than a bunch of VM instances and easier to orchestrate on a
small scale than most tools I've found. Docker for Mac makes that even nicer!
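
For concreteness, the throwaway-environment workflow might look something like
this (the image name, container name, and port are illustrative, and the
docker commands are shown commented out since they assume a Docker install):

```shell
# Hypothetical disposable dev database: nothing gets installed on the host,
# and removing the container erases every trace when you're done.
CONTAINER=dev-pg   # illustrative name
# docker run -d --name "$CONTAINER" -e POSTGRES_PASSWORD=dev -p 5432:5432 postgres:9.6
# docker exec -it "$CONTAINER" psql -U postgres   # poke around inside it
# docker rm -f "$CONTAINER"                       # host stays clean
```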

~~~
sktrdie
It would be great for development if it weren't 4 to 5 times slower on Mac:
[https://www.reddit.com/r/docker/comments/59u1b8/why_is_docke...](https://www.reddit.com/r/docker/comments/59u1b8/why_is_docker_so_slow_on_mac/)

~~~
gerdesj
(Having read the Reddit thread for a while)

Macs don't appear to have support for containers, which is what Docker _is_.
Well, that is probably bollocks, because they are *BSD-based and I know that
FreeBSD at least has a form of containerization; I seem to recall that the
whole container thing was invented on a BSD: "jails".

So Docker can't run natively on iStuff. You have to run it within a Linux VM.
I gave up on Googling for "apple mac jail" 8)

The reason why Docker runs slowly on Macs is because it is running under
emulation within a VM.

Unfortunately, Macs are not cool enough to run Docker, so there 8)

Cheers Jon

~~~
anoctopus
XNU (the macOS kernel) may have BSD origins, but it doesn't have features like
cgroups and process namespacing that the Linux kernel provides and the Docker
runtime relies on.

~~~
justincormack
Nor does it have FreeBSD jails.

------
shykes
Docker founder here.

I keep reading articles stating that "the Docker API changes with every
release", but the assertion is never backed by any specific examples. Has
anyone here encountered an actual breaking change? If so, I would appreciate
you sharing the specifics so we can fix it.

Docker is by no means perfect:

\- I remember that in 1.10 the switch to content-addressed registries meant
that older clients could not pull by digest (but all other commands, and even
non-pinned pull, still worked). This was not an accidental breaking change: it
was the result of a difficult tradeoff. In the end we decided that the
benefits of the new content-addressed model outweighed the inconvenience. To
guide our decision we used data from Docker Hub to assess how many clients
would be affected. I forget the exact number but it was a very small minority.

\- And in 1.12 we got bitten by a change in how Go 1.6 processes HTTP headers
(it became more strict and thus rejected headers from older clients). That was
quite simply a screwup on our part.

So we've had our share of screw-ups, no question. But lately I've been reading
the "breaks at every release" meme more and more. Based on the evidence I
have, it seems incredibly disconnected from reality.

What am I missing?

~~~
web007
You're missing the fact that docker clients complain about the API version if
they're different from the server - regardless of actual compatibility
problems.

[http://stackoverflow.com/questions/37617400/](http://stackoverflow.com/questions/37617400/)
sums it up. There's a magic (afaik undocumented) env var DOCKER_API_VERSION
you can set for compatibility, but nobody can find it.

If you can't mix even minor versions then yes, it's a problem. I can't run X
in prod and anything other than X in dev if I'm working with the same toolset.

~~~
teabee89
The issue you are referring to happens only if the docker client version is
newer than the docker daemon version. In the case where the client is older
than the daemon, it has always worked.

Thanks for bringing this issue up though, because we are currently working on
a fix. Here is the PR:
[https://github.com/docker/docker/pull/27745](https://github.com/docker/docker/pull/27745)

As for DOCKER_API_VERSION, it is possibly not well documented, but it is
indeed present in the documentation:
[https://docs.docker.com/engine/reference/commandline/cli/](https://docs.docker.com/engine/reference/commandline/cli/)

Hopefully this was helpful.
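
For anyone hitting the mismatch warning, a sketch of the workaround (the
version number is illustrative and should match your daemon; `docker` itself
is commented out since it assumes a running daemon):

```shell
# Pin the client to the older daemon's API version via the documented env var.
export DOCKER_API_VERSION=1.23   # illustrative; use your daemon's API version
# docker version                 # the newer client now speaks the pinned API
```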

~~~
web007
The client will nearly always be newer, so that message will almost always
show up. You'll update your laptop tools regularly, and only bump your
infrastructure occasionally.

For how many versions has this been a problem? It's not thinking about things
like this until (apparently) 11 days ago that gives Docker haters such ammo.
The UX of the entire ecosystem feels like an afterthought. Yes, it's been
evolving rapidly, but that's no excuse for not having a good, unified user
experience for what's there or what's coming next.

~~~
cpuguy83
That's a fair statement, especially with d4mac/d4win and auto-update.

But posting a pull request and thinking about a problem are two very different
things. The project has chosen to err on the side of safety in this regard
(i.e., failing loudly rather than silently ignoring a feature the user asked
for when the client is connected to an older daemon). And this is not the only
problem to solve.

------
Johnny555
This seems less of a "retort" and more of a validation that most of the issues
brought up in the original article are valid complaints.

------
nickthemagicman
I love how the major issue that both this article and the original article
warn about is: don't use docker on 'CORE APPS'....

That says all you need to know about the trustworthiness of Docker.

EVEN DOCKER PROPONENTS caution against using it in 'important' apps....

What apps are people investing time in that aren't 'important'?

Is there a coffee machine that is ok to use for a docker app somewhere?

~~~
user5994461
> Is there a coffee machine that is ok to use for a docker app somewhere?

Most coffee machines are docker ready.

To guarantee you the best experience, you will need to set up a pair of coffee
machines, plus an orchestration system that will be responsible for swapping
them automatically when one runs out of coffee.

Note: there are only prototypes of orchestration systems. Nothing for sale in
the corner shop yet.

\---

More seriously...

Not important: Most internal, development, and test systems

Somewhat important: Web applications, various supporting microservices. (They
are all stateless, with multiple instances and reactive failover by their
respective load balancers.)

Critical: Most databases (especially the ones without multi-master mode and
automatic failover), trading applications, payment systems, accounting
systems, databases with money $$$

~~~
nickthemagicman
Haha a coffee load balancer. I love that idea. I'm sure Amazon will have a
coffee machine web service they offer for $2000/month. ACS.

This whole ordeal has actually been a really really good lesson for me about
mature vs less than mature apps. It's been really enlightening to read the
posts and discussion.

------
CSDude
> Again, well accepted principle that “thou shalt not run a database inside a
> container”. Don’t do it, end of story.

Sorry, but this is really bad advice. We have run and continue to run various
databases inside Docker, including MySQL, Postgres, Redis, Cassandra,
Elasticsearch, RethinkDB, even HDFS, with proper user rights and
configuration. We can maintain the state just fine. If your only problem is
moving the data, all you have to do is stop, export, tar it, and move it to
another server, just as you would on a normal server. Docker is not a magic
bullet that solves these kinds of issues. Yes, Docker might have other
problems, but just because you could not run something with state inside
Docker does not mean "thou shalt not run"; there are various ways to manage
state. Hosts and I/O can crash regardless of Docker.
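
To make the "stop, export, move" point concrete, here is a rough sketch. The
paths and names are made up, and a scratch directory stands in for the
container's data volume so the export step itself can be run anywhere:

```shell
# docker stop mydb   # first, quiesce writes (assumes a container named mydb)
SRC=$(mktemp -d)                  # stand-in for the volume's data directory
echo "some rows" > "$SRC/table.db"
tar -czf /tmp/mydb-backup.tar.gz -C "$SRC" .
# scp /tmp/mydb-backup.tar.gz otherhost:/tmp/  # move it like any other backup
DST=$(mktemp -d)                  # stand-in for the volume on the new server
tar -xzf /tmp/mydb-backup.tar.gz -C "$DST"
cat "$DST/table.db"               # the data survives the move intact
```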

~~~
otterley
What problem do containers solve for you in this particular part of your
infrastructure? Native storage software packages are available for mainstream
OSes that handle dependencies via the native package manager. And since the
OSes that handle dependencies via the native package manager. And since the
storage they manage is usually directly attached, nodes that run this software
are infrequently migrated. And this software is infrequently upgraded under
the maxim "if it ain't broke, don't fix it." It's there to store data on
behalf of the applications you write; it is not a thing to upgrade or migrate
for its own sake unless there is a bug to quash or a new feature that your
application will depend upon; and even then, migrations must be carefully
planned to preserve availability for users.

Docker and the like seem like a solution in search of a problem for this
particular part of a typical service infrastructure.

Or, to put it more bluntly, just because you've gotten away with it (thus far)
doesn't make it a good idea.

~~~
CSDude
> Or, to put it more bluntly, just because you've gotten away with it (thus
> far) doesn't make it a good idea.

What is your typical server infrastructure? We use bare machines and have had
kernel panics very rarely (only with experimental MACVLAN). We did not 'get
away' with anything. We have been using Docker for over 2 years, and LXC much
longer, and know how to handle mounted directories; it is not rocket science.
We used LXC, and now Docker, to provide quick isolated mini Linux
environments. They are just a different type of virtual machine when you
abstract it in your head, and you should not expect them to do magical stuff
like saving your data and sending it to another host upon crash.

> Native storage software packages are available for mainstream OSes that
> handle dependencies via the native package manager.

They are often outdated; we mostly get one of the latest builds, and in some
cases snapshot builds, which are not available as packages anywhere. So the
ability to run versioned instances of some software independently and isolated
is a huge win for us. We do not want to disturb people using the old version.
Yes, it can be solved with VMs. We use our own bare metal servers and do not
like to "pay" for spinning up VMs, so we turned to OpenStack, but it is very
complicated for our use case; we like to use bare metal on our own. We still
need isolation, though, and this is where containers (LXC and Docker) came to
help and saved us a burden.

~~~
xorcist
> They are often outdated; we mostly get one of the latest builds, and in some
> cases snapshot builds,

That's a very unusual thing to say about databases. For critical and
commercially supported things such as your relational databases, nobody runs
snapshot builds. You run whatever builds your vendor supports, and for all
those you mentioned (and most other vendors) that is going to be native
packages.

~~~
CSDude
Native packages are nothing more than packaged (and tested) software with
startup and maintenance scripts, so as long as you can manage those, I really
don't see why I have to stick to what my vendor thinks is best.

------
rusanu
The HN discussion of the article being retorted:
[https://news.ycombinator.com/item?id=12872304](https://news.ycombinator.com/item?id=12872304)

------
lobster_johnson
Re ECR (the EC2 Container Registry), it has one downside that the author
doesn't mention, which also applies to Google's own registry.

A Docker registry has its own authentication system. So do AWS (and GCloud).
So what you end up with is one wrapping the other: to access ECR, you have to
run an AWS command to get a token to put into "docker login". Google has
"gcloud docker login" for the same purpose. Both produce temporary credentials
that time out, so they can't be used for long-running things.

This means that any tool designed to work with a Docker registry needs to
support this particular workflow. For example, this affects Drone [1].

It also adds complexity. GCloud is particularly heavy on the authentication
complexity side already (compared to AWS's comparatively simple keypair
approach), and with SSH, GCR and Kubernetes on top it starts to stack up in
ways that can make users' heads spin.

Straight Docker Hub is refreshingly straightforward by comparison.

[1] [http://readme.drone.io](http://readme.drone.io) (not to be confused with
drone.io)
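
Roughly, the ECR token dance looks like this (the region is illustrative, and
the commands are shown commented out since they need real AWS credentials):

```shell
REGION=us-east-1   # illustrative
# `aws ecr get-login` prints a `docker login` command containing a temporary
# token; eval-ing its output logs the local docker client in to ECR:
# eval "$(aws ecr get-login --region "$REGION")"
# The token expires after roughly 12 hours, so a long-running CI agent has to
# repeat this on a timer rather than logging in once.
```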

~~~
erikgrinaker
At least with Google's GCR, this is no longer the case. They have a Docker
credentials helper [1] which transparently handles authentication for regular
interactive use, and service accounts with permanent keys [2] for use with CI
servers etc. I find both of these to be fairly straightforward to use.

[1] [https://github.com/GoogleCloudPlatform/docker-credential-gcr](https://github.com/GoogleCloudPlatform/docker-credential-gcr)

[2] [https://cloud.google.com/container-registry/docs/advanced-au...](https://cloud.google.com/container-registry/docs/advanced-authentication#using_a_json_key_file)

------
jwatte
Previous article: "This is nowhere near ready for those who just want to get
the job done."

This article: "It'll be better in the future, you'll see!"

The former is verifiable, the latter is a hypothesis.

~~~
DrNemski
No, what I was saying was it's ready today in certain use cases, but only if
you look at it as a long-term transition and not a short-term project.

------
girvo
As an example of Docker in production: Expedia are moving lots of their legacy
infrastructure into Docker containers. My third-party contracting team that
works on projects for Expedia (we're brought in so the rules and bureaucracy
don't apply to us, allowing us to rapidly iterate and experiment in ways the
core teams can't) have been using Docker end-to-end (local development through
to autoscaled production deploys).

While there were teething issues, this article does a good job of pointing out
the flaws in the original article, I think. It's been easier to get our team
up to speed on Docker and its gotchas than nearly any other configuration
management or server management system that we tried!

~~~
user5994461
Short version: Nope :p

Long version: I met the DevOps guy who [I believe] is responsible for pushing
Docker at Expedia and we've had long conversations about it.

They were lucky to have had a particular environment and a specific version
that worked, and got it pinned down and frozen very early.

I suppose you are on the dev side and not aware of all that. (Hell, maybe,
you're not even in the same subsidiary of Expedia). I'm glad it all worked out
for you as a dev, my devs are also happy with Docker. (We're probably used as
an example of Docker success story at times).

In the end, there is no free lunch. There is dirty work done and more to be
done. Some of which is invisible.

~~~
girvo
Sorry if I didn't explain it correctly!

We're an external team (completely separate from Expedia) that is brought in
to build certain products :)

And yeah, definitely. I spent a couple weeks debugging the ECS issues we had
with the Wotif guys, which was fun, but none of the issues were
insurmountable!

We were one of the first adopters for the new ECS deployment, and while some
parts of it weren't fun (Splunk still drives me mental) for the most part it
went smoothly: that's down to how good the ops team is, I think.

Our specific case was somewhat special, in that because we're outside of the
rest of the infrastructure, and had extensive experience with Docker in
production for other clients, our apps were basically ready to rock from the
get-go. I think we were the first PHP deployment within the new ECS
deployments!

Should send me an email, it's in my profile, would love to chat sometime!

------
conradk
Can anyone comment on how rkt compares to Docker regarding the issues from
this article? And how does rkt compare to Docker in production, in your
experience?

I've been using Docker in production for a single-server website and have had
very few issues. I do like how easy it is to reproduce a working environment
with a "docker build", though.

That being said, I think that just using Ansible on a server is probably an
easier and more reliable solution. Ansible is battle-tested and allows you to
have reproducible environments too.

~~~
DigitalJack
Is ansible still python 2.7 only?

~~~
okket
Yes, and it will stay that way until RHEL 5 is EOL (next year, I think), so
that the Python 2.4 support requirement can be dropped. This is the main
showstopper.

There is also work in progress:

[https://docs.ansible.com/ansible/python_3_support.html](https://docs.ansible.com/ansible/python_3_support.html)

------
wickedlogic
Related question: what happens when a docker container gets pop'd? How do you
keep it around for investigation? Does it get imaged for later forensics?
Every time I have asked people doing docker IRL, they seem to focus on
updating/patching and how easy that is, and moving on... but that is not
always an option for every client. Do you just image all docker containers
before they get terminated/migrated?

~~~
justincormack
All the containers are available after termination, yes, so you can
investigate.
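
A hedged sketch of what that investigation workflow can look like (the
container name is made up, and the commands are shown commented out since they
assume a Docker host):

```shell
SUSPECT=web-1   # illustrative name of the compromised container
# docker ps -a                                 # stopped containers still listed
# docker commit "$SUSPECT" "evidence/$SUSPECT" # freeze its filesystem as an image
# docker export "$SUSPECT" > suspect-fs.tar    # or dump the fs for offline analysis
# docker logs "$SUSPECT" > suspect.log         # capture its stdout/stderr history
```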

------
smegel
> So the point is valid, but there are some big names invested in solving it,
> so I’m optimistic we’ll see some stability in the future

And it will still be valid if someone forks Docker. In fact, that would
validate the criticism.

------
pfarnsworth
Are these breaking changes problems caused by Docker itself? I was contacted
by Docker and was considering applying, but it sounds like their engineering
management doesn't know what they're doing. Is this depiction accurate or is
it overblown?

~~~
shykes
I would recommend investigating the matter yourself and forming your own
opinion. What specifically has been broken in past Docker releases? How have
they handled it? How would _you_ have handled it? If you do decide to
interview with Docker, make sure to bring up your findings, especially areas
where you think they screwed up. This has two advantages: you can see how they
react to constructive criticism, and they can observe that you are capable of
forming your own opinion and providing constructive criticism.

If you _really_ want to impress your interviewers, back up your criticism with
a pull request fixing the issue. That will automatically put you on top of the
pile of resumes.

------
pmarreck
Anyone know why Erlang doesn't run well on containerized Docker?

~~~
justincormack
What issues are you having? Containers are not really different from non
containerised environments. Have you filed an issue?

~~~
pmarreck
It's something I read in another HN thread

------
ledil
If I am using volume mounts to export my data, can I bypass the aufs/overlay
implementation/logic? Do I need to pay attention only if I don't mount the
volumes? Thx

~~~
justincormack
Volume mounts do not use the aufs/overlay drivers, no. Those are used for
builds, and for constructing the root filesystem of running containers so that
files are shared.
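
In other words (a hedged example; the host path is illustrative and the
command is commented out since it assumes a Docker host):

```shell
HOST_DIR=/srv/pgdata   # illustrative host path
# docker run -v "$HOST_DIR":/var/lib/postgresql/data postgres:9.6
# Writes under /var/lib/postgresql/data inside the container land directly in
# $HOST_DIR on the host, bypassing the aufs/overlay copy-on-write layers.
```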

------
corv
Docker seems very limited when it's unsuitable to run databases.

I've never seen this limitation with other container solutions. What is it
about Docker that makes it problematic?

~~~
lobster_johnson
Nothing. It's bad advice. Lots of companies run databases in containers.

------
cuillevel3
Good retort. The original article seemed clueless: the part about aufs was
just wrong, and the complaints about the apt repo exaggerated. Running docker
on ancient Debian is kind of brave, though. And the notion that software is
finished after five years holds maybe in the financial industry. Development
currently moves at such a pace that I'd say after five years it's abandoned
and replaced.

------
yawz
"The internet has been a wash with a well written article about ..."

Typo: a wash => awash

I know! The content is more important than the quality of the writing, but
it's a little surprising to see such a mistake jump out at the reader at the
start of an article. We should go back to the first days of the Internet, when
"updates" were possible. :) I would have loved to suggest an update quickly.

~~~
DrNemski
My googling failed me here; I couldn't find the correct grammatical term.

