
Docker Considered Harmful (2016) - signa11
http://catern.com/posts/docker.html
======
andrewguenther
Like any hyped up technology, Docker gets pitched as an "easy, works out of
the box, secure by default solution" but it is almost anything but. But also
like any technology, if you take the time to really understand what it is
doing and what problems it can solve for you, it can be really awesome to work
with.

Every "Docker Considered Harmful" post I've read basically boils down to "Why
would you use Docker if you can use the 10 technologies it wraps around and
manage them yourself instead?" Why would I want to do that if I don't have to?
Docker wraps these things well. There are weird defaults and there are some
popular patterns in the community that are a bit backwards, but you have the
power to work around them. Don't run your containers as root, and run a dumb init
process in your container. That's half of the post's complaints gone right
there. Complaining about defaults is one thing; claiming that bad defaults
make a technology "harmful" is just lazy.
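
For the curious, "not root plus a dumb init" is roughly this much work (hedged
sketch; image, user and paths are made up and untested):

    cat > Dockerfile <<'EOF'
    FROM node:10-alpine
    # run the app as an unprivileged user instead of root
    RUN addgroup -S app && adduser -S -G app app
    COPY --chown=app:app . /srv/app
    USER app
    CMD ["node", "/srv/app/server.js"]
    EOF
    docker build -t myapp .
    # --init injects a tiny init (docker-init/tini) as PID 1 to reap zombies
    docker run --init --rm -p 3000:3000 myapp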

~~~
bunderbunder
> Every "Docker Considered Harmful" post I've read basically boils down to
> "Why would you use Docker if you can use the 10 technologies it wraps around
> and manage them yourself instead?" Why would I want to do that if I don't
> have to?

Or if you can't?

One feature driving Docker adoption that I think a lot of people miss is that
it's got fairly workable (if warty in one case and obnoxious in the other)
implementations for OS X and Windows. That removes a lot of friction for
developers who work in companies where IT won't support Linux on employee
workstations.

Sure, your ops team can put together a bunch of stuff manually. And then you
can create a bunch of extra stuff that makes it easier for development to
handle all of that inside virtual machines, and get the network bridging
between those apps and the host machine working properly, and all that fun
stuff. And by the time you've got it completed and working nicely, you'll be
ready to launch your own Docker competitor onto the market.

~~~
ravenstine
> Sure, your ops team can put together a bunch of stuff manually. And then you
> can create a bunch of extra stuff that makes it easier for development to
> handle all of that inside virtual machines, and get the network bridging
> between those apps and the host machine working properly, and all that fun
> stuff.

I think this is one of the biggest reasons people use Docker, although they
tend to forget to mention it since it's not exactly a technical advantage of
Docker; in many ways, it allows developers to bypass DevOps or sometimes
eliminate them altogether. No more creating Jira tickets just to have a
specific version of ImageMagick installed on a server.

~~~
jonnycoder
This right here. Can confirm, the #1 benefit of using Docker is that 98% of
our "Ops" issues have gone away.

~~~
geggam
No they haven't. They are lurking around the corner waiting to hit you when
you least expect it. The mugging you are about to get is the one your ops team
has already had and is trying to prevent.

Enjoy the learnings...

~~~
bunderbunder
There is only one possible place where this can really grow to be that kind of
a problem: when ops isn't involved.

(And if the relationship between development and ops has broken down to the
point that each one is trying to work around rather than with each other,
you're already screwed. The rest is just details.)

If ops is involved, then there's no real reason they can't take charge of
making sure that anything that is running in production is being built up from
minimal images where they can keep track of the technology stack and all the
different versions of xyz lib that are running in production. They've just got
to do it using a different tool chain.

And if ops isn't involved, I'm not sure how different this really is from the
typical status quo, which involves unquestioningly running whatever uberjar
full of unknown (to ops) 3rd-party packages that probably have their versions
being selected using Maven's default version conflict resolution strategy, so
that nobody, not even dev, really knows what exactly is running in production.

~~~
ravenstine
Your last argument is true of any software outside of the Docker ecosystem.
Are you really going to read through every single directory in node_modules/
to make sure you know exactly what you're running? I don't believe anyone who
answers yes, except in the sense that NPM will produce vulnerability reports.

If ops does their job well, that's great. A lot of people aren't that
effective at their jobs, and if someone in ops is stuck in PHP Land, unwilling
to learn Docker, they're going to become a huge bottleneck in short order.
Some people are incompetent, but there are a lot of people who don't
particularly like their jobs yet get a sadistic joy out of playing the
gatekeeper role, being the ultimate decider of whether someone else gets what
they want. All the worse if upper management sides with them by default since,
well, they're the "webmasters".

Yeah, I'm pretty biased because I've had situations like that on a few
occasions.

I'm not necessarily saying that the production situation is always improved by
Docker, but what I described is not an uncommon situation and I think it often
leads to teams gravitating towards Docker when their last person in ops
finally leaves.

~~~
LoSboccacc
> Are you really going to read through every single directory in node_modules/
> to make sure you know exactly what you're running?

What, you don't? Each dependency comes with a license notice. Everything needs
to be pinned. The mess npm makes of pinning and unbounded versioning is
precisely why we steer clear of it.

------
bbrunner
This feels a bit like "Why use Dropbox when rsync exists?" type of argument.
Sure, you can do everything docker does with shell scripts, and you've been
able to for decades, but many people didn't, because it was "complicated".
There is often a huge amount of value in simplifying things, even if it means
losing some of the power and ending up with an objectively worse result.

~~~
cremp
Docker has reached the point that it _is_ complicated. A lot of people never
dealt with cgroups directly just because it's lower level, starting to touch
kernel stuff. Few people want to go there.

Docker is one of those things that you can install and run; it takes a small
amount of time to get running. As you said, there's value in simplifying things.

That being said... anyone who takes the stance that containers are better than
X/Y/Z is just showing that they don't have the drive to get into the why of
how it all works.

Anyone making the 'it saves overhead' argument can run with it until the cows
come home, but they don't understand that the overhead is all relative. As a
programmer, I stopped caring about overhead and started worrying about the
fact that people _will_ break my stuff; I just need to stop them from breaking
other things using that as the foothold.

~~~
pknopf
I use Docker daily, and I almost never find it complicated.

I use it for build servers and my company's infrastructure.

The alternative would be a nightmare to me.

~~~
cat199
> The alternative

the fact that the singular is used in response to a thread concerning
considering options and diving deep is telling.

~~~
pknopf
You're getting caught up in semantics.

Docker is the best way to have reproducible build environments. I've been
doing it for years with my packages, as well as embedded Linux.

Docker is also a very easy way to deploy applications/servers. I have been
doing it for years and it's been nothing but smooth.

Give me an actual problem that isn't ridiculous and self-inflicted.

------
majewsky
> I think you could reimplement [some random part of what Docker does] easily
> yourself with a small shell script and some calls to mount; but I haven't
> bothered.

Exactly.
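
(For anyone who does want to bother, the gist of such a script is something
like this untested sketch; needs root, paths made up:)

    # build a throwaway rootfs and run a shell "contained" in it
    mkdir -p /tmp/rootfs
    debootstrap stable /tmp/rootfs      # or untar any distro rootfs here
    unshare --mount --uts --pid --fork \
        chroot /tmp/rootfs /bin/sh -c 'mount -t proc proc /proc && exec /bin/sh'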

~~~
majewsky
Bonus round:

> For most purposes, the main interesting thing that Docker containers provide
> is isolated networking. [...] What else prevents applications from using
> ports? The firewall that you already have installed on your server. Again,
> pointless abstraction to address already-solved problems.

Comically missing the point.

~~~
geggam
until you realize the firewall is the same iptables that you use to isolate
the network in docker containers

~~~
ecnahc515
Not at all. iptables is used to route traffic between containers, yes, but
the isolation is done by network namespaces.
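
Roughly (sketch from memory, addresses made up): the namespace gives you the
isolated stack, and iptables only wires it to the outside:

    ip netns add demo                                  # isolated network stack
    ip link add veth0 type veth peer name veth1
    ip link set veth1 netns demo
    ip addr add 10.200.0.1/24 dev veth0 && ip link set veth0 up
    ip netns exec demo ip addr add 10.200.0.2/24 dev veth1
    ip netns exec demo ip link set veth1 up
    # only here does the "firewall" show up, to NAT the namespace's traffic out
    iptables -t nat -A POSTROUTING -s 10.200.0.0/24 -j MASQUERADE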

~~~
geggam
Interesting... so you create a network and isolate it from the rest of the
network? How do your applications serve traffic?

Or... you create a network with namespaces and use iptables aka the firewall
to network that to the other networks you created with namespaces?

Think about it logically... when you use TCP you share the connections... your
namespace isolation is exposed by the very thing that firewalls it... make
sense?

------
derefr
The point of Docker is basically that the container-image developer is
specifying the sandbox, instead of the sysadmin specifying the sandbox.

None of the things mentioned solve the problem of the sysadmin having to
"design" the solution from the top down. Docker does. A Docker image is an
appliance. You don't architect it; you just configure it. You don't have to
care which OS it's running inside. Docker images running on Windows don't even
care whether it's Windows Core or Ubuntu inside them. It's a black box with
defined configurability points.

The only real comparisons to Docker are

1. Amazon's AMIs (though nobody hosts a public AMI registry _except_ for
Amazon, so it's not really a good comparison);

2. Canonical's "snap" format ([https://snapcraft.io](https://snapcraft.io))

Both of these achieve the same things that Docker achieves: developer-
distributed virtual appliances configured by the sysadmin but "managed"
automatically by the runtime.

And both are just as complicated as Docker. The complexity is necessary.

~~~
marios
You _do_ have to care which OS it's running inside, if only to know when to
patch it for $vulnerability_of_the_day. It sure is convenient to consider it
as a black box that you 'just' need to configure, but that's just pushing
responsibility to the developer(s). In my short experience, the latter seldom
care about security. When a security breach occurs, who is going to take the
fall? The sysadmins that supposedly run operations, or the developers that
failed to provide an updated appliance?

~~~
rdsubhas
Let's be clear: This is a _general_ problem with distributed infrastructure.
Not necessarily docker. Any org that scales beyond 100+
engineers/services/artifacts is just not going to hire the same proportion of
infra people to toss application configuration over the wall to.

In the past, we have done:

* Let applications write librarian-chef cookbooks, have a chef server aggregate them

* Let applications write ansible playbooks, aggregate them in a central repo using galaxy

All of them carry the same pitfalls. If it's not the OS, then you have to
decide how to patch the version of OpenJDK that the developer hardcoded. If
it's not OpenJDK, then it's Maven or npm.

We have seen both sides of the arguments:

* Sysadmins cry "security"

* Developers cry "freedom"

The root cause of both arguments is fear and control. The end goal when these
words (security vs freedom) get thrown in is not to find solutions, but rather
to make the discussion end. Sysadmins will gladly sweep maven/nexus problems
under the rug as long as they are the ones doing automation. Developers will
gladly disregard all infrastructure engineering principles as long as they
have full access to do whatever they want with their application.

Automation is the solution to both. Call it SecOps or whatever bullshit term.
But in the end, automated security practices are necessary one way or another.

------
tgsovlerkhgsel
This article is ignoring the benefits of standardization.

Let's compare it to some other "unnecessary" thing, _actual_ containers:

You can put stuff on ships without them, but it turns out that once you start
using them, just the fact that everything is standardized gives you _insane_
benefits.

Of course you could reimplement each part of Docker differently. Of course
it's not magic. Nothing is magic about a metal box, and yet that metal box
completely revolutionized shipping.

~~~
catern
That's a nice metaphor, but it's still not at all certain that Docker is the
standard solution. We've had Linux for 20 years and we still don't have a
single standard package manager; Docker's been around for 5 years and it's
already been abstracted over by systems like Kubernetes, making Docker itself
less and less relevant.

What is certain is that systems like Debian and Fedora are not going away.
Your Docker images couldn't be built without them, after all. And the other
tools mentioned in the article, too, are not going anywhere. So why don't you
just standardize on the real underlying platform?

~~~
breeny592
Because there's standardisation beyond just where the app/server/whatever is
going to run in production - having a standard way to spin up, describe and
control applications & their dependencies that works cross OS lets developers
& devops speak the same language, with the same commands.

Is docker the silver bullet for this? No, there are tons of other options. But
"everyone should use Debian and Fedora" isn't a realistic standardisation.

------
vajrapani666
I think Docker is very useful for beginning developers and for spinning up and
trying out services quickly. The barrier for getting started with web
development used to be so much higher, even for people on Mac OS X. To set up
a rails development environment with a modern JS frontend, you'd have to set
up xcode, mysql, redis, a node build pipeline, homebrew, and futz with system
ruby vs rails-specific ruby. You'd have to set up all of the above without
knowing what each part did, while barely being comfortable with bash or the
terminal. Don't even get me started on how difficult it was for someone with a
Windows machine to get started with modern development.

Now novices can just install docker and type "docker-compose up". Even vagrant
didn't make things that easy.

A few years ago, I wanted to try out Pentaho's BI platform. I spent hours
configuring the JDK, Tomcat, installing all of Pentaho's dependencies, and
struggling with configuration errors and outdated documentation.

Today, if I wanted to give Pentaho a spin, I could just pull the docker
image.

I see your point about most other use-cases for docker, but be careful when
you make a blanket statement like "Docker considered harmful". It could be
discouraging to those that docker has helped get started with development
and who do find it convenient for certain tasks.

~~~
qqqturing
The author seems a bit out of touch. I'm a junior dev, and starting a db for
development is as easy as `docker run redis`. I don't even know half the tools
he mentioned, and I will not read the Linux source documentation like he
suggested to get isolation for my services.

~~~
dom96
Starting redis manually isn't that much harder though:

    brew install redis
    redis-server

I personally feel that people reach for Docker too quickly. It's worthwhile to
learn how things actually work so that you know what to do when Docker
eventually fails you.

~~~
trowawee
For me, it's less about the ease of starting one thing than it is the ease of
juggling a lot of things. I've got docker containers on my machine for
multiple work projects and a few organizations I do volunteer dev work for.
Between them, I'm running two versions each of Ruby and Python and a number of
instances of Postgres and MySQL, plus local Redis containers for three
separate projects. Shifting between them is as simple as docker-compose
down/cd/docker-compose up. When I'm done developing for the day, I just take
everything down and bring it back the next morning.
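
Concretely, my whole context switch looks something like this (paths made up):

    cd ~/work/project-a && docker-compose down          # park one stack
    cd ~/volunteer/project-b && docker-compose up -d    # bring up another
    docker ps    # only project-b's postgres/redis/etc. are running now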

------
Steltek
This article could use some work so I thought I'd chime in with my complaints
about Docker...

* Patching security vulnerabilities in container images (aka "the next Heartbleed problem") and auditing for the same.

* docker-compose is installed via a curl to github. Say what? It's like Docker revels in ignoring the system's package manager. Docker does not and should not replace apt-get but people pretend it does.

* Too much config delegated to container entrypoint script.

* Firewall rules get wiped out too easily. More a pet peeve of mine but it'd be nice to solve this without "service docker restart".

* "\--rm" should probably have been the default and the migration to something better needs to start now.

In the grand scheme of things, nothing too negative to outweigh the benefits
although security vulnerabilities give me some pause.

~~~
cpuguy83
> Patching security vulnerabilities in container images (aka "the next
> Heartbleed problem") and auditing for the same.

There is a lot of tooling out there to deal with this, including multiple
complimentary (both as in free and as in tools which complement each other)
implementations of security scanners.

> Too much config delegated to container entrypoint script.

Can you explain what you mean here? Config of what? Also, have you peered into
init scripts for any given application? Application initialization often
requires a fair amount of setup that is completely dependent on what the
application is. This is not really possible to abstract outside of forking and
making the image itself less generic, which is perfectly fine.

> Firewall rules get wiped out too easily. More a pet peeve of mine but it'd
> be nice to solve this without "service docker restart".

Using firewalld helps with this, as dockerd can (and does) monitor firewalld
for the need to reload rules.

Watching iptables directly is rather difficult outside of just polling
iptables to see if the ruleset matches what's expected. Not sure it's feasible
outside of "does the DOCKER chain exist? No -> reload"
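
(i.e. without firewalld, about the best you can do is a crude poll along these
lines; hedged sketch:)

    # if the DOCKER chain has vanished, assume the rules were flushed
    if ! iptables -n -L DOCKER >/dev/null 2>&1; then
        systemctl restart docker    # or re-add the rules some other way
    fi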

> "--rm" should probably have been the default and the migration to something
> better needs to start now.

You can't change the default for something like this. It also isn't
necessarily the desired behavior. If you really want this to be the default,
use swarm services instead.

---

Full disclosure, worked at Docker for 4 years (now recently at MSFT) and am a
maintainer on the docker engine.

------
jek0
Docker wins because it's easy to use and becomes a de-facto standard. The
author misses that completely.

I get it, you can do "manually" the same things as docker when you're a good
system administrator. You'll come up with your own unique solution to most
problems addressed by docker. Regardless of the fact that it'll probably be
specific to a single linux distribution, you probably won't have anyone
interested in investing time in learning your unique way of doing containers.

I did not waste time when I learned how docker does it.

I log into a server managed by this author, curse him for using all those
"standard" techniques in his own unique way. Spend hours figuring out the
details.

I log into a server running things with docker: I already know all I need.
`docker ps` will tell me what services it runs, `docker inspect` gives more
info, `docker logs` gives me the logs of whatever service I need to check, etc.

~~~
mmt
> Docker wins because it's easy to use.

> The author misses that completely.

That's inconsistent with what the article says right in the introductory
paragraph:

> Docker is genuinely more complex and harder to use than the alternatives.

> I'm recommending them because they are simpler to learn and use.

You may disagree with their relative ease, but it's disingenuous to say that
the author misses that.

> You'll come up with your own unique solution to most problems addressed by
> docker.

"Unique" is a pretty extraordinary claim, considering the author is advocating
using pre-existing tools and facilities. Replacing AUFS with Btrfs is a far
cry from writing ones own filesystem entirely.

> I log into a server managed by this author, curse him for using all those
> "standard" techniques in his own unique way. Spend hours figuring out the
> details.

They're not "standard" (with quotes). They're standard (without quotes). They
have man pages. They're well-documented and (one would hope, if Docker makes
heavy use of some of them) well-understood. Again, just because he didn't use
them in the Docker way, doesn't make that way _unique_. Chances are, if it
takes you hours to figure it out, especially if you already know what and
_how_ Docker does it, you're doing something woefully wrong.

> I log into a server running things with docker: I already know all I need

I could make exactly the same statement with the situations reversed, except
with the addition that I now have to learn this "docker" tool with its added
complexity and new syntax to make sure I don't break anything in case I have
to make a change.

 _Already_ knowing a tool fails to address the author's point.

~~~
jek0
> "Unique" is a pretty extraordinary claim, considering the author is
> advocating using pre-existing tools and facilities. [...] just because he
> didn't use them in the Docker way, doesn't make that way unique.

Yes you can make use of the same standard tools as Docker to achieve the same
features. There are a multitude of ways you can integrate those (and multiple
ways to combine your different options). Docker offers one way, the author
describes another way.

The author's way is more unique.

> Docker is genuinely more complex and harder to use than the alternatives.

To make this point, the author says: "Just read man 7 namespaces. It's well
written and makes it easy to grok the concept".

I did, and it's super low-level. The audience for this man page seems to be OS
developers, not web developers (or other upper-stack developers).

Any non-sysadmin can understand the docker documentation and be able to
build, distribute and launch container images in less than an hour... Learning
to use btrfs or aufs, chroot, ifconfig, init scripts, ..., is easier? Most
people have never heard of IPC subsystems, UTS, ... and don't need to.

~~~
mmt
> The author's way is more unique.

The author's way is merely different. "Unique" is a superlative. The author's
method(s) means more choices. There's an argument to be made that enforcing a
single choice is inherently simpler (to which there are counter-arguments),
but you didn't make it.

> Learning to use btrfs or aufs, chroot, ifconfig, init scripts, ..., is
> easier?

Yes, that is exactly what the author is asserting. Again, this seems
disingenuous of you. It's clear that you disagree, but mere disagreement, with
only this kind of rhetorical language, lacking any substantial backing, would
be a shallow dismissal.

------
ohiovr
I'm not at all convinced that reams and reams of shell scripts to deploy
complex applications are a good idea. But by all means go for it if you think
that is the way to go.

~~~
tormeh
This. Save me from shell scripts. I guess it's possible to write good/tolerable
shell script, but most of it is garbage. Obfuscated disasters waiting to
happen.

~~~
ohiovr
Then OS updates and/or application updates can break your scripts. It is also
nice not having to be married to a particular Linux distro. That seems magical
to me.

------
ghshephard
Docker by itself probably doesn't make any sense to an engineer until they get
a chance to see kubernetes in action. There is nothing like seeing, on the fly,
your worker pool being scaled out by 100x with nothing more than

    kubectl --context k8s-uw1a-gcp -n customerx scale deploy someapp_worker --replicas 100

to make you appreciate what docker can do for you.

~~~
geggam
Whereas my AWS autoscale group does it for me while I go get me a foo foo
drink at $ESTABLISHMENT

~~~
merb
sadly it's not cheap to build AWS on bare metal.

~~~
geggam
And you think k8s is simple to run on bare metal?

To care for and maintain?

Do you understand the kernel tuning and routing Google does to avoid the NAT
hell you are going to run into when you manage k8s on your own?

~~~
merb
I actually run 3 small bare-metal clusters in production.

Maintenance is basically just a `kubeadm upgrade` on all nodes + reboot (easily
scriptable) after updates get announced. OS upgrades are done through
container-linux-update-operator.

NAT is not a problem if you are at or below 50 nodes. Not sure if you run into
problems if there are more nodes, though. Also BGP+MetalLB is quite good.

(P.S.: OpenStack uses NAT heavily as well, and IPVS for k8s should fix most
problems if you are running into them.)

------
quacker
Sure. I "just" have to go read man pages for days to understand 20 different
commands. "Just" use several commands to isolate my not-a-container. "Just"
use debootstrap (or not on different distros!), or actually maybe "just" use
nix and guix. And "just" carefully use several btrfs-subvolume commands (or
not if you want aufs, zfs, or something else!). And "just" a few more things
after that - maybe "just" use systemd-nspawn (or not, for any non systemd
system).

It's always "just" one more tool I can cobble together to provide what Docker
gives me. This is not simple. Docker is not a bad tool for abstracting away
all of these underlying details.
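
Spelled out, the "just" pile looks something like this (rough Debian-flavoured
sketch, paths made up), and that's before you've shared the result with anyone:

    btrfs subvolume create /var/lib/machines/myapp     # snapshot-able root
    debootstrap stable /var/lib/machines/myapp         # minimal Debian rootfs
    systemd-nspawn -D /var/lib/machines/myapp \
        --private-network --bind /srv/myapp:/srv/myapp \
        /srv/myapp/run.sh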

I'm not going to pretend that Docker is simple or flawless. But it has
reasonable defaults and is easy to use. It is easy to pull containers and run
them. It is easy to install on Linux, OS X, and Windows (or maybe I should
"just" figure out how to run a hypervisor). It is easy to read and write a
dockerfile. Critically, it is just as easy for my coworkers to use Docker as
well.

~~~
pxc
That alternative tools have their own learning curves (which generally compare
unfavorably to Docker's, which is exceptionally gentle) and that you have to
make decisions about how to fit them together is a very valid and relevant
point.

But it struck me that all of your 'just's really didn't seem like a big deal
to me.

I set up a Nix installation on a separate BTRFS subvolume on a Debian-based
system that I got as a multimedia PC for my uncle as a gift a few weeks ago. I
also had to replace the default initial ramdisk image generator with another
one to get the subvolume mounted early enough for systemd to automatically
launch services that lived on the Nix subvolume on first boot.

It really didn't seem like that big of a deal for me. Each piece of it was
just a small step away from a background of Linux administration knowledge I
built up as a teenager when I used to distrohop and play around for fun.

All of this is to say that a difference in background is likely behind this
gap of perceptions.

Someone with a background in ops is likely invested in traditions that have
different strengths and weaknesses than the approach that Docker offers. For
people in positions like that, approaches that are perhaps more involved but
preserve more of the virtues of those traditional toolsets may seem like a
smaller leap than the one to containers.

You've used the word 'just' here to highlight what to you seem unreasonable
levels of difficulty or required background knowledge, which makes sense. But
couldn't I just as well say that Docker advocates would have us 'just' abandon
the very notion of shared libraries, 'just' try to get by without actually
knowing how to build or verify our dependency chains, 'just' grab binaries
from strangers on the net, 'just' download gigabytes of binary data to perform
builds, 'just' virtualize Linux on macOS in order to use software that runs
natively on it, etc.?

At the same time, for things like microservices development, Docker also makes
some serious demands on time and knowledge, e.g., 'just' refactor all of your
legacy services so they can be safely started in any order or 'just' learn
Kubernetes so that you can initialize services in order of dependencies.

~~~
quacker
This is a very late reply, but anyway...

> It really didn't seem like that big of a deal for me. Each piece of it was
> just a small step away from a background of Linux administration knowledge I
> built up as a teenager when I used to distrohop and play around for fun.

> You've used the word 'just' here to highlight what to you seem unreasonable
> levels of difficulty or required background knowledge, which makes sense.

I mean, it's great that it's easy for you. And it would be great if everyone
at my office conveniently had the same kind of prerequisite knowledge.
But the reality is, most don't. And the worse reality is, most don't enjoy
having to learn extra tools on top of the rest of the things they need to know
for their job.

Docker is one extra thing to learn. It is easier to learn for the majority of
developers that don't have, or don't care to have, any Linux admin background.
It is a cross-platform solution and has good adoption in the industry, which
means I can probably use Docker at one company, and go to another company and
use the same Docker.

> But couldn't I just as well say that Docker advocates would have us 'just'
> abandon the very notion of shared libraries

Why should I want shared libraries though, given I have containers? I can get
an update out by rebuilding an image and rolling the container. Immutable
infrastructure is a good thing.

> 'just' try to get by without actually knowing how to build or verify our
> dependency chains, 'just' grab binaries from strangers on the net, 'just'
> download gigabytes of binary data to perform builds,

Gigabytes is a gross exaggeration for Docker images, but this applies to
almost any package manager. Why is a package in a Debian repository (or
similar) any better from a trust perspective than an image in a Docker
repository? Neither the package nor the image maintainers are usually the
authors of the software.

If you need/want to, you can run your own docker registry, where you only
upload self-built images, which completely removes all your trust concerns.
You could probably even build docker images with nix.

------
kenforthewin
Aside from being a bad argument for the reasons already listed, I'm honestly
getting tired of the "X considered harmful" meme, when the author never
actually makes an argument that docker is harmful, only superfluous.

~~~
carrja99
An old favorite: “Considered Harmful” Essays Considered Harmful
([https://meyerweb.com/eric/comment/chech.html](https://meyerweb.com/eric/comment/chech.html)).

------
lolsal
The biggest complaint that I have is that Docker is a huge leaky abstraction.
I end up having to mess with a lot of stuff in order to get docker, docker-
compose, etc. all working Just Right®. It saves time, but it's very, very
leaky, which makes it a bit of a minefield (which you can learn to navigate).

------
40four
Plus one for the scathing and unapologetic criticism of Docker. I'm admittedly
pretty inexperienced with Docker, but I feel like everything I've read on it
seems to have virtually nothing bad to say about it, so it's nice to hear an
opposing opinion.

That being said, this also feels like a "get off my lawn" type of rant from an
experienced developer who is stuck in their ways, and/or has a little bit too
big of an ego regarding their own skillset/knowledge. It's like the author is
mad at a successful project just because they knew how to do all these things
before said project came along and combined them all in an easy-to-use
package.

I'm a junior level developer (just finished 1st year of professional work), so
I look at these things in a totally different way. Maybe Docker is just
reinventing the wheel, but if it makes it easier for inexperienced developers
like myself to do things that require a long, complex tool chain, & concepts
that aren't obvious to someone who hasn't been coding since the old days, then
I'm all for it.

The point is to get work done & get it done quickly. For sure it's important
to learn all the underpinnings, but I think junior developers tend to grind
out passable work first, and fill in the gaps of knowledge slowly but surely
as they go (at least that's my experience). But this is also why I upvoted
this & one of the main reasons I frequent HN: articles like this are
illuminating & point me in directions I didn't know I should be looking :)

~~~
koffiezet
> Maybe Docker is just reinventing the wheel...

Docker is way more than that, and he's completely ignoring the bigger picture,
and many people seem to be missing that. Its importance is not limited to "my
little server" - it's the concepts that matter, not the technological details.
Docker standardised a set of concepts, which have been adopted at a rapid pace
for a reason. It's no accident that big corporations like Red Hat, Google,
Microsoft and Amazon are jumping on Kubernetes (which uses the docker concepts
and massively extends them) for cloud deployments; it's the future.

It enables a standard way to quickly deploy and configure applications in a
relatively standardised way, and democratised doing this at cloud scale. Sure
you can use it locally and at smaller scale, but that was never that hard,
although it made that easier and faster too in some situations.

I'm a devops/sysadmin guy that introduced docker to quite a few developers,
and most were very hesitant and sceptical at the beginning. But once they saw
the power of what docker can do for them, that attitude quickly changed - and
a lot went overboard with it initially (as did I in the early days I must
admit).

Stuff the devs especially loved was the fact that with a single command, they
could launch the entire application stack of whatever project they were
working on locally, and with another command stop or destroy it. Database
servers, amqp/rabbitmq/..., reverse proxies with path rewriting, ... One
person had to maintain the docker-compose config; all the others just did
'docker-compose up -d'. It also allowed them to easily add dev-supporting
services like mailhog, which fakes an SMTP server where they could visualise
the emails they sent in a web UI, add a chaos monkey for testing, ...
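
To give an idea, the whole thing was roughly one file like this that a single
person maintained (names and versions made up here):

    cat > docker-compose.yml <<'EOF'
    version: "2"
    services:
      app:
        build: .
        ports: ["8080:8080"]
        depends_on: [db, rabbitmq, mailhog]
      db:
        image: postgres:9.6
      rabbitmq:
        image: rabbitmq:3-management
      mailhog:
        image: mailhog/mailhog      # fake SMTP server with a web UI
    EOF
    docker-compose up -d            # the only command everyone else ever ran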

For ops it also made stuff easy - the "works on my machine" problem virtually
disappeared, or at least was very quick to fix. Configuration and deployment
were clear and straightforward, even if eventually it wasn't deployed on
docker.

~~~
mdekkers
_It's no accident that big corporations like Red Hat, Google, Microsoft and
Amazon are jumping on Kubernetes (which uses the docker concepts and massively
extends them) for cloud deployments; it's the future._

Do you know the history of Kubernetes? And really, Kubernetes uses "Docker
Concepts"?

~~~
koffiezet
> Do you know the history of Kubernetes?

Yes I do. Sure, it originated from Google and was inspired by their internal
stack (Borg) and a ton of their experience. For a good while, though, there
was still competition in the form of Mesos/Swarm and it was unclear what
platform would get the upper hand, but k8s emerged as the clear winner here,
and in the last year or so everybody jumped on it.

> And really, Kubernetes uses "Docker Concepts"?

I don't really understand what your problem with that statement would be?

------
XVincentX
In theory he's totally right. In practice, Docker brings all these things
together in a unified and standard tool, basically bringing such technology to
the "masses".

If you can craft your own system, feel free to move forward with that.

~~~
guildencrantz42
The power of docker became clear to me when I handed basic instructions to our
web devs and watched it just work. The tooling was docker's secret sauce,
nothing more. And, yes, you still need somebody who knows what the system is
actually doing keeping an eye on things; you can't just hand a bunch of devs
docker and fire your ops team.

------
ex3ndr
So I should learn every possible combination of init system to convert some
random launch scripts from the internet to run in my particular setup? Instead
of `docker run postgres`? Or how should I do the same in K8s? Invent some
packaging format? Wait a second... aren't all Linux packaging formats overly
complicated and really hard to maintain compared to docker images? What if some
package works only on some ridiculous version of Linux?

~~~
titanomachy
I've tried and failed a couple of times to make Debian packages. The system is
so old and crufty; writing a dockerfile is an order of magnitude simpler, and
there are lots of useful examples and tutorials that were written more recently
than 20 years ago.

~~~
zbentley
> I've tried and failed a couple times to make Debian packages.

That's a real problem. Distributions have really dropped the ball here (to
varying degrees).

But there's a big difference between "Docker is $distribution_package_manager
done right!" and "Docker _at least sucks less_ than the alternatives".

The tradeoff when packaging via Docker is often in integration with other
facilities provided by the OS. Some of the hassles of packaging for a native
OS package manager are senseless, bad UX to be sure. But others are there for
a reason: how to integrate with init systems, standard directory locations,
shared cache locations, or (god forbid) desktop/windowing systems? If your
answer to those is "fuck it, use Docker", you often end up with a user
experience akin to driving a portable mobile home down a small city street:
technically fits and obeys traffic patterns, but doesn't behave in a way that
anyone who has lived there for awhile expects it to.

~~~
mmt
> But others are there for a reason: how to integrate with init systems,
> standard directory locations, shared cache locations, or (god forbid)
> desktop/windowing systems?

Having tried (and succeeded) many times in making Debian packages, I can
attest to the value of these "hassles" and ones like them.

Much of the value is just in making sure one _thinks_ about that whole breadth
of issues and how they'll affect your environment. You may decide that a
particular hassle actually is too much effort for not enough benefit, but at
least the decision is conscious and, if it "bites" you later, you know
where/how to go back and change things.

Additional value can come from all the taken-for-granted work that's been done
over those past 20 years. Need multiple versions of something installed at the
same time? Maybe the distro people already have a standard way to do that.

Of course, all of this value can only be obtained after a remarkably huge up-
front investment of learning, and, from what I recall, already knowing RPM
didn't substantially lessen that load when learning debianization. It's tough
to fault the attitude of "fuck it, use Docker for packaging" (for any value of
"Docker", including "tarball", "pip", "npm", etc) for anyone whose career
isn't Ops.

~~~
titanomachy
I hadn't thought of it that way. Thanks for the explanation.

------
jzelinskie
This article concludes that systemd-nspawn is a more unix-style alternative to
Docker. Interestingly enough, this is what rkt uses for its default isolation
(stage1).

What Docker did is bring all of the aspects mentioned in this article together
into something that could be easily understood and used. Now that the ideas
are familiar and there are standards like the Open Container Initiative, I
think we'll see more smaller and specialized tools being built and used. Take
for example CRI-O, which is Red Hat's container runtime that only targets the
execution of Pods for Kubernetes.

~~~
wmf
BTW, were these cool systemd features invented _after_ Docker by any chance?

~~~
majewsky
According to the "Initial release" dates on Wikipedia, systemd is 3 years
older than Docker (2010 vs 2013). And one of the core objectives of systemd
was to take advantage of then-still-new Linux kernel features like cgroups.

~~~
justincormack
systemd did not have a container runtime until much more recently.

------
luckycharms810
I think people who don't prefer Docker find it hard to articulate what is
wrong with it.

To me it is really a question of: how do you vet software before you use it?
Will you take the time to understand your stack before it's deployed? When you
employ certain abstractions, how much visibility do you lose? Is it worth it?

Docker is fine as one of many alternatives, but to turn your nose up at
well-written, battle-tested software which is part of most Linux distros is a
little crazy. I would definitely bet that people who chose to learn how cgroups
work and how systemd works will see their skills age more gracefully.

It reminds me of DevOps candidates I have interviewed who laugh at the idea of
ssh-ing to deploy a new version of code, without understanding that Ansible is
generally doing the same thing.

------
lsc
As far as I can tell, Docker is a lot like Amazon AWS in that the primary
reason to use it in corp (I'm talking about corp IT, as opposed to
production/customer-facing stuff) is that, due to the hype, it somehow got past
security, and you are allowed (by corp security types) to do things in it that
would require filling out forms in triplicate to do on cheaper or more secure
infrastructure.

I've worked places where they wouldn't let us run virtual machines of any
type... except docker. Custom docker images were just fine. In the aughts, I
worked places where spinning up a virtual machine on our internal
infrastructure required manager approval and a day and a half of someone
manually jiggering the thing. (I've been that someone doing the jiggering,
too.) So I could totally understand, when AWS opened for business, why people
practically fled to that platform.

What's interesting about AWS is that most places still don't have AWS-level
provisioning of virtual machines, even though there exist tools like Ganeti
that work and are pretty easy to use (though difficult to tie into accounting).

------
markbnj
All of these arguments are well-aired, and the tone of the article doesn't
make this particular presentation of them more useful than any of the others.
I would guess a large portion of the developers actively using containers are
well aware that they are built from existing system capabilities that can be
utilized without docker or any container runtime. I mean this is from 2015:
[https://chimeracoder.github.io/docker-without-docker/#1](https://chimeracoder.github.io/docker-without-docker/#1). You can
argue that the new way of bundling these things together is not better,
simpler, more reproducible or less error prone, but whatever... the market has
voted. I personally think it is all those things, but we can disagree about
that.

One thing in particular I wanted to respond to and that is the idea of your
container filling up with orphan zombie processes because init is not pid 1.
If I understand the issue correctly this can only happen if pid 1 creates
child processes and then itself dies/exits. I've never personally seen this
issue in four years of working with containers, and since all of our
containers now run on k8s they would be restarted if pid 1 exited abnormally
or otherwise. I'll be interested to see if any other HN commenters have
actually had this problem.

Also re: the supposed absurdity of one process per container... it's just
simpler and works more robustly with orchestration. Containers are
lightweight, so there's no reason to try to pack a whole system into an image.
It's simpler to reason about a composition of containers than a mess of
processes running inside a single container, imo.
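
(And for anyone who does hit it, the fix is basically a flag these days; a
sketch, image name made up:)

    # --init runs a minimal init (tini/docker-init) as PID 1 to reap orphans
    docker run --init --rm myimage
    # docker-compose has an equivalent per-service "init: true" option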

~~~
adrianmonk
Yeah, their zombie process argument is weak.

In theory, you are vulnerable because you've eliminated the zombie reaping
mechanism.

In practice, the fact that you're running only one application allows you to
make lots of simplifying assumptions so you can avoid getting into this
situation quite easily. Just stick with a model where parents never exit
before children (and always reap children). You don't need an adoptive parent
if you never create orphans.

And even if it did happen, the resource consumption is tiny. Zombie processes
free all their memory, close all their files, etc. The only remainder is a
small data structure necessary to support stuff like the wait() call returning
the process's exit status.

------
SteveNuts
This feels similar to the infamous HN Comment about Dropbox.

~~~
Sohcahtoa82
This one?

[https://news.ycombinator.com/item?id=9224](https://news.ycombinator.com/item?id=9224)

~~~
SteveNuts
Yes, the comments about how Docker is really just a bunch of tools strung
together reek of the same ignorance.

------
meddlepal
This post totally misses the forest for some trees. Docker isn't a success
because it is some amazing revolutionary technology that doesn't exist in
another form. Docker is a success and beloved by many because it provides a UX
that doesn't require learning about a lot of this stuff upfront so you can get
a lot of the immediate benefit without knowing a ton.

~~~
ben509
Especially, the ton you don't need to know is the _endless_ configuration
headaches associated with running a VM, communicating with the VM, mucking
about with the filesystem, etc. I've been computering for 20 years, I've seen
my share of OS's, but if I want to use a new one, it's back to square one as
I'm poking around for manpages and other bullshit. I don't feel smarter after
learning yet another deranged mind's conventions.

------
raesene9
This article probably needs a (2016) tag. Docker has moved on quite a bit
since then...

------
aaaronic
Pretty weak arguments all around, IMO. Docker has its issues, but most of them
aren't really even mentioned here.

------
bcheung
I think the author misses the point that Docker is a standard and that is
where the value is. The value is the ecosystem and tooling that is possible
once people standardize.

Most of the article reads like "Programming language X is Turing complete,
therefore all other programming languages are pointless."

------
autotune
Personally, and I'm somewhat surprised it wasn't mentioned, I'm less concerned
about everything mentioned in this post and more concerned about the docker
ecosystem involving people using random and potentially compromised containers
off of Docker Hub: [https://arstechnica.com/information-technology/2018/06/backd...](https://arstechnica.com/information-technology/2018/06/backdoored-images-downloaded-5-million-times-finally-removed-from-docker-hub/)

------
Theodores
Silly question, but I have heard that around a fifth of the websites that you
see on the internet use 'Wordpress'. 'Wordpress' is a simple blogging platform
that has a really nice editor that people like working with. There are many
add-ons for 'Wordpress' that enable people to customise what it does and what
it looks like. These can complicate matters, however, at the end of the day,
'Wordpress' is just a neat blogging tool and does not require rocket surgery
to work with.

In this age of fancy build tools and containerisation, is it necessary and
advantageous to develop 'Wordpress' with Docker, Vagrant or any other
containerisation?

I could see this as being helpful if you are only allowed a consumer operating
system, e.g. Microsoft Windows, but is containerisation the way one would
develop a simple Wordpress site if your company's IT department allowed you to
run a linux machine?

Admittedly 'Wordpress' is the Hello World of getting online but I genuinely do
not know if containerisation is what people would do these days for such a
simple use case.

~~~
nisa
Sure, you can do that - [https://themeisle.com/blog/local-wordpress-development-using...](https://themeisle.com/blog/local-wordpress-development-using-docker/)

You can also install something like WAMP on Windows and develop locally
[http://www.wampserver.com](http://www.wampserver.com)

As for the benefit... you can switch e.g. the PHP version by simply editing
the docker-compose.yml file - this is anything but simple on most Linux
distributions.
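
To make that concrete, a local WordPress stack is roughly this (sketch; tags
and passwords made up), and switching PHP is just editing the image tag and
re-running `docker-compose up -d`:

    cat > docker-compose.yml <<'EOF'
    version: "2"
    services:
      wordpress:
        image: wordpress:php7.2-apache   # swap the tag to change PHP versions
        ports: ["8080:80"]
        environment:
          WORDPRESS_DB_HOST: db
          WORDPRESS_DB_PASSWORD: example
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example
    EOF
    docker-compose up -d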

------
sebringj
I find myself using it for development more than anything else, and it is very
nice for that. For example, I have a little universe of very singularly
purposed things that work together in a bundle. Come time to actually deploy it
in production, I would typically have these individual things broken out into
their own clusters of servers, or cherry-pick managed services that make the
most sense for cost/reliability/ease of use, etc., but I'm sure I'm just
inexperienced with all the different scenarios larger orgs would do. At least
for development, it really is beautiful to work with and saves loads of time
fiddling to get some monstrosity of permutation settings just right after
googling many hours to get there. I don't recall ever spending a day setting up
an env and thinking that it was worth knowing how to do, or time well spent.

------
User23
The reasons for using Docker are social, not technological.

~~~
aespinoza
What do you mean? Are you referring to the Docker Hub in particular?

~~~
mixmastamyk
Partly, yes. The other is that it's a tool almost everyone knows, like git.
Even if it isn't the best one, there is value in everyone knowing it.

------
mmagin
I don't know if it was true when the article was written, but at least today
some of what the author writes in
[http://catern.com/posts/docker.html#sec-4](http://catern.com/posts/docker.html#sec-4)
is untrue.

[https://docs.docker.com/engine/security/security/#linux-kernel-capabilities](https://docs.docker.com/engine/security/security/#linux-kernel-capabilities)
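
(Docker drops most capabilities by default, and you can tighten a given
container further; e.g., a sketch with a made-up image name:)

    docker run --cap-drop ALL --cap-add NET_BIND_SERVICE \
        --security-opt no-new-privileges:true myimage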

~~~
catern
From your link:

> One primary risk with running Docker containers is that the default set of
> capabilities and mounts given to a container may provide incomplete
> isolation, either independently, or when used in combination with kernel
> vulnerabilities.

------
sixhobbits
Obligatory link to why "considered harmful" essays might be harmful[0].

I'm not a docker "fan". I've seen one too many "hello-world" projects wrapped
in docker and had to waste 30+ minutes and 2GB HDD space to do something that
should have been a script.

That said, any essay that criticises docker without acknowledging the huge
benefit of a layer between a full VM (more overhead, more space, more time)
and dealing with platform-specific issues (oh, if you're using version 3.5 of
X and 4.2 of Y then you need to roll back the latest update of Z and sacrifice
a goat to the Linux gods) is missing something.

I love the plain HTML and that I could read this in the train on a very spotty
mobile connection.

[0]
[https://meyerweb.com/eric/comment/chech.html](https://meyerweb.com/eric/comment/chech.html)

------
aespinoza
What I got out of this post is a very good insight into how Docker works
internally. And it does raise my interest in containers.

I have been disconnected from the cloud for quite some time, but I have always
been interested in sandboxes, mainly for security purposes. I have used chroot
before, seccomp, apparmor and firejail. This is not related to Docker
directly, but the author makes the bridge for Docker to be interesting to me.

~~~
wmf
If you want to learn more about container security I recommend
[https://blog.jessfraz.com/](https://blog.jessfraz.com/) and
[https://github.com/projectatomic/bubblewrap](https://github.com/projectatomic/bubblewrap)

~~~
aespinoza
Thank you very much for sharing. These links have been added to my bookmarks.

------
VeejayRampay
I'm still waiting for a good Docker tutorial with an actual project using
actual bricks that people use in real life (like nginx, an Express app, maybe
some memcache, redis, LBs, etc.). All the tutorials you find revolve around
mundane things like getting your environment ready and never around the actual
issues that people face when using the tech.

------
fiatjaf
I'm waiting for someone to invent a simple Heroku-on-your-own-server kind of
thing that you can use to run small apps, maybe on top of Docker.

Dokku is not that option. Dokku is a super-complex operating system by itself,
a mystery of stuff glued into each other by bash scripts, full of bugs and
corner cases.

------
apexalpha
"Wordpress Considered Harmful"

Why not host your own blog with Bootstrap?

...

Every tool trying to simplify or standardise something will inevitably be met
with criticism from someone saying 'but it's not hard, you just do <insert
entire manpage here>', completely missing the point.

------
AzzieElbab
I absolutely love docker myself. However, in my enterprise it is mostly
valuable to interns and consultants. They show managers how quickly they can
get things running and leave it to ops to actually get things working.

------
djohnston
if you work by yourself feel free to use whatever arcane shell scripting you
desire, but for the love of humanity, if you are going to expose other people
to your code, please use a standardized tool like docker.

~~~
nisa
the result is arcane shellscripting in the docker image...

------
crooked-v
[https://xkcd.com/1988/](https://xkcd.com/1988/)

------
0x7f800000
There are two certainties in software development:

0. A new technology will arise and it will be named 'X'

1. Eventually someone will write an article titled 'X Considered Harmful'

------
gsich
Docker is the new way of doing "curl|bash".

------
mesozoic
It's a bit like: why use C when assembly exists? Why use C++ when C exists?
Why use Python when C++ exists? They all have uses.

------
hrcxxx
With docker you can adhere to the "Build once, deploy anywhere" philosophy.

------
sabujp
the nice thing about docker is that it's been standardized across multiple
platforms; we even have Windows-native containers now. There's also BuildKit,
which one can use to create a cached/incremental build system

------
codedokode
I think that Docker became popular because of the very limited functionality
of distributions' package managers, which doesn't match developers'
expectations.

Traditionally in Linux there is no concept of "system" and "applications".
There is only one large "system" and you can add parts to it. In the old days,
you downloaded C source code, make'd it and installed it. Now you use a package
manager to extend your system with new features, like playing music or editing
images.

There are no "applications". If you download Firefox package from Debian
repository, it is not the Firefox application; it is a version of Firefox,
tuned, patched and customized for integrating into Debian. You probably won't
be able to run it even in Ubuntu, let alone other distributions.

Paths are often hardcoded; you cannot install a program into your home
directory or onto a USB drive (apt-get might let you do that, but it won't fix
the embedded file paths for you).

It might be good enough for a user (as we see with the Google Play Store on
Android), but it is very inconvenient for a developer. You often need several
versions of a program, for example the PHP interpreter or the Go compiler; no
way a package manager lets you do that, so compile them yourself. You might
want to run an application with a project-specific config rather than the one
in /etc, and start it on demand rather than install it as a system daemon.
That is not easy either. Apt-get and dpkg have literally hundreds of
configuration knobs in the manual, but you cannot choose the installation
directory.

If you want to run an old application, like Firefox 2.0, get prepared for
trouble. While newer versions of libraries like GTK are supposed to be
backwards compatible, it won't run with them (I tried). You will have to
obtain and build old versions of GTK manually; good luck with that. On the
good side, I can add that Debian maintains an archive of old packages, and you
only need to write a custom package manager to install them along with their
dependencies.

Often official repositories have outdated versions of software; you have to
add third-party repositories while giving them full root access to your
machine. Want to install Sublime Text in Debian? You have to trust its authors
because they can replace your sshd and you won't even notice. Also, third-
party repos sometimes break or conflict with system ones.

Package managers are a pain for developers too. They have to maintain packages
for all popular distributions, and even for different versions of those
distributions. Because there are no "apps", you have to integrate your program
into every distribution manually. Of course, you will have to do it again and
again when anything changes in the distro. Distro maintainers often have to do
the same thing, maintaining private patches. And on top of that, different
distributions use different package formats.

Maybe one of the reasons for this is the lack of a standard package manager
and build system for C programs. You cannot download a C program with
dependencies from GitHub and build it with a single command. It is
disappointing if you are used to languages like PHP, where this has been
possible for a long time.

Docker seems to solve many of these problems (and some others, like hardcoded
server port numbers) by using a lightweight virtual machine with a full-blown
distribution in it. Of course, it looks more like a quick hack than a reliable
and well-thought-out solution. Also, Docker requires a lot of resources: disk
space to store multiple images, and memory and CPU time to run unnecessary
daemons inside virtual machines. I don't like it.

By the way, the Linux kernel has a similar problem with drivers, which are
part of the kernel rather than separate entities.

~~~
ben509
> ... because of the very limited functionality of distributions' package
> managers, which doesn't match developers' expectations.

In particular, package managers don't match the expectations of build systems,
because package managers are written by and for ops guys.

A sysadmin working on a system serving a production load rarely wants to
reboot the system, let alone blow it all away and rebuild from scratch.
Moreover, the sysadmin needs to administer many systems, so they don't want
anything being installed in special places; that's just adding more
complexity.

A build system has to account for the fact that devs are tinkering with the
code, experimenting, etc. and need to periodically wipe it all away and run it
with a clean slate.

~~~
codedokode
So basically we need one more distribution, this time targeted at developers?

~~~
zbentley
Brother/Sister, I have come down from the mountain, and behold, I bring news
of a glorious new future, and it is called Gentoo!

Salvation shall be had, in the weeks/years/decades after the first successful
kernel compilation is achieved.

------
Arzh
Anything new since the last time this was posted?

------
IshKebab
I think we've reached the point where "X considered harmful" articles can be
dismissed as easily as headlines that end with a question mark.

------
sleepybrett
... 2016.

~~~
sctb
Thanks, updated.

