
Docker is a dangerous gamble which we will regret (2018) - maple3142
http://archive.is/zkmaE
======
rossdavidh
If I use Docker, I get to put "Docker" on my resume for the next job; if I use
bash scripts I don't. If I use React, I get to put "React" on my resume; if I
use vanilla JS I don't. If I use Kubernetes, I get to put "Kubernetes" on my
resume. If I use AWS, I get to put that on my resume; if I just spin up a
Linux instance on Linode or the like, I don't. Of course, I would have to
convince my boss to let me do all of these. But, he also wants to be able to
put on his resume that he managed developers who used these things, so that's
an easy task. Many of these made perfect sense for the FAANG companies they
were developed for, but they are also used at companies with 0.01% of their
server count, and the basic reason is that both devs and
managers want to pretend they are working at FAANG, so that someday they will
work there.

Again, there are valid use cases for all of these, but there are also valid
use cases for semi tractor-trailers. That doesn't mean I should use one to
get my groceries.

~~~
k__
The problem is, as so often, the system.

If HR goes as low-level in hiring as "React Developer" then people have to
play this game.

If HR hired on something more substantial, it wouldn't matter whether you did
Linode or AWS, React or Angular, Docker or bash scripts.

~~~
mywittyname
Hiring people is difficult, exhausting, time-consuming, and largely a crap
shoot, since it consists of deducing a person's technical abilities from
around 1-2 hours of interaction in a very limited scope.

If we could find a better way, we'd be using it. But, as it stands, recruiters
and HR have only a few minutes per potential candidate to determine whether or
not they meet the hiring manager's criteria. It's unreasonable to expect
HR/recruiters to keep abreast of an ever-expanding list of technologies and
their analogs because that's not their core competency. Unless they are
repeatedly told that GCP experience is worth like 70% of AWS experience
(replace the values and technologies as you see fit), they aren't going to
know that.

~~~
magduf
The solution seems pretty simple to me: get rid of HR and recruiters, or at
least take away much of their power in deciding who to interview. That's the
hiring manager's job. He knows what kind of person he's looking for, so let
him make the decisions. Why would you let some non-technical person decide
who's qualified to be interviewed? It makes no sense. At best, have the
hiring manager give
some very basic things to look for on resumes to the HR person, so they can
screen out the people who are obvious wastes of time.

>If we could find a better way, we'd be using it.

It seems to me that the reason it's so bad is that, at many companies, HR
people refuse to admit that they're incompetent at hiring technical people,
and insist on inserting themselves into the process to a degree which is
highly counterproductive. The only thing HR should be doing is helping hiring
managers find the people they need, and otherwise staying out of the way. The
bulk of the HR person's time should be on other tasks: personnel issues with
existing employees, helping new employees get situated, maybe interpersonal
issues, etc. The hiring manager can afford to spend an hour a day looking
through resumes and giving good ones back to HR to recruit.

~~~
mywittyname
Maybe you've experienced different approaches, but where I've worked, the
hiring manager told HR what skillset they needed and HR would fire over
resumes for the team to look at and decide who to bring in. Our team decided
100% who to hire and who not to.

So, basically what you're talking about already happens at every company I've
been involved in hiring at. HR posts the job posting from the manager,
forwards resumes that look good. They do perform the initial non-technical
interview, but teams handle technical interviews as they see fit. HR is only
involved to present the offer.

Because of that, I took the parent comment to say that HR should be better at
resume fishing. Meaning, they should understand technological equivalencies
better, like MySQL and Oracle are somewhat equivalent or related skills but
Bash and Elasticsearch are not. This is what recruiters are _supposed_ to do;
obviously most are bad at it, but this is largely because few
SWE/SysAdmins/DevOps are going to pivot into recruitment.

------
vorpalhex
Oh geez, having a team of 24 engineers across multiple countries trying to
maintain a common set of bash scripts that work for 12+ unique services sounds
like a disaster.

Instead we have a single workflow for docker, whether it's going out to AWS or
Kubernetes, and all the services use an identical flow. Integration testing
is a cinch, since it runs actual, version-matched postgres/redis/etc on CI
nodes that get spun up on demand, against the same docker image that'll get
pushed out to production.

We haven't had a case of "Well, it works on my machine, why doesn't it work
on yours!?" in a really long time, since if it works in docker... well, it
works in docker.

~~~
leecarraher
Exactly, docker standardizes something that was otherwise a nightmare to
maintain across platforms and organizations. Next we need to standardize the
orchestrator; my money is on kubernetes, but I wouldn't say it has won over
everyone just yet.

~~~
SaasDeveloper12
The industry has standardized on Kubernetes for the orchestrator. If
anything, we are starting to see container runtimes other than Docker now
that K8s has added the CRI (Container Runtime Interface).

~~~
vorpalhex
Well, everyone at least wants to use Kubernetes. I'm not sure any
organization really knows how best to use it yet, and nobody has a set of
standards that has seen adoption.

Both the pro and the con of K8s is that it's exactly what you make of it and
it's extremely flexible.

------
darkwater
The author has the boldness to say that any language that does not produce a
single artifact (a golang binary or a fat jar) should not be used, that such
languages were errors of the past, and that we should get rid of them
(explicitly citing PHP, Ruby and Python).

Now, if we want to think that languages and whole ecosystems (libraries, bugs
and edge cases fixed, developers and companies whose code base is written in
such language) can be changed easily, then the author's take on Docker is right.
In such a world Docker doesn't add anything useful. But the real world is
quite different and it's not going to change for the author, so Docker (or
"containers") is still adding something valuable and it's not a "bet" to me,
at all.

------
freeone3000
The author's approach presupposes a devops "person" (side note: if you have a
person in charge of ops, that person is an ops person, not a "devops" person,
as devops is a process; either your org is on it or it isn't). If you want
developers to launch to production with any degree of reliability, you're
going to need either a lot of tooling support for your specific workflow, or
you're going to use docker and a significant degree of creative blindness. The
fat binary approach works great if you _can_ build a fat binary - python and
node are nearly diametrically opposed to compiled binaries in general, much
less ones that include a list of dependencies. The docker image _is_ the fat
binary that our toolchain is set up to build and run.

~~~
matteuan
You are assuming that developers have enough knowledge of Docker to deploy in
production confidently. Why is it different for other ops skills? What
"degree of reliability" does Docker add over bash, for example?

~~~
freeone3000
It adds full environment reproducibility. Essentially, "it works on my
machine" -- "then we'll ship your machine". Nothing about docker in
particular is special here; if they shipped VM images this would be
identical, but docker streamlines creation and management of the entire OS
image. It
eliminates all differences between the developer's laptop and the production
server.
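
In shell terms the whole pitch is a few commands (the registry and tag are
placeholders):

    # Build once; every environment then runs the identical artifact.
    docker build -t registry.example.com/app:2.4.1 .
    docker push registry.example.com/app:2.4.1
    # On a developer laptop and on the production server alike:
    docker run registry.example.com/app:2.4.1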

------
zeveb
I completely agree. My team has spent the last several years building software
we then deploy with Docker and Kubernetes, and looking back I think that our
software would have been _much_ better deployed as statically-linked binaries
on simple Linux servers.

We're writing in Go, which means we already have the statically-linked
binaries, which means we have a single file which needs to get deployed. What
does Docker buy us? … a single file which gets deployed. We could get the same
effect as Docker Hub by just scp'ing versioned binaries.

Kubernetes is far worse, of course. It is an amazing ramshackle collection of
disparate, ill-fitting parts which make everything 'easy' but nothing simple.
It does buy us some useful things, but I don't think they are worth the cost.

A bunch of EC2 nodes running Debian, with a few dozen-line shell scripts to
copy binaries around, would completely replace our usage of Docker and
Kubernetes and drastically reduce the complexity of our system. We just don't
need a lot of what Kubernetes does (no doubt others do), but we have to pay
for it in complexity anyway.
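
A sketch of the kind of dozen-line script I mean (the hosts, paths, and unit
name are hypothetical):

    #!/bin/sh
    # Push a versioned binary to each node, flip a symlink, restart.
    set -e
    VERSION="$1"
    for host in app1 app2 app3; do
        scp "build/myapp-$VERSION" "$host:/opt/myapp/myapp-$VERSION"
        ssh "$host" "ln -sfn /opt/myapp/myapp-$VERSION /opt/myapp/current \
            && sudo systemctl restart myapp"
    done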

I really do believe that in the future we will see a few companies be very
successful by foregoing Docker + Kubernetes + Helm (yo dude, I heard your YAML
was giving you problems so here's some more YAML to YAML your YAML into YAMLy
submission!), and then a few years later we will all look back at this mess a
little bit like people look at the Pet Rock craze.

~~~
ex_amazon_sde
Amazon (and other FAANGs) use OS packages and do not use Docker or Kubernetes
or anything with all that complexity.

This is a deliberate choice, at least in Amazon.

~~~
notacoward
Maybe you know how it is at Amazon, but your statement is definitely not true
for two other FAANGs. It's well known that Kubernetes is the open-source
version of tools that already existed within Google, and Kubernetes itself is
part of some public-facing products. I work at Facebook, and we most
definitely do have our own equivalent of Kubernetes. It's awful in all of the
same ways IMO. We _also_ build fat binaries (and I believe the same is true at
Google) but that's orthogonal. So one point for your claim, two against. Maybe
someone else can fill in the blanks for Apple and Netflix.

~~~
ex_amazon_sde
I'm familiar with the different implementations at FAANGs. (I cannot disclose the
names.) They are hardly as bloated and vulnerable as Docker+Kubernetes.

> It's awful in all of the same ways IMO.

> So one point for your claim, two against.

It's awful, but at the same time it's a point against my claim...
interesting.

~~~
notacoward
That's not the least bit inconsistent. You claimed that FAANGs do not run
anything with the complexity of Docker or Kubernetes. The best thing I can say
about that claim is that your information is clearly outdated. Some might call
it a lie. Facebook and Google, at least, _do_ use something with equivalent
complexity, and the very thing that makes them awful is also the thing that
refutes your claim. Don't claim to speak for all companies when you clearly
don't know anything about more than one.

------
bilekas
I have to agree a little bit... One other thing that bothers me is when I
don't WANT to run docker. Some packages/services actually make that tricky to
achieve, with time having to be spent reverse engineering them.

Docker is cool, don't get me wrong, but it might be a bit overused sometimes.
And I really don't think it needs to be the one solution for everything.

~~~
collyw
The youngsters at my work keep sticking everything in docker to avoid
packaging problems. It just makes running code in a step-through debugger
impossible so far (I know it's possible, but I haven't had the time to make
it work yet).

One step forward two steps back.

~~~
Frost1x
Recently I started working with other younger devs, and I feel like every
single application they write starts with docker-compose scaffolding.

Containers have their uses, but they're being treated like silver bullets and
end up building new levels of unnecessary complexity in many cases.

~~~
baroomba
I at least put my daemon dependencies (databases, mostly) in a docker-compose
file from the beginning, to keep that cruft off my machine, to isolate
projects, to avoid implicit dependencies on my own platform's package manager
(adding one for the Docker registry, yes, but at least that's not specific to
my environment), and to make the service dependency installation process
super-easy and clean for others. I do this even if I'm running the project
itself directly on my machine, outside of Docker.
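
For illustration, spelled out as plain docker commands rather than a compose
file (the versions and password are placeholders), the setup amounts to:

    # Project-local daemons, pinned and isolated from the host OS.
    docker run -d --name myproj-postgres -p 5432:5432 \
        -e POSTGRES_PASSWORD=dev postgres:11
    docker run -d --name myproj-redis -p 6379:6379 redis:5
    # The app itself runs directly on the host and talks to localhost.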

------
mattxxx
Docker _may_ be over-used, but, when managing hundreds of instances of
machines, it's really useful to have deploys/redeploys/updates that are:

- Immutable (the instance is effectively frozen as an image)

- Has dependencies baked in (don't have to worry about whether the app was
run on xenial/bionic, etc.)

- Identical (people are disincentivized to make ad-hoc modifications to
instances)

~~~
pc86
> _when managing hundreds of instances of machines_

Well there's the rub. Most developers aren't going to manage hundreds of
machines. My current company has thousands of customers, over a hundred
employees, and makes tens of millions in profit. We've got two dozen servers
(VMs) and obviously fewer actual machines than that.

The scale at which you need hundreds of instances is probably when it does
make sense to use something like Docker.

~~~
rjkennedy98
A lot of developers work in big companies.

My last company had 2400 OpenShift CPU cores assigned just to unused OpenShift
projects. I'm sure the number of used OpenShift cores was in the tens of
thousands.

For instance, just a Kafka consumer that needs to sync a few billion records
to Cassandra and Elasticsearch may by itself use 50 cores.

~~~
baroomba
Also, a lot of smaller (and mid-sized) companies end up with tons of instances
and all kinds of complex server infrastructure because their code is mind-
bogglingly inefficient and falls over without it. You either fix their
mountain of broken shit or you throw "cloud" at it.

Relatedly, I've seen a lot of "we need to avoid & transition away from SQL
database servers, they're slow and don't scale" in the wild when the problem
is actually that the way SQL is being used is not just unsophisticated but
incompetent. Hundreds to thousands of queries per page load for no good
reason, useless caching, little attention paid to indices, that kind of
thing. Often Rails is involved. :-/

------
forgottenpass
I don't trust people who like their software tools. I can trust some of the
people who believe that their use case justifies the cost of a tool, as long
as they can also articulate its limitations. If someone doesn't understand the
environment and problem a tool solves well enough to rattle off a laundry list
of good, bad and ugly design decisions, then their baseline analysis doesn't
have any utility.

It's a subtle distinction, because once someone is primed that these are the
two options, everyone thinks they're in the second category. You have to give
someone enough rope and hope they offer up how much they "love" their toys.

I like this article because the container toolset ecosystem needs criticism to
evolve. And we don't see enough of it.

------
sh87
I notice that once you're invested, time- and effort-wise, in doing something
one way, and it works, the general unwillingness to learn different or newer
ways of doing the same things increases rapidly. Overcoming such prejudice
requires understanding and accepting the incentives for investing further
time and effort to get accustomed to newer technology.

Use-case driven technology fit is just one incentive. There are other
incentives like resume enhancement, participation in communities where there
is rapid development, FOMO / fear of being a dinosaur, the reputation of being
known as someone well versed with the latest technologies, monetization via
online courses/tutorials. These are just a few I could think of, I'm sure
there are more.

In my relatively short career, I've seen this play out with NoSQL databases,
microservices architecture, SPAs (AngularJS), NodeJS, responsive websites,
blockchain and more recently, ReactJS, data-science (AI/ML) and
containerization.

Each one of these has, at some point or another, been used in cases where
the problem space was completely solvable without said technology, which was
introduced instead to serve some other incentive.

Say what you will, but this rapid change is what makes and keeps this space
moving, interesting, inspiring and well-paying.

------
marcinzm
Docker lets you have a standard, easy-to-run, reproducible description of a
system (service + all dependencies) as code, for any service. It's a fat
binary for anything, one that is also the build env for that binary. Not all
languages support native fat binaries, and even for those that do, you still
need a way to reproducibly build that binary (CI doesn't count, because local
test builds can't reproduce it).

------
jpochtar
By expecting Docker users to know more than they do, the author concludes
they're dumber than they are.

Docker users are mostly folks who specialize in other areas of programming,
for whom Docker popularized making "fat binaries" out of PHP/Ruby/Python/node
apps via chroot. Chroot was already well known to ops folks, but was arcane to
most webapp devs.

We're not ops experts who have been confounded by marketing; we're ops
amateurs for whom Docker cracked the nut of chroot legibility.

The author hates on Docker in comparison to other fat binary techniques; he
ignores that pro-Docker advocates are comparing it to not using fat binaries
at all.

------
leecarraher
I would like to see more vetting on Docker Hub, and more attention to
security in general. Yes, docker is built on cgroups, which are
security-minded, but not all docker users are aware of this. A novice user
pulling a random docker image and running it as root by default is bound to
cause some issues.
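
A sketch of the kind of hygiene I mean, using standard docker flags (the
image name is made up):

    # Don't run an untrusted image as root with full capabilities:
    # use an unprivileged uid/gid, a read-only root filesystem,
    # drop all Linux capabilities, and cap the number of processes.
    docker run --rm \
        --user 1000:1000 \
        --read-only \
        --cap-drop ALL \
        --pids-limit 100 \
        someuser/random-image:latest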

~~~
shadowgovt
Docker itself is a good idea; a standard repository of other people's opaque
binaries and an ecosystem discipline of "Just pull in whatever you need; who
cares why it works or who maintains it?" probably less so.

------
magware
Docker (and docker-compose) is super useful for local development
environments. Sure, there are problems with docker on windows and with non-
root containers. But docker is way better than most of the other tools I've
used for this (vagrant and whatnot).

A main problem, in my opinion, is that old images are no longer published
(or are removed from dockerhub). I doubt very much that the docker images
being written now will still build in 2030.

But I assume the same happens with Packer + Terraform. In both cases you need
to make sure the image is saved and accessible at a place you control, because
the build will stop working at some point. Also, the nice thing about docker
is that it runs locally (on the developer machine) and in production. I am
not sure how you achieve that with AMIs.

Building fat binaries is only easy for some specific languages and tools. In
docker, it is easy for everything. In the end, docker gives you a
standardized way to deploy (image, environment variables, network, volumes,
and off you go!).

Now, about docker in production: I had lots of problems with that. But if
you use docker locally and don't have an ops department, it is the natural
way to go. You run your integration tests with the same image that later runs
on prod, so you don't need to worry about what stack is running in a
different environment. As a developer with a fully automated CI chain, I can
control the entire deployment from my IDE (Dockerfile, Jenkinsfile, ...). My
colleague can review the entire deployment in the PR (code changes and
infrastructure changes). And docker has a ready-to-use image for most
external software. Those are big benefits for small dev teams in small
companies.

Cloud providers also offer docker tooling, like private image repositories,
and most CI tools support docker. I mention all of this because the tooling
around a container solution is important. You may dislike docker, but it
integrates with most cloud providers and has images for almost everything.

The argument about complexity is true. If something doesn't work and you
have no idea about the parts under the docker layer, then it's game over. But
this is a very general argument against complexity. Complexity in software
makes everything easy until you need to go down an abstraction layer. Nobody
forces you to run your MySQL or Elasticsearch in a docker container; that is
definitely an unnecessary (and potentially dangerous) layer of complexity.

But to run the 10-year-old PHP 5 application that needs memcached version
0.1.5, docker is very, very useful: you spend a week putting all that legacy
stuff in a docker image, push the image, and all the team members can run it
locally. Even the designer who knows a bit of jQuery can run "docker-compose
up" on his Windows machine.

------
cbushko
Docker is the fat binary that he is talking about. It supports
all-the-languages and runs just about everywhere.

The article ignores the fact that companies do not have the time, money,
expertise, or desire to re-write their apps in a language that will compile
down into a binary. There is a reason so many successful companies use PHP and
Ruby. They work and make those businesses money. Taking a golang binary and
building it into a tiny scratch docker image would be fantastic too.
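
That golang-binary-in-a-scratch-image idea is just a short multi-stage build;
a sketch, assuming the Go source lives in the current directory:

    # Stage 1 compiles a static binary; stage 2 ships only that binary
    # in an empty (scratch) base image.
    docker build -t myservice:latest -f- . <<'EOF'
    FROM golang:1.13 AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -o /myservice .
    FROM scratch
    COPY --from=build /myservice /myservice
    ENTRYPOINT ["/myservice"]
    EOF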

In a previous job we ran hundreds of bare metal machines that were built up
using puppet. It was amazing in the sense that it was somewhat standardized
and could scale well. I still wish every service had been using docker. We
had php, ruby and golang services, and getting all of those dependencies
running was a pain. Not only did you have specific machine images to support
each language, you had to support each service type too. If they had been in
docker containers, we could have had one machine type and moved the docker
images around.

------
rjkennedy98
Plenty of companies:

a) do not want to give people/scripts access to the machine.

b) do not want to employ a bunch of people whose sole purpose is to manage
these machines.

Being able to provision (for instance) an OpenShift project in one click to
deploy docker containers solves those problems.

Yes, those are not problems that a startup or a small business has, but these
are problems that all large enterprises with sensitive data have.

Docker is a standard that everyone knows. Is it the best standard? Who
knows, but it is a platform and a standard that has built an ecosystem. That
fact alone makes it better than any product or custom-built deployment
solution that has to be maintained by a team of engineers.

------
phendrenad2
5 years ago I dreamed that IT teams would be empowered to offer us developers
a heroku-like experience (or a JVM application server, for those of you who
remember those). But it seems like Kubernetes, the "new shiny", has distracted
them from that goal, so now they're off in Kubernetes land reconfiguring and
reconfiguring, never increasing the productivity or security of what they
deliver to the rest of the company. Basically, Kubernetes has nerdsniped IT.

------
nojvek
His arguments seem weak. I've used both docker and k8s for a couple of years
now. Sure, there is a learning curve, but because shit is consistent and
sane, we've been able to have better uptimes, and our pages went down
significantly (the self-healing and auto-scaling properties of k8s are a
godsend).

Before, we used to fab-ssh into the servers, manually ask some human at
softlayer to provision new servers, install things with bash scripts, and use
native package managers like pip. Fun fact: pip is not deterministic. Until
package-lock came along, npm wasn't deterministic either. Things that worked
in dev and staging would end up borking prod and we'd be serving 500s. Other
languages' package managers had similar issues.
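
(For what it's worth, pip can be pushed much closer to determinism these
days; a sketch using pip-tools, which is a separate tool, not something we
had back then:)

    # Pin exact versions plus sha256 hashes, package-lock style.
    pip install pip-tools
    pip-compile --generate-hashes requirements.in   # emits requirements.txt
    pip install --require-hashes -r requirements.txt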

I see docker as just a glorified package manager. Make a Dockerfile, build
an image, run the same image in prod/dev/stage for consistency.

That is a huge timesaver.

------
steeve
The archive link is down and the original article is dead, so I can't read
it...

The comments on the article are not kind, however [1][2].

1. [https://www.reddit.com/r/devops/comments/8j9yrn/docker_is_th...](https://www.reddit.com/r/devops/comments/8j9yrn/docker_is_the_dangerous_gamble_which_we_will/)

2. [https://www.reddit.com/r/docker/comments/8jk22u/docker_is_a_...](https://www.reddit.com/r/docker/comments/8jk22u/docker_is_a_dangerous_gamble_which_we_will_regret/)

~~~
maple3142
Maybe you are using Cloudflare DNS? It seems like 1.1.1.1 can't correctly
resolve archive.is [1].

[1]
[https://news.ycombinator.com/item?id=19828317](https://news.ycombinator.com/item?id=19828317)

~~~
steeve
Thank you!

------
nsfyn55
This might be the most short-sighted article I have ever read. One of
humanity's greatest achievements was the creation of the intermodal shipping
container ([https://en.wikipedia.org/wiki/Intermodal_container](https://en.wikipedia.org/wiki/Intermodal_container)).
We literally would not live in the world of convenience we have today without
it.

But imagine what the curmudgeons of the day had to say about it: "Why do I
need to use this specific container? This container has all these
problems..." and on and on.

~~~
redis_mlc
The shipping container was a solution to a problem.

Kubernetes? The jury's still out on that.

------
RandyRanderson
I'm on a project that only needs a single server and storage. Yet we have
k8s, docker, three other needless technologies with all their ancillary
requirements, microservices, and about 5 more git repos than the entirety of
Google needs. Development of simple features takes 5-10x as long as it
should. But then, I understand the developers too: recruiters and team leads,
urged by their developers to "work on the latest tech or we'll leave", have
to adopt it and so ask for people with those skills.

------
phendrenad2
From what I've seen, Docker usually gets used when someone has a
difficult-to-run app, usually due to many dependencies or a cryptic,
barely-documented, jump-through-hoops-of-fire setup. Instead of trying to
improve the app, they've essentially thrown up their hands in defeat and
dockerized it. That's a legitimate strategy in many cases, but too often it's
the lazy way out.

------
birdyrooster
Oof, this didn't age well. Sounds like the OP didn't want to understand the
benefits of Docker at scale in a large, heterogeneous corporation and has
stuck their head in the sand because their use cases didn't need it.

~~~
hacym
I have a nagging feeling THIS comment won't age well.

------
dang
Discussed at the time:
[https://news.ycombinator.com/item?id=17062288](https://news.ycombinator.com/item?id=17062288)

------
jiofih
We should all move to Krubnernetes

------
somurzakov
don't gitlab/azure devops pipelines make docker irrelevant?

~~~
oblio
Aren't those orthogonal (i.e. have little to do with each other)? Why would
Gitlab/Azure DevOps Pipelines (CI/CD tools) make Docker (a container engine)
irrelevant?

