
Docker is a dangerous gamble - lkrubner
http://www.smashcompany.com/technology/docker-is-a-dangerous-gamble-which-we-will-regret
======
raziel2p
I don't disagree that deploying Go or Rust apps (I think the author is wrong
to include JVM in that bunch) is much nicer compared to ones where you rely on
shared libraries system-wide, and may even remove the need for Docker.

However, I think the author trivializes the amount of work required to make
different types of Python/PHP/NodeJS/whatever apps all work in a consistent
way through configuration management, saying "I can just write a bash script
or makefile" or "just download roles from Ansible Galaxy". This is so
painfully ignorant and irresponsible that I fail to take the article seriously
as a whole.

Even if it's just a Jenkins job doing a docker pull && docker run, I've still
seen massive improvements in the maintainability of configuration management
code because there are so many fewer edge cases you have to take care of. No
K8s required for that.
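
To be concrete, that whole deploy step can be as small as something like this
(registry, image name, and port are made up for illustration):

    # pull the freshly built image and swap out the running container
    docker pull registry.example.com/myapp:latest
    docker rm -f myapp || true
    docker run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest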

The author, writing about languages like PHP and Python (and I assume NodeJS
because it also works in a similar way, but he doesn't explicitly mention it
for some reason):

> They arose from an old paradigm, which is not suited to a world of
> microservices and distributed computing in the cloud.

So just because they can't build a single static binary, they're not suitable
for microservices and distributed computing. Got it...

> If the switch away from Ruby/Python/Perl to a newer language and eco-system
> allows you to achieve massive scale with less technologies and less moving
> parts, then you absolutely have a professional obligation to do so.

Wow it's that easy?! Be right back, re-writing all our code in Go over a
weekend hackathon.

This article made me angry.

~~~
endymi0n
The point is: Even _if_ you're using single, static binaries (like we do in
90% of cases with Go), the big benefit of Docker is that you don't just get a
binary, you get a _service interface_.

I can't stress this point enough. If you don't know what's up with a failing
service, any SRE can go into the Dockerfile definition (actually, we're rarely
dealing with Dockerfiles anymore, these days it's more the Kubernetes
definition), look at which ports are exposed, what the environment variables
are, what the service is talking to, etc.

You can harden security between those services much more tightly than between
bare binaries: dropping capabilities, applying seccomp profiles and network
policies, using read-only filesystems.
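
As a rough sketch of what that hardening looks like with plain docker run (the
flags are real; the image name and profile path are placeholders, not our
actual config):

    # drop all Linux capabilities, make the root filesystem read-only,
    # and apply a custom seccomp profile
    docker run -d \
      --cap-drop ALL \
      --read-only \
      --security-opt seccomp=/etc/docker/myapp-seccomp.json \
      myapp:latest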

Also, all the other advice about the drawbacks is pretty dated: while I've run
into my share of super annoying Docker problems myself in the early days (and
yes, especially on Ubuntu, goodbye Device Mapper...), I've yet to encounter a
single Docker-related bug on Google-managed GKE on Container-Optimized OS
since we switched all of our services there a year ago.

I haven't regretted our choice in a very long time.

~~~
molf
Cannot agree with this more; but (as you already hint at) Kubernetes is the
_actual_ win, and I would not be surprised if Docker were eventually replaced
by another container solution in the Kubernetes ecosystem.

~~~
xkarga00
cri-o is a container runtime optimized for Kubernetes.

[http://cri-o.io/](http://cri-o.io/)

------
zamalek
> Docker doesn't require boiling the oceans.

Look, this is from a Windows perspective but I totally get this pro of Docker.
I am the last line of defense at my company when stuff goes wrong, and the
sheer amount of variables that customer environments introduce is frightening.
In one case, some HP server management solution changed a benign environment
variable to something strange, completely breaking MSBuild (which we used
heavily at the time). It took weeks to escalate to me, and hours in a meeting
and ILSpy figuring out what was going wrong. Oceans were boiled by burning
money.

Docker is plug-and-play. We can mandate that customers can't futz with our
image. Our software becomes an appliance that plugs into their network. We can
upgrade from known states, with no HP BS throwing a spanner in the works. I am
pushing it heavily for on-prem for this reason.

That answers the question cleanly: stupid stuff that happens in customer
environments. Don't get me started on how much this lifts off of the ops team
- our Docker stuff isn't yet in production but ops are salivating over it.
There is no installer to run, no xcopy, you just ask Kube to give you an
upgraded cluster.

I strongly suspect change aversion here. Nobody likes their cheese being
moved.

~~~
rspeer
I understand your desire, but do you think the stupid stuff is going to stop
happening in customer environments with Docker?

Are you still upgrading from a known state when you don't know how long it's
been since they upgraded Docker? What if they're on a different version of the
Linux kernel than you, or a different filesystem, causing Docker to work
differently? All kinds of things about the customer environment that can
affect your software are still out of your control.

The problem I've encountered with Docker in my own experience is that, while
it intends to abstract over fiddly details of the system environment it's
running in, it leaks just as many fiddly details, at a different level. If
your goal is a self-contained appliance, you still need something wrapped
around Docker, like a VM, at which point a cynic can ask why you need Docker.

Now, even though I say this, the ops team at my company (who know a lot more
than me) do put Docker inside the VMs instead of just running the code in
them. There is presumably some value to the abstractions it provides.

~~~
tdbgamer
A tool doesn't need to fix _all_ problems to be useful. Yes, Docker doesn't
cover differences in Linux kernel versions but it will take care of basically
every other difference. Now the search space has been narrowed to Kernel/FS
issues. Sounds great to me.

------
lykr0n
I've been tasked a couple of times with debugging issues in 3rd party docker
images. Every one of those images did something different: placed
configuration, binaries, libraries, or logs in different locations. A simple
task on a standard CentOS 7 VM that would take 5 minutes turns into 25 minutes
with most of that googling ("Where the hell is your working directory?!").
It's not a fault of docker per se, but it's the result (I think) of the docker
"it just works!" mentality.

What's great about my examples is that two of the applications were single
binary Go applications that could be downloaded and run on almost any linux
box. But they were distributed with docker ... for reasons?

Docker is great when I want to run a local Redis database in one command for
development. But past that, ehhhh. With each new feature, Docker seems to be
digging itself deeper.
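
(That one command, for reference, is roughly:

    docker run -d --name dev-redis -p 6379:6379 redis

and that really is the whole setup.)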

~~~
013a
I think this is really a problem you see in every technology that tries to
make things easier. JS. MongoDB. The list goes on.

When a technology democratizes access to something that was previously only
accessible to those with more specialized skillsets, it will attract people
who don't know enough to know how to use it right. But, well, they're still
using it. Thus, left-pad.

It's uncovering something that was always there; incompetence is rampant in
software engineering. Ask the average engineer to properly set up an Ubuntu
box with CI/CD for deployment and they'll likely fail, even with all the help
of the internet. Ask them to get a docker image onto a server and they can
make it work.

Wider access to technology is... well, I don't know if it's a "good thing" or
not. We as an industry pretty strongly believe in "fail fast"; we write tests
to make sure our code works, we structure our startups to recognize and pivot
around failure, and if that failure comes we stop and say "something's fucked,
stop what we're doing". So I do get scared when a technology enters the stage
which allows hard failure to be delayed as long as possible. Maybe we need
more technology to be harder, have opinions, and be willing to say "you're
doing this wrong, just stop and go get an adult." Because the alternative...
Equifax? Yahoo? Ashley Madison?

~~~
0xfeba
> It's uncovering something that was always there; incompetence is rampant in
> software engineering.

That's what I took away from the article. Production was set up and they don't
know what kernel version it has, or what JVM version it's using, or which C
library, etc. Or, now we want feature X, so we need to upgrade dependency Y. I
already did it on my local machine, but Bob from DevOps doesn't like upgrading
things on QA/integration/prod without a change control...

Granted, apps have thousands of dependencies now, so keeping track of it
manually is impossible. Docker came along and promised that only whatever's in
this one config file is what's installed. Easy!

So it eliminated lots of the server-specific configuration management but
created other problems.

~~~
lykr0n
I think it really comes down to developer tooling. Devs, in the absence of
tooling, will make their own and default to the latest stuff. If the company
doesn't provide a fully mirrored developer environment, then everyone is
asking for trouble.

Where I work, we have a specific team dedicated to managing this tooling.

We support Python Django, Java, and Go. You can clone the base repositories,
which include vagrant scripts to set up everything. You develop in that and it
gets sucked up into the pipeline. You want something newer? If it works
locally in our environment, the SLA says it will work in production. Doesn't
work locally, then tough shit.

In personal projects I do with others, I try to do the same thing. Python?
3.6 is our version target. Java is OpenJDK 1.8.0 latest. Rust & Go are
compiled with the latest versions. It might not be incompetence in software
engineering as a whole, but more an issue of defining version and software
targets.

------
zzzeek
I have two CI servers for testing against different database versions, and on
each box I run three different versions of Postgresql, two versions of MySQL,
two versions of MariaDB, Microsoft SQL Server for Linux, and Oracle express.
They all spin up from dockerfiles and run in containers. Because they all see
a clean linux host, I can use the standard RPM installers for all of them,
they listen on their default ports, all the datafile layouts are out of the
box from the package manager.

Trying to recreate this system without using containers or VMs or something is
much more difficult, because you can't, for example, install the MySQL 5.6 and
MySQL 5.7 rpm packages on one host at the same time; you have to instead use a
generic binary or build from source into custom /opt/ locations, as well as
make sure everyone listens on a different port, uses unique config, etc.
Oracle express gets extremely upset if things are not exactly the way it
wants, and Oracle only installs from an rpm or deb package. The system used to
work this way before, and moving everything to docker just erased all the
complexity and difficulty; it's now ridiculously easy to maintain, upgrade,
and extend. Postgresql released the 10.x series some months ago; to add it
alongside the 9.x versions, all I had to do was add "10" to an ansible file
and re-run the playbook to spin it up on the two CI workers. All of the code I
use to build these servers is easily understandable and reusable by anyone
familiar with Docker and Ansible.
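
The side-by-side part is the bit that's painful with RPMs; with containers it
reduces to roughly this (tags and host ports chosen for illustration):

    # two PostgreSQL major versions on one host, stock config and datafiles,
    # just mapped to different host ports
    docker run -d --name pg96 -p 15432:5432 -e POSTGRES_PASSWORD=ci postgres:9.6
    docker run -d --name pg10 -p 25432:5432 -e POSTGRES_PASSWORD=ci postgres:10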

Besides that, I use docker containers on hosts where I have, for example,
services running that need to use particular VPNs, using the containers so
that each service has its own network environment. Again, I could instead get
multiple OpenVPNs running all at once on the host and mess with the routing
tables and hope that a new routing rule doesn't break one of the other
services, but sticking everything in Docker containers again totally
simplifies everything.
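
The per-service network trick is roughly this (image and service names are
placeholders):

    # the VPN container owns the tunnel and its own network namespace...
    docker run -d --name vpn-a --cap-add NET_ADMIN \
      --device /dev/net/tun my-openvpn-image
    # ...and the service simply joins that container's network stack
    docker run -d --name service-a --network container:vpn-a my-service-image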

I seem to be good at finding use cases where Docker makes things much simpler
and a lot less work. I'm also using Ansible to orchestrate everything so I
suppose I shouldn't get the author of this post started on that :).

~~~
crystaln
The author's answer would be to reduce your database dependencies to one. Good
luck with that!

~~~
dosethree
Well, that's probably something to do regardless.

------
nickjj
Seems like a really strange article.

This person doesn't like that you need to spend time Dockerizing your app but
then writes this:

> I would recommend that a company look for chances where it can consolidate
> the number of technologies that it uses, and perhaps use modern languages
> and eco-systems that can do this kind of uber binary natively.

So you want me to re-write my 50,000 line Rails app into Go so I can ship a
binary instead of Docker?

That doesn't seem reasonable, especially not when it takes literally 10
minutes to Dockerize a Ruby / Python / Node / "just about any language" app
once you understand how Docker works.
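
For a typical Rails app, "Dockerizing" is not much more than this (a minimal
sketch; the Ruby version and commands are illustrative, not a recommendation):

    cat > Dockerfile <<'EOF'
    FROM ruby:2.5
    WORKDIR /app
    COPY Gemfile Gemfile.lock ./
    RUN bundle install
    COPY . .
    CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
    EOF
    docker build -t myapp .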

That and most web applications aren't just your programming run-time or big
fat binary. There's nginx, Postgres, Redis, Elasticsearch and whatever other
service dependencies your application needs.

I've been developing and deploying web applications for 20 years, and for me
Docker solves a ton of real world problems; I am very thankful that it exists.
Nothing that I've tried prior to Docker has made it this easy to get an app up
and running consistently across multiple environments.

It also makes it super easy for me to keep my development environment in order
which is important because I'm always hacking away on new side projects and
need to keep a bunch of freelance related projects in total isolation with
their own set of dependencies. Docker is perfect for that.

~~~
M_Bakhtiari
> So you want me to re-write my 50,000 line Rails app into Go so I can ship a
> binary instead of Docker?

Why can't you compile your Rails app into a binary? Genuine question, I
haven't used Ruby in ages, but I don't see anything special about the language
that would prevent this.

~~~
RVuRnvbM2e
You can. You can use this tool called "docker", which nicely bundles all your
dependencies with your code, ensures every deployment is identical, and even
makes it easy to use kernel-enforced resource isolation.

~~~
darkarmani
Too bad you have terrible disk IO and problems when your code expects that it
has a normal POSIX filesystem and logs into the overlay system.

~~~
nickjj
> Too bad you have terrible disk IO and problems when your code expects that
> it has a normal POSIX filesystem and logs into the overlay system.

It depends I suppose.

I run Docker for Windows on a 4 year old computer with an SSD and those 50,000
line Rails apps have plenty of speed in development.

Rails picks up code changes in about 100ms. Basically it takes longer for me
to move my eyes to my browser and hit reload than it does for a Dockerized
Rails app to reload the code changes.

Flask, Phoenix and Node apps are similarly fast.

In production this is a non-issue because the code isn't mounted.
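
Concretely, the difference is just whether the source is bind-mounted (a
sketch; paths and ports are examples):

    # development: mount the source so edits show up instantly
    docker run -v "$PWD:/app" -p 3000:3000 myapp
    # production: no mount; the code was baked into the image at build time
    docker run -p 3000:3000 myapp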

------
darren0
There's a sad truth that many of us engineers do not want to accept. The most
successful solutions are ones that do not require quality or diligence from
the practitioner. There are absolutely better ways to do things than Docker.
Pretty much everything about Docker is terrible, but it solves one problem in
a very simple way that most anybody can use: it creates a fairly portable
software package that is easy to create and run.

Now there are tons of issues in the way things are created and run with
Docker, but they will all be sufficiently addressed over time. By adding
enough things around the edges (better orchestration and resource management
tools, better security scanners, etc) the end goals and needs of "business"
will be met. In the end you have a messy solution compensated for by a series
of tools. The pros outweigh the cons.

~~~
notyourday
> The most successful solutions are ones that do not require quality or
> diligence from the practitioner.

Until it goes "Boom". In production. With real $$ on the line. And that's
where the CTO/VP of Eng recognizes that his developers don't actually
understand how their system works and hence can't bring stuff back up in
minutes or even hours.

~~~
zemo
you're conflating the success of the technology with the success of its users.

If the measure of success for a technology is that a lot of people are using
it and working on it, then Docker is tremendously successful. If the measure
of success is that the company building it is profitable, I have no idea. If
the measure of success is that the founders can raise a lot of money and then
take some off the table ... gonna bet successful here; they've raised $248m.
Whether or not it's a good way to run applications is entirely irrelevant
based on those definitions of success for the technology.

~~~
ksk
>you're conflating the success of the technology with the success of its
users.

They were operating on a different definition of success.

------
beepbeepbeep1
Docker as in packaging internal services as containers, then having them
deployed to a cluster which handles orchestration for you: great. Your
internal service (microservice, if you want to call it that) just becomes a
process deployed to a bunch of nodes, and you have a common immutable
deployable asset.

Docker as in downloading random images off Docker Hub and plumbing them
together: not so great. Quality varies and most people don't actually check
what they are really running. Often they are VMs packaged up as a container,
badly.

~~~
jrs95
Yup. And there are other benefits you can get out of it once you have the
cluster -- for example a service mesh can handle metrics, tracing, and circuit
breaking that you'd normally need to implement in each service. That's a huge
pain, especially when dealing with legacy applications and multiple languages
or frameworks.

------
pulsarpietro
I wish there were more people like you around. The real mess I see coming, or,
I should say, which has already happened, is summarised below.

As the IT world loses its freedom through the "productisation" of everything
and the wish for an unreasonable, lofty simplification, our jobs become un-
skilled and paid less, and a bunch of inept people lead the way towards
disaster.

The rub is that there is no single answer; each IT project has its own
peculiarities and history. I mean, of course there are patterns, but Docker
won't solve all problems, nor will Python or Go. It depends.

To unravel the complexity of the current problem at hand you need expert
people, not only product and buzzword sellers/buyers.

I argue that sometimes by hiring good people you would probably save the money
you spend on all the hassle, licenses, and bugs that those "mature" products
come with.

There are some wise companies around, but the reality I see is that they
over-spend on their IT infrastructure as the management, who has no idea,
follows the craze.

All this will come back as more and more of those companies go bust, but we
need to wise up so as not to get hurt ourselves.

The rub is that to run a mid-to-large infrastructure, or to run a software
company, you do need competent people, and there are not too many of them
around, I am afraid.

~~~
untog
Your post seems to suggest that we shouldn't use Docker and instead we should
hire... smart people? Those are two entirely separate things.

> The rub is that there is no single answer; each IT project has its own
> peculiarities and history. I mean, of course there are patterns, but Docker
> won't solve all problems, nor will Python or Go. It depends.

Docker goes a long way to solving exactly that problem, though. It allows you
to cover any variety of peculiarities and history by tailoring each and every
app's environment appropriately. You seem to dislike Docker because it is
buzz-worthy. Sometimes the buzz is onto something.

~~~
pulsarpietro
We are far apart in our view of the industry and that's OK, but I feel I work
in the fashion industry, not an engineering one, which is what I wanted to
stress.

There is too much confusion around and way too much marketing; it all seems
new and it is not, which does not mean it is good. It depends.

------
mcguire
Some good points. One I liked was...

" _However, when you consider the hoops you have to jump through to use Docker
from a Windows machine, anyone who tells you that Docker simplifies
development on a Windows machine is clearly joking with you._ "

(By the way, can anyone give me the 20 second version of how to get a
container to be able to talk to an existing remote database?)

On the other hand, his comments on language seem... schizophrenic. Some other
languages are not "cloud ready", but Go is? Python has always had a
concurrency _thing_, but jumping on Go or Elixir and expecting the magic
fairies to come out of the walls and fix your stuff is delusional.

~~~
sundvor
1) I didn't get the Windows thing. Maybe I'm just using it too simply?

2) Existing remote db: I haven't tried, but I recently came across docker run
--net="host" while looking up host.docker.internal as a way to access services
running on the host from inside the docker image. Would that do the trick?

[https://stackoverflow.com/questions/24319662/from-inside-of-...](https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach)

~~~
Too
Net=host basically defeats most benefits of network isolation you get with
docker. It should only be used for quick hacking. Use forward and reverse args
instead. And as others said, connecting out from the container is possible by
default, with the only confusion that localhost might not be what you think.
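
For the original question about an existing remote database, something like
this is usually all it takes (host names are examples; host.docker.internal is
specific to Docker for Windows/Mac):

    # a truly remote database needs nothing special; outbound traffic
    # from the default bridge network just works
    docker run -e DB_HOST=db.example.com -e DB_PORT=5432 myapp
    # a database running on the Docker host itself (Docker for Windows/Mac)
    docker run -e DB_HOST=host.docker.internal myapp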

~~~
sundvor
Ahh, very good, appreciate that.

------
ironjunkie
Docker in itself is not that revolutionary.

Docker AND Kubernetes combined, though, is such a powerful pattern. They bring
the best of orchestration and binary management. It becomes a single API to
scale horizontally and vertically.

Kubernetes is really what will make containers work.

~~~
jrs95
I'm a big fan of Kubernetes and that's what I use in production, but to be
honest I probably wouldn't if there weren't good managed solutions available.
Nomad & Consul seem much simpler and achieve many of the same things (though
you'd need to do a bit of extra glue around DNS to avoid doing SRV lookups in
your service clients).

~~~
tdbgamer
Did you look into Google App Engine, Heroku, etc.? Those were pretty seamless
managed experiences for me, though of course you pay for the convenience.

~~~
jrs95
Not really, those solutions weren’t quite flexible enough to do what we needed
to do. Google Kubernetes Engine has been great though, not much more effort
than something like Heroku but way more powerful.

------
TimJYoung
...Shaking head...sometimes I just do _not_ understand what is going on in the
industry anymore...

Building self-contained .exes that install with minimal dependencies has been
a pretty easy task under Windows since the mid-90s. My entire company was
founded on this premise, and has been developing and marketing embedded (as
in, compiles directly into the .exe) database engines for Delphi/Free Pascal
for 20 years. Our customers range from small shops to very large corporations,
and they all need one thing: easy packaging, branding, and distribution. Some
of our customers distribute their application to _thousands_ of machines, and
most of the time the entire thing is comprised of one (or just a few)
executable(s). And these are machines that the vendor has zero control over,
so the product has to work in some of the most "hostile" environments one can
imagine.

Under Windows, it's simply:

1) Use a standard installer to package your application executables/DLLs.

2) Make sure that you install all binaries into \Program Files.

3) Store your configuration information under the user's application data
directory (local or roaming, your choice) in an .ini, .json file, etc.

In most cases, that will get you an installation (and application) that works
on any Windows machine back to Windows XP. The only exception is if you need a
Vista+ or Windows 8+ API, but you can code your application to fail gracefully
in such environments, such as falling back to a different API or just trying
to dynamically load it and display a decent error message if the API isn't
available.

So, after all that: why is there such a reluctance to do the same thing on
Linux? Why does everyone want to over-complicate things?

~~~
root_axis
Getting an arbitrary binary to run successfully on Linux is trivial and IMO
much easier than on Windows. The problem comes in when you're trying to deploy
a service that is composed of multiple sub-services e.g. an application
server, a database server, a caching server, and an http server. It's simply
false to suggest that coordinating the deployment of these types of systems
into a production Windows environment is easier than Linux, even without
containerization... _with_ docker there is simply no comparison. There is a
reason why the vast majority of production software systems run on some
linux/unix variant.

~~~
TimJYoung
You're simply making a statement without any proof. I specifically showed how
easy it can be to deploy _any_ application on Windows going back over 15
years. Can the same be said of Linux distributions over the same time period?

~~~
root_axis
Ok. I'll give you the same steps for linux.

1) Install dependencies via package manager.

2) Execute binary.

That's it. I don't need a "standard installer", I don't have to install
application files to an arbitrary location blessed by the operating system, I
can store configuration information in the home-directory or in the
application directory or anywhere that makes sense for the application. This
works in almost every linux system "going back over 15 years".

Now explain to me how I'd deploy an application to Windows with a dependency
on, e.g., two specific versions of IIS, an MSSQL Database and a node
application server to any Windows machine in the last 15 years.

~~~
TimJYoung
That's certainly not been my experience with Linux. Simply trying to get
something to work over multiple distributions is a very frustrating exercise,
especially if we're talking anything with a UI, and backwards-compatibility
with prior versions is a big issue:

[https://unix.stackexchange.com/questions/137434/are-there-an...](https://unix.stackexchange.com/questions/137434/are-there-any-linux-distributions-that-focus-on-binary-backward-compatibility)

Edit: also just found this, which does a way better job than I could of
describing the issues:

[https://github.com/phusion/holy-build-box#problem-introducti...](https://github.com/phusion/holy-build-box#problem-introduction)

Re: Standard installers: you aren't required to use an installer, but it makes
things easier. You could just copy the .exe to a directory and run it. Most
utilities work that way.

Re: \Program Files: you aren't required to install your application there,
it's just good practice.

Re: installing other products with dependencies: you would install them just
like any other product and would use their installer. It's up to them to make
sure that they keep their dependencies in order. I, for one, certainly won't
defend MS in terms of how they distribute their applications. I personally
think they're a rat's nest of overly-complicated dependencies, but that is not
determined/caused by Windows itself.

~~~
root_axis
There is a big difference between deploying code you control and deploying
_other people's_ code you do not control. There are plenty of windows binaries
that fail to run on one version of Windows or another, and obviously DLL hell
is a thing, so I just presumed you were referring to the resources at a
developer's disposal for packaging their own software for ease of deployment
into a production system, and not suggesting that any ol' windows binary "just
works" on any version of Windows, because that is definitely not true.
Regardless of the OS, you have to support your target platforms.

~~~
TimJYoung
I was referring to deploying one's own application.

Obviously, anyone can screw up anything, so the fact that an application
installation won't work on a particular OS instance/version can very well be
an issue with the application, and not the OS. But, that's basically my point:
it's okay if the application screws something up, but the OS should present
consistent and backward-compatible APIs for application binaries, and any
application-specific libraries should be bundled with the application and
installed into application or user-specific locations.

You mention DLL hell: this really stopped being an issue in Windows XP because
of two things:

1) MS made it so that you cannot very easily drop DLLs into system directories
anymore, and _strongly_ discouraged anyone from doing so going forward.

2) MS made an effort to add features like assemblies to allow versioning, etc.
to be used in the case where you absolutely, positively needed to do the
above:

[https://msdn.microsoft.com/en-us/library/windows/desktop/aa3...](https://msdn.microsoft.com/en-us/library/windows/desktop/aa367757\(v=vs.85\).aspx)

However, almost no one besides MS uses assemblies (.NET uses them extensively)
because they're complicated to manage and they're not necessary (this is the
lesson that Docker advocates are not learning). Global, shared user libraries
are a feature for a past that no longer exists, when disk space was at a
premium.

Linux distributions need to a) figure out what a standard Linux API consists
of, and b) make the changes necessary to keep these standard APIs in place
across all distributions (with backward-compatibility). The browser vendors
were able to do this pretty well, and JS in the browser wouldn't work at all
without it.

Finally, my motivation here isn't to bash Linux because "yay Windows!", rather
it's my frustration at watching this go on year after year with Linux, each
year hoping that this kind of thing would get resolved and that I might be
able to start targeting Linux wholesale. I just cannot understand why this is
not a priority...

------
regularfry
Docker is just "write code and throw it over the wall to ops" 20 years after
J2EE. It was a bad idea then. It will be instructive to see if the differences
in how the seams have been cut are enough to prevent it from being a bad idea
now.

------
nurettin
Someone could have easily written the same article with a slightly different
title, "fat binaries are a dangerous gamble", and gone on to swap the opposing
ideas in every sentence. And the icing on the cake: with fat binaries you need
to write and maintain wrappers for services, embed files, stream resources and
do a lot of other contortions as soon as your system is more complex than a
game of tipcat.

------
reacharavindh
I suppose Docker is good as a software packaging solution because the OS
packaging systems "failed" to be simple enough.

If there was an easy way to create a software project -> build -> distribute
the result to several OSes -> expect it to run the same way config-wise, then
there would not be a need for a tool like Docker.

But, as with all new technologies, there are always idiots who misuse the new
power. Building Docker images for deploying a single Go binary is about as
idiotic as it gets.

Many Docker use-cases can be solved by a basic script that sets up the
software, runs it, and cleans up after.
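
Something along these lines, entirely schematically (URL and paths are made
up):

    #!/bin/sh
    set -e
    # set up: fetch and unpack a release
    mkdir -p /opt/myapp
    curl -fsSL https://example.com/myapp-1.0.tar.gz | tar xz -C /opt/myapp
    # run it
    /opt/myapp/bin/myapp --config /etc/myapp.conf
    # clean up after
    rm -rf /opt/myapp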

~~~
tannhaeuser
Yes, Docker is mainly a consequence of Linux package management practices, and
in particular shared libs and dependency management, being generally
misunderstood and perceived as unfit for the job. Docker is perceived as
simpler because it doesn't solve the central problem at all: that of
maintaining shared libs for security updates. Using Docker is equivalent to
using statically built apps (up to the fact that Docker needlessly runs as a
daemon and with root permissions).

*) That, and tens of millions of investor money/ad spend to implant the idea into people's heads that you need Docker

~~~
kakwa_
Docker is a bad solution to a real problem: software engineering is still
immature and we are not capable of maintaining stable interfaces/APIs.

As a consequence, it's a nightmare to update a system without some kind of
regression, and it's also a nightmare to make sure environments are close
enough to be representative of production.

Docker kind of solves this issue by bundling every dependency (a bit like a
big java .war file, a python virtualenv, ruby bundler, or even some
LD_LIBRARY_PATH trickery or static compilation with C/C++).

But this approach is wrong. Very soon, you have 69 frameworks in production
each in 42 versions at least, with 13 installation patterns, so you cannot
really scan the containers, and you start giving up on maintaining this huge
matrix.

And at one point, you get some old (CentOS 6) containers failing to boot
anyway because you updated the underlying OS (the last bit you're able to
manage), and it disabled something this old container was relying upon (like
its libc needing the old and somewhat dangerous vsyscall).

~~~
mping
Stable APIs are not strictly an engineering problem; they're also a business
problem. In fact, most of the engineering that occurs is due to business
problems. Most of the instability around software engineering comes from
business requirements - they pay engineers to do things faster, with fewer
errors, better. So we ditch the old versions and build new ones. And the world
goes round.

------
danharaj
Tools like guix and nix are better conceived over the long run.

~~~
arximboldi
I came to say the same. While they are not exactly competing solutions, for
the most typical use-case (i.e. packaging, configuring and deploying software
without dependency hells) Nix is simply so much better and less wasteful than
docker. Pure bliss.

------
exabrial
I think the most value in docker is "infrastructure as code", basically the
way to configure a unix system in a consistent language, but that was a side-
effect of docker, not its main purpose.

I see two things missing:

Systemd can already apply cgroups to a process. If it could also do a process
firewall and/or use vnet interfaces for a process, that would be incredible.
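
For instance, resource limits alone need nothing beyond a unit file (a minimal
sketch; the unit name and limits are made up):

    cat > /etc/systemd/system/myapp.service <<'EOF'
    [Unit]
    Description=myapp, resource-limited via cgroups, no Docker involved

    [Service]
    ExecStart=/usr/local/bin/myapp
    MemoryMax=512M
    CPUQuota=50%
    EOF
    systemctl daemon-reload && systemctl start myapp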

The next thing is a cross-platform definition language for installing and
initializing a process: "I need an ldap server with this init script setup".
Basically, bring the IoC, DI, and Hollywood Principles to IaC.

~~~
coredog64
> If it could also do a process firewall and/or use vnet interfaces for a
> process, that would be incredible.

Please, for the love of FSM, don’t give Poettering any more ideas for
responsibilities to stuff into systemd.

------
jrs95
Docker has been a huge improvement for me over the past 2 years or so. Both at
a relatively larger startup that made the transition from using some of the
tools mentioned in this article, and more recently on a small team _that
doesn't even have anyone dedicated to doing operations_. We've all pitched in
where we saw opportunities for improvement, but pretty much right after
adopting GKE we had a pretty nice way of managing our deployments.

------
caleblloyd
In 5 years I think that we will have a container ecosystem that is just as
stable as the Cloud ecosystem. Cloud platforms were revolutionary because they
made setting up VMs, virtual networks, load balancing, etc possible to do in
an automated and on-demand fashion. Container orchestration allows for many of
the same concepts with a smaller footprint.

The ecosystem is still maturing, but many organizations are adopting
containers as a catalyst for advancing their DevOps efforts. Some may want to
wait longer than others for the ecosystem to mature, some may want to abstain
altogether. It's not a radical concept though, so I disagree that most will
"regret" containers being mainstream in 5 years just like most don't regret
Cloud platforms being mainstream today.

------
nicodjimenez
This article ignores the fact that 95% of the budgets of most software
companies goes towards writing code, not dev ops. Assuming Docker is indeed a
leaky abstraction that is painful to use in production, this still doesn't
matter as long as it makes software developers sufficiently more productive,
by lowering the friction of developing new software. The development use case
for Docker is so undeniably strong that it cannot be ignored. Even if you're
running Chef scripts for production servers and have no intention of switching
to something else, Docker can still be a fantastic investment for dev teams
who have to deal with lots of dependencies. It does seem to me like dev ops is
a dying art, for most companies at least.

~~~
zwischenzug
Got a source for that 95% figure?

~~~
nicodjimenez
Haha I do not, sorry for the random stat. I think my point still remains
though. For most companies, more value is created by software developers than
by dev ops engineers. You disagree?

~~~
zwischenzug
People have told me before that maintenance costs 10x as much as development.
That was based on some studies IIRC. My experience has backed this up. But it
depends what you mean by 'dev' and 'devops' also, I assume.

------
JonasJSchreiber
I respect the author's argument, and I am persuaded a bit, but my team is
already using Swarm in production and it's stable. Would the author, or
proponents of the author's argument, suggest we backtrack and if so, how?
What's the migration path?

------
ropeadopepope
This may be a little off topic, but has anybody watched Casey Muratori's
proposal on SoC ISAs[0]? The video is long, but it's worth watching all of it.
It sounds to me like a SoC ISA could make docker obsolete. If it ever happens,
anyway.

[0]:
[https://www.youtube.com/watch?v=kZRE7HIO3vk](https://www.youtube.com/watch?v=kZRE7HIO3vk)

------
KaiserPro
Docker on its own is fairly innocuous.

The problem is docker on its own is a gloriously complicated chroot with a
cgroup wrapper. If it was just that, I think it'd be ok. However, the horror
around storage (overlayfs, the mangling of DM, and the avoidance of a real
filesystem designed for snapshots) is annoying as hell.

However, what people think of as docker isn't docker: it's the orchestration
layer. The problem is, there is no one orchestration system that fits all. For
example, AWS batch is a reasonable orchestration system if you just want to
fire off a bunch of vaguely related tasks. However, it's terribly limited
compared to Pixar's alfred or tractor(1/2).

K8s is a mess of config, instantiation, orchestration and logging; it's a
complex beast which is difficult to tame cheaply (unless you use GKE, but then
you still need to program _for_ it.)

I am currently trying to tame AWS batch to perform actual batch jobs, and
write the support tools needed to make it useful. Of all the things that
docker does, the only _useful_ thing it provides is the chroot+tar wrapper.
However, that's not worth the massive penalty of using AWS's bollocks lvm+dm
horror for storage.

Which leads me onto storage.

Docker's dm+lvm is the worst of both worlds: it gives you the appearance of
thin provisioning, but doesn't actually give it. You have the penalty of IO
redirection, but no gain. I now have to waste money by creating an AMI that
uses ECS and ZFS.

------
sjellis
I think that it is telling that Red Hat have been working for quite a while to
get Docker out of the stack: containers are a great idea, Docker is not great
technology. The last piece is Buildah, a tool which builds and runs containers
without needing a constantly running service.

There's a recent presentation on Kubernetes without Docker, Buildah and the
rest of this work here:

[https://www.youtube.com/watch?v=BeRr3aZbzqo](https://www.youtube.com/watch?v=BeRr3aZbzqo)

------
nathan_f77
I think the author should take a look at Convox [1]. Convox is really the
"holy grail" PaaS that I've spent many years looking for. It's a self-hosted
Heroku on your own AWS account, and it uses Docker. You can set up a
production-ready deployment in about 10-15 minutes, including RDS, SSL,
logging, high availability, and auto-scaling. The cost starts at $67 per
month: 3x t2.small instances for your highly-available cluster, and 1x
db.t2.small RDS instance. (convox/rack [2] is a free and open-source project.)

I've worked with OpsWorks (Chef), Saltstack, Puppet, Ansible, and Capistrano,
and Convox is a breath of fresh air. I hate Chef with a passion, since I've
had to maintain a cookbook for a client that is using OpsWorks. It's ugly,
painful, and extremely slow.

Docker is a great tool, but it's not a PaaS. You need a layer on top to make
it useful for production. But I do love having a CI image that is identical to
production.

[1] [https://convox.com](https://convox.com)

[2] [https://github.com/convox/rack](https://github.com/convox/rack)

------
blauditore
I'm still trying to wrap my head around what Docker really is, and how/when I
should use it.

Say, I have a JVM-based web application along with a database, like Play/Scala
with some MySQL or H2. Is there any good reason to dockerize anything, or am I
better off running them in bare form on a server?

The applications themselves (or the JVM as platform) should already act as a
good enough abstraction from the underlying platform, right?

~~~
crdoconnor
It's occasionally useful when you have what I call "misbehaved" applications
that have certain picky requirements regarding environment - system libraries
(e.g. specific oracle version), absolute path locations, etc. and fulfilling
those environmental requirements may conflict with other applications if you
installed them both on the same box.

Better behaved applications (statically compiled golang apps being one I
guess) will not be so dependent upon a specially configured environment.

I think it gained popularity originally because this is a big problem on
development environments and docker was the first (possibly only) technology
to really try to solve it. It doesn't _solve_ the "works on my machine"
problem and it doesn't do what it does particularly elegantly, but it gets
part of the way there.

I think the reason people then ported it on to production was because they
liked consistency between environments, not because it is especially well
suited to production environments. The creators obviously encouraged people to
use it in prod (so now it's "best practice") because there's more money in
your prod environment than there is in your dev environment.

I still think the whole ecosystem is somewhat shoddily put together and the
culture is cargo cultish, so I try to avoid it where possible.

------
strken
I think I'm missing something here. What is it that makes docker complicated
or dangerous for deploying applications?

I don't understand how the argument goes from docker being unnecessary and a
little bit silly to docker being actively dangerous.

~~~
jaequery
Well, it can be dangerous if the image maintainer were to put something
malicious in the build, like logging all requests and sending them somewhere
else.

Although the same can be said for any package management libraries (npm, apt,
etc), because Docker works at a server level, it opens up a whole new class of
exploits against your app.

~~~
robbyt
Running untrustworthy code can be risky? Nothing new here. This is not
Docker's fault. Anyone can download and run untrusted code, even without
Docker.

~~~
jaequery
Yea, and that was exactly my point

------
mattbee
In the middle of this ramble:

"I wish developers were more willing to consider the possibility that their
favorite computer programming may not be ideal for a world of distributed
computing in the cloud. Apparently I’m shouting into the wind on this issue."

Maybe understand someone's job before telling them what tools they should be
using? Sure, simpler is usually better engineering. But results are important.
Docker is helping a lot of developers get results, and the author is clearly
not one of them.

And in conclusion:

"Docker strikes me as a direction that one day will be seen as a mistake."

Name one technical movement that wasn't! I'm not sure why it's such a shock
that programmers are "fashion-conscious". It matters more that we put our
collective effort behind _something_ with a few years vision, not necessarily
what that something is.

(I speak as a member of the minority waiting for the phone industry to get
over its obsession with black glass rectangles and get back to making proper
phones for grown-ups that come with proper keyboards, rather than pandering to
the teenage dem~~increasingly loud static hiss~~)

------
zshrdlu
The article starts off with the standard _right tool for the right job_
mantra, but I think the author severely misjudges the number of scenarios
where container orchestration is a superior tool.

The author also seems to downplay the simplicity and ease that container
orchestration brings to the deployment and management of distributed
infrastructure.

------
Ceezy
This article is very long, and the arguments very thin. I'm against Docker in
production. Still, I think it's great/quick locally, like vagrant.

------
markrendle
I'm wondering if I don't get your point because I'm too dumb or too smart.

I may be too dumb[1] because I'm unable to infer for myself all the reasons
Docker is bad that you haven't bothered to include in your article.

Or I may be too smart[2], because I found setting up Docker for Windows
incredibly easy (I downloaded an installer and ran it), I find Dockerfiles
easy to write (most fit on one screen), and I find running Kubernetes in
production (on Azure AKS) to be straightforward (write YAML files, Google
stuff occasionally).

I honestly had to scroll back to the top of your post from halfway down
because I thought maybe it was written three years ago.

[1]I'm not.

[2]I'm not this either.

------
chunkyslink
"Docker. The source of, and the solution to, all of our platform architecture
problems."

------
agentultra
I don't think I have much to agree with here. I do agree that one should be
aware of the trade-offs of any given solution. And I agree that there is no
such thing as a free lunch.

I do disagree that "devops" is a job role and the sole responsibility of
particular engineers on your team.

So what does Docker bring to my team? Well, it means repository maintainers
have control over how their software is packaged, and our deployment systems
don't have to worry about configuration management on the host systems we
deploy to. That saves our business time and money. It has instead allowed us
to focus on high-level A/B canary deployments.

Though Docker _today_ is much better than Docker just a few years ago.

I've used those third-generation configuration management systems to deploy
services across hundreds of machines across two data centers. Usually without
a hitch. It works great. But we were deploying a homogeneous suite of services
that were all written in the same language ecosystem. Never had a problem.

This is why I think it's important to think of _devops_ as an approach to
structuring teams. When the people who run and operate the software are on the
same team as the people who write it then the trade offs that make sense
become much clearer.

------
Annatar
None of these problems would exist if developers packaged their software and
the configuration in operating system packages. Then it’s just a matter of
plugging those into their OS’s respective provisioning technology. I suspect
the issue isn’t that, but lack of shell scripting expertise. In the end one
has to shell script anyway, even inside of systems like Docker, so prolonging
the inevitable is irrational.

------
jchw
If you're talking about Docker vs. fat binaries, you're missing the best part
of Docker: container scheduling.

If it were just about isolation you could always tarball up some chroots in a
very similar fashion to Docker. But it's not just about application delivery -
it's about the whole process from development to production, code to ops.
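
(The chroot-tarball route, for reference, looks roughly like this; the tool
choice and paths are illustrative:)

    # build a minimal root filesystem, ship it as a tarball, run it chrooted
    debootstrap stable /tmp/rootfs
    tar czf app-rootfs.tar.gz -C /tmp/rootfs .
    # on the target host:
    mkdir -p /srv/app && tar xzf app-rootfs.tar.gz -C /srv/app
    chroot /srv/app /usr/bin/myapp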

Beyond that, I don't think I'm going to "regret" Docker. Yes it's overly
complicated, has a bad track record, and so forth. So? It's the best option we
have right now. Something better will probably emerge and we'll move to it. I
was banking on rkt myself.

But until then, I'd rather get the best we have even if I know it could be
better.

~~~
KaiserPro
Docker doesn't have a scheduler. K8s does, ECS does, Fargate and GCE do.
Docker does not. (Swarm counts, but it's horribly limited, like ECS.)

~~~
jchw
Well, Docker does have a scheduler, which you mentioned. Swarm is in the same
binary as Docker. But I'm talking about the ability to _use_ a scheduler,
something a fat binary doesn't give you.

------
zabana
The author mentioned bash scripts and make as an alternative to Docker in
production environments. Does anybody have experience with these techniques
(or can perhaps link us to articles/tutorials on how to implement them)?
Thanks in advance.

------
mdekkers
This essay is excellent, and to me it clearly highlights the difference
between an experienced architect that has been there, done that, and got the
battle-scars, and those that have not. The attitudes and behaviours Lawrence
describes are depressingly common in the industry - where the bandwagon is the
most common form of transport - to the detriment of us all.

I recently interviewed a candidate. When discussing languages in use, I
mentioned a language this person probably didn't like all that much, or didn't
have that much proficiency in. His comment wasn't "why did you choose to use
that" or something along those lines, but "when are you getting rid of it".
The interview came to a close pretty quickly after that. There was little
interest in why these choices were made, or why perhaps that language was
deemed to be the best tool for that particular job. I really wish this was an
outlier, but it is the prevalent attitude. Every day I work with developers
that have little regard as to what tool is the best for the job at hand, and
approach problems with the idea of "Well, I have this hammer here, and I
really know how to use it well, I'm pretty sure I can ram this screw into the
board with it" and subsequently proceed on a mission to convince everyone that
the hammer is the only tool worth considering. I suspect that the deeper issue
is either supreme overconfidence and arrogance, or a mask for deep insecurity.

Whichever one it is, the issue remains. I have had devs insist that "docker is
the _only_ way in which I can deploy my app, we need Docker RIGHT NOW".
Container orchestration is being deployed at $work and we have published a set
of guidelines to make sure that developers have an idea of how to ensure their
apps will function properly in this environment - it is much along the lines
of 12-factor apps, with some tweaks. The pushback I have received from some is
that "with docker it shouldn't matter, and whatever I do on my PC you can just
pick up and drop in place", with a complete lack of regard for the fact that
their PC is not the same as our production environment. After discussion it
becomes clear that this is rooted in a combination of ignorance ("I don't know
how to do that") and laziness ("I made this app, it works on my PC, it is not
my problem anymore").

Doing battle with the tech-fashionistas is a regular thing. No, we are not
going to re-write all our applications in C# because we have an intern that
can only use C# (an actual discussion). No, we cannot randomly deploy
everything on Docker because that is the current hype du jour. And no, we are
not going to back up petabytes of data to the cloud because everyone is doing
it - laugh all you want about tape, but it fits our use-case better and is
orders of magnitude more cost-effective.

One of the things that Lawrence mentions really resonates with me:

" _The guiding rule should be “What is the simplest way to do what we need to
do?” If the older technology gets the job done, and is the simpler approach,
then it should be preferred. But if there is a new technology that allows us
to simplify our systems, then we should use the new technology._ "

I wish we could all get behind that....

~~~
cookiecaper
> The pushback I have received from some is that "with docker it shouldn't
> matter, and whatever I do on my PC you can just pick up and drop in place",
> with a complete lack of regard for the fact that their PC is not the same
> as our production environment.

Yes. This misconception is the main promise made in Docker's marketing, so
it's no surprise that people who've gobbled it up are unhappy when confronted
with the fact that the pixie dust and the lands of eternal rainbow-sunshines
are fantasies. (Sidenote: it's sad how non-admin-capable developers interpret
the promise of demoting ops to mindless button-pushers as fantasy fulfillment,
but that's an issue for another time).

Docker had a massive bonfire of VC money to perpetuate these false beliefs,
and the message has been magnified by BigCos like Google, who've placed it at
the center of their corporate strategy to retake cloud.

Unfortunately, in most cases, the Direction-Setter is not going to take the
credit for being misled and needing to double back and fix it. They just lob
it over the wall to RealOps, courtesy of DevOps(TM), and expect them to be
grateful for it.

Meanwhile, RealOps's job is much harder than before, because you have the
ignoramus shouting "What are you talking about?! You just don't know how to
use this revolutionary new technology from DOCKER and GOOGLE! Are you smarter
than Google, Bob? I coded up a whole Dockerfile over lunch!"

We have a lot of naive, self-important dilettantes in tech these days, trying
to act like the 4 minutes they spent skimming the Docker manual make them
smarter than the greybeard who just doesn't have the time for every painfully
empty fad on the block.

We should talk about how we can stop that infiltration.

~~~
Annatar
Yes, we should. Docker is just a red herring. This is the real issue, and the
root cause of Docker.

------
dvfjsdhgfv
> You can have thousands of CentOS servers, and you can use an old technology,
> such as Puppet, to be sure those servers are identical.

I surely do feel old reading this. I actually remember being excited about
Puppet as the new thing.

------
jmspring
Them: “This app requires Nginx and PostGres and Redis and a bunch of
environment variables. How are you going to set all that up without Docker?”

Honestly, if the developers are thinking at this level, and management doesn't
understand pushing back and separating things out such that the app developer
should only worry about "redis" or "postgres" or maybe both, exposed as
environment variables to that part of the app... the company isn't doing a
good job of understanding its stack.

~~~
the_new_guy_29
Keep in mind it was an example provided by the OP.

------
hguhghuff
Docker is why I have lost interest in working in software development.

~~~
dstroot
Explain.

~~~
hguhghuff
I don’t like docker, it’s a complex kludge, but it now pervades every aspect
of development and operations. I just don’t want to work with it, which means
I don’t want to work in most computing environments.

~~~
bearjaws
"I don't like _technology_ , it's a complex kludge, but now it pervades every
aspect of development and operations" See how silly this sounds?

All new technologies are inherently more complex than the last ones, solving
some problems and introducing others...

~~~
hguhghuff
I totally disagree.

I think that technology is driven mainly by waves of simplification, not added
complexity.

Indeed, this is exactly why I don't like docker: it introduces fresh new deep
complexity.

~~~
talonx
Surface simplicity often comes at the cost of hidden complexity underneath. As
long as users are not exposed to the underlying complexity, everything is
rosy. But for things like Docker, I don't think that's a reasonable
expectation.

~~~
placebo
No, surface simplicity should not mean sweeping the complexity under the rug.
It should mean exposing the bare minimum at the level of abstraction being
used, and doing the same for the level beneath it, etc. This is how you get a
scalable and maintainable structure which can do very complex things.
I only know Docker from a high level (the concepts and the problem it tries to
solve) so I'm not in a position to give an opinion about its implementation,
but I think a reasonable expectation of every technology is that it should be
as simple to use as the use case requires.

Of course, sometimes the problem isn't the technology but the way it is used.
You won't try to make a commercial airliner suitable as a quick means of
transportation to your local convenience store. It's not even overkill - it's
just the wrong tool for the job. The problems start when that's considered the
cool thing to do. Unfortunately, technology isn't always about what's good,
but many times about what is trendy, but things will eventually converge to
what works best, even if they have to go through the longer path.

------
dosethree
This article goes totally against my personal experience: maintaining a suite
of applications in varying languages across prod and dev is made a lot easier
by containerization and by standardizing on top of it. The same can be done
with VMs, but the container orchestrators (Kubernetes etc.) are usually
closer to the level of abstraction the developers need than tools like
Puppet.

------
Lazare
My comment from the earlier discussion was quoted:

"Let’s say I’m working on an existing code base that has been built in the
old-style scripting paradigm using a scripting language like Ruby, PHP, or
(god help us) node.js.

…I can just about see how we can package up all our existing code into docker
containers, sprinkle some magic orchestration all over the top, and ship that.

I can also see, as per the article, an argument that we’d be much better off
with [uber] binaries. But here’s the thing: You can dockerise a PHP app. How
am I meant to make a [uber] binary out of one? And if your answer is “rewrite
your entire codebase in golang”, then you clearly don’t understand the
question; we don’t have the resources to do that, we wouldn’t want to spend
them on a big bang rewrite even if we did, and in any case, we don’t really
like golang."

And the reply was:

"In this example a company has had a PHP app for a long time, and now it needs
to Dockerize that app. Why is this? What has changed such that the app needs
to be Dockerized now? This feels like an artificially constrained example. How
did the app work before you Dockerized it? What was the problem with the old
approach, that you feel that Docker will solve?"

But I think that totally missed my point. Lawrence has written a compelling
argument about how wonderful what he's now calling uber binaries are. I'm
sold! I want them! But _I cannot make an uber binary for a PHP app_ (as far as
I'm aware). I can dockerise it, and sure, an uber binary is much much better
than a container, but maybe a container is still slightly better than a
traditional app. Dockerisation is _possible_ (for most people); uber-
binarification is impossible (for many people). If you're writing an article
about how uber binaries are better than containers and you miss out on the
biggest advantage containers have over uber binaries for many people, your
article is not going to engage with people the way you're hoping.

And for the record: We don't think we need to Dockerize our app, we haven't
dockerized it, and we're not dockerizing it. We have zero containers of
any type in production; we rely on some Puppet templates and a small handful
of hand-rolled shell scripts, and it works great. We're docker skeptics and
agree 110% with everything Lawrence is saying here, including the bit where he
talks about how great uber binaries make deployments.

"What was the problem with the old approach, that you feel that Docker will
solve?"

Man, don't ask me, you're the one who wrote an entire blog post about it!
(Admittedly, the post was about how uber binaries are even better than
containers at solving these problems, but as above, I can't _use_ uber
binaries. I can use containers. And the more you talk about how uber binaries
are like magical super-containers, the more it makes me wonder if we're
missing out on something with our repo of Puppet templates and bash scripts.)

~~~
ballenf
I just took his point to be, much simplified:

Moving to docker is not free, and the team should honestly evaluate whether
those resources would be better spent making fixes to the underlying problem,
instead of using docker to sweep the problem under the rug while getting
credit for being modern and utilizing the best and latest tools available.

His final response was not to you, but to all the people dockerizing php apps
without honestly tallying up the real cost/benefit due to a biased affinity
and comfort with languages no longer suited to the changing world.

That's just my tl;dr of his argument, not an endorsement.

~~~
Lazare
Yes, and I agree with that point, but my original comment was made in the
context of an article that (in my view) was making a very _different_ point,
and I don't think my comment makes sense out of that context, nor do I think
the author really engaged with what I was saying. Certainly the rhetorical
questions which immediately followed his quotation of my comment were totally
irrelevant to anything I believe I actually said.

------
intellix
Definitely agree that we should move towards languages that let you bundle
and package the whole thing into a single binary.

Damn, Docker is so good for local development at least. I was showcasing a
script for putting metrics into InfluxDB. The guy looked at me like I was a
freaking sorcerer when I had it running in seconds with one command: `docker
run influxdb`
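
For reference, the day-to-day version is usually only slightly longer; a
sketch assuming the official influxdb image with its 1.x-era defaults (HTTP
API on port 8086, data under /var/lib/influxdb; the volume name is a
placeholder):

    # Detached, with the HTTP API published and data persisted
    # across container restarts.
    docker run -d --name influxdb \
        -p 8086:8086 \
        -v influxdb-data:/var/lib/influxdb \
        influxdb
    curl -i http://localhost:8086/ping   # InfluxDB 1.x answers 204 No Content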

------
math0ne
I think of Docker as a learning tool: it is a repository of systems others
have created that you can learn from. I agree you can use any build system or
write scripts by hand and get similar results, but Docker has created a
standardized ecosystem from which you can quickly learn how to do just about
anything.

------
sundvor
TFA: "when you consider the hoops you have to jump through to use Docker from
a Windows machine"

What are those hoops? I'm curious / slightly worried as to what I missed.
Surely the author didn't mean enabling Hyper-V / virtualisation? Thanks.

~~~
davegardner
I've been using Docker on Windows for well over 2 years. It was certainly
difficult setting it up in the old Docker Toolbox days (circa 2016), so
perhaps the author's experience is from then.

Since the Docker for Windows client was released I've had no problems at all.
(In the interest of full disclosure, there were a couple of early beta issues
that were quickly fixed; however, that's to be expected.)

~~~
sundvor
Ah, cheers, so it was probably that: running "Docker Toolbox" in VirtualBox
was indeed not a very good experience (the mappings were a royal pain), but
since this was a 2018 article I assumed we'd all be talking about Docker for
Windows. The author lost me with that, then.

It's been very smooth for me as well, especially after figuring out how to
write docker compose files. Which is arguably not hard at all, once you get
used to them. :)

------
sorisos
Regarding "uber binaries" vs containers. Doesn’t containers like docker
provide other license options? i.e. you can mix GPL and proprietary licences?
I'm no expert on this, so please correct me if I'm wrong!

~~~
lmm
What matters from a copyright law point of view is whether you've formed a
derivative work or not. There are a wide range of legal views on when exactly
that happens. But almost everyone agrees that the technical details of how
your program talks to some other program that it makes use of are irrelevant.
If you're linking into a single executable but the parts only invoke each
other using a simple, generic interface, there is most likely no derivative
work and license compatibility doesn't matter. If you're running two separate
processes communicating via pipes, but this is a deep integration where you
pass complex data structures back and forth, then most likely they form a
derivative work and you do need to think about license compatibility.

------
mamcx
If not Docker, then what? I've heard about Kubernetes, but it seems less
polished.

My use case: I just want an easier way to deploy Postgres and Python/.NET
Core apps. That's it.
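
For a use case like that, plain docker run with no orchestrator may already
be enough; a minimal sketch, assuming the official postgres image (the myapp
image, network name, and credentials are placeholders):

    # A user-defined network lets the app reach the database by name.
    docker network create appnet
    docker run -d --name db --network appnet \
        -e POSTGRES_PASSWORD=changeme postgres:10
    docker run -d --name app --network appnet \
        -e DATABASE_URL=postgres://postgres:changeme@db:5432/postgres \
        -p 8000:8000 myapp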

------
csomar
Docker is messy, containerization is full of s##t and bugs. But I'll take the
gamble.

The world moving toward standardization is a _BIG_ deal. Docker doesn't add
complexity. Docker reveals the real complexity of the problem when you require
a stable solution.

Let's take a simple example. Say you set up shop with a Virtual Server
running Ubuntu xx.xx and you write a simple PHP script that sends an email:

    
    
       mail('poor@soul.com', 'Spam', 'spam');
    

It works. It works fine not because it is simple, but because it hides the
complexity.

-> It is not clear what is sending the email here.

-> It is not clear what PHP is using to send the email.

-> It is not clear how your Virtual Server is relaying the email.

-> It is not clear what will happen if it fails.

-> It is not easy to change after you have coded it.

That single line is very simple and works. But it becomes a mess as you keep
progressing. The reason I keep investing in Docker is this: I have been
burned to the core by complexity that just starts popping up in my face as
the day moves on.

Imagine you have this line of code instead:

    
    
       $mail_service->send('poor@soul.com', 'Still spamming');
    

You have the following:

-> Consul managing your nodes.

-> Mail nodes that expose a mail_service

-> The Mail service needs configuration. Tricky. I can set up and configure an SMTP server, build from some template Dockerfile made by some other guy, or pull the MailChimp docker container and give it my API keys.

-> Maybe I'll have three instances. Prioritize the most efficient one and leave the others just in case.

-> Mailchimp goes down? Consul moves traffic to my other nodes.

-> Mail failure? Report to my logging node. Maybe we need to standardize this. So I just load my mail node that integrates with my logging node.

-> Moving from MailChimp to MailXYZ doesn't require any code modifications: no code updates, no downtime (see the sketch after this list).
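
A minimal sketch of that last point, assuming a setup where containers
advertise themselves to Consul under a shared service name; the image names
and the SERVICE_NAME convention (as used by e.g. gliderlabs/registrator) are
illustrative assumptions, not from this thread:

    # Both containers register under the same Consul service name; the
    # app resolves "mail-service" and never learns which backend answers.
    docker run -d -e SERVICE_NAME=mail-service smtp-relay      # provider today
    docker run -d -e SERVICE_NAME=mail-service mailxyz-relay   # provider tomorrow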

So does it work now? Uh, tough call. Yes, it is a gamble. It is a big gamble.
But it is a big gamble that will make us all better off. So let's invest in
it. Remember that the tech sector was mostly built on technological gambles.

TL;DR: Docker is messy. But not because Docker or containerization sucks. It
is because they expose the messiness of the real world. Your development WAMP
hides a ton of this complexity that only reveals itself once you start scaling
and going through practical tests.

------
M_Bakhtiari
I think we should look into why we need all this garbage in the first place.
Sane, reproducible package and configuration management would go a long way;
I've been meaning to see whether NixOS fits the bill.

Until then, static binaries seem like the least bad option, and for the
containerization you can just use the native OS features. And I don't
understand how people take this to mean they have to write all their software
in Go or Rust; any language can be compiled into a static binary.
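
For what it's worth, producing one is often a one-liner; a sketch with the Go
toolchain as one concrete instance (CGO_ENABLED=0 is a standard Go build
setting; the myapp name is a placeholder):

    # Pure-Go programs link statically when cgo is disabled, so the
    # resulting binary has no libc or shared-library dependencies.
    CGO_ENABLED=0 go build -o myapp .
    file myapp    # reports "statically linked" on Linux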

------
magoon
I disagree with this article because I see successful Docker deployments every
day.

------
ianwalter
> If you are using Docker and Kubernetes for an ordinary website, then please
> stop. There are simpler ways to run a website. We’ve been building websites
> for 25 years now, and we didn’t need Docker.

Sure, if that's the only kind of workload you ever plan on running. No one is
_really_ doing this though so this is just useless outrage. This is like Fox
News for developers.

~~~
philipodonnell
"[blank++] isn't necessary for ordinary websites because you really only need
[blank]" is always such a generalized context-less statement that it barely
deserves being written at all. :-)

