
Boycott Docker - Artemis2
http://www.boycottdocker.org
======
chrisfosterelli
Oh boy, as someone who's been using Docker for a while, there seems to be a
lot of misinformation here.

> _strict restrictions like single process per container_

This isn't a real restriction; you can run as many processes as you want in a
container. However, in _most cases_ they recommend against it because it
usually means you're doing something wrong.

> _Software developers forced to vendor lock-in their software_

I've made Dockerfiles for quite a few different apps now, and I have never
once had to actually change the application. The applications still work
completely independently of Docker, so I'm not sure where the "vendor lock-in"
comes in here.

> _Docker won't be able to run even Postfix (or FTP daemons)_

[https://github.com/tomav/docker-mailserver](https://github.com/tomav/docker-mailserver)
[https://github.com/atmoz/sftp](https://github.com/atmoz/sftp)

> _Docker is designed with cloud computing providers in mind exclusively_

I would disagree, although this is a matter of opinion I suppose. Having used
both VMs and Docker for deployment, Docker is ridiculously more developer-
friendly than a standard VM architecture.

> _No way to escape dependency hell_

I don't worry about dependencies anymore. If I want to try a new app, I
`docker run` it. No dependencies. My apps themselves come with all of their
dependencies, so I don't have to screw around with dependencies on a host
provider. Sure, it doesn't completely solve dependency problems (you still
need to write a Dockerfile), but it makes my life a whole lot easier compared
to going without Docker.

Docker isn't the right fit for everything, and there's a lot of ways that I
can think of where it can be improved. It strikes me that this was written by
a frustrated sysadmin who hasn't spent the time to learn the tools (which is
understandable; the learning curve is high and most things differ from
traditional sysadmin tools).

In fact, in _a lot_ of cases Docker is needlessly complex and overkill.
However, spreading FUD doesn't do anything to address these concerns, and it
might scare away a lot of developers who could benefit from Docker.

~~~
eganist
> However, in most cases they recommend against it because it usually means
> you're doing something wrong.

Docker-novice question for you:

What's the best practice for this? What assumptions are at play for said best
practice? I can understand separating DB from App, but what about other cases
where an app might naturally have to have multiple processes running?


 _Edit: I hate calling this out, but if you're going to downvote questions
asking for elucidation of the_ reasons _surrounding best practices for a
platform (something I'm frankly having a hard time finding in the case of
Docker), keep in mind that doing so does nobody any favors and only
discourages productive discourse. You've got over 500 karma enabling your
capacity to downvote. Be responsible with the practice of doing so. Nothing in
this comment (except perhaps this edit) warrants one._

~~~
hippiefahrzeug
There really isn't such a restriction.

I sometimes need containers with several services, and that's perfectly fine
if it makes the setup simpler. Services can be dynamically enabled/disabled,
for instance with daemontools. As a matter of fact, Docker brought new life to
djb's daemontools for me. This makes it possible to conditionally add even an
sshd to my containers (just by mixing in a directory with the run
configuration), so I can go into them and check what's going wrong.

And when things need to be hardened up, just drop the corresponding service
directory.
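For anyone curious what that looks like, here's a minimal sketch of the pattern (the paths and the `sshd` service name are illustrative; in a real image the tree would live at `/etc/service` so runit/daemontools picks it up):

```shell
# Each supervised service is just a directory containing an executable "run" script.
mkdir -p service/sshd
cat > service/sshd/run <<'EOF'
#!/bin/sh
# The supervisor restarts this if it exits; it must stay in the foreground.
exec /usr/sbin/sshd -D
EOF
chmod +x service/sshd/run
# "Hardening up" later is just dropping the directory:
#   rm -rf service/sshd
```

Enabling or disabling a service is a plain filesystem operation, which is why it composes so nicely with Docker image layers.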

~~~
julg
I prefer exec to installing an sshd inside the container.

    docker exec -it "id of running container" bash

~~~
hippiefahrzeug
great tip, thanks!

------
profmonocle
There are some major factual errors in this post. A few of them:

> Docker has lower memory footprint, because of forcing to run only single
> process per container

Docker doesn't force you to use a single process per container - that's a
design choice that some people espouse, but not a requirement. For instance,
we run the official GitLab Docker image and it consists of many processes.

> Docker won't be able to run even Postfix

I'm running Postfix in a Docker container right now and it's working fine.

> But we are talking about Docker: it does not support IPv6 at all.

IPv6 support is incomplete, but not non-existent. I'm running IPv6 in my
containers right now.
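For reference, basic IPv6 at the time meant handing the daemon a v6 subnet to allocate from (a sketch; the flags are from the Docker docs of that era, and the prefix below is a documentation-example address, not a real one):

```shell
# Start the daemon with an IPv6 subnet for containers (requires root).
dockerd --ipv6 --fixed-cidr-v6="2001:db8:1::/64"

# Containers then get an address out of that /64:
docker run --rm alpine ip -6 addr
```

Incomplete, sure (NAT-less routing and firewalling are on you), but clearly not "no support at all."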

------
awinder
"because of forcing to run only single process per container"

Which isn't really true, because you can still fork processes or run an init
system. A lot of people don't do that, for reasons I don't really want to get
into; the point is simply that this claim is factually invalid. There is no
forcing going on here.

------
okket
Every few hundred days...
[https://news.ycombinator.com/from?site=boycottdocker.org](https://news.ycombinator.com/from?site=boycottdocker.org)

~~~
unethical_ban
As they say on reddit, "I haven't seen it yet!" - but reposts are a bit
annoying.

That said, only one of those accounts seems scammy at all, so it is a harmless
repost.

------
codezero
This would be a lot more compelling if it included testimonials from actual
projects bitten by any of the arguments against Docker. On its own it just
looks like a thorough but not necessarily real-world rant.

Also, it doesn't provide practical alternatives. The link to Guix at the end
says it's not ready for production. I have a hard time accepting the veracity
of the contents if they recommend something that's not production-ready.

~~~
user5994461
> This would be a lot more compelling if it included testimonials from actual
> projects bitten by any of the arguments against docker.

Disclaimer: I am not affiliated with the author. I just experience endless
issues with Docker in production. Enough to write a book. (Just testing the
future audience for the book.)

That, I can provide in a separate blog post! :D

What selection of bugs would you be interested in?

- Critical Linux kernel bugs killing the host when running Docker

- Containers crashing and corrupting all data on the mounted directories
[from the host drive]

- A worldwide outage of docker.com killing every CI pipeline in the world for
7 hours straight

- More critical Linux kernel bugs destroying the Docker host

- Major breaking changes in Docker and the Docker API (usually between all
minor versions), breaking all existing stuff

- Randomly dropping critical features, distribution and filesystem support
(to the point where there is actually NO filesystem supported by Docker on
some distributions. No kidding xD)

- Various DNS fuckups (special issues on Ubuntu)

- No built-in commands to free disk space from old containers/images (this
one is a major hassle)
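For the record, the usual workaround for that last one was stitching cleanup together from the CLI yourself (a sketch; assumes a running Docker daemon):

```shell
# Remove all stopped containers, then all untagged ("dangling") image layers.
docker rm $(docker ps -aq -f status=exited)
docker rmi $(docker images -q -f dangling=true)
# Later releases grew a built-in for this: docker system prune
```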

~~~
codezero
Awesome, you _should_ write a blog post if you have time :)

I'm particularly interested in what you did to get around all the bullet
points. Are you still using Docker, with some bandaids, or did you move to
something different?

~~~
user5994461
Hope you're still here. Comments and feedback welcome.

[https://thehftguy.wordpress.com/2016/11/01/docker-in-production-an-history-of-failure/](https://thehftguy.wordpress.com/2016/11/01/docker-in-production-an-history-of-failure/)

~~~
codezero
Thanks :)

------
dspillett
Anyone who cites not being able to run an FTP server as a reason for avoiding
some other tool these days turns me off to the rest of their arguments about
that tool, _especially_ when their core argument is about what they consider
the right/wrong tools for given jobs in a modern environment.

FTP was a pain to work with back in the day when I wrote and sold client
applications using it, when there wasn't an alternative for many use cases,
but these days there are few (if any?) reasons to want to run an FTP server
where there isn't a better alternative.

~~~
easytiger
Isn't the point that anything that fork()s can't run?

~~~
dspillett
With regard to forking, the article was specifically talking about tasks that
fork in order to change their identity at runtime, as is common in daemon
processes that can act as or on behalf of many different users. Processes
within Docker can fork in general; otherwise, how would a great many things
(like bash) work at all?

The second point where FTP was specifically mentioned is its use of dynamic
port allocation. That is a bad point, as dynamic ports are an existing problem
for many firewall & related solutions, both generally and with regard to
running in virtual architectures, so this is not specific to Docker at all.
Depending on what you might want FTP for, there are many FTP daemon options
that don't need to do this, and better protocols supported by services that
don't need to do this either.

Using FTP as an example here is silly. What compelling reason is there to
_want_ to run an FTPd for which there isn't a much better solution anyway? If
you want to run an FTPd, you either have a very specific requirement or you
are stuck in the past.

------
joepie91_
I don't like Docker. I feel that most people who claim it's "easy" don't
really have sufficient experience with stacks that do _not_ use Docker, and I
think it's overall an overhyped piece of rubbish that shouldn't be recommended
to people outside of some very specific use cases.

That having been said, this site is terrible. It suggests a conspiracy without
ever actually providing any evidence, and it's full of claims and assertions
but very little to back it up. Docker has plenty of legitimate problems to
address, and I don't really understand why somebody would half-ass their
criticism like this.

------
raesene6
This page looks seriously out of date now, as a result its criticisms are
inaccurate in a wide range of areas.

To pick one in particular, Security. the claim that Docker doesn't know
anything about AppArmor and/or SELinux just isn't correct.

and the other security claim about having to trust docker hub is also now not
correct, you can use signed builds to avoid that concern.
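Concretely, the signed-builds mechanism is Docker Content Trust (Notary-based image signing), which is opt-in via an environment variable. A sketch:

```shell
# With content trust on, pulls and runs of unsigned tags are refused.
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest   # only succeeds if the tag carries a valid signature
```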

------
fasteo
What a rant about Docker! I would love to hear the motivations for setting up
this site.

It seems that the owner of the domain [1] likes boycotting [2] a lot. Not
that I disagree with him.

[1]

Registrant ID: XGX4VME-RU

Registrant Name: Sergey Matveev

Registrant Email: stargrave@stargrave.org

[2]
[http://www.stargrave.org/Boycotting.html#Boycotting](http://www.stargrave.org/Boycotting.html#Boycotting)

------
agentgt
The vendor lock-in thing has embarrassingly been a concern of mine as well. It
seems that presently you are either locked in with your cloud provider or
locked in with Docker (e.g. using Google Container Engine vs Google Deployment
Manager).

As a JVM shop, I just can't see the value of using Docker when we can
provision images faster using the cloud provider's API (à la libcloud,
jclouds, or directly using gcloud) and just build executable uber jars. That
is, in JVM land you only need to install Java (and for Go and Rust I suppose
you have even fewer dependencies).

I understand the value in the OSS world of providing Docker images, but I'm
not sure I can justify the complexity of switching over yet. Are there any
major companies (especially JVM shops) that have fully dockerized (i.e.
converted, rather than starting off on Docker)? I know Netflix started to, but
I'm not sure they ever did fully.

~~~
user5994461
Use Packer for EVERYTHING [https://www.packer.io/](https://www.packer.io/) =>
better tools + no lock-in.

As a benefit, you use ansible/chef/puppet for the installation and
configuration (instead of a Dockerfile). You can reuse your scripts.

As a benefit, you can generate any output: Docker image / AWS AMI / VMware /
VirtualBox. You have more flexibility (e.g. make an AMI for auto-scaling
groups plus a Docker image for the test environment).
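A minimal template sketch of that dual-output idea (the AMI ID and playbook path are placeholders; builder fields per the Packer docs):

```json
{
  "builders": [
    { "type": "docker", "image": "ubuntu:16.04", "commit": true },
    { "type": "amazon-ebs", "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx", "instance_type": "t2.micro",
      "ssh_username": "ubuntu", "ami_name": "myapp-{{timestamp}}" }
  ],
  "provisioners": [
    { "type": "ansible", "playbook_file": "./site.yml" }
  ]
}
```

One `packer build` run then produces both the Docker image and the AMI from the same provisioning scripts.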

~~~
elsonrodriguez
I dig packer, but it's getting pretty hard to find a cloud provider or on-prem
solution that doesn't take a Docker image as an artifact.

------
chrisper
Okay. I will boycott Docker and use LXD instead if that makes you happy. ;)

Anyway, I am so glad I converted all my VMs to LXD containers. If I want to
test something, I can spin up a new container in seconds, as opposed to having
to install the OS in a VM (or do clones / snapshots etc.).
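The spin-up/tear-down loop is short enough to live in muscle memory (a sketch; the image alias is from the LXD image server of the time):

```shell
lxc launch ubuntu:16.04 test1   # fresh container in seconds
lxc exec test1 -- bash          # poke around inside
lxc delete --force test1        # throw it away when done
```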

------
mmgutz
The minor gripe I have with Docker is how they're tying Swarm into the docker
tool. I can see a future where sysops use the orchestration tools built into
Docker just because they're included, much like how Windows made IE6 the
default browser.

~~~
profmonocle
> The minor gripe I have with Docker is how they're tying swarm into the
> docker tool.

This is actually what I was expecting the article to be about when I read the
title. Swarm integration has become a big controversy, and there's been
serious discussion of forking Docker over it:
[http://thenewstack.io/docker-fork-talk-split-now-table/](http://thenewstack.io/docker-fork-talk-split-now-table/)

------
forgetsusername
Why not simply choose not to use it if it doesn't fit your personal use-case?
What is the purpose of inciting a boycott?

------
wje
I am the former owner of boycottsystemd.org. I think I may have started
something terrible.

~~~
qwertyuiop924
No, you actually had a valid point.

------
djhworld
I was hoping for some reasoned arguments from experience, but this just seems
like a rant from someone who has read about Docker.

------
qwertyuiop924
Is this a joke? So much of this is wildly inaccurate.

------
draw_down
"Everyone is a stupid noob chasing shiny things but me."

------
chx
> strict restrictions like single process per container

Oh boy. Have you seen phusion/baseimage? It supports boot-time init scripts
and then uses runit to let you add as many daemons as you want. Its readme
even has a brief chapter titled "Wait, I thought Docker is about running a
single process in a container?"
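The readme's pattern boils down to dropping runit service directories into the image. A sketch (`myapp` and the script name are made up; the script must be executable and stay in the foreground):

```dockerfile
FROM phusion/baseimage
# Anything under /etc/service/<name>/run is started and supervised by runit.
COPY myapp.sh /etc/service/myapp/run
# baseimage's init handles PID 1 duties (zombie reaping, signal forwarding).
CMD ["/sbin/my_init"]
```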

------
mohanmcgeek
The author makes too many claims without actually supporting them.

But I do agree with the author that Docker these days doesn't seem to respect
the Unix philosophy. It wants to be everything. Case in point: integrating
Swarm into Docker.

------
altern8tif
I'm new to Docker, but from what I know, it makes a microservices architecture
extremely easy to set up. Right?

~~~
jon-wood
It makes a microservices architecture less painful to set up. Nothing is going
to make one _easy_ to set up, and even less so easy to run.

