[flagged] Boycott Docker (boycottdocker.org)
105 points by louis-paul on Oct 24, 2016 | 49 comments


Oh boy, as someone who's been using Docker for a while, there seems to be a lot of misinformation here.

> strict restrictions like single process per container

This isn't a real restriction: you can run as many processes as you want in a container. However, in most cases they recommend against it because it usually means you're doing something wrong.

> Software developers forced to vendor lock-in their software

I've made Dockerfiles for quite a few different apps now, and I have never once had to actually change the application. The applications still work completely independently of Docker, so I'm not sure where the "vendor lock-in" comes in here.

> Docker won't be able to run even Postfix (or FTP daemons)

https://github.com/tomav/docker-mailserver https://github.com/atmoz/sftp

> Docker is designed with cloud computing providers in mind exclusively

I would disagree, although this is a matter of opinion I suppose. Having used both VMs and Docker for deployment, Docker is ridiculously more developer-friendly than a standard VM architecture.

> No way to escape dependency hell

I don't worry about dependencies anymore. If I want to try a new app, I `docker run` it. No dependencies. My apps themselves come with all of their dependencies, so I don't have to screw around with dependencies on a host provider. Sure, it doesn't completely solve dependency problems (you still need to write a Dockerfile), but it makes my life a whole lot easier than it was without Docker.
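To make that concrete, here's roughly what trying something new looks like for me (redis is just an example image; any image on the Hub works the same way):

  # pull and start redis in one shot; nothing is installed on the host
  docker run -d --name redis-test -p 6379:6379 redis
  # done playing? remove it and the host is untouched
  docker rm -f redis-test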

Docker isn't the right fit for everything, and there are a lot of ways I can think of where it could be improved. It strikes me that this was written by a frustrated sysadmin who hasn't spent the time to learn the tools (which is understandable: the learning curve is steep and most things are different from traditional sysadmin tools).

In fact, in a lot of cases Docker is needlessly complex and overkill. However, spreading FUD doesn't do anything to address these concerns and might scare away a lot of developers who could benefit from Docker.


> However, in most cases they recommend against it because it usually means you're doing something wrong.

Docker-novice question for you:

What's the best practice for this? What assumptions are at play for said best practice? I can understand separating DB from App, but what about other cases where an app might naturally have to have multiple processes running?


Edit: I hate calling this out, but if you're going to downvote questions asking for elucidation of the reasons surrounding best practices for a platform (something I'm frankly having a hard time finding in the case of Docker), keep in mind that doing so does nobody any favors and only discourages productive discourse. You've got over 500 karma enabling your capacity to downvote. Be responsible with the practice of doing so. Nothing in this comment (except perhaps this edit) warrants one.


It sounds like you have the right intuition. A lot of the time, beginners starting out will try to put their entire application stack (say Postgres/Elasticsearch/their app/etc.) all in the same container. This is not great, because it is impossible to scale these services independently when they're configured like that.

I think advocating "process-per-container" was a poor choice of words, when "role-per-container" was (imo) the real intention. You may have a separate container for Postgres, Elasticsearch, and then your app. But if every now and then your app needs to spawn a child process to do some sub-task related to the app, that's fine too.

It really depends on what the multiple processes are, so you should ask yourself "is this a logically separate job that I would want to scale and manage the lifecycle of independently from the main app?" and that'll help you make the best decision.
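As a rough sketch of what role-per-container looks like in practice (my/app is a placeholder image name), each role gets its own container on a shared user-defined network:

  docker network create appnet
  docker run -d --net appnet --name db postgres
  docker run -d --net appnet --name search elasticsearch
  docker run -d --net appnet --name web -p 8080:8080 my/app
  # each role can now be restarted/scaled on its own, e.g.:
  docker restart search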


Postfix/qmail are cases where you have more than one process, each doing only one task and doing it well. For instance, you may want to deliver some mail locally, plug in your own delivery system, or pass your mail through a third-party anti-spam/antivirus proxy. You may even have some processes doing custom authentication, routing, ...

For this you may prefer to use a unix domain socket, or even (if you use maildir) just give the name of the file to a local process and avoid the extra fat of network communication.

And for scaling, postfix prefers a multiprocessing approach to a multithreading approach. Sometimes you can communicate better as distinct processes than as distinct threads, especially when you use the normal unix convention of SIGHUP for reloading. (Hint: kill and threads are no friends.)


There really isn't such a restriction.

I sometimes need containers with several services, and that's perfectly fine if it makes the setup simpler. Services can be dynamically enabled/disabled, for instance with daemontools. As a matter of fact, docker brought new life to djb's daemontools for me. This makes it possible to conditionally add even an sshd to my containers (just by mixing in a directory with the run configuration), so I can go into them and check what's going wrong.

And when things need to be hardened up, just drop the corresponding service directory.
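For the curious, the pattern is roughly this (a minimal sketch; the /etc/service path and the myapp binary are my assumptions):

  # each service is a directory with an executable "run" script
  mkdir -p /etc/service/myapp
  printf '#!/bin/sh\nexec /usr/local/bin/myapp 2>&1\n' > /etc/service/myapp/run
  chmod +x /etc/service/myapp/run
  # svscan as the container's main process supervises every subdirectory;
  # mixing in or dropping a directory (e.g. sshd) enables/disables that daemon
  exec svscan /etc/service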


I prefer exec to installing an sshd inside the container.

  docker exec -it "id of running container" bash


great tip, thanks!


If you could improve your daemontools life inside a Docker container (presumably you are running svscan as the primary container process), what would that improvement be?


To the other responses I'll add that very often the right model is to compose containers within a system, not processes within a container. Most of the OP's concerns in this area stem from the basic misconception that containers are sort of like VMs, and so it's sort of wasteful to only run a single process in one. Containers are very lightweight jails to run a process in, and all the containers running on a system can communicate with each other over the docker bridge with one hop.


For a VM-like experience, you should probably be using LXD. It's basically like a VM running in a container (with a few differences, obviously). I always thought of it like this:

Docker: One App per container

LXD: One system per container (I do not want to say one OS per container, since technically they all share the same OS (kernel))


> ...because it usually means you're doing something wrong.

You always want to have at least two processes running. Docker doesn't re-parent processes properly (unless something has changed in this regard), so you want at least an init (often supervisord or something like it) and then your application.
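One low-effort option, assuming a Docker version recent enough to have the --init flag: it injects tini as PID 1, which reaps zombies and forwards signals, so your app can stay a single process:

  # tini runs as PID 1; the image's command runs as its child
  docker run --init my/app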

You will often find strange recommendations online, such as the single-process one, which is just plain wrong. It's a young technology and best practices change regularly (onion mounts are perhaps the most prominent example), so you have to keep up and engage your brain.

> I'm not sure where the "vendor lock-in" comes in here.

Vendor lock-in may not be here yet, but it's real, and it's the simplest thing that can keep Docker Inc.'s value afloat, so my guess is that we'll see more of it in the future. Why would anyone buy services from Docker when/if it is one of many components bundled with Red Hat? Unless getting acquired is a reasonable exit strategy, they need to avoid commoditization while still keeping file formats etc. in the open. It's not obvious how they'll manage. Most things Docker does would be better off living either in systemd or in a more generic orchestration tool anyway.

We've already seen lots of reimplementation in Swarm of things that Kubernetes already does better. The main reason to invest in Docker's toolset is that, if Docker becomes ubiquitous, their tools are likely to be everywhere.

> Having used both VM's and Docker for deployment,

But that's very much an apples-to-oranges comparison. Docker is not a VM. You don't even use them to solve the same problems. The only reason to compare them is that VMware is a believable story of how to make a living while the underlying technology gets commoditized. Application distribution is a nut VMware never cracked. How often have you ever distributed your applications as vmdk's? Better perhaps to compare Docker to Vagrant.


> I don't worry about dependencies anymore. If I want to try a new app, I `docker run` it. No dependencies. My apps themselves come with all of their dependencies, so I don't have to screw around with dependencies on a host provider. Sure, it doesn't completely solve dependency problems (you still need to write a Dockerfile), but it makes my life a whole lot easier than it was without Docker.

A problem I frequently run into is that while package/linking dependencies are usually fixed by Docker, I still have issues with system-level dependencies. I can't just docker run grafana; I have to install Grafana, Graphite, and (from memory) StatsD or something too. Dependency hell is still a thing; it's just moved up to a higher level now.


Lock-in could mean you can only use Docker to run your images, but that is untrue: CoreOS rkt can run Docker images.
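For example (nginx is just an illustrative image; --insecure-options is needed because Docker images aren't signed under rkt's scheme):

  rkt run --insecure-options=image docker://nginx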


Also, UNIX domain sockets work in docker just fine with volume mounts.
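A minimal sketch, with the socket path assumed: bind-mount the directory containing the socket and both sides can talk over it:

  # a host process listens on /var/run/myapp/myapp.sock
  docker run -v /var/run/myapp:/var/run/myapp my/client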


There are some major factual errors in this post. A few of them:

> Docker has lower memory footprint, because of forcing to run only single process per container

Docker doesn't force you to use a single process per container - that's a design choice that some people espouse, but not a requirement. For instance, we run the official GitLab Docker image and it consists of many processes.

> Docker won't be able to run even Postfix

I'm running Postfix in a Docker container right now and it's working fine.

> But we are talking about Docker: it does not support IPv6 at all.

IPv6 support is incomplete, but not non-existent. I'm running IPv6 in my containers right now.
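For instance, a rough sketch of turning it on at the daemon level (the prefix is the documentation example range, not something routable):

  # containers get addresses from this /64 on the docker bridge
  dockerd --ipv6 --fixed-cidr-v6=2001:db8:1::/64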


"because of forcing to run only single process per container"

Which isn't really true, because you can still fork processes / run an init system. A lot of people don't do that, for reasons I don't really want to get into; suffice it to say this is a factually invalid point. There is no forcing going on here.



As they say on reddit, "I haven't seen it yet!" - but reposts are a bit annoying.

That said, only one of those accounts seems scammy at all, so it is a harmless repost.


This would be a lot more compelling if it included testimonials from actual projects bitten by any of the arguments against docker. On its own it just looks like a thorough, but not necessarily real-world, rant.

Also, it doesn't provide practical alternatives. The link to Guix at the end says it's not ready for production. I have a hard time accepting the veracity of the contents if they recommend something that isn't production-ready.


> This would be a lot more compelling if it included testimonials from actual projects bitten by any of the arguments against docker.

Disclaimer: I am not affiliated with the author. I just experience endless issues with docker in production. Enough to write a book. (just testing future audience for the book)

That, I can provide in a separate blog post! :D

What selection of bugs would you be interested in?

- Critical linux kernel bugs killing the host when running docker

- Containers crashing and corrupting all data on the mounted directories [from the host drive]

- Worldwide outage of docker.com killing all CI pipelines in the world for 7 hours straight

- More critical linux kernel bugs destroying the docker host

- Major breaking changes in docker and docker API (usually between all minor versions), breaking all existing stuff

- Randomly dropping critical features and distribution/filesystem support (to the point where there is actually NO filesystem supported by Docker on some distributions. No kidding xD)

- Various DNS fuckups (special issues on Ubuntu)

- No built-in commands to free the disk space taken by old containers/images (this one is a major hassle; workaround sketch below)
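For what it's worth, the manual cleanup most people resort to looks roughly like this (newer Docker versions do bundle it as docker system prune):

  docker rm $(docker ps -a -q -f status=exited)      # remove stopped containers
  docker rmi $(docker images -q -f dangling=true)    # remove dangling image layers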


Awesome, you _should_ write a blog post if you have time :)

I'm particularly interested in what you did to get around all the bullet points. Are you still using Docker, with some bandaids, or did you move to something different?


Hope you're still here. Comments and feedback welcome.

https://thehftguy.wordpress.com/2016/11/01/docker-in-product...


Thanks :)


Isn't Nix(OS) ready for production? As far as I know it does the same as Guix, but without LISP.


Sounds like it. I don't know enough about Nix to know whether it addresses the same scope of need that docker does; hopefully someone who is savvy can chime in :)


It's "production ready" if you're willing to do a lot of work yourself. It has a lot of solid theoretical and practical advantages, if you're willing to do so -- as well as some downsides. Advantages include things like real reproducibility, a "unified language" that can describe everything from your packages to server networks, atomic upgrades and rollbacks. More minor advantages are things like powerful development environments, automatic support for distributed/remote builds for your software/deps, easily supports customized software (it's very easy to patch your kernel and have all kernel dependencies rebuilt automatically, stuff like that etc), and other stuff. It's probably one of the most forkable/customizable distros, just purely due to the way Nix works, and fact everything is in one git repo, making 'wide changes' as easy as a branch. Downsides include the fact that the security story is still ongoing on several fronts (userspace/kernel hardening, quickly shipping userspace vulnerability fixes) and that compiling everything -- when it happens -- can take some time. (I only list these two, but obviously these aren't small problems and IMO are two of the biggest ones).

NixOS certainly isn't ready for "ordinary consumers" or anything, even among people who might use Linux regularly. It also just factually isn't going to be to everyone's exact taste, in terms of some design decisions (hence Guix exists, and Guix is great!) But for advanced users and operators, it can certainly be well worth it, and confer some unique and powerful advantages.

Note that I'm specifically skirting the question of "does it address all the same things as Docker" (in fact, they aren't even fully competing projects, in many ways) -- because that's a fairly complex question given all the ways people use both projects, and it could fill a blog post (or several) in its own right.


Awesome, thanks, this is really helpful.


Anyone who cites not being able to run an FTP server as a reason not to use something these days turns me off to the rest of their arguments about that something, especially when their core argument is about what they consider the right and wrong tools for given jobs in a modern environment.

FTP was a pain to work with back in the day when I wrote and sold client applications using it, when there wasn't an alternative for many use cases, but these days there are few (if any?) reasons to want to run an FTP server where there isn't a better alternative.


isn't the point that anything that fork()s can't run?


With regard to forking, it was specifically talking about tasks that fork in order to change their identity at runtime, as is common in daemon processes that can act as, or on behalf of, many different users. Processes within docker can fork more generally; otherwise how would a great many things (like bash at all) work?

The second point where FTP was specifically mentioned is its use of dynamic port allocation. That is a bad point, as dynamic ports are an existing problem for many firewall and related solutions, both generally and with regard to running in virtual architectures, so not specific to docker at all. Depending on what you might want FTP for, there are many FTP daemon options that don't need to do this, and better protocols supported by services that don't need to do this either.

Using FTP as an example here is silly. What compelling reason is there to want to run an FTPd for which there isn't a much better solution anyway? If you want to run an FTPd then you either have a very specific requirement or you are stuck in the past.


I don't like Docker, I feel that most people who claim it's "easy" don't really have sufficient experience with stacks that do not use Docker, and I think it's overall an overhyped piece of rubbish that shouldn't be recommended to people other than in some very specific use cases.

That having been said, this site is terrible. It suggests a conspiracy without ever actually providing any evidence, and it's full of claims and assertions but very little to back it up. Docker has plenty of legitimate problems to address, and I don't really understand why somebody would half-ass their criticism like this.


This page looks seriously out of date now; as a result, its criticisms are inaccurate in a wide range of areas.

To pick one in particular, security: the claim that Docker doesn't know anything about AppArmor and/or SELinux just isn't correct.

And the other security claim, about having to trust Docker Hub, is also no longer correct: you can use signed builds to avoid that concern.


What a rant about Docker! Would love to hear the motivations for setting up this site.

It seems that the owner of the domain [1] likes boycotting [2] a lot. Not that I disagree with him.

[1]

Registrant ID: XGX4VME-RU

Registrant Name: Sergey Matveev

Registrant Email: stargrave@stargrave.org

[2] http://www.stargrave.org/Boycotting.html#Boycotting


The vendor lock-in thing has embarrassingly been a concern of mine as well. It seems that presently you are either locked in with your cloud provider or locked in with docker (e.g. using Google Container Engine vs Google Deployment Manager).

As a JVM shop I just can't see the value of using Docker when we can provision images faster using the cloud provider's API (à la libcloud, jclouds, or directly using gcloud) and just build executable uber jars. That is, in JVM land you only need to install Java (and for Go and Rust I suppose you have even fewer dependencies).

I understand the value in the OSS world of providing docker images, but I'm not sure I can justify the complexity of switching over yet. Are there any major companies (especially JVM shops) that have fully dockerized (i.e. converted, rather than starting off with docker)? I know Netflix started to, but I'm not sure they ever did fully.


Use packer for EVERYTHING https://www.packer.io/ => better tools + avoids lock-in

As a benefit, you use ansible/chef/puppet for the installation and configuration (instead of Dockerfile). You can reuse your scripts.

As a benefit, you can generate any output: Docker image / AWS AMI / VMware / VirtualBox. You have more flexibility (e.g. make an AMI for auto-scaling groups plus a docker image for the test environment).
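A rough sketch of that workflow (the builder names are whatever your template defines; these are just the default type names):

  packer build -only=docker template.json       # just the Docker image
  packer build -only=amazon-ebs template.json   # just the AWS AMI
  packer build template.json                    # every builder in the template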


I dig packer, but it's getting pretty hard to find a cloud provider or on-prem solution that doesn't take a Docker image as an artifact.


Okay. I will boycott Docker and use LXD instead if that makes you happy. ;)

Anyways, I am so glad I converted all my VMs to LXD containers. If I want to test something, I can spin up a new container in seconds, as opposed to having to install the OS in a VM (or do clones/snapshots etc.).
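For anyone who hasn't tried it, the workflow is roughly this (image alias and container name are just examples):

  lxc launch ubuntu:16.04 testbox   # full system container, up in seconds
  lxc exec testbox -- bash          # shell in, like a lightweight VM
  lxc delete --force testbox        # throw it away when done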


The minor gripe I have with Docker is how they're tying swarm into the docker tool. I can see a future where sysops use the tools built into Docker just because they're included, like how Windows made IE6 the default browser.


> The minor gripe I have with Docker is how they're tying swarm into the docker tool.

This is actually what I was expecting the article to be about when I read the title. Swarm integration has become a big controversy, and there's been serious discussion of forking Docker over it: http://thenewstack.io/docker-fork-talk-split-now-table/


Why not simply choose not to use it if it doesn't fit your personal use-case? What is the purpose of inciting a boycott?


I am the former owner of boycottsystemd.org. I think I may have started something terrible.


No, you actually had a valid point.


I was hoping for some reasoned arguments from experience, but this just seems like a rant from someone who has read about Docker.


Is this a joke? There is so much here that is flat-out inaccurate.


"Everyone is a stupid noob chasing shiny things but me."


> strict restrictions like single process per container

Oh boy. Have you seen phusion/baseimage? It supports boot-time init scripts and then uses runit to let you add as many daemons as you want. It even has a brief "Wait, I thought Docker is about running a single process in a container?" chapter in the readme.


The author makes too many claims without actually supporting them.

But I do agree with the author that Docker these days doesn't seem to respect the Unix philosophy. It wants to be everything. Case in point: integrating Swarm into docker.


I'm new to Docker, but from what I know, it makes a microservices architecture extremely easy to set up. Right?


It makes a microservices architecture less painful to set up. Nothing is going to make one easy to set up, and even less so easy to run.



