
Docker isn't perfect – Issue #216 - timthelion
https://github.com/subuser-security/subuser/issues/216
======
evanosaurus
_A number of developers from RedHat were once very involved in the project.
However, these developers had a very arrogant attitude towards Docker: they
wanted Docker changed so that it would follow their design ideas for RHEL and
systemd. They often made pull requests with poorly written and undocumented
code, and then became very aggressive when those pull requests were not
accepted, saying "we need this change; we will ship a patched version of Docker
and cause you problems on RHEL if you don't make this change in master." They
were arrogant and aggressive when, at the same time, they had the option of
working with the Docker developers and writing quality code that could
actually be merged._

THIS. It was both amusing and sad to watch this happen time and time again. My
favorite is what happened (or, rather, didn't happen) around CoW filesystems
and how they decided to just use a FUSE-based one instead.

~~~
rconti
Upvoted because every opportunity for me to vote against RedHat's insistence
on the nightmare that is systemd is a good opportunity.

Stop breaking Linux's usefulness as a server OS in a misguided attempt to make
it a desktop OS. (see: systemd, networkmanager, etc)

~~~
ambrice
Why do you think that systemd is not useful on a server OS?

~~~
rconti
Because it does nothing that anyone in my organization has ever asked for, nor
does it do anything that anyone in any organization I've ever worked for has
asked for, in 20 years of working on Linux systems.

I've never heard of any problems it solves other than things described in
abstract terms that might make sense to a developer but that make no sense to
Operations/DevOps teams.

Nobody likes learning something new for zero benefit. I keep trying, though.
But the more I am forced to learn about it, the more arcane it gets. It's not
a learning curve, as far as I can tell. It's a learning wall with no payoff.
It makes everything we do more difficult, the commands make no sense, and it
brings no tangible benefit. It's change for change's sake. It reminds me of
DJB's insistence on making logs in hex "just because".

Just a simple example. Why is it "systemctl list-unit-files"? What the hell
are unit files? How is this in any way logical? Why is this an argument and
not a flag? Why is it that I can use a systemctl command, and it tells me to
use the -l flag to view non-ellipsized output, but then when I use the flag,
it gives me an arcane error that tells me nothing about what happened? It's
the tip of the iceberg in an endless string of junk that simply doesn't make
any sense or work properly.
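
For anyone who hasn't fought with it yet, the commands in question look
roughly like this (the service name is just a placeholder):

      $ systemctl list-unit-files        # dumps every unit file and its enablement state
      $ systemctl status -l foo.service  # "-l" is supposed to show full, non-ellipsized output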

So, I like to vote against it whenever I can, because supporters like to keep
insisting that it's the way of the future, and implying anyone who hates it is
a luddite.

~~~
ambrice
It's hard to argue with someone who claims that systemd has zero benefit.

You claim unit files are confusing. Let's say we took a Windows programmer who
is a sophomore in CS but has no Unix or Linux experience and showed him both
systems. Explain how runlevels are handled in systemd vs. sysvinit. Show him a
systemd unit file vs. the equivalent bash script. Explain how systemd tracks
the process tree vs. the sysvinit pid-file hack. You spent five minutes trying
to figure out systemctl and declared it terrible. It probably took you more
than five minutes to find /etc/init.d your first time.
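
For reference, here is roughly what a minimal unit file for a hypothetical
daemon looks like (the name, path, and flag are made up for illustration):

      # /etc/systemd/system/myapp.service  (hypothetical example)
      [Unit]
      Description=My example daemon
      After=network.target

      [Service]
      ExecStart=/usr/local/bin/myapp --foreground
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target

Compare that with a typical /etc/init.d script that does start/stop/restart
and pid-file bookkeeping by hand.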

~~~
rconti
Perhaps. My complaint is that systemd has not started making any more sense in
the year(s) I've been wrestling with it. And it hasn't given me any benefit
yet.

To most of us, it's change for change's sake, which makes life worse due to
the learning curve. If it made life easier for the average end user or the
average sysadmin, it would have more defenders.

------
kordless
There was a good discussion last night at the SF Microservices meetup about
how early we are with containerized applications. The company I work for,
Giant Swarm, provides a containerized stack solution and we run people through
a short survey when they sign up for our shared cluster offering. The survey
shows that about 15% of companies (at least for peeps showing up on our site)
have some type of containerized application in production. That's not a big
number. Yet.

The problem being described in this issue is one person's experience with
Docker, but I've heard similar stories from others about similar projects in
the past. Looking at you, OpenStack. Yes, things are breaking fast with Docker
- I've experienced the breakage myself. Yes, new companies and offerings are
joining the ecosystem daily, which can be maddeningly confusing for users
trying to understand the technologies. This creates a sense of instability
when so much is going on at once.

I think the problem is that, when _new ways_ of doing compute things spin up,
they become highly disruptive to both the people using older technologies AND
the companies developing the new technologies. Given that the new way of doing
things usually gives an advantage to those using it, the challenge comes from
trying to put those technologies into production before they are finished, all
the while keeping an eye on shareholder value.

I call this the "problem cloud". It's really a people problem though, so maybe
it should be called "problem people". :)

~~~
timthelion
This is especially complicated in an open source project like Docker. Docker
seems to be stuck between a rock and a hard place in that:

a - They are post-1.0, and there is an expectation that they won't break
compatibility. This has meant a real slowdown in development, and it is now
very hard to get a pull request through.

b - Docker is totally immature and those pull requests are needed. It is
lacking basic functionality like the ability to mount and unmount volumes at
run-time (see the sketch below). It is also slow, and suffers from a codebase
that was thrown together practically overnight. So it would actually be best
if Docker Inc. went back to breaking things and getting things done.
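
To illustrate the volume point, here is roughly how it works today (the image
name and paths are just placeholders): the bind mount has to be declared when
the container is created, and there is no built-in command to add or remove
one afterwards.

      # volumes are fixed at creation time
      $ docker run -d --name web -v /srv/data:/data nginx
      # nothing built in lets you attach or detach a mount on the running "web" container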

Unfortunately, what seems to be happening is that Docker is failing to avoid
breaking things and, at the same time, is still paralyzed.

For subuser, there is yet a third problem: Docker is Docker Inc. With subuser
being free software, it's not a nice feeling to know that my project is at the
mercy of a single company.

------
cwyers
> (This is in response to the fact that Docker writes to cgroups, and systemd
> would like to be the "sole writer to cgroups" some time in the future.)

"Would like to" understates it a bit:

[http://lwn.net/Articles/555920/](http://lwn.net/Articles/555920/)

The kernel plans on deprecating the API that allows multiple writers to
cgroups, thus requiring there to be a sole writer to cgroups. (Also, the
systemd developers say this wasn't what they wanted; it was a response to the
decisions the kernel maintainers made in changing the API.)

~~~
timthelion
@cwyers I understand this problem. However, this wasn't an acceptable
situation for Docker to be in, because there is seemingly no choice "B" in
which Docker communicates with cgroups in a non-init-system-specific manner. I
accept that this is a hard technical problem - preventing race conditions
while allowing more than one program to run on your system (to put it flatly)
- but tying Docker to systemd just wasn't an acceptable option.

I think that RedHat IS at fault in this, because their design choice of
designating systemd as the single writer was extremely anti-standardization.
Imagine if KDE wanted to be the single cgroup writer. Would people accept the
fact that they would have to use KDE and interface with KDE in order to do
containers on Linux? RedHat could have created a service "cgroupwriter" which
would have communicated via D-Bus or something. A "cgroupwriter" service would
be far less divisive because it wouldn't drag in unrelated disagreements such
as the choice of init system.

~~~
jkyle
> I think that RedHat IS at fault in this, because their design choice of
> designating systemd as the single writer was extremely anti-standardization.
> Imagine if KDE wanted to be the single cgroup writer.

This isn't a very good analogy.

> I think that RedHat IS at fault in this, because their design choice of
> designating systemd as the single writer was extremely anti-standardization.

This doesn't make sense. When the change hits the kernel, all OS maintainers
will need to choose a parent process for all cgroup control. Supporting this
feature in systemd makes perfect sense and fits very well into the process
hierarchy.

> Would people accept the fact that they would have to use KDE and interface
> with KDE in order to do containers on Linux?

Users and distributions who do not want to use systemd (their choice!) can use
any other init system they want. They are also free to implement any other
solution as their cgroup controller. Their situation is not changed one iota
by this additional feature in systemd. They would have had to implement a
cgroup controller either way.

In short, you are not required to use systemd if they make this choice. You
may not have the benefit of someone else implementing the solution for you,
but unless you're paying developers for the work, you are never guaranteed
that others will do the work the way you want them to.

> RedHat could have created a service "cgroupwriter" which would have
> communicated via D-Bus or something.

They could have, but it would have added needless complexity. A significant
number of the tasks systemd needs to perform involve dependencies between
cgroups, so it might as well be managed in the init system.

This isn't a hard dependency on systemd. It just means that someone will need
to author other solutions if systemd is not wanted.

~~~
timthelion
> Users and distributions who do not want to use systemd (their choice!) can
> use any other init system they want. They are also free to implement any
> other solution as their cgroup controller. Their situation is not changed
> one iota by this additional feature in systemd. They would have had to
> implement a cgroup controller either way.

The trouble is that Docker doesn't get to choose what cgroup controller
"everyone" will use; it has to support them all. And the idea of interfacing
with systemd on cgroups wasn't very appetizing because it was guaranteed to
lead to Docker needing special compat code for whatever non-systemd version of
cgroups control came out. While I agree with @cwyers that kicking the RedHat
folks out doesn't help, it certainly felt to me like that particular
discussion could have been phrased differently so that no conflict was
created. For example, if the RedHat folks had said that there is going to be a
single-writer policy and that systemd is a cgroup writer on systemd systems,
that would have been far less combative than saying "and systemd is that
writer". I know the distinction is subtle. I'm not sure if I should try to
find the original thread; it was written as comments on diffs in a pull
request and would therefore be rather hard to find. I also feel a little bad
about pulling out the exact names of the people involved, because I think that
the actual people working for RedHat are fine people whom I don't want to
attack, and that this is somehow a problem of RedHat corporate policy rather
than bad apples.

~~~
cwyers
> And the idea of interfacing with systemd on cgroups wasn't very appetizing
> because it was guaranteed to lead to Docker needing special compat code for
> whatever non-systemd version of cgroups control came out.

Unless Docker is planning to move to running on BSD jails or fork the kernel,
they have to do this no matter how unappetizing they find it, once the kernel
moves to single writers on cgroups.

~~~
mkw39ekdm
FreeBSD is working on Docker support:
[https://wiki.freebsd.org/Docker](https://wiki.freebsd.org/Docker)

~~~
fatherlinnux
Have fun with that. The Linux User Space will not work on a BSD kernel, so you
would be forced into a BSD user space.

Now you are back at square zero. There is nothing wrong with BSD, but it's
just another different thing to learn and work with. I am skeptical that this
is the right way of looking at the problem.

Bottom line, the Linux Kernel changed, so Docker has to deal with it.

~~~
ibotty
> The Linux User Space will not work on a BSD kernel

That's wrong, more or less. Read about Linux emulation on BSDs. Of course some
low-level utilities won't work, but most of userspace does.

------
justinsaccount
Docker isn't perfect, but you can't argue with the fact that the container
ecosystem was rather stagnant until recently.

For years I knew of openvz (weird? needs a custom kernel? supported?) and lxc
(seems to be low-level? how do I get started?) and just used xen/kvm on
servers and virtualbox/vagrant locally.

It's funny when Bryan Cantrill talks about how solaris+zones had solid
containers years ago... and he is right, but the end-user UX was abysmal [1].

And then docker comes along and shows...

That this can work:

    
    
      $ docker run -t -i --rm centos:7 /bin/bash
      [root@2214e639debe /]#
    

That building containers can be easy using Dockerfiles instead of 'make random
changes to this container and then clone it forever'
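
For instance, something as small as this (the image, package, and file names
are just an example) is enough to get a reproducible image that anyone can
rebuild with 'docker build -t myhttpd .':

      # Dockerfile (illustrative)
      FROM centos:7
      RUN yum install -y httpd
      COPY index.html /var/www/html/
      CMD ["httpd", "-DFOREGROUND"]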

That with a few cli options you can have persistent volumes and locked down
networking.
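
Something along these lines (names and paths are placeholders; flags as they
exist in the 1.x CLI):

      # data lives on the host, so it survives the container being removed
      $ docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data postgres
      # no network at all except loopback
      $ docker run -d --name batch --net=none myimage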

So, sure, docker may have some issues. In a year we may all be using rkt or
rocker or who knows what. But for now, docker is here and people are using it
for things.

[1] I ran opensolaris at home for a few years. I think I only ever made 2
zones. I never got patching to work and had to choose between a sparse zone
that wouldn't work right or a gigantic full zone that I couldn't manage to
patch.

~~~
timthelion
I'd like to point out that subuser is not moving away from Docker. It seems
pretty clear to me that we don't have any other good options. But the problems
that I mentioned remain, and these problems really are causing subuser to
break. I guess that's why the title of the bug report is "Docker isn't
perfect" rather than "Docker svcks 1111" ;)

------
Ianvdl
I don't use Docker so I don't know how opinionated this issue is, but it
paints a bleak picture of the project.

Breaking stuff after the 1.0 release should generally be reserved for major
versions (i.e. 1.0 -> 2.0), and incorrect basic documentation and specs are
embarrassing for a project of this size.

Is this just a case where the project has been adopted too soon and developed
too fast? It seems like Docker still has maturity issues past version 1.0
(based on a number of other negative responses to the project I've read
elsewhere as well).

~~~
dimgl
I can clue you in: it's extremely opinionated. After 1.0 there haven't been
huge breaking changes.

~~~
toomuchtodo
DevOps/Infrastructure guy here at a startup that uses Docker in production.
Docker has consistently broken in point releases after 1.0.

"Why yes, I'd absolutely love to use your immature containerization ecosystem
to manage mission critical infrastructure".

To be fair, it was my mistake for not pushing back hard against Docker. Lesson
learned.

------
willemmali
If I got it right, the points made in this article are:

* Docker breaks other software

* Docker breaks its own API at random points between major versions

* Documentation is incorrect

* The project's management is bad at working with the community

I was thinking of trying Docker, but I think I'll stick to LXC for now.

~~~
ninkendo
It breaks its own API in _minor_ versions, actually. There hasn't been a major
version since 1.0.

~~~
mattkrea
I think that is what the OP meant. "Between major versions" meaning breaking
changes happening in updates between 1.0 and 2.0, i.e. 1.1, 1.2, etc.

That said... is Docker using semver? It doesn't look that way, so is it
written somewhere that this breaks some standard?

~~~
timthelion
@mattkrea honestly, it never occurred to me to check whether Docker uses
semver. I had simply assumed that it did :P

~~~
mattkrea
I have to admit my use of Docker is quite a bit different from yours, and
while I definitely understand the concerns/complaints, if they aren't using
semver can we really be mad when there are changes like this?

I only use Docker on AWS Elastic Beanstalk, and they mostly only update for
security issues (Beanstalk is using 1.6 as I type this).

If I recall correctly, even the config file format changed between 1.6 and
1.7.

------
jessaustin
Is it odd that no one from the company has responded to this in two days? I'm
not suggesting "entertaining" responses. Rather, they could have chosen a
particular complaint or two and said, "here's how things will work differently
going forward, and here are the new issues in our tracker that reflect this
commitment".

It could be that they just haven't seen this. No one was @-ed.

ISTM coreos/rkt may be the way to go for subuser. In my admittedly-shallow
tests, it seemed simpler, easier, and better than docker, and it can consume
docker files.

~~~
dandandan
Don't worry, they'll have an overly aggressive response soon enough.

------
upbeatlinux
Escape Docker, board your rkt and blast off for unikernel space. Be sure to
avoid the interstellar systemdaemons as they are now sole proprietors of earth
bound cgroupings.

In all seriousness I had never heard of Subuser before this thread and it
looks very promising.

------
nikanj
I wish the suggested solution were something other than "Rewrite Docker".
[http://www.jwz.org/doc/cadt.html](http://www.jwz.org/doc/cadt.html)

~~~
timthelion
@nikanj, Gnome1 vs Gnome2 vs Gnome3 wasn't about control. "Rewrite Docker"
would be a question of control. To some degree, I was publicising this issue
in order to gauge interest in such a breakaway project, which would solve
problems with Docker and take control away from Docker Inc. I also publicised
it because everyone loves issues like this and there is no such thing as bad
press :D

