
Ask HN: Is it just me or why does Docker suck so much? - antocv
For all the massive hype about docker, I am hugely disappointed.

Not being able to change /etc/hosts, /etc/resolv.conf of a container, ugh. Requiring some really ugly hacks just to actually provide a real "containment" of an entire application's environment - "uh yeah, except hosts and resolv, can't do that".

The command syntax lies: docker rmi can untag an image without really removing it, and who came up with the name rmi? docker images already exists; docker images -rm someId would be sane.

The biggest flaw, though, is that it's a pain in the ass to set up a private repository and actually use it.

Isn't there some saner alternative, like lxc with images and sharing?
======
corford
I've been waiting for the hype to die down a bit and for the project to
stabilise before properly playing with it but, from the outside looking in, I
must admit I struggle to see how it's gaining as much attention as it is.

I can see the advantage for dev boxes where a developer might want to set up a
load of containers on their machine to emulate a staging or production
environment. But I don't really understand why you'd want to base your entire
production infrastructure on it.

What's wrong with setting up kvm "gold" images for your various server types
(db server, redis instance, ha proxy server etc.) and then just dd'ing those
images to host machines and using ansible/puppet/chef to do any final role
configuration on first boot? At least that way you've got all the security and
flexibility a proper vm implies with not much more admin overhead than if
you'd used docker.

~~~
emeraldd
That works if you have physical access to the machine/vm. What about EC2?
Linode? Digital Ocean? etc.

~~~
corford
Fair point. I guess for serious stuff I just find it bizarre that you'd want
to run a load of containers in a vm (rather than just spinning up additional
droplets or linode instances). For non 'weekend project' stuff, surely it's
cheaper and more efficient to lease a dedicated server and stick your own vms
on that?

~~~
eropple
At Localytics, we wouldn't need to lease a dedicated server. We'd need to
lease dozens. And we'd need to have the ability to spin up new hardware to
accommodate more load in a minute or two.

You don't get that with physical hardware unless you want to overpay. We could
overpay for depreciating plant assets or we could overpay for variable costs
that we can more directly control. The latter makes sense to us.

~~~
corford
Ok, that makes sense. For your workload (large and elastic), I guess I can see
the advantage of using docker to quickly provision a newly created vm
(assuming the vm's role is completely provided for by just that one
container).

~~~
eropple
Totally. As mentioned in my other comment, it can also let us deploy a _bunch_
of applications on the same virtualized node really quickly via Mesos or Flynn
or whatever.

That said, I use Docker at home too[1] because it does make thinking about
things easier. I dump my blog in a thin container not because I urgently
desire security (though with Wordpress I kind of do worry...), but because it
lets me develop and deploy using the same tools.

[1] - [http://edcanhack.com/2014/07/docker-web-proxy-with-ssl-suppo...](http://edcanhack.com/2014/07/docker-web-proxy-with-ssl-support/)

------
lgbr
Setting up a private repository is a PITA? It's one command:

    docker run -p 5000:5000 registry

Want to back it with S3 or another storage provider? It's still one command:

    docker run -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=acme-docker -e STORAGE_PATH=/registry -e AWS_KEY=AKIAHSHB43HS3J92MXZ -e AWS_SECRET=xdDowwlK7TJajV1Y7EoOZrmuPEJlHYcNP2k4j49T -e SEARCH_BACKEND=sqlalchemy -p 5000:5000 registry

What's such a pain in the ass about this?
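Actually pushing to that registry is also mechanical: tag the image with the registry's host:port prefix, then push. A sketch, assuming the registry is listening on localhost:5000 (the image name is a placeholder):

```shell
# Tag an existing local image with the registry's host:port prefix;
# that prefix is what tells docker which registry to push to.
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp

# Any host that can reach the registry can then pull it back:
docker pull localhost:5000/myapp
```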

~~~
rabino
Did you just post your S3 secret key?

~~~
kachnuv_ocasek
No, it's an example from the readme: [https://github.com/docker/docker-registry#quick-start](https://github.com/docker/docker-registry#quick-start)

~~~
kordless
It's faster to type the question than it was to look it up, like you did.
Thanks for spending the time doing so!

------
tethra
Libvirt has support for lxc these days, if memory serves. I'd recommend it -
docker just seems heavily marketed.

What you're mentioning with hosts/resolv etc is a problem that has been
"solved" with tools like etcd and zookeeper as someone else mentioned.

I tried docker with a couple of things, and found that it is an environment
that (at the time I experienced it, maybe six months ago) was so unhelpful as
to appear completely broken. It isn't for systems administrators, or anyone
who knows how to do things the unix way; it's for developers who can't be
bothered to learn how to do things sensibly. Half the unix ecosystem has been
reimplemented, probably not that well, by people who didn't know it existed in
the first place. That's my conclusion so far.

 _prepares to be flamed_

~~~
vidarh
libvirt seems horribly over-engineered to me. I can't stand it. One of the
great appeals of Docker to me is the combination of simplicity, and the
index/registry.

As someone managing hundreds of VMs, who's been doing Linux sysadmin work
for 20 years, Docker is the best thing that's happened for a _very_ long
time.

~~~
antocv
We can make it better.

systemd's containers just need a registry of tarballs.

Or lxc expanded with a registry and an easy-to-copy root fs.

------
mmgutz
Maybe Docker helps the cloud provider be more efficient, but comparing an
actual Docker container to the Debian VPS we deploy to, we haven't found any
advantage. We ran into pain points like you describe and quickly dismissed
Docker.

Unless you are building the next Heroku or infrastructure as a service I would
be hesitant to recommend Docker.

------
artursapek
[https://terminal.com](https://terminal.com)

~~~
mooism2
Are you trying to say that their service is implemented using Docker?

~~~
artursapek
I'm offering them as an alternative to Docker.

------
opendais
Discussion about the issue is here:
[https://github.com/docker/docker/issues/2267](https://github.com/docker/docker/issues/2267)

 _That said:_

With a DHCP server, you get this warning when you try to edit
/etc/resolv.conf: "Dynamic resolv.conf(5) file for glibc resolver(3) generated
by resolvconf(8) DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE
OVERWRITTEN"

Docker works in a similar fashion to assign IPs so this shouldn't be
surprising.

[http://askubuntu.com/questions/475764/docker-io-dns-doesnt-w...](http://askubuntu.com/questions/475764/docker-io-dns-doesnt-work-its-trying-to-use-8-8-8-8)
[http://docs.docker.com/installation/ubuntulinux/](http://docs.docker.com/installation/ubuntulinux/)

You are supposed to modify /etc/default/docker and use a consistent group of
DNS servers per host. It's simple and it works, honestly.

    DOCKER_OPTS="--dns 8.8.8.8"

Can you tell I disagree? ;)

/etc/hosts shouldn't need modification if you control your dns server...since
you can just place whatever you need there.

 _As for alternatives..._

Docker is popular because all the alternatives are a much bigger PITA to manage.

I'd suggest, if you dislike their private repo system, you just use git to
manage the files for each docker image and create it locally on the host.
[e.g. git clone, cd, docker build .]

I honestly find that works well enough, and it means I don't have to maintain
more than a single gitlab instance for my projects.
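A sketch of that git-based workflow (the repo URL and image name are placeholders):

```shell
# Each image's Dockerfile and build context live in their own git repo;
# build locally on the target host instead of pushing through a registry.
git clone git@git.example.com:ops/myservice.git
cd myservice
docker build -t myservice:latest .
```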

~~~
josteink
All the other alternatives are a pain in the ass to use?

Pardon me, but like op I found docker to be quite unusable, and LXC a breeze
to use.

Why go with the bad abstraction when you can have the real thing? LXC has
everything you want, with zero obscuring "magic" you don't need anyway.

LXC is definitely recommended. For maximum benefit you probably want to couple
it with a fancy filesystem like btrfs, but it's by no means required.

------
h43z
Some problems I currently have with docker.

1. Dockerfiles are too static

You cannot start the docker build process with a variable (e.g. a var that
holds a specific branch to check out) in your Dockerfiles.
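One common workaround is to generate the Dockerfile from a template right before building. A sketch (the placeholder token, file names, and repo URL are all illustrative):

```shell
# Dockerfile.template holds a @BRANCH@ placeholder; substitute it with
# sed before building, since docker build itself takes no variables.
cat > Dockerfile.template <<'EOF'
FROM ubuntu:14.04
RUN git clone -b @BRANCH@ https://github.com/example/app.git /app
EOF

BRANCH=release-1.2
sed "s/@BRANCH@/${BRANCH}/" Dockerfile.template > Dockerfile
# docker build -t app:${BRANCH} .   # then build as usual
```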

2. Managing the build process, starting, and linking of multiple
images/containers

I started with bash scripts, then switched to a tool called fig. Even though
fig keeps a whole setup in a simple config file, I cannot use it because it
does not wait for the DB container to be ready to accept connections before
starting a container that links to it. So I'm back to writing bash scripts.
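The "wait for the DB" part of those wrapper scripts can be done in plain bash with no extra tools. A sketch (host, port, and container names are illustrative; bash's /dev/tcp is assumed):

```shell
# Poll host:port until a TCP connection succeeds, up to $3 attempts
# one second apart; returns non-zero if the port never opens.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    # The subshell opens (and implicitly closes) a TCP connection.
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example usage (addresses are illustrative):
# wait_for_port 172.17.0.5 5432 && docker run -d --link db:db myapp
```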

3. Replacing running containers

Restarted containers get a new IP, so all the links stop working. I had to set
up a DNS server and wrap container restarts in bash scripts again.
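The new IP can at least be looked up mechanically, e.g. to feed the DNS server from a wrapper script. A sketch (the container name is illustrative):

```shell
# Print the container's current IP after a restart; a wrapper script can
# push this into the DNS server instead of relying on stale links.
docker inspect --format '{{ .NetworkSettings.IPAddress }}' db
```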

I had no big issues creating a private registry after the ssl certificate was
created correctly.

------
j_s
A lot of tips for DNS and Docker were discussed two weeks ago:
[https://news.ycombinator.com/item?id=8107574](https://news.ycombinator.com/item?id=8107574)

\- serf, consul, etcd, dnsmasq, zookeeper, rubydns, skydock

------
PetrolMan
Docker certainly has its pain points but I've actually really enjoyed working
it into our build and deployment process.

The command syntax is sometimes a bit clunky but they are making regular
improvements.

Not to repeat lgbr, but the registry is pretty easy to get running. I had some
problems initially, but it was mostly because I didn't really understand how
to use a container all that well. That sounds somewhat silly, but it
ultimately was true.

Finally, the hype is ultimately a good thing. There's a lot of focus on the
project right now, which (hopefully) means we can expect a good deal of
improvement and stability in the near future.

~~~
stormbeta
The private registry is easy to get running, but actually using it is very,
very clunky.

I understand they want the public index to "always work", but the requirement
to tag images with the FQDN for docker to even try to use a separate registry
breaks a lot of very common use cases, such as transparent caching and
mirroring of images.

------
kimh
Docker is good for running application containers, not for virtualizing an OS.
So playing with system configuration files such as /etc/hosts or
/etc/resolv.conf is not something you want to do in Docker. You should
use LXC instead.

I agree that Docker sucks sometimes, but most of the time it's because the
existing application isn't built the way Docker expects. I'm optimistic that
more and more applications will be designed for container environments.

------
ninkendo
After having used docker for "real" things for the past 8 months or so, I
definitely agree with you that it kinda sucks.

Docker's strengths come from the workflow you get to use when you use it...
"Run /bin/bash in ubuntu" and it just works. For developers that's great. As a
backend that does the heavy lifting when you're building a lot of operations
automation (like a PaaS), it starts to break down.

Just some of the things I've come across:

* Running a private registry is awkward. You have to tag images with the FQDN of your registry as a prefix (which is braindead) for it to "detect" that it's supposed to use your registry to push the image. "Tags" as an abstraction shouldn't work that way... they should be independent of where you want to store them.

* Pushes and pulls, even over LAN (hell, even to _localhost_ ) are god-awful slow. I don't know whether they're doing some naive I/O where they're only sending a byte at a time, or what, but it's much, much, much slower than a cURL download to the same endpoint. Plus if you're using devmapper then there's a nice 10-second pause between each layer that downloads. btrfs and aufs are better but good luck getting those into a CentOS 6 install. This is a major drawback because if you want to use docker as a mesos containerizer or otherwise for tasks that require fast startup time on a machine that hasn't pulled your image yet (ie. a PaaS), you have to wait far too long for the image to download. Tarballs extracted into a read-only chroot/namespace are faster and simpler.

* Docker makes a huge horrible mess of your storage. In the devmapper world (where we're stuck if we're using CentOS) containers take up tons of space (not just the images, but the containers themselves) and you have to be incredibly diligent about "docker rm" when you're done. You can't do "docker run --rm" when using "-d" either, since the flags conflict.

* In a similar vein, images are way bigger than they ought to be (my dockerfile should have spit out maybe 10 megs tops, why is this layer 800MB?)

* The docker daemon. I hate using a client/server model for docker. Why can't the lxc/libcontainer run command be a child process of my docker run command? Why does docker run have to talk to a daemon that then runs my container? It breaks a lot of expectations for things like systemd and mesos. Now we have to go through hoops to get our container in the same cgroup as the script running docker run. It also becomes a single point of failure... if the docker daemon crashes so do all of your containers. They "fix" this by forwarding signals from the run command to the underlying container but it's all a huge horrible hack when they should just abandon client/server and decentralize it. (It's all the same binary anyway).

The other issues we've seen have mostly just been bugs we've seen that have
been fixed over time. Things like containers just not getting network
connections at all any more (nc -vz shows them as listening but no data gets
sent or received), changing the "ADD /tarball.tgz" behavior repeatedly
throughout releases, random docker daemon hangs, etc.

As we use docker for more and more serious things, we're getting an odd
suspicion that we're outgrowing it. We're sticking with it for now because we
don't have the time to develop an alternative, but I really wish it were
faster and more mature.

~~~
antocv
I ran into dm issues too, something with grsecurity stopping a bruteforce.

Oh, and that `ADD this /that` will chown 0:0 that. Come on.

~~~
shykes
Hi, the reason ADD applies a `chown 0:0` is to avoid applying the uid/gid of
the files on the source system, which could be anything and would introduce a
side effect in your application's build.

There is a pull request for letting you set the destination uid/gid
deterministically, which is the right way to do it.

Just a reminder that we accept bug reports and patches :)

------
cpuguy83
And /etc/hosts, /etc/resolv.conf are now writable.

    docker run -d registry

`docker rmi` can be tricky, but it can and does actually remove an image. The
problem is if there is an image with multiple tags at the same commit
(layer)... in this case it just untags until there are no other tags pointing
to that commit. Alternatively, you can use the ID, I believe that always
removes as expected.
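A sketch of that untag-vs-remove behavior (image and tag names are illustrative; exact behavior may vary by docker version):

```shell
# Two tags pointing at the same image ID: removing by tag only untags
# while other tags still reference the layer.
docker tag myimage:latest myimage:backup
docker rmi myimage:backup            # only removes the "backup" tag

# Removing by ID (forced, since several tags may still reference it)
# actually deletes the image layers.
docker rmi -f $(docker images -q myimage)
```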

------
dz0ny
Q: Why do you want to change /etc/hosts and /etc/resolv.conf (besides hacking
DNS for testing purposes)?

~~~
est
Some of us use HOSTS to write config, e.g.:

    10.0.12.34 mysql_host

This allows _some_ degree of flexibility:

1. Boot up new instances quickly, as long as HOSTS is correct.

2. Don't have to hard-code actual mysql server IPs.

3. Makes mysql master/slave failover much easier.

~~~
ironchef
I think most folks doing devops end up using things more like zookeeper,
consul, etc. instead to perform the above as opposed to hosts.

~~~
ajdecon
Remember you may have to deploy third-party processes as well as apps whose
code you control. Most existing software uses OS-standard APIs for doing
things like resolving hosts, and you can't just point it at a path in zk. That
means running a DNS server or configuring /etc/hosts.

------
vsipuli
You might want to try systemd-nspawn
([http://www.freedesktop.org/software/systemd/man/systemd-nspawn.html](http://www.freedesktop.org/software/systemd/man/systemd-nspawn.html)).
It is much more bare-bones than Docker, but in some use cases that might
actually be an advantage.

~~~
antocv
I like systemd-nspawn much more than docker. Thanks.

But ubuntu is still anti-systemd, gah.

------
contingencies
[http://stani.sh/walter/pfcts](http://stani.sh/walter/pfcts)

------
SeoxyS
I've found docker to be a revolutionary abstraction. By itself, it's not a
silver bullet. But combined with the right tools, it completely shifts how you
think about devops.

I recommend looking into: Quay for private image hosting and building, CoreOS
as an environment to run in production.

------
netcraft
This seems to suggest that the /etc/hosts, /etc/resolv.conf problem is being
fixed:
[https://github.com/docker/docker/pull/5129](https://github.com/docker/docker/pull/5129)

------
jenrzzz
It may just be you, but given Docker's youth and novelty, you should expect
some frustration. If everything about it seems like a pain in the ass, it
might be the wrong tool for your use case.

------
rabino
I've adopted Docker in my main dev workflow and I'm super happy. And the 4 or
5 co-workers I showed how it works have since adopted it too.

I think it's very possible it's just you.

~~~
Ayey_
Would you mind sharing that workflow? I'm interested in how this is done,
since I've failed to do it on several occasions.

~~~
monkey26
I have failed to adopt it as well, and am still using Vagrant, but as my
desktop is Linux, vagrant is a little bit of overkill.

The primary stopper for me is run vs start I think - and persistence of the
most recent change I made in an environment.

~~~
bnjs
`run` creates a container from an image and tries to start it. If it doesn't
start you have a stopped container in `docker ps -a`. If it does start, you
have a started container in `docker ps`. You could stop a started container if
you wanted, and later start it again without having to use `run` all over
again.

Sometimes running fails to start the container because there's a problem.
You'll then have a stopped container.
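A minimal illustration of that lifecycle (container and image names are illustrative):

```shell
# run = create a new container from an image and start it.
docker run -d --name web nginx

# stop leaves the container in place; it shows up in "docker ps -a".
docker stop web
docker ps -a

# start brings the same container back, without another "run";
# its filesystem changes from the previous session are preserved.
docker start web
```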

------
simonebrunozzi
I'm interested to hear everybody's take on how Docker
compares/competes/integrates with virtualization (e.g. VMware, KVM, Hyper-V).

~~~
antocv
They're not the same thing, containment and virtualization.

A bit like comparing apples to oranges. Security-wise, KVM beats docker.

I like lxc and systemd's machinectl containers. But docker disappointed me. It
fails to be a full container.

------
lelf
[https://github.com/pannon/iocage](https://github.com/pannon/iocage). What?
You asked for the alternative.

------
samirmenon
The conspiracy theorist in me says this is someone at Docker looking for
competition.

~~~
antocv
If no one else steps up to it, I'll do it myself.

------
brogrammer90
The truth is there isn't that much work to do anymore, so the tech employee
convinces their boss's boss that some hyped-up tech will make them even more
efficient. After a minimum of 3 months to integrate the new tech into the
workflow, that same employee will ADHD his way into some other tech seen
at Velocity and leave the internally-wiki-documented mess to some poor sap to
maintain.

------
geekbri
I personally would say "It's just you."

Docker has been huge for us. We can run the same containers locally that we
run in production. Dockerfiles are SIMPLE and easy for people to create and
understand.

Fig has really made using docker for local dev rather pleasant.

Are there some hiccups here and there? Yes. The project is young and they are
actively trying to smooth over a lot of issues and pain points.

I feel like most people who dislike docker have not actually tried it or used
it. That could just be a wrong opinion though.

