Docker 1.2.0, with restart policies (docker.com)
187 points by julien421 878 days ago | 72 comments



Hi all, no world-changing features in this one, but we believe that over time, relentless incremental improvements can make a huge difference.

This week we are freezing all feature merges and focusing on refactoring, code cleanup and generally repaying as much technical debt as possible.

We are also considering a gradual slowdown of the release cadence (we currently cut a release every month), to give more time for QA. Even though we work hard to keep master releasable at all times and run every merge through the full test suite, in practice there can never be enough real-world testing before a release. An 8-week cycle (which is roughly what Linux does) would allow us to freeze the release 1-2 weeks in advance and do more aggressive QA.


Are you kidding? The ability to modify /etc/hosts and to specify exact capabilities is awesome. Thanks for the great work. I for one have waited a long time for this :)
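For reference, the new flags from this release look like this (image name is a stand-in):

    # grant one capability on top of the default set
    docker run --cap-add=NET_ADMIN myimage
    # or drop one from the default set
    docker run --cap-drop=CHOWN myimage
    # the new --device flag from the same release, for direct device access
    docker run --device=/dev/snd:/dev/snd myimage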


Hi Solomon! I agree with prudhvis here - I think this is quite a nice add!

Keep up the good work!


Now that needs to be possible in docker build:

https://github.com/docker/docker/issues/1916

That said, yes this release looks interesting!


Glad I'm not the only one. I maintain an embedded system and a bunch of its apps - at every point in the Dockerfile where mknod & friends must do their thing, I have to cut the Dockerfile right there and do docker run --privileged for that build step... and so on.

It hasn't bothered me enough to comment on that issue though, Docker has really improved my workflow enormously.
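For anyone hitting the same wall, the workaround looks roughly like this (all names are made up):

    # build as far as the Dockerfile can go unprivileged
    docker build -t myapp:partial ./stage1
    # do the privileged step by hand and commit the result
    docker run --privileged --name mknod-step myapp:partial \
        sh -c 'mknod /dev/mydev c 42 0'
    docker commit mknod-step myapp:stage2
    # ...then continue with a second Dockerfile that starts FROM myapp:stage2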


You guys are doing a fantastic job; anything that gives you a bit more of a breather between releases and increases quality is something I'm certain the community will embrace.


I've spent literally the last week working out restart strategies for our critical containers, so it's fair to say this release has something pretty world-changing for me: I might be able to ship next week now. Seriously, I was writing bash wrappers to listen for exit codes and all sorts to make my own pseudo exit hooks on the host, so this is just awesome.

Thanks guys!
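PS for anyone landing here later: the new flag is all it takes (image name is a placeholder):

    # retry up to 5 times on non-zero exit, then give up
    docker run -d --restart=on-failure:5 myapp
    # restart unconditionally, whatever the exit code
    docker run -d --restart=always myapp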


I literally needed writable /etc/resolv.conf just this morning. You couldn't have timed the release any better.


ravenkat: in case you see this, it looks like you've been hellbanned. Since your question was a reasonable one, I'll copy it here:

"Currently if we restart docker daemon, all the running containers also restart. Are there any plans to keep the running containers running even if we upgrade our docker daemon?"


Definitely a reasonable question.

The answer is yes, there are plans to do exactly that :)


This seems to be the real 1.0 release...


This is excellent news. The lack of a container restart policy was the main reason why I was spending a bunch of time learning CoreOS and fleet.

Trying to get CoreOS installed on VPS providers is a huge pain[0], and fleet and etcd are technically not labelled as production-ready (only CoreOS used as a base OS is)[1], so I'm really glad I can go back to vanilla Docker.

[0]: http://serverfault.com/a/620513/85897

[1]: https://coreos.com/blog/stable-release/


> I was wasting a bunch of time learning CoreOS

I'm pretty sure CoreOS and Docker are different solutions to different problems. So... one is not really interchangeable with the other, and thus you were not "wasting" your time.

So, perhaps you didn't understand the problem you were trying to solve? Docker is about application environment isolation/portability (not! virtualization! -- there is no security provided here), whereas CoreOS is about scaling and HA.


You're right, "wasting" was really harsh - edited. I do think it's a valuable skill to have and I didn't waste my time, I just spent it.

However, right now I don't have a Mesos-type of cluster with a bunch of horizontally scalable services that would truly benefit from CoreOS, Consul, etc. I've got three basic services that are going to be pinned to specific boxes. I don't need an elastic scaling solution right now - I just need container restarting, and I'll handle provisioning and upgrades myself with Ansible. Basically I don't have cattle right now, I have kittens.

I had been looking for a way to manage my containers with the usual supervisors - supervisord, systemd, etc., but didn't see any resources on anyone else doing it. After I started adding systemd on the host to manage container restarts, and a playbook for managing Docker daemon upgrades, I started to feel like I was re-inventing CoreOS, so I gravitated towards it. CoreOS also has other benefits like being a minimal Docker host and providing the distributed configuration with etcd so it seemed like a good idea.
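(For the curious, the kind of unit I mean is a rough sketch like this -- names made up, details vary:)

    [Unit]
    Description=my app container
    Requires=docker.service
    After=docker.service

    [Service]
    Restart=always
    # clean up any stale container from a previous run; "-" ignores failure
    ExecStartPre=-/usr/bin/docker stop myapp
    ExecStartPre=-/usr/bin/docker rm myapp
    # run in the foreground so systemd supervises the process
    ExecStart=/usr/bin/docker run --rm --name myapp myimage
    ExecStop=/usr/bin/docker stop myapp

    [Install]
    WantedBy=multi-user.target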

But to fit within that paradigm I had to start re-wiring my images to be horizontally scalable. Then I ran into the problem of how hard it was to just get CoreOS installed on Linode, and got frustrated.

I didn't mean to knock CoreOS or anything, I'm just saying this solved a use case problem I've been having. I'm sure if I spent more time with CoreOS I would have gotten everything working correctly, but at this point I'm just trying to run my multi-host Docker application in a stable way without doing the whole horizontally scalable cluster thing.


> there is no security provided here

Actually there is plenty of security[1]. It may not be "as secure" as a traditional virtualization platform, but that doesn't mean Docker/containers don't have plenty of capabilities (ha!) to offer in terms of security.

You need to understand what they offer, but the old "Docker isn't secure" line is no more truthful than "Docker is a security solution". Security is a continuum, and Docker has some very interesting security uses.

I've already linked to [1], but I'd encourage all to read it.

[1] http://www.slideshare.net/jpetazzo/is-it-safe-to-run-applica...


Try Vultr, they support CoreOS (and FreeBSD!).

https://coreos.com/docs/running-coreos/cloud-providers/vultr...


Etcd isn't considered production-grade?


Well, Docker just slapped a "1.0" sticker on and called itself "production ready" when it clearly wasn't (features like those in this 1.2 release are kind of mandatory for any serious deployment).


You could already do these things outside of dockerland via the host-integration stuff (ie, systemd to handle restarts/monitoring).

1.0 was/is about API stability, engine stability, full ecosystem with Docker Hub, and enterprise support.


Yeah, I really think this stuff should stay out of dockerland. Not that I dislike the feature; it just seems to overlap with established tools that excel at doing one thing well.


FWIW, now that docker is handling the restart/monitoring it can actually do it better than if systemd is doing it, since docker knows that it doesn't have to tear down and re-create the network namespace, unmount/remount the container's FS, etc.

So when a container is restarted via the restart policy, it not only happens faster, it will get the same IP as well.


> it will get the same IP as well.

Not for me. Just tested it.


How did you test it?


I'm not sure how I feel about this (edit - restart policies). It's cool, but it seems to ignore what the OTP part of the Erlang world learned. They've already gone to "X number of restarts = failure", but with no time window involved. There's also no hierarchy, which is where you really start to get the benefits.

While great, I worry that this is a partial solution that will delay the implementation of a proper one.


You can happily use containers supervised by systemd in a hierarchy like any other process if you like.


1) We do degrade restarts over time. 2) There is hierarchy (so it will start linked containers).


Perhaps I'm being a bit harsh.

> 1) We do degrade restarts over time.

Is this adjustable? X restarts in Y seconds? I feel this is something that could be quite application specific.

> 2) There is hierarchy (so it will start linked containers).

So if A has a link to B, will A be restarted if B is restarted?

I'd be aiming to build hierarchies with restart strategies like these:

http://www.erlang.org/doc/design_principles/sup_princ.html


1) I know you can specify the number of restarts; not sure you can specify the gradual decay of the restarts.

2) No, this is not the goal of the restart policies. I think this would be best implemented by something watching the docker event stream.
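Something as small as this sketch would do (names are made up, and the exact event format varies between versions):

    # restart container B whenever container A dies
    A=$(docker run -d appa-image)    # docker run -d prints the new container ID
    B=$(docker run -d appb-image)
    docker events | while read line; do
        case "$line" in
            *"$A"*die*) docker restart "$B" ;;
        esac
    done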


Maybe a bit off-topic.

I haven't found a satisfactory solution for having containers communicate across multiple hosts. There seem to be quite a few solutions in the making (libswarm, geard, etc). How are other people solving this (in production, beyond two or three hosts)?



Interesting, thanks! FWIW, ambassadord has consul integration and that's something I've wanted.


CoreOS seems to do what you are asking.


I simply expose ports (or do --net=host) and communicate between hosts in the normal fashion. Unless you don't trust your host I don't see the problem with that.
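i.e. nothing fancier than this (images and ports are just examples):

    # publish the service port on the host; other hosts talk to host-ip:5432
    docker run -d -p 5432:5432 postgres
    # or skip port mapping entirely and share the host's network stack
    docker run -d --net=host myservice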


Writable `/etc/hosts`, `/etc/resolv.conf` is huge - no more local dns hacks.
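Concretely, this now works from inside a container, where it used to fail because the file wasn't writable (host entry is made up):

    docker run -i -t busybox /bin/sh
    # inside the container:
    echo "10.0.0.5 internal-db" >> /etc/hosts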


Yeah, that one was super-high on the request list; there were 3 competing patches before we finally got it right.


Why is this not supported during docker build? I'm not very familiar with the --privileged flag - isn't that only related to sharing on the host?

Edits to /etc/hosts in a container don't affect the host, so it seems like everything should Just Work.


I think it is supported. The feature we are discussing allows the container to change its own resolv/host files, from the inside. I'm pretty sure that works from build also. But it's possible that I missed one of the patches (a sign of healthy autonomy and trust between maintainers).


Hm...this sounds like a bit of a disconnect from the 1.2 announcement post, which says:

"Note, however, that changes to these files are not saved during a docker build and so will not be preserved in the resulting image. The changes will only “stick” in a running container."

If I do RUN echo "127.0.0.1 somehost" >> /etc/hosts or COPY container_hosts /etc/hosts, according to the announcement wording, this will not persist in my image.

If this is incorrect, could you clarify what isn't saved during docker build? If this is correct, why is docker build unable to retain these changes?


Ah, you are correct. Every container can now modify its own /etc/resolv.conf and /etc/hosts, but these changes are not kept when a new image is committed from the filesystem of that container. Even if they were committed, the runtime would overwrite them when creating a new container (Docker continues to inject initial values into these files to provide a predictable networking environment to the application).

Now, in its current implementation "docker build" commits an intermediary image after each build step. These intermediary images are used by the build cache, to speed up successive builds. These intermediary images are created in exactly the same way as every other image - which means the same rules apply for /etc/resolv.conf and /etc/hosts. An unfortunate side effect is that changes to these files are not shared between build steps.
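In Dockerfile terms:

    FROM ubuntu
    RUN echo "10.0.0.5 internal-db" >> /etc/hosts
    # the entry above is gone by this step: each RUN starts a fresh container
    # from the committed image, and the runtime re-injects /etc/hosts
    RUN cat /etc/hosts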

We can solve this in a future version (or even a hotfix if necessary). A relatively quick stopgap would be for "docker build" to copy these files across build steps. The long-term solution is to no longer commit a full-blown intermediary image at each build step, but instead to use a snapshotting facility more tailored to the needs of the build caching system. While we're at it we can make sure it preserves /etc/hosts and /etc/resolv.conf.

Sorry for the inconvenience, I hope the explanation helps.


No, the PR that was chosen ensures that it doesn't commit these files, so they can be set up fresh on start. So it is up to each container's startup to set them up as needed before continuing.

(just tested it out as this was my assumption all along).


Any update on when the OS X version will be available? I'm only seeing version 1.1.2 here:

https://github.com/boot2docker/osx-installer/releases


The boot2docker devs are running their build right now. The QA for that part is harder to automate (mac, windows, virtualbox etc). I think starting with the next release we'll need to pull in b2d builds as a gateway to the main release process to avoid this problem.


Use the Vagrant env, boot2docker is a mess IMO.


I liked the Vagrant env but I don't see it mentioned as an option here: https://docs.docker.com/installation/#installation

Are there any up-to-date installation instructions for the Vagrant env?


It's no longer supported. I use the standard Mac setup (boot2docker + Virtualbox) and it works quite well for me. It's my primary environment for developing Docker :)


Why is boot2docker a mess? I find it unusable due to lack of file sharing from the host through to a container. Are there any other showstoppers?


And anything related to FreeBSD Jail support? :)


Docker != Jails

(not in any way, shape, or form). Please stop trying to make Docker into everything it's not.


Docker currently uses LXC. Isn't it reasonable to suggest porting it to other jail-like solutions?


It doesn't use LXC (and hasn't for a while); it uses cgroups and namespaces in a similar way to what LXC does. This is even exposed as a library they call libcontainer.


Well, simplifying here, but Docker is more-or-less a fancy wrapper for LXC.

FreeBSD Jails do a lot more than just make applications portable -- they provide security and isolation between applications (like a super chroot). Jails can be used to safely provide application hosting for various clients, while Docker should only be used for your own applications (jails prevent clients from messing with the host system or each other, while Docker applications can, making it not secure for a multi-client hosted system -- but then again, it was not designed to do that).

So, no, it's not reasonable to want to "port" Docker to anything, since Docker is its own thing. It would be more reasonable to port Docker to FreeBSD (if it weren't already) than to "port" it to a jail, or to ask it to operate like a jail (unless you got those features into LXC first, which isn't happening, so it won't).

~

A better (but admittedly over-simplified) comparison would be Jails are closer to application virtualization and Docker is closer to application portability. Very different problems they are solving.


I don't think you are understanding what docker is doing.

Docker interfaces with the kernel to provide security and isolation via cgroups and namespaces. This is exactly what jails do, and jail support is indeed on the list of things to be added. It's really a matter of someone taking the time to write the driver for it.

Docker also provides an image format and infrastructure for helping to make applications portable.


> I don't think you are understanding what docker is doing.

Seems like you actually are not understanding what docker is doing. Docker (and LXC for that matter) aren't about security -- they are about portability of the application and environment. Everything else is tertiary.

> It's really a matter of someone taking the time to write the driver for it.

It's a bit more complicated than that -- the two are different beasts with different goals.

> Docker also provides an image format and infrastructure for helping to make applications portable.

This is the main goal of Docker -- making applications and their environment portable.

> Docker interfaces with the kernel to provide security and isolation via cgroups and namespaces

Neither cgroups nor namespaces provide security in the same sense as a virtual machine or a virtualized app (jails). Cgroups are about resources allocated from the host, and namespaces are about process isolation... but neither prevents different containers from interacting with each other or with the host. This is the security aspect -- which Docker (and LXC) were not designed to provide. The problem they solve does not require it.

Use the right tool for the right job. If you are going to host a bunch of applications for different people -- go with virtualization, either via a hypervisor or a jail. If you are going to deploy applications in an enterprise environment and need them to be consistent always, across all distros and versions -- go with LXC/Docker.


> Seems like you actually are not understanding what docker is doing

cpuguy83 is a Docker core maintainer: https://github.com/cpuguy83

He's answered plenty of my questions in #docker IRC.


I don't know about core maintainer, but I contribute where I can :)


The cgroups and namespaces do indeed provide a layer of security. We also drop certain capabilities, so for instance root inside the container can't (by default) manipulate iptables, mount things, change network settings, etc. Later would also come user namespaces, so root inside the container != root outside the container. There's also a significant amount of support within Docker for selinux/apparmor stuff.
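A trivial demonstration:

    # root in a default container can't mount filesystems (SYS_ADMIN is dropped),
    # so this fails with a permission error
    docker run busybox mount -t tmpfs none /mnt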

Indeed, all these things come together to do exactly what jails do.

Jails do not provide the same security as a VM, just like what Docker does is not providing that same level of security. You are kidding yourself if you think they do.

It's all layers... like ogres... or onions :)


I get the PR angle... But over-representing Docker is doing more harm than good. People are reading things like "We would like to add feature X" or "Implementing feature X is on the roadmap", but interpreting it as "Docker does all these things right now". Soon people will be talking about how Docker makes pizza too.

Can Docker be secure? Sure -- is it? No.


These are all things that are there now, except user namespaces (which is indeed huge), not roadmap items.


> I don't think you are understanding what docker is doing. Docker interfaces with the kernel to provide security and isolation via cgroups and namespaces.

If I were to nitpick I would say docker is doing none of those things. LXC is. Docker is just freeloading off LXC while providing almost no benefits.

I tried Docker, but quickly discovered it was a cumbersome interface on top of LXC, and if you wanted to get any real work done, you needed to manage LXC yourself anyway. So... why should I bother with Docker in the first place then?

Docker may be good enough for some people, but I feel LXC, which Docker is actually built on, is getting no credit, when clearly it deserves 99% of it.


> If I were to nitpick I would say docker is doing none of those things. LXC is. Docker is just freeloading off LXC while providing almost no benefits.

This was true in early versions, but now the default container "exec driver" is libcontainer (a pure Go container implementation), and you can swap in the LXC exec driver if you wish. To say Docker is piggybacking on top of LXC is unfair when in reality Docker wrote their own container implementation.
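If memory serves, the swap is a daemon flag (check the docs for your version):

    # default is the native libcontainer driver; LXC is opt-in
    docker -d --exec-driver=lxc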

> I tried Docker, but quickly discovered it was a cumbersome interface on top of LXC, and if you wanted to get any real work done, you needed to manage LXC yourself anyway. So ... Why should I bother with Docker in the first place then?

How long ago was this? The project moves fast and new features get added all the time.


Almost no benefits? The good parts Docker adds on top of plain LXC:

- Layered file system for containers, committing
- Images, easily transferable
- Remote API
- Network interfaces
- Linking containers
- Nice build system: one Dockerfile and you are almost good to go

I really wonder what real work you want to do that you cannot do with Docker, such that Docker becomes cumbersome.


LXC and FreeBSD Jails are almost completely comparable[1]. LXC does indeed attempt to provide a security wrapper, in (very) roughly the same way jails do.

Indeed, dotCloud (ie, Docker before it was Docker) was using LXC as a security measure to isolate clients inside their PaaS (see pg 8 of [2]).

It has long been speculated that it would be possible to port the Docker API to other container mechanisms.

Personally I don't think this should be a priority - I'd much prefer Docker put all their resources behind building the best experience possible on a single platform.

Nevertheless, asking about it is a valid question.

[1] http://en.wikipedia.org/wiki/LXC#Alternatives "LXC is similar to other OS-level virtualization technologies on Linux such as OpenVZ and Linux-VServer, as well as those on other operating systems such as FreeBSD jails, AIX Workload Partitions and Solaris Containers."

[2] http://www.slideshare.net/jpetazzo/is-it-safe-to-run-applica...


Docker definitely does all of the things you describe jails as doing.


How to deal with persistent storage (e.g. databases) in Docker 1.2?

Is this info up-to-date? http://stackoverflow.com/questions/18496940/how-to-deal-with...


Mountable volumes and data containers work for us in staging atm; you can easily spin up single-task containers to pull the data out, tar it up and send it to S3 as a backup regime as well.
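Roughly like so, with all names made up:

    # a data-only container that owns the volume
    docker run -v /var/lib/postgresql --name pgdata busybox true
    # the actual database container mounts it
    docker run -d --volumes-from pgdata --name pg postgres
    # a throwaway task container tars the volume up for shipping to S3
    docker run --rm --volumes-from pgdata -v "$(pwd)":/backup busybox \
        tar czf /backup/pgdata.tar.gz /var/lib/postgresql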



Yes, no changes here.


Any idea when AWS Elastic Beanstalk will start supporting this version?


Is it possible yet to build and "RUN" multiple sublayers inside the same Dockerfile?


Oh this is juuuust great. /sarcasm.

So now docker is taking on the work of what systemd and other daemon managers are supposed to solve? Looking forward to docker run --restart=on-failure ubuntu /bin/bash -c "exit 1"

When you include a --restart "feature", you know for sure you have done goofed.

But anyway, the rest of the stuff looks like pure candy. Great job!


1) Then don't use it in your scripts.

2) Restart policies are exposed in the API, so daemon managers can leverage this feature to hook in their own restart policies in a consistent, supported way.
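From memory it looks something like this -- the start body is the HostConfig in this era of the remote API, but field names may differ between versions, so check the docs:

    POST /containers/<id>/start HTTP/1.1
    Content-Type: application/json

    {"RestartPolicy": {"Name": "on-failure", "MaximumRetryCount": 5}}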


There's definitely feature overlap, and there are still problems running docker via systemd, as shown by the existence of workarounds like https://github.com/ibuildthecloud/systemd-docker



