This week we are freezing all feature merges and focusing on refactoring, code cleanup and generally repaying as much technical debt as possible.
We are also considering a gradual slowdown of the release cadence (we currently cut a release every month), to give more time for QA. Even though we work hard to keep master releasable at all times and run every merge through the full test suite, in practice there can never be enough real-world testing before a release. An 8-week cycle (which is roughly what Linux does) would allow us to freeze the release 1-2 weeks in advance and do more aggressive QA.
Keep up the good work!
That said, yes this release looks interesting!
It hasn't bothered me enough to comment on that issue though; Docker has really improved my workflow enormously.
"Currently if we restart docker daemon, all the running containers also restart. Are there any plans to keep the running containers running even if we upgrade our docker daemon?"
The answer is yes, there are plans to do exactly that :)
Trying to get CoreOS installed on VPS providers is a huge pain, and fleet and etcd are technically not labelled as production-ready (only CoreOS used as a base OS is), so I'm really glad I can go back to vanilla Docker.
I'm pretty sure CoreOS and Docker are different solutions to different problems. So... one is not really interchangeable with the other, and thus you were not "wasting" your time.
So, perhaps you didn't understand the problem you were trying to solve? Docker is about application environment isolation/portability (not! virtualization! -- there is no security provided here), where CoreOS is about scaling and HA.
However, right now I don't have a Mesos-type of cluster with a bunch of horizontally scalable services that would truly benefit from CoreOS, Consul, etc. I've got three basic services that are going to be pinned to specific boxes. I don't need an elastic scaling solution right now - I just need container restarting, and I'll handle provisioning and upgrades myself with Ansible. Basically I don't have cattle right now, I have kittens.
I had been looking for a way to manage my containers with the usual supervisors - supervisord, systemd, etc., but didn't see any resources on anyone else doing it. After I started adding systemd on the host to manage container restarts, and a playbook for managing Docker daemon upgrades, I started to feel like I was re-inventing CoreOS, so I gravitated towards it. CoreOS also has other benefits like being a minimal Docker host and providing the distributed configuration with etcd so it seemed like a good idea.
But to fit within that paradigm I had to start re-wiring my images to be horizontally scalable. Then I ran into the problem of how hard it was to just get CoreOS installed on Linode, and got frustrated.
I didn't mean to knock CoreOS or anything, I'm just saying this solved a use case problem I've been having. I'm sure if I spent more time with CoreOS I would have gotten everything working correctly, but at this point I'm just trying to run my multi-host Docker application in a stable way without doing the whole horizontally scalable cluster thing.
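For what it's worth, the plain systemd-on-the-host approach mentioned above usually comes down to one unit file per container, along these lines (the `myapp` name and image are placeholders, and the exact `ExecStart` flags depend on your setup):

```ini
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
# Let systemd do the restarting instead of the Docker daemon.
Restart=always
# "-" prefix: don't fail the unit if there's no old container to remove.
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp myimage
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

The trade-off is exactly the one described above: this works fine for a handful of "kittens" pinned to known boxes, but you end up rebuilding pieces of what CoreOS ships out of the box.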
Actually there is plenty of security. It may not be "as secure" as a traditional virtualization platform, but that doesn't mean Docker/containers don't have plenty of capabilities (ha!) to offer in terms of security.
You need to understand what they offer, but the old "Docker isn't secure" thing is not more truthful than "Docker is a security solution". Security is a continuum, and Docker has some very interesting security uses.
I've already linked to , but I'd encourage all to read it.
1.0 was/is about API stability, engine stability, full ecosystem with Docker Hub, and enterprise support.
So when a container is restarted via the restart policy, it not only happens faster, it will get the same IP as well.
Not for me. Just tested it.
While great, I worry that this is a part-solution that will delay the implementation of a proper one.
> 1) We do degrade restarts over time.
Is this adjustable? X restarts in Y seconds? I feel this is something that could be quite application specific.
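For reference, "degrading" restarts usually just means an increasing delay between attempts. A minimal sketch of the idea (the doubling factor and cap here are illustrative assumptions, not Docker's actual values):

```python
def restart_delay(attempt, base=0.1, cap=60.0):
    """Seconds to wait before restart number `attempt` (0-based),
    doubling each time up to a fixed cap."""
    return min(base * (2 ** attempt), cap)

# Delays grow 0.1, 0.2, 0.4, ... and then flatten out at the cap,
# so a crash-looping container can't hammer the daemon forever.
delays = [restart_delay(n) for n in range(12)]
```

Whether the base/cap should be user-tunable is exactly the "application specific" question raised above.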
> 2) There is hierarchy (so it will start linked containers).
So if A has a link to B, will A be restarted if B is restarted?
I'd be aiming to build hierarchies with restart strategies like these:
2) No, this is not the goal of the restart policies. I think this would be best implemented by something watching the docker event stream.
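Something in that direction can stay quite small: keep a map of links, watch for die events, and restart the dependents. A sketch of the decision logic only (the `links` topology is made up for illustration; in practice you would feed this from lines emitted by `docker events`):

```python
def containers_to_restart(event_container, event_status, links):
    """Given one event from the stream, return the linked containers
    that should be restarted alongside it."""
    if event_status != "die":
        return []
    # links maps a container to the containers that link TO it.
    return links.get(event_container, [])

# Hypothetical topology: "web" links to "db", so when db dies
# the watcher would restart web as well.
links = {"db": ["web"]}
```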
I haven't found a satisfactory solution for having containers communicate across multiple hosts. There seem to be quite a few solutions in the making (libswarm, geard, etc.). How are other people solving this (in production, beyond two or three hosts)?
Edits to /etc/hosts in a container don't affect the host, so it seems like everything should Just Work.
"Note, however, that changes to these files are not saved during a docker build and so will not be preserved in the resulting image. The changes will only “stick” in a running container."
If I do RUN echo "127.0.0.1 somehost" >> /etc/hosts or COPY container_hosts /etc/hosts, according to the announcement wording, this will not persist in my image.
If this is incorrect, could you clarify what isn't saved during docker build?
If this is correct, why is docker build unable to retain these changes?
Now, in its current implementation "docker build" commits an intermediary image after each build step. These intermediary images are used by the build cache, to speed up successive builds. These intermediary images are created in exactly the same way as every other image - which means the same rules apply for /etc/resolv.conf and /etc/hosts. An unfortunate side effect is that changes to these files are not shared between build steps.
We can solve this in a future version (or even a hotfix if necessary). A relatively quick stopgap would be for "docker build" to copy these files across build steps. The long-term solution is to no longer commit a full-blown intermediary image at each build step, but instead to use a snapshotting facility more tailored to the needs of the build caching system. While we're at it we can make sure it preserves /etc/hosts and /etc/resolv.conf.
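To make that behavior concrete, here's a toy model of the build loop: each step runs on the files the previous step committed, and because /etc/hosts is a bind mount rather than part of the commit, edits vanish between steps unless the builder carries them across (this is entirely illustrative, not Docker's code):

```python
def build(steps, copy_hosts=False):
    """Toy docker build: each step sees only committed files plus a
    freshly bind-mounted /etc/hosts, which is dropped at commit time."""
    image = {}      # the committed filesystem
    carried = ""    # stopgap: hosts content the builder carries along
    hosts = ""
    for step in steps:
        fs = dict(image, **{"/etc/hosts": carried if copy_hosts else ""})
        step(fs)
        hosts = fs.pop("/etc/hosts")  # bind mount: never committed
        if copy_hosts:
            carried = hosts
        image = fs
    return hosts  # what the last step's container saw

def add_host(fs):
    fs["/etc/hosts"] += "127.0.0.1 somehost\n"

def unrelated_step(fs):
    fs["app"] = "installed"
```

With `copy_hosts=False` the second step sees an empty hosts file again; with the stopgap enabled, the entry survives across steps.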
Sorry for the inconvenience, I hope the explanation helps.
(just tested it out as this was my assumption all along).
Are there any up-to-date installation instructions for the Vagrant env?
(not in any way, shape, or form). Please stop trying to make Docker into everything it's not.
FreeBSD Jails do a lot more than just make applications portable -- they provide security and isolation between applications (like a super chroot). Jails can be used to safely provide application hosting for various clients, while Docker should only be used for your own applications (jails prevent clients from messing with the host system or with each other, while Docker applications can, making it insecure for a multi-client hosted system -- but then again, it was not designed to do that).
So, no, it's not reasonable to want to "port" Docker to anything, since Docker is its own thing. It would be more reasonable to port Docker to FreeBSD (if that weren't happening already) than to "port" it to a Jail, or to ask it to operate like a Jail (unless you got those features into LXC first, which isn't happening, and therefore it won't).
A better (but admittedly over-simplified) comparison: Jails are closer to application virtualization, while Docker is closer to application portability. They are solving very different problems.
Docker interfaces with the kernel to provide security and isolation via cgroups and namespaces. This is exactly what jails does, and is indeed on the list of things to be added. It's really a matter of someone taking the time to write the driver for it.
Docker also provides an image format and infrastructure for helping to make applications portable.
Seems like you actually are not understanding what Docker is doing. Docker (and LXC, for that matter) aren't about security -- they are about portability of the application and its environment. Everything else is tertiary.
> It's really a matter of someone taking the time to write the driver for it.
It's a bit more complicated than that -- the two are different beasts with different goals.
> Docker also provides an image format and infrastructure for helping to make applications portable.
This is the main goal of Docker -- making applications and their environment portable.
> Docker interfaces with the kernel to provide security and isolation via cgroups and namespaces
Neither cgroups nor namespaces provide security in the same sense as a virtual machine or a virtualized app (jails). Cgroups are about resources allocated from the host, and namespaces are about process isolation... but neither prevents different containers from interacting with each other or with the host. This is the security aspect which Docker (and LXC) were not designed to provide. The problem they solve does not require it.
Use the right tool for the right job. If you are going to host a bunch of applications for different people -- go with virtualization, either via a hypervisor or a jail. If you are going to deploy applications in an enterprise environment and need them to be consistent across all distros and versions -- go with LXC/Docker.
cpuguy83 is a Docker core maintainer: https://github.com/cpuguy83
He's answered plenty of my questions in #docker IRC.
Indeed, all these things come together to do exactly what jails does.
Jails do not provide the same security as a VM, just like what Docker does is not providing that same level of security.
You are kidding yourself if you think jails does.
It's all layers... like ogres... or onions :)
Can Docker be secure? Sure -- is it? No.
If I were to nitpick I would say docker is doing none of those things. LXC is. Docker is just freeloading off LXC while providing almost no benefits.
I tried Docker, but quickly discovered it was a cumbersome interface on top of LXC, and if you wanted to get any real work done, you needed to manage LXC yourself anyway. So ... Why should I bother with Docker in the first place then?
Docker may be good enough for some people, but I feel LXC, which Docker is actually built on is getting no credit, when clearly they deserve 99% of it.
This was true in early versions, but now the default container "exec driver" is libcontainer (a pure Go container implementation), and you can swap to the LXC "exec driver" if you wish. To say Docker is piggybacking on top of LXC is unfair when in reality Docker wrote their own container implementation.
> I tried Docker, but quickly discovered it was a cumbersome interface on top of LXC, and if you wanted to get any real work done, you needed to manage LXC yourself anyway. So ... Why should I bother with Docker in the first place then?
How long ago was this? The project moves fast and new features get added all the time.
- Layered file system for containers, committing
- Images, easily transferable
- Remote API
- Network interfaces
- Linking containers
- Nice build system, one Dockerfile and you are almost good to go
I really wonder what real work you want to do that you cannot do with Docker, and that makes Docker cumbersome.
Indeed, dotcloud (ie, Docker before it was Docker) were using LXC as a security measure to isolate clients inside their PAAS (see pg 8 of )
It has long been speculated that it would be possible to port the Docker API to other container mechanisms.
Personally I don't think this should be a priority - I'd much prefer Docker put all their resources behind building the best experience possible on a single platform.
Nevertheless, asking about it is a valid question.
 http://en.wikipedia.org/wiki/LXC#Alternatives "LXC is similar to other OS-level virtualization technologies on Linux such as OpenVZ and Linux-VServer, as well as those on other operating systems such as FreeBSD jails, AIX Workload Partitions and Solaris Containers."
is this info up-to-date? http://stackoverflow.com/questions/18496940/how-to-deal-with...
So now docker is taking on the work that systemd and other daemon managers are supposed to solve? Looking forward to docker run --restart=on-failure ubuntu /bin/sh -c "exit 1"
When you include a --restart "feature" you know for sure you have done goofed.
But anyway, the rest of the stuff looks like pure candy. Great job!
2) Restart policies are exposed in the API, so daemon managers can leverage this feature to hook in their own restart policies in a consistent, supported way.
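For example, over the remote API the restart policy rides along in the container's HostConfig when starting it. A sketch of the JSON body (field names as I understand the v1.15-era API; verify against the API docs for your Docker version):

```python
import json

# Sketch of a request body for POST /containers/<id>/start
host_config = {
    "RestartPolicy": {
        "Name": "on-failure",      # "", "always", or "on-failure"
        "MaximumRetryCount": 5,    # only meaningful with on-failure
    },
}
body = json.dumps(host_config)
```

A daemon manager that wants to own restarts entirely would simply send an empty policy name and watch the event stream itself.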