
ZFS support for Docker (beta) - Mic92
https://github.com/Gandi/docker/pull/7
======
shykes
Note from the Docker guy: I look forward to reviewing this PR; I agree ZFS
support would be an awesome addition. However, note that this PR is not
against the actual Docker repo but against a contributor's fork. I suppose
it's because they don't yet feel it is ready to be contributed upstream.
Except it's not marked as a fork on GitHub, so I'm not sure the usual PR
system will work? In any case, we are happy to help if we can.

Also note that we have merged experimental overlayfs support in the upcoming
1.4 release candidate.

~~~
Mic92
Done

~~~
shykes
Awesome, thanks. Anything we can do to help, just let us know. #docker-dev on
Freenode. Many thanks for the contribution.

------
plq
So, can we deploy docker containers using BSD's jails?

~~~
ecnahc515
That will probably never work the way you expect, if it happens at all. While
containers are awesome, and I love Docker, it doesn't magically solve the
differences between operating systems. Linux ELF binaries will not run on BSD,
so if it does end up happening, it'll be with Docker containers built for BSD.

~~~
teacup50
Which is why Docker is an ultimately bad idea; OS-specific binary images are
the wrong level of abstraction for managing the deployment of code that's
otherwise completely portable.

~~~
johnbellone
I disagree. I think not having a continuous build pipeline for your containers
is a bad idea. As a large enterprise I want to make sure that my software
builds on all platforms my organization supports, even if I don't _yet_ have
any clients requesting compute there. Implementing the pipeline is risk
mitigation against that and a whole other class of problems (namely not being
able to rebuild containers, compliance/audits, etc.).

~~~
contingencies
That's a false dichotomy. One can certainly have an automated build pipeline
_and_ use a higher level of abstraction, thereby ensuring portability. Also,
if you think Docker-style containers eliminate the risk of unreproducible
builds, I believe you are wrong... since in many cases, the builds depend on
the availability of external network resources.

~~~
johnbellone

    That's a false dichotomy. One can certainly have an automated build pipeline and use a higher level of abstraction, thereby ensuring portability.

I don't think that we are in disagreement here. In fact, I believe that in
order to ensure portability, both are necessary. I don't see it being feasible
to use a Dockerfile to build a container and just expect it to simply work on
a different processor architecture. A higher level of abstraction, in my
opinion, is an absolute necessity.

~~~
shykes
That is an ongoing topic among the maintainers: support for multiple hardware
architectures and OS flavors is a defining feature of Docker, and we need to
get better at it.

Within Linux-land we already deal with a multitude of storage and sandboxing
backends, networking topologies, kernel versions and builds, packaging and
shared-library versions across distros, underlying supervisors, etc. We've
made good progress in wrangling that "matrix from hell" so that the upper
layers of the stack don't have to. But things are about to accelerate
exponentially on this front: in just the last 6 months we've seen 1) Microsoft
engineers getting hugely involved in adding Windows support, 2) Joyent
"betting the farm" on making Docker a first-class citizen on SmartOS, 3) a big
influx of new Linux distros - CoreOS, Boot2docker, Atomic, Ubuntu Core - not
to mention the whole schism of traditional distros over "the systemd wars",
and 4) more and more unofficial builds of Docker for ARM and Power (not to
mention x86_32) being deployed in production today!

Meanwhile we are getting the first mergeable patches to move Docker towards
fully content-addressable storage and distribution - which, once complete,
will give us the building blocks for real repeatable builds, à la Nix or
Gitian. Yes, fetching external content over the network and executing
arbitrary code will always introduce side effects, but with better tooling
there are lots of great ways we could manage those side effects. This is one
of my favorite areas of the project, and there are lots of awesome ideas and
prototypes floating around. Ping me on IRC if you're into that sort of thing!
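
(As a rough illustration of what "content-addressable" means here: a layer or
image is referred to by the hash of its bytes, so the same reference always
resolves to the same content. Below is a minimal sketch in Go; the digest
format and the file name are illustrative, not Docker's actual manifest
schema.)

    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "os"
    )

    // layerDigest names a layer tarball by the hash of its bytes:
    // identical content always yields the same reference, and any
    // change yields a new one - the basis for repeatable distribution.
    func layerDigest(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", h.Sum(nil)), nil
    }

    func main() {
        digest, err := layerDigest("layer.tar") // hypothetical file name
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(digest)
    }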

Obviously there is no silver bullet to make computation more repeatable and
portable across a variety of machines and operating systems. But the
community is scratching its own itch, which means it will happen no matter
what. There will be plenty of trial and error, but I think we collectively
have an opportunity to improve the state of the art. That's the beauty of
open-source :)

And, to state the obvious: the solution is certainly _not_ to pretend that
every binary can execute anywhere, unmodified, with the same behavior.
Machines and operating systems are heterogeneous, that is a fact. Trying to
hide that heterogeneity will not make it go away. Rather, we should embrace
it, and define a portable set of commands and properties which have a clear
and predictable definition everywhere. Sure, the property "I require a Linux
kernel later than 3.8 on x86_64 to start" will not be handled in the same way
on every installation of Docker - some may present an error because they are
running on another arch. But that property still has a precise and portable
definition, and all Docker installations will _understand_ the same thing. Now
they can choose to process that property in the most appropriate way: perhaps
they will present an informative error message to the user. Perhaps they will
look for an "other architectures" field in the image manifest and point the
user there. Perhaps they will redirect the request to another host which
matches the requirement (hint: that is what Docker Swarm does when it receives
a 'docker run'). With this kind of design we can greatly improve our daily
flow as developers and sysops, and we can do it on top of the systems which
are installed in the real world, today.
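
(A toy sketch of that idea: the requirement has one precise, portable
definition, and each installation chooses its own policy when it can't be
satisfied locally - error out, suggest another image, or schedule elsewhere.
The struct fields and version check below are hypothetical, not Docker's
actual manifest format or how Swarm actually schedules.)

    package main

    import "fmt"

    // Requirement is a hypothetical, portable property an image could declare.
    type Requirement struct {
        OS        string // e.g. "linux"
        Arch      string // e.g. "amd64"
        MinKernel string // e.g. "3.8"
    }

    // Host describes one Docker installation in a pool.
    type Host struct {
        Name, OS, Arch, Kernel string
    }

    // kernelAtLeast does a crude major.minor comparison; enough for a sketch.
    func kernelAtLeast(have, want string) bool {
        var hMaj, hMin, wMaj, wMin int
        fmt.Sscanf(have, "%d.%d", &hMaj, &hMin)
        fmt.Sscanf(want, "%d.%d", &wMaj, &wMin)
        return hMaj > wMaj || (hMaj == wMaj && hMin >= wMin)
    }

    func satisfies(h Host, r Requirement) bool {
        return h.OS == r.OS && h.Arch == r.Arch && kernelAtLeast(h.Kernel, r.MinKernel)
    }

    func main() {
        req := Requirement{OS: "linux", Arch: "amd64", MinKernel: "3.8"}
        hosts := []Host{
            {"bsd-box", "freebsd", "amd64", "10.1"},
            {"arm-box", "linux", "arm", "3.14"},
            {"x86-box", "linux", "amd64", "3.16"},
        }

        // Every installation understands the requirement the same way;
        // what differs is the policy when it can't be met locally. A
        // scheduler's policy is to pick a host that matches.
        for _, h := range hosts {
            if satisfies(h, req) {
                fmt.Println("schedule on", h.Name)
                return
            }
        }
        fmt.Println("no matching host: report a clear, portable error")
    }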

------
doublerebel
Fantastic! I really appreciate btrfs, but ZFS has a longer, more battle-tested
history. The pace of Docker improvements is impressive, and additions like
this will help prove its long-term viability and encourage adoption by the
enterprise.

------
seoguru
Tweaks are needed if you put MySQL database files on ZFS on Linux:
[https://github.com/zfsonlinux/zfs/issues/2872](https://github.com/zfsonlinux/zfs/issues/2872)
and:
[https://github.com/zfsonlinux/zfs/issues/224](https://github.com/zfsonlinux/zfs/issues/224)

other than that: awesome!

