
Docker 1.5: IPv6 support, read-only containers, stats, and more - mohamedbassem
http://blog.docker.com/2015/02/docker-1-5-ipv6-support-read-only-containers-stats-named-dockerfiles-and-more/
======
ademarre
> _Open Image Spec ... As we continue to grow the contributor community to the
> Docker project, we wanted to encourage more work in the area around how
> Docker constructs images and their layers. As a start, we have documented
> how Docker currently builds and formats images and their configuration. Our
> hope is that these details allow contributors to better understand this
> critical facet of Docker as well as help contribute to future efforts to
> improve the image format. The v1 image specification can be found
> here:[https://github.com/docker/docker/blob/master/image/spec/v1.m...](https://github.com/docker/docker/blob/master/image/spec/v1.md)
> _

This is a great start, and I hope this doesn't sound negative, but this likely
wouldn't be here if CoreOS hadn't shaken things up the way they did with
ACI/Rocket.

~~~
ykumar6
Great post by Fred Wilson on Perez's technological surge cycle:
[http://avc.com/2015/02/the-carlota-perez-framework/](http://avc.com/2015/02/the-carlota-perez-framework/)

She predicts every major technology has a breaking point and a turning point.

I can't see why the same wouldn't be true for Docker. Rapid adoption leads to
growing pains, which lead to introspection, which leads to fixing issues and
building a better product.

If you've been around the block, it's hard to see Rocket as competition. There
is already a lot of sunk cost in Docker (Amazon, Google, Joyent, lots of
startups), if it's not obvious to CoreOS already. Docker will be the
predominant way we package our applications for the next 5-10 years.

~~~
kordless
> Docker will be the predominant way we package our applications for the next
> 5-10 years

That same effect will also drive a revolution in cloud infrastructure. I call
the effect the "problem cloud", because it's a pain in the ass sometimes, just
like a teenager.

------
guhcampos
Docker Issue #1988 is still an issue.

While it is still an issue, and still neglected (or, more likely, arbitrarily
ignored for profit), Docker will be a red flag for any real corporate use.

~~~
shykes
I agree it's still an issue. Enterprise sysadmins should be allowed to block
access to external registries, including Docker Hub. There is nothing
contentious about it, and it has nothing to do with profit. If you send a
properly implemented patch for it, the maintainers will merge it.

~~~
justinsb
That's great news. What do you consider a "properly implemented patch"? I
would think the most bulletproof & simplest patch would simply allow the
default index (index.docker.io) to be reconfigured to something else in
docker.conf. Would you support a patch that did that?

Edit: And perhaps
[https://registry-1.docker.io/v2/](https://registry-1.docker.io/v2/) as well?

~~~
shykes
I think the best way is to allow a "whitelist mode" where only an explicitly
specified list of URLs are allowed to be reached. Everything else would be
blocked by default. This should give ops the peace of mind they need.

Note that this is an ACL change, and not a namespace change. That is important
because we want image names to have the same meaning everywhere, regardless of
site-specific configuration. So for example, "docker pull ubuntu" should
always mean "install ubuntu from the official docker library". This is crucial
to the developer experience and to respect the principle of separation of
concern between dev and ops. However, if ops chooses to block access to the
standard library then "docker pull ubuntu" will fail with "access blocked by
your administrator", which is totally acceptable. What we don't want is the
operation silently substituting a site-specific image, without the knowledge
of the end user, thereby breaking their build in a thousand invisible ways.

I hope this helps. Does this mean I should look forward to a patch from you?
:)

~~~
vidarh
> However, if ops chooses to block access to the standard library then "docker
> pull ubuntu" will fail with "access blocked by your administrator", which is
> totally acceptable.

Actually, it isn't, at least in production, because it forces us to rebuild a
lot of images that reference the standard library, when a more reasonable
approach would be to mirror them.

The index/registry/image identity problem is by far the weakest part of
Docker, and what appears most attractive with Rocket, in my opinion. There are
pretty much zero cases where I, in my ops role, can allow production
deployments to have access to the official Docker repository, because it opens
the door to pulling in all kinds of stuff that has not been vetted (e.g.
referencing the "latest" images, and having that change between dev signing
something off and deployment), and it creates all kinds of obnoxious failure
scenarios.

At the same time, I don't want devs to have the hassle of having to repackage
all the images to point them to our internal registries, when we could easily
mirror the images that have been tested.

So if there's no easy way to point the default somewhere else, what we'll
resort to instead is increasingly adding firewall rules to block the official
registry, coupled with DNS tweaks to make *.docker.io point where we want it,
or patching the code.

Or we'll switch to Rocket once it matures, if Docker continues to make custom
image management more troublesome than necessary.

~~~
shykes
I totally agree that mirroring of official images should be easier, and right
now it's an obstacle to easier production deployment. This is why it's
important to have cryptographically signed, self-describing images. Then it
becomes irrelevant _where_ you download them from, and anyone could host a
public or private mirror. I am 100% in favor of it and we are upgrading the
registry system to allow it. Happy to chat more on #docker-dev.

------
ubercow
>Specify the Dockerfile to use in build

oh man it's finally here. I'm excited.

~~~
girvo
Finally! I can use one Dockerfile for Fig development, another for staging and
pushing to Octohost, and a final one for kicking up to AWS/CoreOS for
deployment!

They all use the same code, they just have subtly different configurations and
trade-offs. This is going to make that so much easier!
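For reference, the new flag looks like this (the file names and image tags here are just illustrative):

```shell
# Docker 1.5's -f/--file flag selects which Dockerfile to build from;
# previously it had to be named "Dockerfile" at the context root.
docker build -f Dockerfile.dev     -t myapp:dev     .
docker build -f Dockerfile.staging -t myapp:staging .
docker build -f Dockerfile.deploy  -t myapp:deploy  .
```

All three builds can now share the same directory of code, with the Dockerfile path resolved inside the build context.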

------
carrja99
Oh yeah, being able to specify Dockerfiles is golden. We actually use
Makefiles to copy Dockerfiles from different directories into place for
building, then copy them back. What a hassle it has been.

~~~
toomuchtodo
Couldn't you use symlinks that you create/unlink at build time?

~~~
nbaksalyar
Docker explicitly forbids symlinks in the build context.

But there is a workaround: tar the entire directory while dereferencing
symlinks, and then build your container from the archive, like
`tar cfh - . | docker build -`.

Hopefully that won't be needed now.
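The `-h` flag is what makes that workaround tick: it tells tar to archive the files that symlinks point to rather than the links themselves, so the daemon receives real file contents. A quick way to see the effect, with made-up paths and no daemon involved:

```shell
# Demonstrate why `tar cfh` works around Docker's symlink restriction:
# -h (dereference) stores the link target's contents as a regular file.
set -e
dir="${TMPDIR:-/tmp}/ctx-demo"
rm -rf "$dir" && mkdir -p "$dir" && cd "$dir"

echo "FROM busybox" > Dockerfile.real
ln -s Dockerfile.real Dockerfile      # the kind of link Docker rejects

tar cfh deref.tar Dockerfile          # -h dereferences the symlink
tar xf deref.tar -O                   # prints: FROM busybox

# Against a real daemon, the pipeline from the comment above would be:
#   tar cfh - . | docker build -
```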

------
minimaxir
I've been doing research into Docker, because the idea of a dashboard to
quickly and easily manage entire apps is very compelling for building and
managing rapid prototypes. With the addition of a stats API and parametric
Docker builds, this appears to be a realistic use case.

I still haven't found a good answer to whether you can embed Docker images
within a parent administrative Docker image, though, in order to achieve
ultimate portability. Who Contains the Containers?

~~~
hammerdr
Not entirely sure what you're asking, but you should probably look into
"schedulers" and other tools built on top of such ecosystems. Here are a few
to get you started:

Docker ecosystem: swarm and/or compose

Mesos ecosystem: Chronos, Aurora, Marathon

CoreOS ecosystem: fleet

Hashicorp: Terraform

Amazon Container Service: works with the above, will likely build their own
simple one in the near future

This is less about embedding images and more about managing/"scheduling" them.

Edit: Zikes mentioned Kubernetes, as well.

~~~
jacques_chester
There's also Lattice:
[https://github.com/pivotal-cf-experimental/lattice](https://github.com/pivotal-cf-experimental/lattice)

It's extracted from the next-generation runtime of Cloud Foundry, known as
Diego.

------
mikehearn
Named Dockerfiles support is a nice addition. I usually split up my projects
with one (or more) container running the app, and a separate container running
Grunt to compile all the front-end libraries. It was a minor annoyance to
separate these into different directories simply because a Dockerfile could
only be named... Dockerfile.

------
ykumar6
Read-only flags? Streaming stats? Docker's API gets more and more powerful
every day! Very excited about this release.

------
Scaevolus
You can rename containers now! `docker rename OLD_NAME NEW_NAME`

You can have a container named `service_prod`, deploy a new version as
`service_staging`, then shuffle them with renames once the staging version
proves itself.
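Sketched as a blue/green-style swap, with made-up container names and image tag:

```shell
# Stand up the candidate alongside production.
docker run -d --name service_staging myapp:candidate

# ...smoke-test service_staging here...

# Swap the names with the new rename command, then retire the old one.
docker rename service_prod    service_old
docker rename service_staging service_prod
docker stop service_old && docker rm service_old
```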

------
damm
IPv6 is really a 1.0 feature.

I know how to build it and I know it runs `go test`, but it makes me a little
:( that there's no public CI listed in github.com/docker/docker.

~~~
TheDong
They link to their Jenkins with the little "build passing" badge near the
bottom:

[https://github.com/docker/docker#contributing-to-docker](https://github.com/docker/docker#contributing-to-docker)

[https://jenkins.dockerproject.com/job/Docker%20Master/](https://jenkins.dockerproject.com/job/Docker%20Master/)

They used to use drone.io, but they recently removed it:

[https://github.com/docker/docker/pull/10519](https://github.com/docker/docker/pull/10519)

I like the creative branch naming... "jfrazelle:burn-in-a-fire-drone" :)

------
general_failure
Does anyone know the status of user namespace support? I would think this is a
blocker for any PaaS using Docker.

~~~
shykes
User namespaces recently got merged into libcontainer (which is used as the
default backend for sandboxing in Docker). There is one technical question
left to resolve before enabling it by default: how to abstract away the
concept of UID mapping, and how it impacts sharing of volumes between
containers. There is an ongoing technical discussion; I am optimistic that we
will find a solid solution soon, but I don't want to make any promises we
can't keep.

~~~
SEJeff
In addition to user namespaces, and the obvious sVirt/SELinux bits Dan W from
Red Hat has been contributing, what features/enhancements are necessary for
Docker to be considered mostly secure?

For reference, I run all of my apps as separate containers with different
users on my own server. I then have iptables rules to block outbound internet
connections for containers that shouldn't be making any. It was hilarious to
see someone hack my WordPress install running in a container and manage to
write out a Perl daemon using a rexec bug in WP. But when it tried to contact
its C&C server, iptables dropped it and OSSEC notified me. In a perfect world,
I'd do something similar, but root inside the container would map to != uid 0
on the host. I'm just curious whether there is anything else you consider
necessary to deem Docker more "secure" than it currently is.
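For anyone curious, that kind of per-container egress block can be sketched roughly like this, assuming the default docker0 bridge and a container pinned to a known address (the IP and subnet here are just examples):

```shell
# Container traffic to the outside world traverses the host's FORWARD
# chain, so egress can be cut off there. Insert the DROP first, then
# the ACCEPT, so the ACCEPT for reply traffic ends up above the DROP
# and established inbound connections still work.
iptables -I FORWARD 1 -s 172.17.0.5 ! -d 172.17.0.0/16 -j DROP
iptables -I FORWARD 1 -s 172.17.0.5 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

With that ordering, anything the container initiates toward non-bridge addresses is dropped, while replies on existing connections still pass.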

------
anders
Wish they'd enable IPv6 by default

~~~
justincormack
Yes, I don't see why they shouldn't get link-local addresses. It is also
slightly odd that if there are existing router advertisements they don't get
used; instead you have to do manual config.

~~~
tianon
It does have link-local addresses by default. It's the more complicated setup
of actually routing IPv6 addresses outside the current host that's not enabled
by default.

[https://docs.docker.com/articles/networking/#ipv6](https://docs.docker.com/articles/networking/#ipv6)
has more of the details (and the discussion at
[https://github.com/docker/docker/pull/8947#discussion_r22534...](https://github.com/docker/docker/pull/8947#discussion_r22534269)
is also useful)

Basically, we can't use existing router advertisements (as I understand it)
because you also have to tell your current IPv6 router that the entire prefix
you use for Docker needs to go to this one host as opposed to just the one
IPv6 address that host would auto-assign itself via RA.

Since there's manual outside-Docker setup involved, we can't really automate
this bit. If there's a nice clean way to do so, we're definitely open to a PR
(I'd love to have something simpler myself)! :)
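Concretely, the two halves of that manual setup look something like this (the prefix and host address are placeholders):

```shell
# On the Docker host: start the daemon with IPv6 enabled and hand it
# a routed /64 to assign container addresses from.
docker -d --ipv6 --fixed-cidr-v6="2001:db8:1::/64"

# On the upstream IPv6 router: the step Docker cannot automate --
# route the whole container prefix to the Docker host itself.
ip -6 route add 2001:db8:1::/64 via 2001:db8::10
```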

~~~
justincormack
That is not very clear from the docs, then, which say "By default, the Docker
server configures the container network for IPv4 only. You can enable
IPv4/IPv6 dualstack support by running the Docker daemon with the --ipv6 flag"
-- that doesn't sound like link-local addresses by default...

Will have to take a look; I guess there are lots of potential setups. If you
have a /64 per host it should be OK anyway; if you have a /64 for the whole
network it might not be.

~~~
zurn
Link-local addresses are not meant for application-level use in IPv6, so
bringing them up would only be confusing.

LL addresses are used for stuff like router advertisements, neighbour
discovery (the IPv6 equivalent of ARP), etc. You can't use link-local
addresses without extra gyrations in the socket API (the scope id), so they
can't be usefully passed to normal apps.

------
sshykes
In other news, the issues list [0] just keeps growing by the day, apparently
with few or no devs committed to ensuring show-stopping kernel
interoperability bugs [1] get resolved in a timely manner.

I want to love you docker, but the experience of using your products is often
sooo painful!

[0]
[https://github.com/docker/docker/issues](https://github.com/docker/docker/issues)

[1]
[https://github.com/docker/docker/issues/4036](https://github.com/docker/docker/issues/4036)

Karma-wise, I'm sure there will be hell to pay for my impolite outburst...
Sorry for the offense! Just calling it like I see it.

~~~
shykes
You are right that devicemapper (basically lvm snapshots for storage, used as
an alternative to aufs/btrfs on many distros including Red Hat) has been a
source of headaches for us. It is relatively obscure (we knew we were in
trouble when googling libdevmapper error messages returned our own source code
as the first result) and frankly not pleasant to work with.

But, the good news is that we have made progress recently. In fact the 1.5
release includes several improvements to devicemapper. And as of last week,
Red Hat has volunteered a dedicated engineer to escalate devicemapper
problems. So, fingers crossed things will be even better in the next release
:)

Another alternative is to switch storage drivers: aufs and btrfs are popular
options.

~~~
sshykes
Thanks for following up Solomon! I'm glad you are at least aware of what is
going on with this.

------
michaelsbradley
I was a little disappointed to see 1.5 released without a fix for the "tty
bug" in `docker exec`, and without mention of the same known bug/limitation in
the documentation:

[https://github.com/docker/docker/issues/8755](https://github.com/docker/docker/issues/8755)

Hopefully it will get fixed soon. Other than that, I'm excited to kick the
tires of v1.5 -- thanks Docker team!

------
muaddirac
> Open Image Spec

I'm wondering if this will eventually merge with the ACI that Rocket
implements.

~~~
efuquen
That seems unlikely, based on the bad blood developing between CoreOS and
Docker. See the links below:

[https://github.com/docker/docker/issues/9538](https://github.com/docker/docker/issues/9538)

[https://github.com/docker/docker/issues/10643](https://github.com/docker/docker/issues/10643)

~~~
Alupis
Wow, those are some pretty hostile words coming from Shykes.

> Coming up with a new "standard", then criticizing the established open-
> source project for failing to implement it, is a common tactic

> One last fact, which you might find funny: one of these alternative
> implementations of Docker's image distribution system is developed by
> CoreOS, the very same vendor which is propping up this so-called standard

> Do you know how many complaints I received, since Docker was created, that I
> didn't "comply" with this or that self-proclaimed standard? Dozens

> But [CoreOS] never did, because as competing commercial vendors their
> interest is to weaken and fragment the Docker standard, not contribute to
> it.

~~

> based off the bad blood developing between CoreOS and Docker

You know, I think this is really one-way... I have not seen anything
approaching this level of hostility coming from the CoreOS camp.

~~~
jontro
Looks like spam accounts / repeated behaviour from whoever reported these
bugs. Even though Shykes isn't the best PR guy in the world.

~~~
Alupis
I'd hardly call 2 issue tickets "spam accounts"... and the first one
definitely isn't -- there's years of activity.

~~~
jontro
I really don't use Docker or CoreOS, but I wonder what the huge demand is for
standardizing this. At least it looks a little suspicious to me while browsing
through those three tickets.

~~~
Alupis
> but I wonder what the huge demand is for standardizing this.

Containerization on Linux is really poised to be the "next big thing",
enabling all sorts of new workflows, deployments, etc. In order for it to
really take off, people need a universal standard image format which is
portable across implementations. There _must_ be multiple competing
implementations. It _must_ avoid vendor "lock-in".

Back when virtualization was getting off the ground, there was no standardized
format. Every vendor had their own format, and all were completely
incompatible with one another. This made migrations incredibly painful,
sometimes impossible.

It created vendor lock-in _even with the open-source hypervisors_ -- i.e.,
once you decided on a product, you were stuck.

The OVF (Open Virtualization Format) came along after years of this... not
without its imperfections -- but it was a real lifesaver in a great many ways.
Vendors and open-source projects alike started to support the format, allowing
users to export their VMs and import them into a different hypervisor with
relative ease -- no weird hacky workarounds, no re-imaging your VM, no
nonsense.

~~~~

For this kind of industry-changing technology, it's more important to have an
open standard than an open implementation.

------
nicois
It's really annoying how long it's taking to add FUSE support to containers.
If I want to act on remote filesystems, I have to use something like rsync.

~~~
ewindisch
It seems pretty close to working. One of the few (or only) issues remaining is
this one:
[https://github.com/docker/docker/issues/10184](https://github.com/docker/docker/issues/10184)

Basically, Docker had to add device support, which it now has, but FUSE is
explicitly forbidden in libcontainer
([https://github.com/docker/libcontainer/blob/164cd807a16e63ed...](https://github.com/docker/libcontainer/blob/164cd807a16e63ed539cddda55ce3bbc32e1791e/devices/defaults.go#L146))

A patch to libcontainer or possibly Docker itself should resolve this
(volunteers always welcome!)

