
The Oncoming Train of Enterprise Container Deployments - whatami
http://www.juliandunn.net/2015/12/04/the-oncoming-train-of-enterprise-container-deployments/
======
mwcampbell
I think the Alpine Linux image for Docker is a good solution to the point
about userland bloat. Much better than unikernels, which would entail giving
up all the benefits of running containers on a shared, mature OS kernel,
particularly with regard to performance, resource pooling, and debuggability.
I'm still waiting for bcantrill, who actually has experience with large-scale
production systems (unlike me), to expound on why he believes unikernels are a
terrible idea, as he said in a brief rant here:

[https://www.youtube.com/watch?v=Ya6h2zKlpaQ&t=5110](https://www.youtube.com/watch?v=Ya6h2zKlpaQ&t=5110)
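
For reference, a minimal Alpine-based Dockerfile of the kind described above might look like this (the package and binary names are illustrative, not from the article):

```dockerfile
# Small Alpine base (~5 MB) instead of a full distro userland.
FROM alpine:3.3
# apk is Alpine's package manager; ca-certificates is just an example dep.
RUN apk add --no-cache ca-certificates
# Copy in a statically-ish linked app binary (hypothetical name).
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```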

~~~
simonvdv
Even though using Alpine as the base image for a container is a lot
better/cleaner than other base images, IMHO we shouldn't rely on distro
package management inside containers.

Not only does running a package manager inside the container mean you'll need
to satisfy its dependencies in your image; you also increase the image's
attack surface compared to an image without a package manager.

Ideally we'd have a simple way of installing stuff into images from the
outside so you can always start `FROM scratch` and add the minimum deps you
need to run your app. Adding stuff could be as simple as extracting tars with
the tar's contents following the Filesystem Hierarchy Standard. Each tar could
be a layer so it matches well with how Docker images work as well.
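
A sketch of what that could look like with the Dockerfile features that already exist (Docker's `ADD` auto-extracts a local tar archive into the image, and each instruction becomes its own layer); the tar names and entrypoint here are hypothetical:

```dockerfile
# Start from an empty image and build the filesystem purely from tars.
FROM scratch
# Each ADD of a local tar is auto-extracted and becomes one layer.
# Tar contents would follow the Filesystem Hierarchy Standard.
ADD base-rootfs.tar /     # minimal FHS skeleton: /bin, /lib, /etc, ...
ADD myapp-deps.tar /      # only the shared libraries the app needs
ADD myapp.tar /           # the application itself
ENTRYPOINT ["/usr/bin/myapp"]
```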

Since it isn't possible to extend the Dockerfile syntax, I started prototyping
a static binary written in Go to add to `scratch` to do this. It worked better
than I expected :) The only thing I couldn't find was a distro that packaged
its packages this way, and it would obviously suck to create yet another
packaging standard.

------
jacques_chester
I'm keen on unikernels too. At work I've argued that we should build a Xen
backend for Garden so that Cloud Foundry can drive unikernel apps on equal
footing with buildpack staged containers, docker containers and .NET apps.
Mind you, I argue for a lot of things at work, very few of which are ever
picked up; probably for the best.

But for all the virtues of unikernels, an awful lot of software assumes a
linuxy userland. Outside of consciously isolated universes like Java and .NET,
it might be tricky to get traction.

As for devops, the missing bit isn't the sexy stuff -- the parts that locate
and monitor processes running in containerised environments. Kubernetes,
Mesos, Diego and the dozens of others popping up every week are busy solving
that.

What seems to be forgotten is that, as this article mentions, you still need a
sane build-and-verify pipeline. Tools like Puppet, Chef and the like can be
seen as image-building toolkits. Buildpacks already know how to take various
stacks and turn them into runnable images (on Heroku or Cloud Foundry). Docker
is a step backwards, in some respects.

Disclaimer: I work for Pivotal, the company which donates the majority of
engineering effort to Cloud Foundry.

~~~
jzelinskie
My understanding from talking to people working on MirageOS is that a lot of
the unikernel users are still deploying inside containers. Just like
containers aren't going to "kill" virtualization, unikernels aren't going to
"kill" containers.

I'd particularly like to stress the part of the article that mentions
containers that bring all of userland with them. I hope this will be fixed
once people are using tools like Kubernetes and are forced to think of
containers as pieces of a larger system rather than short-sightedly "porting"
their applications into a container. At CoreOS, we introduced Clair[0]
specifically to show people how much userland is still inside their containers
and vulnerable to known CVEs. ~80% of tags (downloadable containers) on
Quay.io still contain Heartbleed, for crying out loud.

[0]: [https://github.com/coreos/clair](https://github.com/coreos/clair)

~~~
jacques_chester
> _I hope this will be fixed once people are using tools like Kubernetes and
> are forced to think of containers as pieces of a larger system rather than
> short-sightedly "porting" their applications into a container._

Could you elaborate a bit? I don't see how the scheduler changes the contents
of a container image.

~~~
Gigablah
I think it's more a matter of people treating their containers like VMs and
stuffing them with things like sshd, syslog, a webserver, etc.

