

Dockerfile Tips from the Official Images - melbo
http://container-solutions.com/2014/11/6-dockerfile-tips-official-images/

======
akerl_
#1: I'm still hopeful that we'll see folks stop treating containers as
lightweight VMs that need a full OS and start treating them as processes. The
fact that the recommended approach is to bundle a distribution's userspace
just to run a single process adds a lot of bulk for minimal gain.

#3: The ugliness of the RUN command shown should be a clear indicator that
Something Isn't Right. I highly recommend building things externally and then
pulling them into the container either manually or via a package format. You
can build and package the software using a build container, and then your
service containers can consume the package without needing to install build
tools themselves. This saves you from long chained RUN commands and allows
more logical separation of build tasks from the end product.
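
Roughly, the split I have in mind looks like this; all of the names, paths,
and make targets here are made up, so treat it as a sketch rather than a
recipe:

    # build/Dockerfile -- a hypothetical image that exists only to compile the software
    FROM debian:wheezy
    RUN apt-get update && apt-get install -y build-essential \
        && rm -rf /var/lib/apt/lists/*
    COPY . /usr/src/myapp
    WORKDIR /usr/src/myapp
    # install into a staging dir, then tar it into /out (bind-mounted by the caller)
    CMD make && make install DESTDIR=/staging \
        && tar -czf /out/myapp.tar.gz -C /staging .

    # on the host or in CI: build the build image, run it, collect the artifact
    docker build -t myapp-build -f build/Dockerfile .
    docker run --rm -v "$PWD/artifacts:/out" myapp-build

    # service Dockerfile -- no compilers or headers, just the packaged result
    FROM debian:wheezy
    ADD artifacts/myapp.tar.gz /
    CMD ["myapp"]

The service image never sees gcc or the source tree, and all the RUN
gymnastics stay in the throwaway build image.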

I agree with the other listed points wholeheartedly: do ensure you're checking
the authenticity of downloads via appropriate means, do use the right
utilities and source images for the task, and definitely make use of labels;
they're a very powerful resource for managing images in the field.
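
Concretely, checksum verification and labels can look something like the
snippet below; the URL, version, and checksum are placeholders, not anything
from the article:

    FROM debian:wheezy
    # labels travel with the image and show up in `docker inspect`
    LABEL vendor="example.com" version="1.4.2"
    RUN apt-get update && apt-get install -y curl ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    # fail the build if the download doesn't match the expected checksum
    RUN curl -fsSL -o /tmp/app.tar.gz https://example.com/app-1.4.2.tar.gz \
        && echo "<expected sha256>  /tmp/app.tar.gz" | sha256sum -c - \
        && tar -xzf /tmp/app.tar.gz -C /usr/src \
        && rm /tmp/app.tar.gz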

~~~
wernerb
#3: You could very well build a deb with Docker (compile from source and
package it) and then use that deb in other containers; Docker works well in
CI/CD pipelines. In that case, I like the way Dockerfiles give me complete
transparency over the build process, which manual building perhaps does not.

This point clearly states that "if you compile code from source during your
build", you should clean the sources.

~~~
akerl_
My apologies for being unclear: I'm saying that if you're building from source
during the build of your service container, you're almost always better off
building from source in a separate container and then turning the result into
a deb or some other consumable format.
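
The service Dockerfile then only has to install that artifact. A minimal
sketch, with hypothetical names and assuming the package's runtime
dependencies are already in the base image:

    # service Dockerfile: consume a .deb emitted by a separate build container
    FROM debian:wheezy
    COPY artifacts/myapp_1.0_amd64.deb /tmp/
    RUN dpkg -i /tmp/myapp_1.0_amd64.deb && rm /tmp/myapp_1.0_amd64.deb
    CMD ["myapp"]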

~~~
amouat
_Maybe_. But you do lose some transparency and repeatability: it becomes
harder for others to understand and use your Dockerfile. They suddenly have to
either download or recreate whatever debs/tarballs etc. you've used.

I think the real problem is that Dockerfiles aren't expressive enough yet. At
the very least we need a more user-friendly way to run several commands in a
single layer.
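
Right now the closest thing is one long &&-chained RUN, which does keep
everything in a single layer but isn't exactly friendly (the packages and URL
below are just placeholders):

    RUN apt-get update \
        && apt-get install -y --no-install-recommends build-essential curl ca-certificates \
        && curl -fsSL -o /tmp/src.tar.gz https://example.com/src.tar.gz \
        && mkdir -p /usr/src/app \
        && tar -xzf /tmp/src.tar.gz -C /usr/src/app --strip-components=1 \
        && make -C /usr/src/app install \
        && apt-get purge -y build-essential curl \
        && apt-get autoremove -y \
        && rm -rf /var/lib/apt/lists/* /tmp/src.tar.gz /usr/src/app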

------
wernerb
A problem I had with some Dockerfiles was that they often depend on
Debian/Ubuntu upstream but rely on their own dockerfile container as an
intermediary. I understand this: it makes maintenance easier if you have a lot
of Docker containers.

But for me, 'just pulling from the registry', I have to go and read that
intermediary Dockerfile to make sure nothing unexpected happens and that the
correct version of Debian/Ubuntu is referenced.

What I then do is fork their Docker containers and replace FROM 'bla/ubuntu'
with the official image, to regain control. But that sticks me with the job of
maintaining the container myself.

In short, I wish containers on the registry would just use the official
ubuntu/debian images directly; that seems like a sensible guideline for
general "public use" Docker images.

Edit: Off-topic, but I would love a graph view or some kind of
dependency-depth indicator when browsing registry containers, or even to have
one be required.

~~~
amouat
What do you mean by "rely on their own dockerfile container as an
intermediary"? Things like the buildpack-deps image?

Graph view would be cool!

~~~
wernerb
Basically just a reiteration of your #1 point. I have seen projects that
create their own 'parent' containers, pointing at the ubuntu/debian base
image, for their generic app containers. The extra layer irks me; I'd very
much like all app containers to depend on the official containers directly.
E.g. a project maintainer might install wget in their upstream container and
then rely on that image for all of its app containers.
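
The pattern looks roughly like this; bla/base and the wget example are just
illustrative:

    # bla/base Dockerfile: the project's own intermediary "parent" image
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y wget \
        && rm -rf /var/lib/apt/lists/*

    # app Dockerfile: every app container in the project then starts from the
    # intermediary instead of from the official ubuntu image directly
    FROM bla/base
    COPY app /opt/app
    CMD ["/opt/app/run"]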

~~~
vidarh
I think the issue there is that once you start building a number of
containers, it's easy to notice patterns that repeat across them, and so it's
very tempting to do what you mention to avoid repeating yourself.

I sort of agree with you, but I think it reflects a tooling problem in
expanding dependencies more than anything.

After all, Docker introduces intermediate images for nearly every step of the
Dockerfile anyway, so if these "parent containers" are genuine dependencies,
it's beneficial to collapse as many of the individual app containers' steps as
possible down into a set of shared ancestors.

