
Inside Docker's “FROM scratch” - kiyanwang
https://embano1.github.io/post/scratch/
======
rem7
Unless building a base image... doesn’t this just take away from the benefits
of using Docker? If I understand correctly, one of the primary goals of
containers is to create an isolated environment with quotas and restrictions
on the underlying OS by using Linux namespaces and cgroups. However, one of
the great things about Docker is that I can do FROM ubuntu, and then anywhere
I run my container I have my app running in an OS that I’m comfortable with.
So I can always run bash inside the container, apt-get whatever I need, debug
it, experiment, etc...

I understand the problem with Docker image sizes. I worked at a company where
we had a ~1GB image, and our CI tool didn’t support caching of Docker images,
so it would take a good 15 minutes to do a build every time. But when we were
faced with the option of using a smaller OS like Alpine, we decided not to do
it, because we would give up a lot of the flexibility that the OS was
providing us.

If you’re running a statically linked binary produced by Go and that’s all you
want on your pretty-much-empty image, why not just scp the file and run it
manually under a cgroup? Or good ol’ chroot/jails/zones?

~~~
lucaspiller
> If you’re running a statically linked binary produced by go [...]

In a way you are answering your own question. Sure you can give up Docker and
use something else, but you are giving up benefits of using the Docker
infrastructure and ecosystem.

If you are just using Docker for one app, then yes I agree, but if you have
other apps running through Docker then it’s certainly beneficial to do so even
for statically linked executables, to keep everything consistent.

~~~
acdha
This is especially important once an organization grows. Once you start
having ops or security teams, different development groups, etc., there's a
significant benefit to having one way to manage everything.

A new sysadmin doesn't need to learn the custom way your hand-rolled
deployment system handles dependencies, how to see what's supposed to be
running on a box, etc. A security person who's wondering whether something is
supposed to be listening on a port can at least see that it's something
someone went to the trouble of exposing. That QA team or the person who
inherited your maintenance coding can avoid learning a new way to ensure they
have a clean CI build.

(That doesn't mean that Docker's always the right answer — maybe you've
identified an area like networking where there's a reason not to use it — but
in most cases the differences are below the threshold where it's worth caring
about.)

------
Patrick_Devine
My current philosophy on this is to always start from scratch if I can. This
would be the case where I'm using something that is statically compiled, such
as a standalone Go binary.

If I need more facilities from an OS, then I try to use a micro-distribution
like Alpine. This could be because I have a more complex Go binary, or
because I have a Python script that I want to execute.

If Alpine isn't cutting it, then I go for something like Ubuntu. This is
typically because Alpine doesn't have some library that I need, or because
musl libc isn't behaving properly.

------
cdancette
Very interesting. Does anyone have an example of a project where they used
scratch? It seems to be useful only for building base distribution images.

~~~
ek5Jf
Yes.

Works great with statically compiled Haskell binaries. Running the binary
through UPX, I've managed to get small HTTP microservices down to a 2MB
Docker image with just scratch.

~~~
Intermernet
Works just as well for Go binaries. It's pretty much the recommended base
image for distribution of Go apps on Docker. I assume that it would be just as
effective for any statically compiled binary.
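A minimal multi-stage sketch of the pattern (the binary name, Go version, and
source layout are hypothetical):

```dockerfile
# Build stage: compile a fully static binary. CGO is disabled so
# nothing links against libc, which scratch does not provide.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Final stage: only the binary ends up in the image.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The resulting image contains nothing but the binary itself, which is why
these images come out in the single-digit-megabyte range.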

Edit: I really should have read the article first. It uses Go binaries as the
example. Good to know Haskell folks are also using it.

~~~
rmoriz
Massive downside: you run your app as root, or you have to do nasty mounts of
/etc/passwd and /etc/group from your host.

~~~
Intermernet
You can run a Docker container as a particular user.

[https://docs.docker.com/engine/reference/builder/#user](https://docs.docker.com/engine/reference/builder/#user)

You can use `setcap` to grant capabilities to the binary or the `pam_cap`
module if you need to do capabilities per user.

I haven't run across the need to run most containers as root for a while now.
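For a scratch image specifically, a numeric UID:GID in the USER instruction
sidesteps the missing /etc/passwd entirely (a sketch; the UID and binary name
are arbitrary):

```dockerfile
FROM scratch
COPY app /app
# Numeric UID:GID needs no /etc/passwd or /etc/group in the image;
# only a named user would require the host mounts mentioned above.
USER 10001:10001
ENTRYPOINT ["/app"]
```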

~~~
embano1
Yup, in the end it's an OS process and all the usual rules apply. I didn't
care too much about Dockerfile best practices in my article. Good point, I
should at least have used a non-root USER.

