
What's new in Docker 1.13: prune, secrets, checkpoints and more - assaflavie
https://www.cloudshare.com/blog/whats-new-docker-1-13
======
dandermotj
I'm really looking forward to seeing the scientific community adopt docker as
a way to distribute reproducible research and coursework.

MIT 6.S094 has a Dockerfile[^1] that contains all the software required for
taking part in the class. This is a huge boon for getting stuck into the class
and its coursework.

[^1]:
[http://selfdrivingcars.mit.edu/files/Dockerfile](http://selfdrivingcars.mit.edu/files/Dockerfile)
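A course Dockerfile like that is usually just a pinned base image plus the toolchain. A hypothetical minimal sketch (the package names and versions here are illustrative, not the actual MIT 6.S094 contents):

```dockerfile
# Hypothetical sketch of a coursework Dockerfile -- not the actual
# MIT 6.S094 file. Pin the base image tag so everyone starts identically.
FROM ubuntu:16.04

# Install the course toolchain in one layer.
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Pin Python dependencies to exact versions for reproducibility.
RUN pip3 install numpy==1.11.3

WORKDIR /course
CMD ["/bin/bash"]
```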

~~~
sigjuice
How is publishing a Dockerfile even remotely reproducible? Almost every
Dockerfile is a series of apt-get install, or yum install or pip install
commands. How do I know what versions of packages I am downloading or whether
they will even be available to download if I build from this Dockerfile, say
two months from now?

IMHO, every Dockerfile has left-pad written all over it.

~~~
yeukhon
Good question.

Reproducibility is all about the starting point. Sure, if your computation
requires high entropy from some random source and on the next run there isn't
enough entropy, your experiment may fail, but that's really a corner case. A
Docker image keeps the state of the starting point (installed packages,
configuration, filesystem contents, etc.) effectively version controlled. It
is as if someone gave you a copy of a VirtualBox image.

So how do we lock down?

1) When you start with a Dockerfile, specify the version of the packages you
are installing

2) When you want to reproduce, you can rebuild an image with that Dockerfile.

3) But most people are just going to use your image, which is always the same
now or next year. Building an image != launching a container using an image.
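Point 1 in practice might look like the following sketch (the package and version strings are illustrative; a pinned version only helps as long as the mirror still carries it):

```dockerfile
FROM ubuntu:16.04   # pinned tag, not "ubuntu:latest"

# Unpinned -- you get whatever version the mirror serves today:
#   RUN apt-get update && apt-get install -y curl

# Pinned -- the same version on every rebuild:
RUN apt-get update && apt-get install -y curl=7.47.0-1ubuntu2
```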

------
lacampbell
I'm just a guy that wants to deploy web apps. Is docker overkill for me?
Basically, I want to be able to test something on my local machine under the
same conditions it will be running on my server. Containerisation seems like
the only way to do this that doesn't involve keeping packages and system
configurations in sync in two or more systems.

~~~
scanr
Docker may be overkill to start but it's relatively low cost to implement and
it will definitely pay dividends over time:

        * You can be sure that what you're running locally 
          is exactly what you'll be running on the server
        * Your deployment experience will be the same 
          regardless of which tech stack you're using for the 
          web application
        * There are many places you can deploy docker 
          containers (Google GCE, Amazon ECS, Amazon EB, etc.)
        * A web application is often composed of several 
          services (e.g. the web app, a database, redis etc.) 
          and docker compose makes it easy to fire all of 
          those up in development e.g. if a new 
          developer joins, they only need to install 
          docker rather than web app framework + 
          database + redis
        * Docker sets you up quite well to grow into a 
          more complex deployment (e.g. using Kubernetes)
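For the multi-service point, a hypothetical minimal docker-compose.yml (the image tags and port numbers are illustrative):

```yaml
# Hypothetical docker-compose.yml: web app + database + redis.
# A new developer runs `docker-compose up` and gets all three.
version: '2'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
      - redis
  db:
    image: postgres:9.6
  redis:
    image: redis:3.2
```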

~~~
MrBuddyCasino
> it's relatively low cost to implement

Running Docker in production takes a huge amount of effort to get right and is
not easily done.

~~~
tra3
I don't believe that's an accurate assessment. If the grandparent wants to run
a one off container with reproducible results, something like docker-compose
is perfect. If he wants to run a multi-node microservices architecture then
the story gets more complicated.

~~~
LeanderK
I run a lot of small projects with docker-compose on a single host and it
makes deploying my changes very easy. Maybe there is a cost to setting it up,
but I think even with a small project it pays dividends pretty fast.

------
wenbert
Does this mean the qcow2 disk space usage in Mac is fixed?

~~~
ewang1
Yep, just tested it. The qcow2 disk space gets reclaimed on Docker restart.

~~~
wenbert
Sweet. Now time to play. I followed that bug until I got tired of it. Took a
couple of months! Good on them for fixing it. Thanks!

------
tachion
As much as I welcome the CLI cleanup, I can't stop thinking that the 'docker
ps -> docker container ls' change makes no sense to anyone who has any
experience with bsd/unix/linux systems. Seriously, why?

~~~
rockostrich
I agree. It looks like `docker ps` still works so it's nothing to really be
concerned about just yet.

~~~
cpuguy83
`docker ps` will never be removed.

------
EtienneK
Secrets is a big one! Will really help speed up enterprise adoption.

------
joekrill
Looks like there's a mistake about image pruning:

"Add -f to get rid of all unused images (ones with no containers running
them)."

But the option is actually `-a` -- `-f` simply skips the confirmation prompt.
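To illustrate the distinction with the 1.13 CLI:

```shell
# Prompts for confirmation, removes only dangling images
docker image prune

# -a: also remove images not referenced by any container
docker image prune -a

# -f: skip the confirmation prompt (can be combined with -a)
docker image prune -af
```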

~~~
willemmali
Like this?

docker rmi -af

I'm a bit confused by the backticks as I use them all the time scripting, but
also in Markdown.

~~~
realPubkey
I have a gist for it:
[https://gist.github.com/pubkey/73dcb894cf5f7d262863](https://gist.github.com/pubkey/73dcb894cf5f7d262863)

#stop and delete all containers

docker rm -f $(docker ps -a -q)

#delete all images

docker rmi -f $(docker images -q)

~~~
e40
This is NOT equivalent. The OP was talking about removing _unused_ images.
Your commands remove _all_ images.

~~~
lloeki
Maybe this?

        docker rm $(docker ps -qa --no-trunc --filter "status=exited")
        docker rmi $(docker images --filter "dangling=true" -q --no-trunc)

------
andmarios
Prune seems not that well thought out to me. Don't get me wrong, I do find it
useful, but many people use containers as environments. Think about how many
people are going to run prune only to find their work has gone missing.

If you are going to add a nuclear button, do it with a big red alert and give
the option to whitelist some containers.

~~~
joekrill
But that's really what `docker rm` is for, isn't it? I mean, if you want to
only delete specific containers, use that. Prune has a specific purpose, which
I think is very clear. If you're running the command, you (presumably) know
what it should be doing.

I suppose you could argue it might be nice to be able to do something like
`docker container prune startsWith*` or something similar. But on the other
hand, that functionality is already available -- just use `docker rm` with
xargs or something.

~~~
andmarios
But the thing people complain about most isn't that they want to delete
everything; it's that they need docker rm, xargs and complex bash foo to
delete just the containers and images they don't need.

For example I want to delete all old and all untagged versions of an image. I
want to delete all stopped containers that use a specific image, or that were
created more than two weeks ago. I want to delete all images starting with
test.

Nuke everything? Not so much, and to be honest that would be the easiest case
even with xargs and docker rm.
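Some of those cases can already be approximated with filters and the CLI's `--format` option; a hedged sketch (`myimage` and the `test` prefix are illustrative, and filter support varies by Docker version):

```shell
# Untagged (dangling) images
docker rmi $(docker images -q --filter "dangling=true")

# Stopped containers created from a specific image ("ancestor" filter)
docker rm $(docker ps -aq --filter "status=exited" --filter "ancestor=myimage")

# Images whose repository name starts with "test"
docker rmi $(docker images --format '{{.Repository}} {{.ID}}' \
    | awk '$1 ~ /^test/ {print $2}')
```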

~~~
cpuguy83
fyi, you do not need xargs.

`docker rm $(docker ps -q --filter blash)`

But agreed, `prune` is currently a sledgehammer and needs some refinement.
It's not that it wasn't well thought out; it's about getting something out
there that can be built on top of.

~~~
andmarios
Thanks, it's been too long since I last used filters; back then they weren't
that interesting. Seems much better now!

------
lojack
Curious what methods others use for handling secrets at build time (using
docker-compose). I'm currently installing (private) dependencies at runtime by
mounting my secrets as a volume. I couldn't find a method that didn't seem to
have some risk of inadvertently exposing them.

~~~
vkjv
There are only two methods that I'm aware of:

\- Exposing the secrets on a (http) server that the Dockerfile can use to
fetch

\- What we use: Create a one time use secret that is destroyed after the image
is built and before it is pushed.

~~~
djstein
>What we use: Create a one time use secret that is destroyed after the image
is built and before it is pushed.

This approach has sparked my interest, could you post an example of any open
source docker-compose file and/or associated scripts that would do this?

~~~
lojack
I did actually encounter this solution while researching the problem and
didn't love it, but you can check it out at:
[https://github.com/docker/docker/issues/13490#issuecomment-1...](https://github.com/docker/docker/issues/13490#issuecomment-162125128)

As long as you add the file and remove it in the same command it doesn't get
committed as an extra layer, so the container won't have any history of the
secrets. You'll run into problems if you do multiple RUN's or an ADD and then
RUN.
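The "same command" pattern looks roughly like this sketch (the secret-server URL and the requirements file are illustrative, not from the linked issue):

```dockerfile
# Hypothetical sketch: fetch the secret, use it, and delete it within
# a SINGLE RUN instruction, so no committed layer ever contains it.
# (A COPY/ADD of the key followed by a later "RUN rm" would still
# leave the key in the COPY layer's history.)
RUN wget -q -O /root/.ssh/id_rsa http://build-secrets.internal/id_rsa \
    && chmod 600 /root/.ssh/id_rsa \
    && pip install -r private-requirements.txt \
    && rm /root/.ssh/id_rsa
```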

------
the_duke
Why not one 'prune' command with 'containers', 'images', ... as an argument /
subcommand?

Would have seemed more intuitive to me.

~~~
danpalmer
All of the other commands have been namespaced by what they deal with, so I
think it makes more sense in the context of everything else.

