
The Docker security philosophy is “secure by default” - pella
https://blog.docker.com/2016/08/software-security-docker-containers/
======
mjg59
The chart is _very_ out of date. rkt has support for userns, the capability
set is identical to that of Docker, and so is the default seccomp
configuration. The SELinux policy used by rkt has _always_ been identical to
the one used by Docker (I have no idea why the chart claims there was a
difference), and it's used by default if SELinux is enabled on the system. Rkt
has no apparmor policy support, the set of files under /proc restricted by rkt
is still slightly smaller than that restricted by Docker, and the cgroup
difference doesn't actually appear to be discussed in the NCC paper so it's
hard to comment on the difference there.

I don't want to criticise Docker here from a security perspective - several
security features in rkt are based on the implementation in Docker, and
running your apps isolated under Docker is definitely more secure than running
them all on the same unconfined host.

(Disclaimer: I work at CoreOS, though not primarily on rkt)

~~~
justincormack
The default seccomp configuration in rkt is not the same. It is based on the
Docker one, yes, but as it does not support parameter filtering it is
significantly weaker, eg it allows namespace creation, and all use of the
personality syscall, allowing users to disable ASLR.
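For context, a quick illustration of what argument-level filtering protects against: under a filter that allows personality() unconditionally, disabling ASLR is a one-liner, since util-linux's `setarch -R` simply calls personality() with ADDR_NO_RANDOMIZE (sketch run on a plain host, not inside a container):

```shell
# With ASLR disabled via personality(), the process's first memory
# mapping lands at the same address on every run.
setarch "$(uname -m)" -R sh -c 'head -1 /proc/self/maps'
```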

~~~
mjg59
True, I'd forgotten that aspect. Thanks for the correction.

------
falcolas
"Secure by default" is a great philosophy to have. I hope Docker continues to
push in this direction.

However, where is image signing? Where is pulling images by their hash instead
of mutable tags? Where is restricting what images run according to their
hashes? Where is Docker not running a daemon as root? How about Docker doing
more hardening releases and fewer feature releases?

There's a lot of low-hanging, security-related fruit that hasn't been picked.
Why not?

Process isolation is fantastic. There just really needs to be more, in my
opinion.

~~~
shykes
Docker founder here.

> _"Secure by default" is a great philosophy to have. I hope Docker continues
> to push in this direction._

Thank you. We will absolutely keep pushing.

> _However, where is image signing?_

Docker supports image signing out of the box. It's based on TUF, which is the
state of the art in secure content distribution. Along the way we open-sourced
our underlying TUF implementation so that others can reuse it:
[https://github.com/docker/notary](https://github.com/docker/notary)

> _Where is pulling images by their hash instead of mutable tags?_

That's also available out of the box: "docker pull NAME@DIGEST"
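A hedged sketch of what that looks like in practice (the digest below is a placeholder, not a real Ubuntu digest; real digests are printed at the end of a `docker pull` and shown by `docker images --digests`). Requires a running Docker daemon:

```shell
# Pull by immutable content digest instead of a mutable tag.
# The digest value here is illustrative only.
docker pull ubuntu@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2

# List local images together with their digests:
docker images --digests
```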

> _Where is restricting what images run according to their hashes?_

That's not available out of the box, but a popular way to implement it is to
funnel images through a trusted private registry, only allow pulls from that
registry, and then control what you push to it.
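One way to sketch that funnel, assuming a hypothetical internal registry at `registry.internal:5000` and a running Docker daemon:

```shell
# Mirror a vetted image into the trusted private registry.
docker pull ubuntu:16.04
docker tag ubuntu:16.04 registry.internal:5000/base/ubuntu:16.04
docker push registry.internal:5000/base/ubuntu:16.04

# Hosts are then configured (firewall/policy) so that pulls succeed only
# from registry.internal:5000, and runs reference it explicitly:
docker run registry.internal:5000/base/ubuntu:16.04 echo ok
```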

Another possibility is to use the new auth plugin system, although I haven't
tried.

> _Where is Docker not running a daemon as root?_

Docker runs unprivileged by default, but only on relatively recent kernels
with user namespace support (it drops itself to "fake root" so it can still
create containers without actually having uid0).

On older systems (or if you don't trust user namespaces) I agree we could
break up some parts of the daemon (for example push/pull) to drop privileges
further. That requires serious refactoring which we have been working on
gradually for some time.

> _How about Docker doing more hardening releases and fewer feature releases?_

We follow the linux methodology: every release includes many hardening
improvements, as well as whatever features have been deemed stable enough to
merge. We don't control what pull requests the community sends, but we work
hard to maintain a high quality bar and encourage quality-oriented
contributions. The Docker core engineering team invests more time on hardening
than features.

> _There's a lot of low-hanging, security-related fruit that hasn't been
> picked. Why not?_

I'm sure there are :) Security is a process; we will never be done. The
important thing for us is to take it seriously, invest appropriately (the
Docker core team employs 6 full-time security engineers and funds various
third-party security efforts), and make sure we are perpetually improving.

On top of that, Docker is known for making powerful technology more accessible
to more people. We want to use that "superpower" to make security more usable
by non-experts. We think that contributes to making everyone more secure.

> _Process isolation is fantastic. There just really needs to be more, in my
> opinion._

Docker security is about much, much more than process isolation.

Thanks for the feedback. We hope to see you on the GitHub repo; we value bug
reports :)

~~~
falcolas
> It's based on TUF, which is the state of the art in secure content
> distribution.

Sounds great in theory, but there hasn't been much traction on that project,
at least from their own website, for over 6 months. Not much in terms of peer
review of their method, or practical audits of their technology. Even your own
audit is over a year old now, despite constant changes to the codebase.

That said, I do recall now when this was announced originally, and I'm glad
that it at least exists. That it has skipped my memory makes me wonder why
there has been little to no press about it since then. How broadly adopted has
this been? Can we get signed images for, say, Ubuntu from docker hub?

> docker pull NAME@DIGEST

A great feature to have. Can you point me at the digest for the ubuntu repo on
docker hub?

> serious refactoring which we have been working on

Good to hear. Also, interesting to note that runc still requires root access
to make cgroups; I was not aware of this restriction.

> The Docker core engineering team invests more time on hardening than
> features.

I find this to be an interesting position to take, considering that a core
component of docker has been historically re-written with every other release:
registry in 1.6, disk plugins in 1.7, networking in 1.9, runc in 1.11. IIRC,
networking was re-written again in 1.12, to support yet another version of
clustered networking. That's a lot of churn for less than a year and a half's
worth of time.

> We hope to see you on the github repo, we value bug reports

That hasn't gone well for me in the past.

devicemapper errors in ubuntu 14.04, prior to dynamic linking: not our
problem.

devicemapper dropping mounts sporadically: move to overlayfs.

Low level mutexes being shared between containers: that's nodejs' problem.

It's frankly become easier to create and use workarounds like spam-unmounting
orphaned devicemapper mounts, or custom-compiling and distributing docker, or
disabling docker-managed networking, and so on.

I feel bad for ragging on Docker like this, especially since I do end up using
it frequently; but as a system administrator Docker is a nearly daily
headache. Our developers love it, but the rest of us... not so much. I guess
it's one of those "technologies everyone complains about" vs "technologies
nobody uses" things. It's just really frustrating.

~~~
bigmac
In terms of signing and verification, doing trusted pulls of the official
ubuntu image (or any other official image) is quite easy:

      export DOCKER_CONTENT_TRUST=1
      docker pull ubuntu
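If you want to see the signatures themselves rather than trusting the client silently, the standalone notary CLI can list the signed targets for a repository (command shape per the Docker content trust docs; the trust directory path is an assumption and may differ on your install):

```shell
# Query the Docker Hub notary server for the signed tags of the
# official ubuntu repository, along with their digests.
notary -s https://notary.docker.io -d ~/.docker/trust list docker.io/library/ubuntu
```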

~~~
falcolas
Cool, that mostly just works (had a socket error the first time I tried).

However, there's some usability problems here that I'd like to bring to your
attention:

- There is no indication that the pull is different; no output from trust
verification that it is indeed signed. This means I have to trust that Docker
did the right thing, with no means of verification.

- Inspecting the image after pulling gives no indication that the image is
signed, and gives me no way to do my own signature validation on the image.

- It does fail properly when pulling an unsigned image. Yay!

- Docker run initiates a connection back to notary, and fails if it cannot
connect. This makes me uneasy - it makes me wonder what data is passed, how
it's being used, what changes are being made according to the response, and so
forth.

- Using trust is a per-command decision, instead of a daemon setting.

- There's no clear method to revoke a signing key if it is identified to be
malicious.

- This seems like a good candidate for "secure by default" once some of the
usability issues are resolved.

------
hinkley
If Docker is Secure By Default, then why is this issue 15 months old with no
forward progress?

[https://github.com/docker/docker/issues/13490](https://github.com/docker/docker/issues/13490)

It's not 'secure by default' if secrets are flying around through the system
with no formal way to prevent it.

~~~
bigmac
We are on the case right now. The solution is going to be really, really good.
We had to get cryptographic node identity rolled out first and we're designing
secrets management on top of that.

~~~
hinkley
Why haven't you been forthcoming about this?

If you're playing to the 'secure by default' crowd, they aren't going to just
trust that you're on the case if you don't communicate. Platitudes are
something people watch out for when security is on the line, and they'll
choose another solution if they think you're full of shit.

You have to show, at all times and all things, how you're thinking about the
problem. 15 months of dead air is a lot to atone for.

------
finid
“Gartner asserts that applications deployed in containers are more secure than
applications deployed on the bare OS”

Gartner is like a financial ratings agency. I wouldn't rely on their
recommendation if I were concerned about the security of any application or
platform.

------
Perceptes
Can anyone from Docker speak to the status of content trust? The fact that I
can still `docker pull whatever` without explicitly trusting the source seems
to conflict with the chart in this post. Will/shouldn't content trust be
turned on by default at some point?

~~~
justincormack
Yes that is the plan, once everything is in place. The next step is to turn it
on for official images (which are all signed now).

~~~
endophage
To add to Justin's comment, we merged what we've called "trust pinning" into
Notary. We're still iterating on exactly the right scopes to pin to, but we
anticipate it making its way into Docker in the not too distant future. That
will enable you to hard-fail when pulling images not signed by an explicitly
approved source.

------
raesene9
From what I've seen Docker has added a lot of security functionality in the
last couple of releases, which is great, although I'd like to see more work at
the Docker engine level on authentication and authorization, which still feels
kind of all or nothing (I know there's a plugin framework, but there aren't a
lot of good options there at the moment). Also, at the moment implementing
additional security options is a bit bitty, as the planned security profile
work hasn't landed yet.

The other thing to note is that when deploying Docker engine the defaults tend
to go towards making everything work and aren't necessarily the most secure,
so it's very much worth doing some additional hardening where possible.

Specifically, things like:

- enabling user namespacing

- disabling icc

- removing capabilities (e.g. NET_RAW)

- not using the default docker0 network

- disabling the userland proxy
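As a hedged sketch, most of those points map onto daemon or run flags (names as of the Docker 1.12 era; check `dockerd --help` on your version, and note that `--bridge` expects a bridge device you have already created, here a placeholder `custom0`):

```shell
# Daemon-side hardening: user namespaces on, inter-container
# communication off, custom bridge, userland proxy off.
dockerd --userns-remap=default \
        --icc=false \
        --bridge=custom0 \
        --userland-proxy=false

# Capabilities such as NET_RAW are dropped per container at run time:
docker run --cap-drop=NET_RAW ubuntu:16.04 true
```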

There is a CIS benchmark which is pretty much up to date and has some,
hopefully, decent sets of things to look at hardening (disclaimer: I helped
work a bit on the latest version...)

------
general_failure
We use Docker to containerize apps in Cloudron. One issue we have faced is
that --readonly and user namespaces don't work together. Is there any ongoing
effort on this front? I can't find a link now, but this limitation is hidden
somewhere deep inside the Docker docs :/ For Cloudron apps, we simply decided
that readonly was more important than user namespaces for now.

edit: found the link -
[https://docs.docker.com/engine/reference/commandline/dockerd...](https://docs.docker.com/engine/reference/commandline/dockerd/#/user-namespace-known-restrictions)

~~~
cpuguy83
Work is being done here:
[https://github.com/docker/docker/pull/25540](https://github.com/docker/docker/pull/25540)

------
fulafel
Both the Docker blog post and the Gartner piece seem to be silent on the
much-discussed topic of out-of-date and vulnerable software in Docker images.

It looks like Docker has put out a content scanner service to mitigate the
problem, but this still seems to be a weakness relative to the pre-Docker way
of building VMs.

------
bruxis
Does anyone know of a guide to using Docker, or an alternative, on Windows to
help isolate various software (say, Steam games, multiple Visual Studio
installations, .NET and Java runtimes/SDKs)?

------
merb
Without Docker, the software is even safer.

Actually, for all the people below, here are my reasons:

Docker adds another layer, and that layer (Docker itself) is another attack
target. Saying software is more secure while adding another layer is just
naive.

Docker is good for some companies, and it's definitely true that you can use
it to run a PaaS, but it's still not more secure than the world before.

Some other people have also pointed out other things. With Docker you add a
lot of complexity to your actual software; that is negligible if you're big,
since then you have a totally different set of problems.

Still, when it comes to Docker it's mostly buzzword bingo. I mean, just
looking at the table... oh god, half the things there could be done in a far
more secure way without LXC, Docker, or rkt.

Also, I always thought that Docker was supposed to make deployment easier,
which would be a huge bonus for security, since the whole process would be
easier to understand end to end. However, the more Docker grows, the worse
the complexity gets. Docker once looked shiny, but then the enterprise hit
it, and its direction makes it less and less secure, since they keep trying
to add more and more stuff and have stopped focusing on what Docker was
first invented for.

How could adding a ton of abstraction (which grows with every release) make
something more secure? I'll just leave that as an open question for everybody.

~~~
bigmac
This couldn't be further from the truth.

Docker containers run with default seccomp profiles, namespacing (filesystem,
PIDs, mounts, etc), LSM policies (AppArmor, SELinux), and capability dropping.

These are all common sense security controls that aren't widely available
because they're typically hard to use. Docker makes them defaults for
everyone. This makes classes of remote attacks against applications far more
difficult. Tangible example: read-only filesystems + mount namespacing make
vulns that require filesystem modification or directory traversal far more
difficult.

At this point arguing against running in Docker is like arguing in favor of
IE6's security model instead of Chrome's.

Disclaimer: I manage security at Docker.

~~~
yid
You guys are doing a great job! Some security-related sandboxing options to
docker run that people may not be aware of, which are hard to assemble
individually from Linux pieces:

      * --read-only
      * --security-opt="no-new-privileges"
      * --cap-drop=ALL
      * --net="none"
      * --cpu-period=
      * --cpu-quota=
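Put together, a maximally sandboxed run might look like the following sketch (nginx and the CPU numbers are placeholders; a quota of 50000 against a period of 100000 caps the container at roughly half a core):

```shell
docker run --read-only \
    --security-opt="no-new-privileges" \
    --cap-drop=ALL \
    --net="none" \
    --cpu-period=100000 \
    --cpu-quota=50000 \
    nginx
```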

~~~
redtuesday
Is it intended that the last two have nothing after the equal sign? If yes,
what does that do?

There is also --pids-limit=<some number> against fork bombs.
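For example (100 is an arbitrary illustrative limit):

```shell
# Any attempt to spawn more than 100 processes inside the container
# fails, which defuses fork bombs.
docker run --pids-limit=100 ubuntu:16.04 bash
```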

EDIT - this git repo has more links to security related articles etc.
[https://github.com/wsargent/docker-cheat-sheet#security](https://github.com/wsargent/docker-cheat-sheet#security)

