
Docker Image Insecurity - Titanous
https://titanous.com/posts/docker-insecurity
======
shykes
I wish the author had not omitted this crucial paragraph in the announcement
he quotes:

    
    
        Note that this feature is still work in progress:
        for now, if an official image is corrupted or tampered with,
        Docker will issue a warning but will not prevent it from
        running. And non-official images are not verified either.
        This will change in future versions as we harden the code
        and iron out the inevitable usability quirks. Until then,
        please don’t rely on this feature for serious security, just yet.
    

So, we've made it pretty clear from the start that we're _working_ on ways to
make image distribution more secure, but are not _claiming_ that it's more
secure yet.

~~~
jsmthrowaway
I wish you'd take things like this seriously when they become public and not
spend your time telling the world how everyone is wrong about Docker. The
author spent a lot of time disassembling this issue and he doesn't even work
for you; rather than interpret that as trying to make Docker better, which it
is, you're right back on HN playing the "nobody understands what we say and
everybody bullies us" attitude that Rocket really brought out of you. I wish
you wouldn't do that.

We can't all get what we wish for, apparently. The difference between your
wish and my own is that the less I get mine, the more you alienate the people
in this community. It's your prerogative not to care about that, but how you
act on HN has really made clear to me that Docker is riding on simply being
first, just as MySpace did.

~~~
shykes
We do take security reports very seriously when they are brought to the
project repo:
[https://github.com/docker/docker/issues/9719](https://github.com/docker/docker/issues/9719)

If you are looking to gauge our reaction to legitimate criticism and bug
reports, you should follow the GitHub repo and mailing lists, not Hacker News.

~~~
jsmthrowaway
I'm not talking about Docker. I'm talking about you. Your first inclination is
to tell people how they are wrong and only you are right, in every case. I
don't understand
your need to make everyone who criticizes Docker feel wrong, or like they
missed something. Your thread here about Rocket was extremely alarming and
directly motivated me to plan for a future without Docker.

Best way forward here would have been "hey, thanks for looking into this, we
know it's an issue." All of which you said later after making sure the author
knew he missed something, which no normal user is going to look for anyway.

The real bummer is you remind me of the CEO of Linode, whose company will fail
for similar reasons. He is equally infallible and ignores criticism.

~~~
shykes
That's an understandable criticism, even though it makes me want to die
inside. Let me try and explain my point of view, and why this whole situation
is so frustrating to me.

First, my impression is that I _do_ spend most of my days listening to
criticism by people smarter than me, revisiting my assumptions, and trying
very hard to make people feel appreciated for their effort in the process.
Most of that effort is spent on IRC, the Docker repository, and the Docker
mailing lists (in decreasing order of time invested). Obviously, I am biased.
But I'm willing to bet that if you spent a week or 2 on the IRC channel
interacting with me and the other maintainers, you would develop an entirely
different picture of how we work and how we treat criticism.

Second: aggressive blog posts by competitors and their coverage in Hacker
News. This is something new to me, and I acknowledge that I have been handling
it very poorly. I thought I had learned my lesson last time, but as your
message points out, it appears I have not. Basically, I handle very badly the
combination of 1) feigned indignation and ignorance by people with clear
conflicts of interest; 2) a small number of vocal haters waiting for an
opportunity to pile on, and 3) lots of intelligent but uninformed bystanders.
When someone makes a valid criticism, with the ultimate goal of improving the
project, as harsh as it is, I will be the first to embrace it. But here we are
talking about something different: a blog post written by commercial
competitors, with the intent of making the project look as bad as possible -
using valid _known_ criticism when available, but not shying away from
recycling old criticism which has since been addressed, or sometimes making up
facts altogether - resulting in a more compelling narrative that will "stick"
better. These are posts crafted by venture-backed companies competing with
Docker, inc. in the same general space (developer and IT tools), with all
sorts of conflicts of interest. I know these people personally; they are all
part of the Silicon Valley scene, and what they do is very intentional. It has
nothing to do with honest criticism or attempts to improve the project - those
are taking place all day long on the repo, and never generate any drama. It's
just business as usual, but as an engineer it drives me crazy. What drives me
most crazy is how easily people fall for it. But, I acknowledge that it's not
an excuse for the sort of knee-jerk reactions I've engaged in. As frustrating
as it is, I think the proper reaction would be to 1) stay under the radar,
wait for the drama to die down; 2) take a solid look at the criticism, extract
the _real_ one from the political bs, and focus on fixing that; 3) take a
solid look at how well we communicate what we do. For example, we have made
good progress on security compared to 6 months ago, but clearly we have done a
poor job at explaining that and showing solid proof points. 4) as outrageously
hypocritical and unfair as these posts appear to _me_, who knows the full back
story and understand the commercial motivations, remember that the rest of the
world is not aware of the commercial back story. And frankly, they don't care.
What they'll remember is our response to the criticism.

So - I did my best to express my point of view on all this. My personal lesson
is that success is not that fun, and sometimes I regret the days when Docker
was not successful. But at the same time there is a sense of responsibility: a
great many people use this tool every day, and we are responsible for
improving it every day. Once these vc-backed competitors (whether
it's coreos or flynn) are done writing their blog posts, they aren't going to
fix the project. That's our responsibility, and nothing else should matter.

So, for what it's worth, my new year's resolution will be to do a better job
at the above - and perhaps take a break from Hacker News altogether, too ;)
Feel free to join the IRC channel if you want to discuss this some more. In
the open, of course!

~~~
jsmthrowaway
You make some good points (and I genuinely appreciate the long and
well-thought-out reply, honest), but two things stand out to me:

First, a blog post is only aggressive if you let it be. I can set up a blog
and sit around shitting on Docker all day long, and in fact, the entire
Hollywood ecosystem is built around that crap. There are blog posts about my
current and former employers that make my stomach turn. One guy in particular
has written essays about how a former employer is anti-Semitic, and they're
basically all made up. My root point is just to be above it. If you want
Docker to succeed, it's more effective to _prove_ those blog posters wrong
than to _tell_ them they're wrong, you dig? Talk is cheap. "You missed this
part of the announcement" is just throwing contempt around, fixing the issue
is just better in every way.

The second point is I get your contempt for CoreOS and Flynn, but calling them
VC-backed and swiping at them didn't help. You're VC-backed too, and you know
that, so I'm left confused by your general tone there. You're also not going
to score many friends on Hacker News by showing contempt for the VC ecosystem,
which is how that read (and apologies if I was wrong there). The us-versus-them
stuff that I see you do, that included, is a pretty strong signal regarding
coming to work for you, so for every asshole that won't shut up like me,
imagine all the silent people reading your remarks and quietly coming to
conclusions.

All in all, thank you for the reply, and I'm glad you're at least thinking
about how to be better on this.

~~~
shykes
On the vc-backed point, I should have been more specific. There is nothing
wrong with being vc-backed. However there has been a deliberate attempt at
crafting a narrative of "docker is expanding feature scope because it is vc-
backed, therefore greedy and untrustworthy". That narrative is pushed by
competitors who are also vc-backed, making the whole thing quite ridiculous.
Yet somehow it sticks. It's very disheartening given the enormous focus on
cleanly layering company success on top of project success.

Anyway, you are right. Time to focus on fixing the actual issues and ignoring
the rest.

Thanks.

~~~
jsmthrowaway
No, thank you. You gained a lot of respect back just by listening and taking
it to heart.

By the way, I totally get you. I'm the same way. If someone's wrong, damn my
position, I want to tell them. It hurts at a deep level when people are
wrong about your work, especially when you believe in it very strongly. I
compared you to caker earlier and he and I went twenty rounds about decisive,
honest response to some of the criticism. One thing he's good at, to his
credit, is knowing when _not_ to respond (usually). I'm still learning that.

Anyway, happy holidays.

------
ef4
I keep hearing people ask "why won't they just get the core tech right instead
of adding all these tangentially related features?".

If Docker was just an open source project, it could focus on getting the core
tech right. But Docker is also a startup, and the startup can't stay
differentiated unless they keep adding bells & whistles, all of which stay
tightly integrated.

See also "Why there is no Rails Inc"
([http://david.heinemeierhansson.com/posts/6-why-theres-no-rails-inc](http://david.heinemeierhansson.com/posts/6-why-theres-no-rails-inc))

~~~
tinco
Let's be real here. If Docker was 'just' an open source project, it would move
at 50% of the pace and security issues like this would still not be solved.
It's crazy to assume that if it were an open source project it would somehow
get the core right faster or not expand on its ambitions.

If it were just an open source project, it could focus on nothing, because
Solomon would be working on something else that would actually pay his bills.

It's just a young project with a huge interaction surface, they're trying to
please a lot of people in a very small amount of time. No reason to panic,
just wait for it to get finished.

Also, Docker is an extracted product just like Rails is, and it's not really a
framework so I don't think DHH's argument really applies here.

~~~
tuffle
> Let's be real here. If Docker was 'just' an open source project, it would
> move at 50% of the pace and security issues like this would still not be
> solved.

That's wild speculation, and the elephant in the room (Linux), while a
different beast, is a pretty strong counterexample showing that being 'just'
open source doesn't doom you to being slow or insecure.

> It's crazy to assume that if it were an open source project it would somehow
> get the core right faster or not expand on its ambitions.

I don't think it's crazy at all. A (say, community funded) purely open source
project has no distractions and has nothing to do _but_ to get the core right.
It also has no incentive or reason to expand its ambitions.

~~~
FooBarWidget
You took _Linux_ as an example? It took Linux 10 years (1992-2002 - kernel
2.4) to get to a stage where it's considered widely production-ready in the
enterprise. And the phenomenal development rate of the 2.4+ series is thanks
to corporate sponsors hiring talented developers to work on the kernel full
time.

I can give you a couple of counter-counter-examples.

GIMP has been around for almost 20 years by now, but its development rate is
abysmally slow, still far away from what most people expect from a Photoshop
replacement. Its progress since ~2008 hasn't been that big.

GNOME has been around for almost that long too, but it's constantly suffering
from a lack of manpower.

The Sidekiq background worker system for Ruby is open source, but development
went much faster once the author started monetizing it by selling
commercial licenses.

Nginx is also open source. They have much more development power now that they
are a business and have a source of income. Before, it was just Igor working
on it in whatever spare time he had outside work.

------
lclarkmichalek
I don't understand why the image distribution is so tightly tied into the main
docker codebase. This is why rocket is a thing, because docker is the systemd
of the container world. Please stop trying to do everything.

~~~
the_real_bto
There are a lot of problems to solve, and docker by itself only gets you part
of the way. Orchestrating docker containers is the interesting problem, not so
much Docker in itself (for me anyway). The handwringing that docker should do
less (as a company), because each company should only solve a small problem is
just stupid.

If the argument is that the tools should be small and composable, then I agree
100%. Maybe image distribution should be a separate tool from the tool used to
run a container. That is actually a pretty good point.

~~~
23david
I think the "handwringing that docker should do less" has a lot to do with the
fact that just getting Docker containers right is a huge effort in itself, and
they have a long way to go even there. It's not a _solved_ problem by a long
shot.

~~~
the_real_bto
I agree, there are problems all around. I was responsible for getting docker
adopted on a fairly small (but with a number of moving parts) software
project. It
has been somewhat painful.

------
ewindisch
Hello, I'm the lead security engineer at Docker, Inc.

There is nothing particularly new in Jonathan's post and I thank him for
facilitating a conversation. Image security is of the utmost importance to us.
For these reasons, we've concentrated our efforts here on both auditing and
engineering. Engineers here at Docker, our auditors, and community
contributors alike have been evaluating this code and reaching many of the
same conclusions.

Last month, we released Docker 1.3.2, which included limited privilege
separation; extending this paradigm has been discussed. I have explicitly
called out the need for containerization of the 'xz' process, and to run it in
an unprivileged context. I thank Jonathan for reminding us of the need for
this work and validating much of what is already in progress.

As the recently published CVEs describe, we are expending resources in
discovering and fixing security issues in Docker. Yet, I agree the v1 registry
has a flawed design and we're aware of it. In September, I requested to become
a maintainer of the tarsum code and have also made proposals and pushed PRs
toward improving the v1 registry integration. This is not to replace the v2
effort, but to offer improved security for the design we have today.

We have a draft for a v2 registry and image format. This and the supporting
libtrust library are in the process of being audited by a 3rd-party. This is
something we had previously promised the community and are making good on.
What code exists today is a technical preview.

Unlike the v1 registry and image format, the libtrust and v2 image format code
has been designed for a decentralized model. However, as the libtrust and v2
image work, and subsequently the registry protocols, are still in draft and
under security review, it is difficult for us to recommend that users attempt
deploying these yet. This is why the developers of that code have not published
clear instructions for its use, nor made such recommendations. As this work
comes out of review and a specification is finalized, we should expect to see
a much better experience and more secure image transport, along with stronger
support for on-premises and 3rd-party registries.

~~~
zobzu
Lead marketing engineer

~~~
kaptain
It's unfortunate ewindisch's post has elicited such a response. He clearly
spent time trying to communicate clearly the situation at Docker and some of
the security issues surrounding it. I could be wrong but it doesn't seem that
you've spent a commensurate amount of time and thought in your reply.

It's certainly your right to be snarky and negative. Perhaps that's how you've
learned to address others whom you're offended by. But instead of insulting
ewindisch by using 'marketing' as an epithet, can I encourage you to address
the content of the post instead?

Clearly, by using the term 'marketing' you mean to imply that the content of
ewindisch's post lacks the kind of content you deem relevant to sufficiently
addressing this issue. Why don't you challenge ewindisch on this? That would
be more constructive than saying that he's NOT the lead security engineer.

~~~
maxlybbert
Perhaps it has something to do with smaller companies and the need for
engineers to talk to customers. But the engineers I work with would never say
silly things like "facilitating a conversation." OTOH, the marketing
department will go to great lengths to add phrases like that to anything they
send outside the company.

~~~
mst
I've seen more authentic replies on here get lambasted for being
unprofessional. That bit of phrasing did make me twitch, but I can see how he
might've thought he needed to put on his 'corporate communications' voice
before posting here.

------
geku
I would really prefer that Docker, Inc. spent its time and effort securing
the core product rather than extending it all the time with more and more
features like Machine, Swarm, etc.

~~~
kofalt
According to Red Hat, the current best way to secure your Docker usage is to
add `127.0.0.1 index.docker.io` to /etc/hosts and use an alternate transport.

The core "translate flags into running container options" works fine IMO, it's
the centralized transport causing the issue. Which isn't the end of the world,
as distributing tarballs is not exactly a demanding task.

As an example / plug, I helped write a (prototype) tool that lets you import a
docker image from the registry, then transport / version it separately:
[https://github.com/polydawn/hroot](https://github.com/polydawn/hroot)

Thus, integrating via `docker load` + `docker export` is possible &
reasonable.

Linked from the article: [https://securityblog.redhat.com/2014/12/18/before-you-initiate-a-docker-pull](https://securityblog.redhat.com/2014/12/18/before-you-initiate-a-docker-pull)
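
A minimal Go sketch of the "distribute tarballs yourself" idea (the function
name and the digest source are illustrative, not part of Docker): verify a
`docker save` tarball against a digest obtained over a trusted channel before
handing it to `docker load`.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verifyDigest compares the SHA-256 of a saved image tarball against a
// digest obtained out-of-band over a trusted channel, so a tampered
// transport can't slip a modified image past `docker load`.
func verifyDigest(tarball []byte, wantHex string) bool {
	sum := sha256.Sum256(tarball)
	return hex.EncodeToString(sum[:]) == wantHex
}

func main() {
	// Stand-in bytes; in practice this would be the output of `docker save`.
	img := []byte("hello")
	want := "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
	fmt.Println("safe to load:", verifyDigest(img, want)) // prints: safe to load: true
}
```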

------
23david
The inevitable CVEs coming from this report will definitely get their
attention. Hopefully the adults in the room will help make sure that the
Docker team addresses what up until now has been a really lax approach towards
security.

Who is the architect in charge of this, and do they have any security chops?
If not, it's just a matter of $$$ to get a 3rd-party security review before
every major release. I've done it before, and it's really not a big deal.

------
snoble
It confuses me why they wouldn't just verify the images since they have the
signature in the manifest. Is this because they don't want to wait for a
complete image before they start streaming it through the pipeline? Is this
actually a significant time saver?

~~~
rcoder
The entire model looks to me like it never had even the most superficial
security analysis done. It's like a smorgasbord of insecure decisions:

* the false sense of security from putting signatures in the manifests then ignoring them

* loading signing certs via the network with no provision for pinning

* happily loading untrusted/unsigned images by default (npm, rubygems, installtools, etc. also do this but why repeat their awful design mistake?)

* running basically everything as root (because why deal with all those messy permissions?)

My sysadmin Spidey-sense has been tingling at the rate of change in the Docker
ecosystem since it went from "interesting POC" to "we think it's production
ready" in a shockingly short period of time. Things like this sadly confirm
that initial pessimistic view.

~~~
efuquen
Not at all related to docker, but this sort of thing is what makes me happy
about communities like Rust. They are taking an incredibly long time to get to
1.0, but they've been progressing methodically and consistently and are trying
to get something good out the door instead of bowing to any pressure to
release early. Of course, Mozilla is a different beast compared to Docker,
Inc.: there is less of a profit motive, and less of a need to create a revenue
stream to stay sustainable and keep creating good new tech.

Things like this are really putting everything that is happening with Rocket
and the drama around it in perspective.

------
wayoverthere
Particularly interesting given that some of these problems were pointed out to
Docker folks ~4 months ago in the development of the feature.
[https://github.com/docker/docker/issues/8093#issuecomment-57138688](https://github.com/docker/docker/issues/8093#issuecomment-57138688)

~~~
efuquen
Read through that thread; the funny thing is that 19 days ago someone pointed
out how the docs are misleading after he was informed that docker doesn't
verify signatures [1]. @shykes claims in one of the comments here that nobody
has ever brought up that particular issue before, and while he wasn't the one
who responded to this guy, someone from the docker core team did.

Clearly nobody changed anything in the docs to clarify. Like most things
security related people don't care until there is a major exploit or somebody
with enough clout complains enough, or more likely than not both.

[1]:
[https://github.com/docker/docker/issues/8093#issuecomment-65612835](https://github.com/docker/docker/issues/8093#issuecomment-65612835)

~~~
thaJeztah
Not disputing that the "verified" message is misleading, but the comment you
referred to was about "not yet being able to create a signed image".

I don't think there's anything in the docs yet with regard to signed images,
apart from the release notes[1] mentioning it as a "sneak peek" of a coming
feature that is still under development.

[1]: [https://docs.docker.com/v1.3/release-notes/#new-features](https://docs.docker.com/v1.3/release-notes/#new-features)

------
rab_oof
Early on, I asked that images be signed similar to Debian packages, but was
met with skepticism and resistance. To me, none of the Docker core devs had a
handle on security implications of allowing anyone and everyone to share
random bits without being able to prove end-to-end integrity and
nonrepudiation.

I hope this has changed; Docker is a great app. But if not, perhaps someone
would like to teach them a security lesson? It seems to be the only way most
people actually learn, sadly. :(

------
disjointrevelry
Reminds me of Debian and Ubuntu's requirement that apt-get be run as root.
There are simple ways to get apt-get to run as non-root, but they require
giving a non-root account permission to modify important package signature
files. Still, they're not as bad as Docker. It's becoming the norm for these
US/Silicon Valley companies to treat data integrity very poorly.

~~~
rcoder
The difference here is that apt-get and its ilk need to modify critical system
state basically every time they run, and that state isn't controlled by a
persistent daemon. I actually consider this a great tradeoff: yes I have to
use sudo to run that one command, but I don't have a long-lived process
sitting around pulling data down off the Internet and doing stuff with it
while humming along as uid 0.

It's also literally one line of code in most UNIX-based languages
(syscall.Setuid(<uid that isn't root>) in Go, FWIW) to drop root privileges
before doing something unsafe. Even if the main Docker daemon absolutely has
to run as root most of the time, it can and should fork and drop that access
for anything dealing with moving data between untrusted (e.g. the Internet,
user input, etc.) and trusted (verified, read-only local state) security
domains.

~~~
mjquinn
As a quick word of caution (which doesn't invalidate anything you've said), Go
has a long-standing bug[0] whereby syscall.Setuid doesn't always apply to all
threads (on Linux at least) so extra care does have to be taken.

[0]
[https://github.com/golang/go/issues/1435](https://github.com/golang/go/issues/1435)

~~~
burke
It's not a bug, strictly speaking; it's just a feature that's really easy to
misunderstand. The `syscall` package in Go tends to be logic-less wrappers
around the raw syscalls, and that's what happens here.

Linux actually maintains the uid/euid/suid/gid/egid/sgid/etc. fields per OS
thread (threads on Linux are really just processes that share some memory).
The raw syscalls only change the fields on a single task.

Glibc is where the logic happens to propagate that setting to all threads, by
setting up signal handlers and immediately triggering a signal, IIRC.

You can get the useful behaviour by using cgo and importing setresuid from
unistd.h.

What go should probably do here is:

1) Add os.Set{res,re,}{u,g}id, which implements the logic from glibc.

2) Remove syscall.Set{res,re,}{u,g}id. Anyone who wants the raw behaviour can
use syscall.Syscall6 and syscall.SYS_SETUID or whatever. At the very least,
they could add some really loud godoc to those methods.

The main problem with that is that `os` tries to have a cross-platform API,
and (for example) BSD has no saved user or group IDs and therefore no
setres{u,g}id. I suspect Windows and Plan9 are even weirder.

~~~
darkarmani
> It's not a bug, strictly speaking; it's just a feature that's really easy to
> misunderstand.

It is a bug on Linux -- strictly speaking. Did you see how it is patched? They
removed setuid from linux:
[https://codereview.appspot.com/106170043](https://codereview.appspot.com/106170043)

> "That these functions should be made to fail rather than succeed in their
> broken state."

------
oscargrouch
As a non-security-aware developer (not a security specialist), this was one of
the most instructional and concise little gems about security flaws I've read.
You can learn very useful tricks just by reading this. Thank you.

