Docker Image Insecurity (titanous.com)
263 points by Titanous | 100 comments



I wish the author had not omitted this crucial paragraph in the announcement he quotes:

    Note that this feature is still work in progress:
    for now, if an official image is corrupted or tampered with,
    Docker will issue a warning but will not prevent it from
    running. And non-official images are not verified either.
    This will change in future versions as we harden the code
    and iron out the inevitable usability quirks. Until then,
    please don’t rely on this feature for serious security, just yet.
So, we've made it pretty clear from the start that we're working on ways to make image distribution more secure, but are not claiming that it's more secure yet.


Author here.

> I wish the author had not omitted this crucial paragraph in the announcement he quotes:

I did not quote this announcement, only linked to it.

If this is intended to be prototype code, it should not be enabled by default, and certainly shouldn't be printing inaccurate messages to users in a 1.0 product.

Also, the referenced warning goes to the daemon log, and doesn't appear to ever be triggered.


I wish you'd take things like this seriously when they become public and not spend your time telling the world how everyone is wrong about Docker. The author spent a lot of time dissecting this issue and he doesn't even work for you; rather than interpret that as trying to make Docker better, which it is, you're right back on HN playing the "nobody understands what we say and everybody bullies us" attitude that Rocket really brought out of you. I wish you wouldn't do that.

We can't all get what we wish for, apparently. The difference between your wish and my own is that the less I get mine, the more you alienate the people in this community. It's your prerogative not to care about that, but how you act on HN has really made clear to me that Docker is riding on simply being first, just as MySpace did.


We do take security reports very seriously when they are brought to the project repo: https://github.com/docker/docker/issues/9719

If you are looking to gauge our reaction to legitimate criticism and bug reports, you should follow the github repo and mailing lists, not hacker news.


I'm not talking about Docker. I'm talking about you. Your first inclination is how people are wrong and only you are right, in every case. I don't understand your need to make everyone who criticizes Docker feel wrong, or like they missed something. Your thread here about Rocket was extremely alarming and directly motivated me to plan for a future without Docker.

The best way forward here would have been "hey, thanks for looking into this, we know it's an issue." All of which you did say later, after making sure the author knew he'd missed something, which no normal user is going to look for anyway.

The real bummer is you remind me of the CEO of Linode, whose company will fail for similar reasons. He is equally infallible and ignores criticism.


That's an understandable criticism, even though it makes me want to die inside. Let me try and explain my point of view, and why this whole situation is so frustrating to me.

First, my impression is that I do spend most of my days listening to criticism by people smarter than me, revisiting my assumptions, and trying very hard to make people feel appreciated for their effort in the process. Most of that effort is spent on IRC, the Docker repository, and the Docker mailing lists (in decreasing order of time invested). Obviously, I am biased. But I'm willing to bet that if you spent a week or 2 on the IRC channel interacting with me and the other maintainers, you would develop an entirely different picture of how we work and how we treat criticism.

Second: aggressive blog posts by competitors and their coverage on Hacker News. This is something new to me, and I acknowledge that I have been handling it very poorly. I thought I had learned my lesson last time, but as your message points out, it appears I have not. Basically, I handle very badly the combination of 1) feigned indignation and ignorance by people with clear conflicts of interest; 2) a small number of vocal haters waiting for an opportunity to pile on; and 3) lots of intelligent but uninformed bystanders.

When someone makes a valid criticism, with the ultimate goal of improving the project, as harsh as it is, I will be the first to embrace it. But here we are talking about something different: a blog post written by commercial competitors, with the intent of making the project look as bad as possible - using valid known criticism when available, but not shying away from recycling old criticism which has since been addressed, or sometimes making up facts altogether - resulting in a more compelling narrative that will "stick" better. These are posts crafted by venture-backed companies competing with Docker, Inc. in the same general space (developer and IT tools), with all sorts of conflicts of interest. I know these people personally; they are all part of the Silicon Valley scene, and what they do is very intentional. It has nothing to do with honest criticism or attempts to improve the project - those take place all day long on the repo, and never generate any drama. It's just business as usual, but as an engineer it drives me crazy. What drives me most crazy is how easily people fall for it.

But I acknowledge that none of this is an excuse for the sort of knee-jerk reactions I've engaged in. As frustrating as it is, I think the proper reaction would be to 1) stay under the radar and wait for the drama to die down; 2) take a solid look at the criticism, extract the real criticism from the political bs, and focus on fixing that; 3) take a solid look at how well we communicate what we do - for example, we have made good progress on security compared to 6 months ago, but clearly we have done a poor job of explaining that and showing solid proof points; and 4) as outrageously hypocritical and unfair as these posts appear to me, who knows the full back story and understands the commercial motivations, remember that the rest of the world is not aware of the commercial back story. And frankly, they don't care. What they'll remember is our response to the criticism.

So - I did my best to express my point of view on all this. My personal lesson is that success is not that fun, and sometimes I regret the days when Docker was not successful. But at the same time there is a sense of responsibility: a great many people use this tool every day, and we are responsible for improving it every day. Once these vc-backed competitors (whether it's CoreOS or Flynn) are done writing their blog posts, they aren't going to fix the project. That's our responsibility, and nothing else should matter.

So, for what it's worth, my new year's resolution will be to do a better job at the above - and perhaps take a break from Hacker News altogether, too ;) Feel free to join the IRC channel if you want to discuss this some more. In the open, of course!


You make some good points (and I genuinely appreciate the long and well-thought-out reply, honest), but two things stand out to me:

First, a blog post is only aggressive if you let it be. I can set up a blog and sit around shitting on Docker all day long; in fact, the entire Hollywood gossip ecosystem is built around that crap. There are blog posts about my current and former employers that make my stomach turn. One guy in particular has written essays about how a former employer is anti-Semitic, and they're basically all made up. My root point is just to be above it. If you want Docker to succeed, it's more effective to prove those blog posters wrong than to tell them they're wrong, you dig? Talk is cheap. "You missed this part of the announcement" is just throwing contempt around; fixing the issue is better in every way.

The second point: I get your contempt for CoreOS and Flynn, but calling them VC-backed and swiping at them didn't help. You're VC-backed too, and you know that, so I'm left confused by your general tone on that. You're also not going to win many friends on Hacker News through contempt for the VC ecosystem, which is how that read (and apologies if I was wrong there). The us-versus-them stuff that I see you do, that included, is a pretty strong signal when it comes to deciding whether to work for you, so for every asshole who won't shut up like me, imagine all the silent people reading your remarks and quietly coming to conclusions.

All in all, thank you for the reply, and I'm glad you're at least thinking about how to be better on this.


On the vc-backed point, I should have been more specific. There is nothing wrong with being vc-backed. However, there has been a deliberate attempt at crafting a narrative of "docker is expanding feature scope because it is vc-backed, therefore greedy and untrustworthy". That narrative is pushed by competitors who are also vc-backed, making the whole thing quite ridiculous. Yet somehow it sticks. It's very disheartening given the enormous focus on cleanly layering company success on top of project success.

Anyway, you are right. Time to focus on fixing the actual issues and ignoring the rest.

Thanks.


No, thank you. You gained a lot of respect back just by listening and taking it to heart.

By the way, I totally get you. I'm the same way: if someone's wrong, damn my position, I want to tell them. It hurts at a deep level when people are wrong about your work, especially when you believe in it very strongly. I compared you to caker earlier, and he and I went twenty rounds about responding decisively and honestly to some of the criticism. One thing he's good at, to his credit, is knowing when not to respond (usually). I'm still learning that.

Anyway, happy holidays.


> you remind me of the CEO of Linode

Hmm... You've just increased my respect for Solomon.

> He is equally infallible and ignores criticism.

As a long-time Linode customer, I know this to be absolutely false.

Was it your intent to so thoroughly discredit yourself?


I was employed by Linode for three years, so I know it to be absolutely true. I'm not willing to tie my actual identity to this account very publicly (in fact, I'd rather not be here), but many people know who I am in various circles and can vouch for that. It was also a throwaway remark about handling criticism and less an indictment, because I am capable of moving on with my life and wish Linode no ill will. You're happy, so you're happy. That's all that matters.

Ignoring criticism and developing an echo chamber is dangerous, that's my point. I see shykes doing it, and caker does it too. Most of Linode has quit in the last year, too, so if you ignore the wrong criticism too long, well, people leave.


:s/most/some


So you're a bitter ex-employee who comes to HN only to anonymously sling mud? That sounds about right.


These are some Reddit-level retorts. You're not doing yourself any favours.


> but are not claiming that it's more secure yet

That's a bit disingenuous. The CLI clearly makes a security claim here, and you know very well that users will take it as one.

I'm all for "working on ways to make image distribution more secure", but making security claims in the CLI when said security does not exist yet is -- assuming good faith -- a security bug in the user interface that should be fixed.


> That's a bit disingenuous. The CLI clearly makes a security claim here, and you know very well that users will take it as one.

This is truly one of the largest complaints about the Docker project and how they present themselves. They constantly present their product as if it does X, Y, and Z today, yet upon research you discover it really only does X, while Y and Z are planned for sometime in the future. Unfortunately, all too often X is some new shiny feature, while Y and Z are security-related issues.


Docker should not show the '...Verified...' message - I agree. I have not used it in a while, but I also remember that when I did use it, I was aware that it was not verifying the images. It was my responsibility to double-check and peer-review the tool I was using. Docker is not meant to be used by your grandma, and a certain level of responsibility is expected on your part.

Now, I don't care one way or another, but your message history clearly shows that you troll like a maniac in every Docker thread. With all this time spent, why not research a better approach and create a merge request or a Docker-style RFC post proposing a single improvement?

If you already did - great!

While I haven't contributed yet, I have read the source and seen this entry on GitHub: https://github.com/docker/docker/blob/master/CONTRIBUTING.md. So there is no excuse not to!


I'd like you to try and get any security commits into the docker tree. Try your luck... others have.


I agree. The message should be more clear.


Your two messages contradict each other:

>So, we've made it pretty clear from the start

>I agree. The message should be more clear.

It seems like a bit of backpedalling here. Rather than try to blame the author for not understanding or quoting an announcement that most people wouldn't have read anyway, I think you should just commit to fixing the underlying problem (first the message that's printed, then the actual feature itself) and thank the author for drawing attention to it and educating your users on the security issue at hand.


The first quote is in reference to the announcement. The second quote refers to the CLI messaging. Shykes is acknowledging the problem.


It's good to hear that it will be fixed, but it's also troubling that it took a #1 HN post, followed by debate in the comments, before Docker recognized a front-facing security issue.


All the issues in this post are already known and being actively discussed on the mailing lists and repo (including with the blog post author).

However nobody so far had noted that the "image verified" message is misleading and should be clarified. I'm simply acknowledging that we should fix that, too.


OK, honest question: why have the "The image you are pulling has been verified" message without also printing a disclaimer on `docker pull`?

Secondly, it seems that the author did not follow the stated Docker procedure for security vulnerabilities (https://www.docker.com/resources/security/). In my view this taints the author's position of revealing a vulnerability to the community and reduces it to mere attention-getting.


The author is a developer and professional security researcher. He responds to the security disclosure issue on the github issue page:

  > @ewindisch:
  > I also remind everyone that if they feel there may be possible attacks 
  > against the current format, not to publicly discuss this on GitHub. 
  > If you do feel this way, I'll happily entertain a private discussion. 
  > While I generally prefer and encourage transparency in open source 
  > projects, we should be careful to practice responsible disclosure.

  Disclosing security vulnerabilities is a responsible thing to do. 
  As a security researcher, it is entirely my choice how/when/if I disclose
  security issues. In this case, I'm not dropping any 0-days, just pointing out
  fundamental flaws in the current system. Fixing these flaws should be an 
  open discussion, not a private one.

  As far as "responsible disclosure" goes, it is only one vulnerability 
  disclosure approach (the alternative is not "irresponsible disclosure"), 
  and there is zero consensus about it.
Source: https://github.com/docker/docker/issues/9719#issuecomment-67...

Some good reading by Bruce Schneier re:full disclosure vs responsible disclosure: https://www.schneier.com/essays/archives/2007/01/schneier_fu...

If you want to really understand the security world, I definitely recommend attending Defcon in Las Vegas at least once to meet our Cyber brethren... :-)

Companies with the resources who publicly state that they care about security should be willing and excited to at least partially sponsor employees to go. Definitely worth every penny.


https://securityblog.redhat.com/2014/12/18/before-you-initia...

I think RH has also posted this just for the attention.


I really advise caution with using 'docker load'. It's more susceptible to malicious input than 'docker pull' is. That said, it has a stronger trust story. Let's not confuse the two. If you know without a doubt that your image and all of its layers are safe, then you can use GPG to do image signing and verification, then load the image into Docker.

That assumes that what you're signing and verifying is safe, i.e. non-malicious. Again, 'docker load' is less protected against malicious inputs, so under no circumstance is it safer to load arbitrary, untrusted content through this mechanism.
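
To make that concrete, here's a minimal sketch of the verify-then-load flow in Go (illustrative only: the file names are assumptions, and it simply shells out to gpg and docker rather than using any Docker API):

  // Verify a detached GPG signature before piping an image tarball
  // into 'docker load'. Assumes gpg and docker are on PATH and the
  // signer's public key is already in the local keyring.
  package main

  import (
    "log"
    "os"
    "os/exec"
  )

  func main() {
    image, sig := "myimage.tar", "myimage.tar.sig"

    // Verify first: never hand unverified bytes to 'docker load'.
    if err := exec.Command("gpg", "--verify", sig, image).Run(); err != nil {
      log.Fatalf("signature verification failed: %v", err)
    }

    // Only after verification, stream the tarball into 'docker load'.
    f, err := os.Open(image)
    if err != nil {
      log.Fatal(err)
    }
    defer f.Close()

    load := exec.Command("docker", "load")
    load.Stdin = f
    load.Stdout, load.Stderr = os.Stdout, os.Stderr
    if err := load.Run(); err != nil {
      log.Fatalf("docker load failed: %v", err)
    }
  }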

Finally, an interesting middle-ground is to containerize the 'docker pull', then use Docker itself to generate sanitized input to 'docker load'. It's not perfect, there are still ways to attack it, but I did put together a PoC of this:

  $ docker run ewindisch/docker-pull ubuntu | docker load


> It's more susceptible to malicious input than 'docker pull' is. That said, it has a stronger trust story.

What does that mean?

Let's look at how 'docker pull' and 'docker load' compare:

- They're both loading an image into docker.

- They're both NOT checking any signatures in any meaningful way.

- All that's different about `docker pull` is that it's fetching directly from the network.

How could 'docker load' possibly be more susceptible to malicious input?

Clearly, there is never any circumstance in which it is safe to load arbitrary, untrusted content through docker. I fail to see how loading untrusted content from the network could be safer than loading untrusted content from disk.


It's mostly how Docker performs the 'docker load' and 'docker pull' that results in a different security story, not so much how it ultimately extracts the files and applies them to the filesystem.

When you use 'docker pull', you're explicitly loading a specific tag and the layers associated with it.

Docker 'load' doesn't load an image and a tag as specified by the user, it loads an arbitrary number of images and tags as specified in the provided archive.

For one example of an actual vulnerability, up until Docker 1.3.3 it was possible for a 'docker load' to execute path traversal attacks based on malicious image IDs. This was largely mitigated by the 'docker pull' code and URL semantics. There was still some risk from malicious registries, but again, mitigated by having a trusted registry behind TLS.
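
To illustrate the class of check involved, here's a sketch in Go (mine, not Docker's actual code): treat an image ID from an untrusted archive as hostile until it proves to be plain hex, so an ID like "../../../etc" can never escape the layer store.

  package main

  import (
    "fmt"
    "path/filepath"
    "regexp"
    "strings"
  )

  // Image IDs are 64 hex characters; anything else is rejected outright.
  var validID = regexp.MustCompile(`^[a-f0-9]{64}$`)

  func layerPath(root, id string) (string, error) {
    if !validID.MatchString(id) {
      return "", fmt.Errorf("invalid image ID %q", id)
    }
    p := filepath.Join(root, id)
    // Belt and braces: the joined path must still live under root.
    if !strings.HasPrefix(p, filepath.Clean(root)+string(filepath.Separator)) {
      return "", fmt.Errorf("path traversal attempt via ID %q", id)
    }
    return p, nil
  }

  func main() {
    fmt.Println(layerPath("/var/lib/docker/graph", "../../../etc"))
  }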


Running something inside a container does very little to actually secure it. If someone can execute arbitrary code inside a container, they can use a kernel exploit to jump outside of the container. It's important to always keep in mind that containers provide resource isolation, not a security boundary.


Running in a container restricts what processes may do in userspace. It's possible to take away CAP_SETUID, so that even if a process could execute arbitrary code, it could not leverage a setuid binary. There is a whole set of capabilities that admins may take away from their processes when using a container. Some of these, yes, actually could protect against certain kernel exploits.

However, no, containers do not entirely protect against kernel exploits. Yet is that what we're talking about here, where the alternative is to simply execve(2) a binary and possibly setuid to a non-root uid? Running processes with a restricted capability set and new namespaces is generally more secure than running them with neither.

So yes, actually, containers do provide security. No, it's not absolute, but it's better than the alternative if the alternative is a naive 'execve' or one of its many frontends.


> So, we've made it pretty clear from the start that we're working on ways to make image distribution more secure, but are not claiming that it's more secure yet

That very clearly makes Docker not secure from its inception through today. That was/is the point of the article.

> but are not claiming that it's more secure yet.

What does "more secure" mean? It's either secure or it's not.

> This pipeline is performant but completely insecure. Untrusted input should not be processed before verifying its signature. Unfortunately Docker processes images three times before checksum verification is supposed to occur.

> However, despite Docker’s claims, image checksums are never actually checked.

Things like this are by design and at the core of how Docker does what it does. Docker clearly was not built with security considerations in mind.

> Docker exacerbates this situation by running xz as root. This means that if there is a single vulnerability in xz, a call to docker pull could result in the complete compromise of your entire system.

The security implications go on and on.


It doesn't make Docker "not secure"; it means that you need to verify image files the way you verify other files you download. These changes are irrelevant to the security of Docker itself, at least for the way I use it with my own images.

"it's either not secure or is secure." Security is always a matter of degree.


> it means that you need to verify image files in the way you verify other files you download

That is difficult to automate in a way that handles all cases. They have mechanisms built in that claim to provide the security you would expect, except that this article exposes those claims as absurdly false and actually dangerous, because most users are not going to take the time to verify what the code is actually doing -- i.e., they will just trust Docker's claim and move forward, potentially blindly exposing themselves.

> Security is always a matter of degree.

I agree, except in this case Docker has totally cast most considerations of security out the window. Simple "best practices", like verifying untrusted input before you begin to work with it, are completely ignored, as evidenced in this article.

> Docker users should be aware that the code responsible for downloading images is shockingly insecure.

The issue is Docker sweeps this under the rug and continues to claim their platform is secure. Users trust this, and leave themselves exposed.

> It doesn't make Docker "not secure"

It sure seems to indicate that Docker is certainly not secure.


More secure means exactly what he says. Nothing is ever secure unless it's unplugged from the network. Security is a series of doors between your data and the attackers; making something more secure is simply adding more doors in front of it.


> I assumed this referenced Docker’s heavily promoted image signing system and didn’t investigate further at the time. Later, while researching the cryptographic digest system that Docker tries to secure images with, I had the opportunity to explore further. What I found was a total systemic failure of all logic related to image security.

The issue is Docker has never taken security seriously. From inception up to today. Security is an afterthought, very clearly evidenced by shykes comments above, and by the code being committed.

Security for such a core piece of infrastructure, something Docker Inc. hopes to thrust into all servers and devices someday, should rightfully be first and foremost in the initial design of new features. Instead, things get designed and developed, and security slips to the side. It takes articles like this to actually motivate Docker to do anything about it.


Professional advice for shykes: do some background research and internal prep before answering/commenting here on HN. You represent the Docker brand, and over time such hastiness will erode Docker's credibility. Better to stay silent than to defend against every incredulous comment about Docker on HN.


I keep hearing people ask "why won't they just get the core tech right instead of adding all these tangentially related features?".

If Docker was just an open source project, it could focus on getting the core tech right. But Docker is also a startup, and the startup can't stay differentiated unless they keep adding bells & whistles, all of which stay tightly integrated.

See also "Why there is no Rails Inc" (http://david.heinemeierhansson.com/posts/6-why-theres-no-rai...)


Let's be real here. If Docker was 'just' an open source project, it would move at 50% of the pace and security issues like this would still not be solved. It's crazy to assume that if it were an open source project it would somehow get the core right faster or not expand on its ambitions.

If it were just an open source project, it could focus on nothing, because Solomon would be working on something else that would actually pay his bills.

It's just a young project with a huge interaction surface, they're trying to please a lot of people in a very small amount of time. No reason to panic, just wait for it to get finished.

Also, Docker is an extracted product just like Rails is, and it's not really a framework so I don't think DHH's argument really applies here.


Regardless of the development model, the correct ordering of implementing a feature like signature verification remains the same.

1. Write code to verify signatures.

2. Write code to print "The image you are pulling has been verified."
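
In code terms, the argument is just that step 2 must be gated on step 1 succeeding. A minimal Go sketch (hypothetical helper, not Docker's pipeline): buffer the payload, check the digest, and only then print anything or touch the contents.

  package main

  import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "strings"
  )

  // verifiedPull reads the whole payload, checks its digest against the
  // expected value, and only then announces success and returns the bytes.
  func verifiedPull(r io.Reader, wantHex string) ([]byte, error) {
    data, err := io.ReadAll(r) // buffer; do NOT process while reading
    if err != nil {
      return nil, err
    }
    sum := sha256.Sum256(data)
    if hex.EncodeToString(sum[:]) != wantHex {
      return nil, fmt.Errorf("digest mismatch; refusing image")
    }
    fmt.Println("The image you are pulling has been verified") // only now
    return data, nil
  }

  func main() {
    _, err := verifiedPull(strings.NewReader("payload"), "not-the-right-digest")
    fmt.Println(err)
  }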


I wouldn't disagree; obviously someone messed up here. The discussion is more about the fact that some people see this bug going unsolved as a sign that the Docker team lacks focus.


> Let's be real here. If Docker was 'just' an open source project, it would move at 50% of the pace and security issues like this would still not be solved.

That's wild speculation, and the elephant in the room (Linux), while a different beast, is a pretty strong counterexample showing that being 'just' open source doesn't doom you to being slow or insecure.

> It's crazy to assume that if it were an open source project it would somehow get the core right faster or not expand on its ambitions.

I don't think it's crazy at all. A (say, community funded) purely open source project has no distractions and has nothing to do but to get the core right. It also has no incentive or reason to expand its ambitions.


You took Linux as an example? It took Linux ten years (1992-2002, up to kernel 2.4) to get to a stage where it was considered widely production-ready in the enterprise. And the phenomenal development rate of the 2.4+ series is thanks to corporate sponsors hiring talented developers to work on the kernel full time.

I can give you a couple of counter-counter-examples.

GIMP has been around for almost 20 years by now, but its development rate is abysmally slow, still far away from what most people expect from a Photoshop replacement. Its progress since ~2008 hasn't been that big.

GNOME has been around for almost that long too, but it's constantly suffering from a lack of manpower.

The Sidekiq background worker system for Ruby is open source, but development went much faster once the author started monetizing it by selling commercial licenses.

Nginx is also open source. They have much more development power now that they are a business and have a source of income. Before, it was just Igor working on it whenever he had time outside of work.


"no incentive or reason to expand its ambitions"

with a nod towards systemd.


Linux is not a counterexample. The only reason it looks different is that the primary gatekeeper and a few key developers are paid jointly and circuitously by the sponsoring companies.


> Let's be real here. If Docker was 'just' an open source project, it would move at 50% of the pace and security issues like this would still not be solved

Funny, systemd-nspawn just gained the ability to import and run Docker images. Or rather "the best known container solution".


I don't believe Docker is an extracted product. They wrote it from scratch and pushed it on the world, without running it in production themselves.

In particular, I don't believe dotCloud ever ran Docker in production / for paying customers. It seems like there were several earlier containerization technologies based on LXC or OpenVZ, but it wasn't Docker per se. They had some domain knowledge from those earlier experiences, but not production quality code.


It'd be nice if they said that, then. Shit like https://twitter.com/solomonstre/status/539527695999049728 makes me wonder just how out of touch the Docker folk are.


That one seemed pretty ridiculous to me as well. If your project's direction is constant but your users' understanding is changing, then either you're failing to communicate or (worse) your product is actively forsaking its users.


I think a good governance model comes from Mozilla, where you have an Org with the open source core bound to it, which owns a Corp that is free to make monetization efforts to help the Org stay financially healthy and keep moving.

The core, managed by the Org, will have fewer marketing buzzwords, lies and hype, so we keep trusting its goals and mission, while the Corp can do whatever it wants to achieve its distinct goals.


The business model seemed clearer before, with the commercialized Docker Hub. What is the business model with all these new features?

Seems to me that they'll either license tech to enable enterprises to build their own docker platforms, or offer their own hosted platform... dotcloud 2.0


Pretty clear in the words of Docker Inc's CEO here: https://gigaom.com/2014/12/20/on-docker-coreos-open-source-a...

“The closest analogy I guess I can give you is, for people who think of Docker and containers as a new form of virtualization, so [with] open source we gave away ESX and what we are selling is something akin to vCenter or vSphere.”


Which, if you put the pieces together, clarifies that the purpose of new Docker features is to lasso in fragmentation and make sure you're going to buy their vCenter/vSphere rather than any alternative.

This also clarifies why things like Rocket are the biggest existential threat facing Docker Inc., and why the mud is being slung.


Well, those are two different things, aren't they? The direction is pretty clear.

On everything else I'm on the fence but so far the 'batteries included, but swappable' motto has been respected: everything new they have done/announced lately is not in Docker core but rather an external project (think: Swarm, Machine, Compose) that uses the public APIs


The business model is the same. We are selling cloud services via Docker Hub, in addition to training and support.

All the new features are pure open design, they only exist because users are asking for them and people are sending patches.


I don't understand why the image distribution is so tightly tied into the main docker codebase. This is why rocket is a thing, because docker is the systemd of the container world. Please stop trying to do everything.


There are a lot of problems to solve, and Docker by itself only gets you part of the way. Orchestrating Docker containers is the interesting problem, not so much Docker in itself (for me, anyway). The handwringing that Docker (as a company) should do less because each company should only solve a small problem is just stupid.

If the argument is that the tools should be small and composable, then I agree 100%. Maybe image distribution should be a separate tool from the tool used to run a container. That is actually a pretty good point.


I think the "handwringing that docker should do less" has a lot to do with the fact that just getting Docker containers right is a huge effort in itself, and they have a long way to go even there. It's not a solved problem by a long shot.


I agree, there are problems all around. I was responsible for getting Docker adopted on a fairly small software project (but one with a number of moving parts). It has been somewhat painful.


It's less "handwringing that Docker should do less" and more "get one thing right first."


Agreed. I was referring to the Rocket/systemd remarks. There was some pretty big Docker hate going on when Rocket's announcement was posted here on HN. Most of that hate was because Docker was getting into the orchestration business.


I don't think it's the fact that docker-the-company is working on these problems, but rather the fact that they're tying everything together very closely, making it hard to use one thing they're working on without the other for no fundamental reason.


It's mostly an artifact of our initial focus - a great integrated user experience - combined with the fact that Docker became incredibly successful, incredibly fast. Image management has been combined with the runtime since the very first version. Over time we've been shifting our focus to quality, modularity and security, but with the massive adoption everything is slower to design and implement. The community is also not unanimous on the topic; see for example https://groups.google.com/forum/m/#!topic/docker-dev/mzpAga_...


Hello, I'm the lead security engineer at Docker, Inc.

There is nothing particularly new in Jonathan's post, and I thank him for facilitating a conversation. Image security is of the utmost importance to us. For these reasons, we've concentrated both auditing and engineering effort here. Engineers here at Docker, our auditors, and community contributors alike have been evaluating this code and reaching many of the same conclusions.

Last month we released Docker 1.3.2, which included limited privilege separation, and extending this paradigm has been discussed. I have explicitly called out the need for containerization of the 'xz' process, and for running it in an unprivileged context. I thank Jonathan for reminding us of the need for this work and validating much of what is already in progress.

As the recently published CVEs describe, we are expending resources in discovering and fixing security issues in Docker. Yet I agree the v1 registry has a flawed design, and we're aware of it. In September, I requested to become a maintainer of the tarsum code, and I have also made proposals and pushed PRs toward improving the v1 registry integration. This is not to replace the v2 effort, but to offer improved security for the design we have today.

We have a draft for a v2 registry and image format. This and the supporting libtrust library are in the process of being audited by a 3rd-party. This is something we had previously promised the community and are making good on. What code exists today is a technical preview.

Unlike the v1 registry and image format, the libtrust and v2 image format code has been designed for a decentralized model. However, as the libtrust and v2 image work, and subsequently the registry protocols, are still in draft and under security review, it is difficult for us to recommend that users attempt deploying these yet. This is why the developers of that code have not published clear instructions for its use, nor made such recommendations. As this work comes out of review and a specification is finalized, we should expect to see a much better experience and more secure image transport, along with stronger support for on-premises and 3rd-party registries.


> There is nothing particularly new in Jonathan's post and I thank him for facilitating a conversation.

Maybe not new to you or Docker Inc staff, but I don't see any warnings that a pull could result in complete compromise. On top of this, the inaccurate "verified" image message is still in the current release.

> I have explicitly called out the need for containerization of the 'xz' process, and to run it in an unprivileged context.

Do you have a link to this?

> As this work comes out of review and a specification is finalized, we should expect to see a much better experience and more secure image transport

Is there a draft specification available for libtrust?

> However, as the libtrust and v2 image work, and subsequently, registry protocols are still in draft and security review, it is difficult for us to recommend that users yet attempt deploying these.

And yet you deployed it to production in 1.3.


> I don't see any warnings that a pull could result in complete compromise

Well, certainly not by intention, but we did have CVEs issued with 1.3.3 and 1.3.2 which indicated that pulls were not secure. They're better now. Perfect? No, but it does seem to be far better. I also put together a PoC of containerizing the entire 'docker pull' process, and I'd like to see such privilege separation in the future. (https://github.com/ewindisch/docker-pull)

> the "verified" image message is still in the current release.

I agree the "verified" image message is too strongly worded and this is what bug reports are for. Solomon seems to agree as well.

> Do you have a link to this?

It was on IRC. I'd have to dig it up, but I've filed a github issue in response to this conversation: https://github.com/docker/docker/issues/9793

> Is there a draft specification available for libtrust?

https://github.com/docker/docker/issues/8093 https://github.com/docker/docker/issues/9015

> And yet you deployed it to production in 1.3.

This is a valid criticism. I'll admit this code did not get the proper review and audit that I personally expected before being deployed. At Docker, we consider that a process bug. Things happen, we learn, we adapt. We didn't pull the code, but neither did we advise anyone to deploy a v2 server yet. Again, we have an active audit against this code currently, and the spec is developing.


>> Is there a draft specification available for libtrust?

> https://github.com/docker/docker/issues/8093 https://github.com/docker/docker/issues/9015

libtrust does not appear to be mentioned in either of these issues.


>However, as the libtrust and v2 image work, and subsequently, registry protocols are still in draft and security review, it is difficult for us to recommend that users yet attempt deploying these.

I am not sure I agree with this. You have deployed these draft protocols in 1.3, resulting in CVEs for the pull ops. Speaking for myself (and, I think, for every software release engineer), I would deploy something like this only as part of a test branch, not as part of production code. This is (IMHO) bad engineering practice for such a vital piece of software. Not good.


Why does it claim the image has been verified if it's only verifying the manifest?


Have you got any related references / blog posts / articles you could point to (URLs?) for folks to work from on this?


Lead marketing engineer


It's unfortunate ewindisch's post has elicited such a response. He clearly spent time trying to communicate the situation at Docker and some of the security issues surrounding it. I could be wrong, but it doesn't seem that you've spent a commensurate amount of time and thought on your reply.

It's certainly your right to be snarky and negative. Perhaps that's how you've learned to address others whom you're offended by. But instead of insulting ewindisch by using 'marketing' as an epithet, can I encourage you to address the content of the post instead?

Clearly, by using the term 'marketing' you mean to imply that the content of ewindisch's post lacks the kind of content you deem relevant to sufficiently addressing this issue. Why don't you challenge ewindisch on this? That would be more constructive than saying that he's NOT the lead security engineer.


Perhaps it has something to do with smaller companies and the need for engineers to talk to customers. But the engineers I work with would never say silly things like "facilitating a conversation." OTOH, the marketing department will go to great lengths to add phrases like that to anything they send outside the company.


I've seen more authentic replies on here get lambasted for being unprofessional. That bit of phrasing did make me twitch, but I can see how he might've thought he needed to put on his 'corporate communications' voice before posting here.


Much defensive. It's easier to look at what things are rather than taking it personally:

The engineer tries to downplay the issue with the now conventional "we're working on fixing it", and the whole PR-approved speech.

What should be fixed is the whole process and design reasons why this is even allowed to happen.

It's not the fault of the engineer - but by being the messenger in this case you're also condoning it. And downplaying stuff on non-technical grounds, that's actually marketing. But it's easier to be offended than to realize that these days.

Also - regardless - merry xmas!


And I suppose you're Condescending PR Lead. You reply to a troll with troll bait. Good job.


lol


I would really prefer that Docker, Inc. spend their time and effort securing their core product rather than extending it all the time by adding more and more features like Machine, Swarm, etc.


According to Red Hat, the current best way to secure your Docker usage is to put `127.0.0.1 index.docker.io` in your hosts file and use an alternate transport.

The core "translate flags into running container options" works fine IMO, it's the centralized transport causing the issue. Which isn't the end of the world, as distributing tarballs is not exactly a demanding task.

As an example / plug, I helped write a (prototype) tool that lets you import a docker image from the registry, then transport / version it separately: https://github.com/polydawn/hroot

Thus, integrating via `docker load` + `docker export` is possible & reasonable.

Linked from the article: https://securityblog.redhat.com/2014/12/18/before-you-initia...


The inevitable CVEs coming from this report will definitely get their attention. Hopefully the adults in the room will help make sure that the Docker team addresses what has, up until now, been a really lax approach towards security.

Who is the architect in charge of this, and do they have any security chops? If not, it's just a matter of $$$ to get a 3rd-party security review before every major release. I've done it before, and it's really not a big deal.


It confuses me why they wouldn't just verify the images, since they have the signature in the manifest. Is this because they don't want to wait for a complete image before they start streaming it through the pipeline? Is this actually a significant time saver?


The entire model looks to me like it never had even the most superficial security analysis done. It's like a smorgasbord of insecure decisions:

* the false sense of security from putting signatures in the manifests then ignoring them

* loading signing certs via the network with no provision for pinning

* happily loading untrusted/unsigned images by default (npm, rubygems, installtools, etc. also do this but why repeat their awful design mistake?)

* running basically everything as root (because why deal with all those messy permissions?)

My sysadmin Spidey-sense has been tingling at the rate of change in the Docker ecosystem since it went from "interesting POC" to "we think it's production ready" in a shockingly short period of time. Things like this sadly confirm that initial pessimistic view.
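
To make the pinning point concrete, here's a minimal Go sketch of what a provision for pinning could look like (my example, not Docker's code): do the normal CA verification, then additionally require the server's leaf certificate to match a known hash.

  package main

  import (
    "crypto/sha256"
    "crypto/tls"
    "errors"
    "fmt"
  )

  // dialPinned performs the usual CA verification, then refuses the
  // connection unless the leaf certificate matches the expected pin.
  func dialPinned(addr string, pin [32]byte) (*tls.Conn, error) {
    conn, err := tls.Dial("tcp", addr, nil)
    if err != nil {
      return nil, err
    }
    leaf := conn.ConnectionState().PeerCertificates[0]
    if sha256.Sum256(leaf.Raw) != pin {
      conn.Close()
      return nil, errors.New("certificate pin mismatch")
    }
    return conn, nil
  }

  func main() {
    _, err := dialPinned("registry.example.com:443", [32]byte{})
    fmt.Println(err) // the zero pin matches nothing real, so this fails
  }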


Not at all related to Docker, but this sort of thing is what makes me happy about communities like Rust. They are taking an incredibly long time to get to 1.0, but they've been progressing methodically and consistently, trying to get something good out the door instead of bowing to any pressure to release early. Of course, Mozilla is a different beast compared to Docker, Inc.: there is less of a profit motive, more a need to create a revenue stream to stay sustainable and keep creating good new tech.

Things like this are really putting everything that is happening with Rocket and the drama around it in perspective.


If this is the case, it would seem pretty insulting to me as a developer and user of Docker that "time-saving" is more important than security and validating images. I'd rather use a slower tool that is secure.


I think this is because they regard the tar'd layers as a transport mechanism, not as the signed payload itself.


Yes, that makes sense, as tar is not fully deterministic: untarring and retarring might give a different checksum for the same files (e.g. due to ordering). However, it is generally better to keep the same bits people signed regardless.
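
One way to do that is to hash the layer exactly as it arrives on the wire and verify those same stored bytes later, instead of untarring and re-tarring (which need not round-trip bit-for-bit). A minimal Go sketch, assuming the digest covers the transported archive:

  package main

  import (
    "bytes"
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "io"
    "strings"
  )

  // storeLayer writes the layer to dst exactly as received and returns
  // the digest of those bytes, so later verification re-hashes the
  // stored file instead of a re-created tar.
  func storeLayer(src io.Reader, dst io.Writer) (string, error) {
    h := sha256.New()
    if _, err := io.Copy(dst, io.TeeReader(src, h)); err != nil {
      return "", err
    }
    return hex.EncodeToString(h.Sum(nil)), nil
  }

  func main() {
    var buf bytes.Buffer
    digest, _ := storeLayer(strings.NewReader("layer bytes as transported"), &buf)
    fmt.Println(digest)
  }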


Maybe you could use the Git packfile format; it is a self-contained, compressed Merkle tree. If you ever need to deterministically reconstruct the tar from that, you can use something like pristine-tar[0].

[0]: https://joeyh.name/code/pristine-tar/


Sounds interesting. Perhaps you should create a proposal for that on the docker issue tracker, so that it can be discussed as a possible alternative?


Particularly interesting given that some of these problems were pointed out to Docker folks ~4 months ago in the development of the feature. https://github.com/docker/docker/issues/8093#issuecomment-57...


Read through that thread; the funny thing is that 19 days ago someone pointed out how the docs are misleading, after being informed that Docker doesn't verify signatures [1]. @shykes claims in one of the comments here that nobody has ever brought up that particular issue before, and while he wasn't the one who responded to this guy, someone from the Docker core team did.

Clearly nobody changed anything in the docs to clarify. Like most things security-related, people don't care until there is a major exploit, or somebody with enough clout complains enough, or, more likely than not, both.

[1]: https://github.com/docker/docker/issues/8093#issuecomment-65...


Not disputing that the "verified" message is misleading, but the comment you referred to was about "not yet being able to create a signed image".

I don't think there's anything in the docs yet with regard to signed images, apart from the release notes[1] mentioning it as a "sneak peek" of a coming feature that is under development.

[1]: https://docs.docker.com/v1.3/release-notes/#new-features


Early on, I asked that images be signed similarly to Debian packages, but was met with skepticism and resistance. To me, it seemed none of the Docker core devs had a handle on the security implications of allowing anyone and everyone to share random bits without being able to prove end-to-end integrity and non-repudiation.

I hope this has changed; Docker is a great app. But if not, perhaps someone would like to teach them a security lesson? It seems that's the only way most people actually learn, sadly. :(


Reminds me of Debian and Ubuntu's requirement that apt-get be run as root. There are simple ways to get apt-get to run as non-root, but they require giving a non-root account permission to modify important package signature files. Still, they're not as bad as Docker. It's becoming the norm for these US/Silicon Valley companies to provide very bad data integrity.


The difference here is that apt-get and its ilk need to modify critical system state basically every time they run, and that state isn't controlled by a persistent daemon. I actually consider this a great tradeoff: yes I have to use sudo to run that one command, but I don't have a long-lived process sitting around pulling data down off the Internet and doing stuff with it while humming along as uid 0.

It's also literally one line of code in most UNIX-based languages (syscall.Setuid(<uid that isn't root>) in Go, FWIW) to drop root privileges before doing something unsafe. Even if the main Docker daemon absolutely has to run as root most of the time, it can and should fork and drop that access for anything moving data between untrusted (e.g. the Internet, user input, etc.) and trusted (verified, read-only local state) security domains.
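
For what it's worth, Go's exec package can express that fork-and-drop pattern without ever changing the daemon's own uid. A minimal sketch (the nobody uid/gid of 65534 is an assumption about the host; Linux-only):

  package main

  import (
    "log"
    "os"
    "os/exec"
    "syscall"
  )

  func main() {
    // Decompress as an unprivileged child, even if the daemon is root.
    cmd := exec.Command("xz", "--decompress", "--stdout", "layer.tar.xz")
    cmd.SysProcAttr = &syscall.SysProcAttr{
      Credential: &syscall.Credential{Uid: 65534, Gid: 65534}, // nobody
    }
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
      log.Fatal(err)
    }
  }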


As a quick word of caution (which doesn't invalidate anything you've said), Go has a long-standing bug[0] whereby syscall.Setuid doesn't always apply to all threads (on Linux at least) so extra care does have to be taken.

[0] https://github.com/golang/go/issues/1435


Wow. That's a fun one.

It's also a perfect example of why even really amazing teams reinventing a language/tooling ecosystem from scratch stumble over problems that were solved years (or even decades) ago in preceding platforms. I leave it as an exercise for the reader to decide whether the "reinvent from scratch" critique is more deserved by Docker, Go, or Linux.

That being said, I'm pretty sure even the broken Setuid behavior described there would be good enough to sandbox a thread or child proc that was just handling buffered I/O into and out of the xz binary.


It's not a bug, strictly speaking; it's just a feature that's really easy to misunderstand. The `syscall` package in Go tends to be logic-less wrappers around the raw syscalls, and that's what happens here.

Linux actually maintains the uid/euid/suid/gid/egid/sgid/etc. fields per OS thread (which are actually processes, just with a bit of shared memory). The raw syscalls only change the fields on a single task.

Glibc is where the logic happens to propagate that setting to all threads, by setting up signal handlers and immediately triggering a signal, IIRC.

You can get the useful behaviour by using cgo and importing setresuid from unistd.h.

What go should probably do here is:

1) Add os.Set{res,re,}{u,g}id, which implements the logic from glibc.

2) Remove syscall.Set{res,re,}{u,g}id. Anyone that wants that behaviour can use syscall.Syscall6 and syscall.SYS_SETUID or whatever. At least, they could add some really loud godoc to those methods.

The main problem with that is that `os` tries to have a cross-platform API, and (for example) BSD has no saved user or group IDs and therefore no setres{u,g}id. I suspect Windows and Plan9 are even weirder.
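
A minimal sketch of that cgo route (my example; glibc's setresuid wrapper broadcasts the change to every thread, unlike the raw single-task syscall):

  package main

  /*
  #define _GNU_SOURCE
  #include <unistd.h>
  */
  import "C"

  import (
    "fmt"
    "log"
  )

  // setresuidAllThreads calls glibc's wrapper, which signals all threads
  // to apply the change, instead of Go's raw per-task syscall.
  func setresuidAllThreads(uid int) error {
    r, err := C.setresuid(C.uid_t(uid), C.uid_t(uid), C.uid_t(uid))
    if r != 0 {
      return fmt.Errorf("setresuid(%d): %v", uid, err)
    }
    return nil
  }

  func main() {
    if err := setresuidAllThreads(65534); err != nil {
      log.Fatal(err) // fails with EPERM unless running as root
    }
  }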


> It's not a bug, strictly speaking; it's just a feature that's really easy to misunderstand.

It is a bug on Linux -- strictly speaking. Did you see how it was patched? They removed setuid on Linux: https://codereview.appspot.com/106170043

> "That these functions should made to fail rather than succeed in their broken state."


Wow, nice bug.

I like a lot of things about Go, but I wish they didn't invent their own mini operating system in the runtime on top of Linux/POSIX. This bug seems like a good example of why that's a leaky abstraction.

I think it would be interesting to explore a coroutines + threads + channels design space for a concurrent language. Basically like Go, except without the green thread implementation.

For CPU bound work, you can use threads. For I/O bound work, use coroutines. And then they all compose together with channels. The implementation would be a lot simpler because it works with the OS rather than trying to paper over it.


As a non-security-aware developer (i.e. not a security specialist), this was one of the most instructive and concise little gems about security flaws I've read. You can learn very useful tricks just by reading this. Thank you.




