
I really advise caution with using 'docker load'. It's more susceptible to malicious input than 'docker pull' is. That said, it has a stronger trust story. Let's not confuse the two. If you know without a doubt that your image and all of its layers are safe, then you can use GPG to sign and verify the image, then load it into Docker.

That assumes that what you're signing and verifying is safe, i.e. non-malicious. Again, 'docker load' is less protected against malicious inputs, so under no circumstance is it safer to load arbitrary, untrusted content through this mechanism.
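For context, a minimal sketch of that verify-before-load step in Python (the helper names and the out-of-band `trusted_digest` are my own illustration; a GPG detached-signature check could stand in for the raw digest comparison):

```python
import hashlib

def sha256_of(path, chunk=1 << 20):
    """Stream the file so large image archives aren't read into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def verify_before_load(path, trusted_digest):
    """Refuse to hand the archive to 'docker load' unless its digest matches
    a digest obtained through a trusted, out-of-band channel."""
    actual = sha256_of(path)
    if actual != trusted_digest:
        raise ValueError("digest mismatch: " + actual)
    return True
```

Only after `verify_before_load` succeeds would you pipe the archive into 'docker load'. The point stands either way: the verification only tells you the archive is the one that was signed, not that its contents are benign.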

Finally, an interesting middle-ground is to containerize the 'docker pull', then use Docker itself to generate sanitized input to 'docker load'. It's not perfect, there are still ways to attack it, but I did put together a PoC of this:

  $ docker run ewindisch/docker-pull ubuntu | docker load



> It's more susceptible to malicious input than 'docker pull' is. That said, it has a stronger trust story.

What does that mean?

Let's look at how 'docker pull' and 'docker load' compare:

- They're both loading an image into docker.

- They're both NOT checking any signatures in any meaningful way.

- All that's different about `docker pull` is that it's fetching directly from the network.

How could 'docker load' possibly be more susceptible to malicious input?

Clearly, there is never any circumstance in which it is safe to load arbitrary, untrusted content through docker. I fail to see how loading untrusted content from the network could be safer than loading untrusted content from disk.


It's mostly how Docker performs the 'docker load' and 'docker pull' that results in a different security story, not so much how it ultimately extracts the files and applies them to the filesystem.

When you use 'docker pull', you're explicitly loading a specific tag and the layers associated with it.

Docker 'load' doesn't load an image and a tag as specified by the user; it loads an arbitrary number of images and tags as specified in the provided archive.
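One way to see what an archive would actually do is to inspect its manifest before loading it. A rough sketch, assuming the 'docker save' tar layout with a top-level manifest.json listing RepoTags (the function name is mine, not Docker's):

```python
import json
import tarfile

def list_tags(archive_path):
    """Return every repo:tag entry the archive would create on 'docker load',
    read from the manifest.json inside a 'docker save'-style tarball."""
    with tarfile.open(archive_path) as tar:
        manifest = json.load(tar.extractfile("manifest.json"))
    tags = []
    for entry in manifest:
        # RepoTags can be null for untagged images in this format.
        tags.extend(entry.get("RepoTags") or [])
    return tags
```

Reviewing that list before loading at least tells you which tags the archive will touch, which 'docker load' itself never asks you about.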

For one example of an actual vulnerability: up until Docker 1.3.3, it was possible for 'docker load' to execute path traversal attacks based on malicious image IDs. This was largely mitigated by the 'docker pull' code and URL semantics. There was still some risk from malicious registries, but again, that was mitigated by having a trusted registry behind TLS.
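The defense against that class of bug is input validation: image IDs are 64 lowercase hex characters, so anything else should be rejected before it is ever used to build a filesystem path. A hedged sketch of the idea (the helper name is mine, not Docker's actual fix):

```python
import os
import re

# Image IDs in this era of Docker are 64 lowercase hex characters.
HEX_ID = re.compile(r"^[a-f0-9]{64}$")

def safe_layer_path(base, image_id):
    """Build a path for an image ID, rejecting anything that isn't a valid
    hex ID and double-checking the result stays inside the base directory."""
    if not HEX_ID.match(image_id):
        raise ValueError("invalid image ID")
    path = os.path.realpath(os.path.join(base, image_id))
    if not path.startswith(os.path.realpath(base) + os.sep):
        raise ValueError("path escapes base directory")
    return path
```

An ID like `../../etc/passwd` fails the regex check, and the realpath comparison is a belt-and-suspenders guard even if the first check were bypassed.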


Running something inside a container does very little to actually secure it. If someone can execute arbitrary code inside a container, they can use a kernel exploit to jump outside of the container. It's important to always keep in mind that containers provide resource isolation, not a security boundary.


Running in a container restricts what processes may do in userspace. It's possible to take away CAP_SETUID, so that even if a process could execute arbitrary code, it could not leverage a setuid binary. With a container, there is a whole set of capabilities that admins may take away from their processes. Some of these, yes, actually could protect against certain kernel exploits.
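On Linux you can see this directly: the bounding set appears as a hex mask in the CapBnd line of /proc/&lt;pid&gt;/status, and CAP_SETUID is capability number 7. A small sketch of checking such a mask (assuming that /proc format):

```python
CAP_SETUID = 7  # capability number, as defined in <linux/capability.h>

def has_cap(cap_mask_hex, cap_number):
    """Check whether a capability bit is set in a hex mask, in the format
    shown by the CapBnd/CapEff lines of /proc/<pid>/status."""
    return bool(int(cap_mask_hex, 16) & (1 << cap_number))
```

Inside a container started with `--cap-drop SETUID`, the CapBnd mask would have that bit cleared, and `has_cap(mask, CAP_SETUID)` would return False.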

However, no, containers do not entirely protect against kernel exploits. Yet, is that what we're talking about here, where the alternative is to simply execve(2) a binary, and possibly setuid to a non-root UID? Running processes with a restricted capability set and new namespaces is generally more secure than running them without restricted capabilities and without namespaces.

So yes, actually, containers do provide security. No, it's not absolute, but it's better than the alternative if the alternative is a naive 'execve' or one of its many frontends.



