Perhaps I missed something though? (https://www.youtube.com/watch?v=at72dhg-SZY&feature=youtu.be...)
First, TUF gives you freshness guarantees over the content. In GPG's model, a MITM or a malicious mirror can serve you old, known-vulnerable content that you'll accept as valid because the signatures still verify. This is not possible with TUF, because the metadata is additionally signed with a timestamping key and expires quickly, so a client can tell when it's being fed stale data.
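A minimal sketch of that freshness check, assuming a hypothetical verify_sig() helper and a unix-timestamp "expires" field for simplicity (a real TUF client resolves keys from root.json and uses ISO 8601 dates):

    import json, time

    def check_freshness(timestamp_metadata, signature, timestamp_key, last_seen_version):
        # hypothetical helper; the signature must come from the timestamp role's key
        if not verify_sig(timestamp_metadata, signature, timestamp_key):
            raise ValueError("bad signature on timestamp metadata")
        meta = json.loads(timestamp_metadata)
        # a replayed, stale copy fails one of these two checks even though
        # its signature is perfectly valid
        if meta["version"] < last_seen_version:
            raise ValueError("rollback: metadata older than previously seen")
        if time.time() > meta["expires"]:
            raise ValueError("expired metadata -- possibly a replaying mirror")
        return meta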
Second, TUF has a property called 'survivable key compromise', which basically means there is a hierarchy of keys involved in the system, each with a different responsibility and different security requirements. There's a root key that's kept offline, a targets key responsible for signing actual content, a timestamping key for freshness, and a snapshot key that ties the other metadata together. GPG's model does allow for signing subkeys, but they're rather clunky to use, and sadly many of the Linux package managers don't support them.
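Roughly, the root metadata pins which keys are trusted for which role; a simplified sketch of that layout (not the exact root.json schema, and the key IDs are placeholders):

    # simplified sketch of TUF's role hierarchy -- not the real root.json schema
    root_metadata = {
        "roles": {
            "root":      {"keyids": ["<offline-root-key-id>"], "threshold": 1},
            "targets":   {"keyids": ["<targets-key-id>"],      "threshold": 1},
            "snapshot":  {"keyids": ["<snapshot-key-id>"],     "threshold": 1},
            "timestamp": {"keyids": ["<timestamp-key-id>"],    "threshold": 1},
        },
    }

Compromising the online timestamp key, for example, doesn't let an attacker sign new content, and the offline root key can rotate it out.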
Finally, GPG's usability leaves something to be desired. Docker makes pushing and pulling images extremely easy, essentially making everyone a publisher of content. GPG works when publishing software is rare and you can take the time to learn a new utility to get security guarantees, but we wanted to make it so easy that anyone can do it.
For more background, this paper does a good survey of existing package managers and where they fall short: https://isis.poly.edu/~jcappos/papers/cappos_pmsec_tr08-02.p...
A wrapper would have done an awesome job at that... I use GnuPG daily, enter a passphrase once and boom. Mails are signed, my password manager unlocked. Where is the "unusability" in this?
TUF should be understood as a higher-level concept than GPG. There are additional features of the TUF spec that we'll be implementing in later versions, such as threshold signing (k of n signatures required for verification) and secure delegation.
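Threshold signing just means verification demands valid signatures from at least k distinct authorized keys; a minimal sketch (verify_sig() and the key objects are assumptions here, not Notary's actual API):

    def threshold_verify(data, signatures, authorized_keys, k):
        # count distinct authorized keys that produced a valid signature
        valid_keyids = set()
        for sig in signatures:
            for key in authorized_keys:
                if key.keyid not in valid_keyids and verify_sig(data, sig, key):
                    valid_keyids.add(key.keyid)
                    break
        return len(valid_keyids) >= k  # e.g. 2 of 3 release managers must sign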
For what it's worth, TUF could be implemented on top of GPG just fine. If folks have an appetite for that, we'd welcome contributions here: https://github.com/docker/notary
This presumes you use HTTP, have a compromised SSL cert, or have pissed off the NSA. At worst, one would be installing older packages with known vulnerabilities via a replay attack, not fresh code injected by the attacker. That's more the fault of the package manager running over HTTP than of GPG, as far as I can see.
Here's a paper covering 'survivable key compromise' by the same chaps: http://freehaven.net/~arma/tuf-ccs2010.pdf. Interesting stuff.
This is not taking content mirroring into account. TUF allows you to treat all mirrors as potentially malicious, letting anyone reliably deliver trusted content even over an untrusted network. GPG does not provide a way to detect active attacks other than signature verification.
The Python community is also looking at using this for its package manager, pip.
 Discussion: https://news.ycombinator.com/item?id=9708120
This seems like a lot of extra infrastructure and process in a space where there is already a lot of infrastructure and process.
Granted, this doesn't always apply to packages you build yourself.
As a rule of thumb, end-to-end trust and naming is most useful to developers ("How do I know I'm building on the right dependency, and using the latest and most secure version?"), and low-level hashing is most useful to ops ("How do I enforce a whitelist of containers allowed on my production cluster based on home-made PKI and policies?")
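For the ops side of that rule of thumb, the hash-based whitelist can be as simple as comparing a pulled image's content digest against an allowed set before admitting it to the cluster. A sketch, with a made-up policy store and a placeholder digest:

    import hashlib

    # hypothetical policy: content digests of images allowed in production
    ALLOWED_DIGESTS = {
        "sha256:<expected-image-digest>",
    }

    def admit_to_cluster(image_bytes):
        digest = "sha256:" + hashlib.sha256(image_bytes).hexdigest()
        if digest not in ALLOWED_DIGESTS:
            raise PermissionError("image %s not in the production whitelist" % digest)
        return digest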
Another distinction is that you can use Notary (and Docker Content Trust) with any kind of content -- for example Compose files, source artifacts, system packages used to build the container, etc.
Privilege escalation vulnerabilities are found in the Linux kernel fairly regularly -- like, monthly, sometimes weekly. An attacker who can run arbitrary code in your Docker container would only need to wait a couple weeks for the next vulnerability report (or poke around the kernel code and find a new one) and then hit you before you can patch. The most recent example is this batch of CVEs: http://www.openwall.com/lists/oss-security/2015/07/22/7
Part of the problem is that the Linux kernel API is gigantic with lots of obscure features that haven't been carefully vetted. One way to solve this problem is to drastically constrain the attack surface by doing things like using seccomp-bpf to block obscure system calls, not mounting /proc or /sys, etc. Unfortunately doing this will sometimes break apps. Usually the apps can be tweaked to work around the missing features.
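As a sketch of that approach, here's what an allow-list seccomp filter looks like using libseccomp's Python bindings (the syscall list is illustrative only, not Docker's or Sandstorm's real policy):

    import seccomp  # libseccomp's Python bindings (python3-libseccomp)

    # kill the process on any syscall not explicitly allowed below
    f = seccomp.SyscallFilter(defaction=seccomp.KILL)
    for name in ("read", "write", "exit", "exit_group", "brk", "mmap", "futex"):
        f.add_rule(seccomp.ALLOW, name)
    # everything else -- including obscure calls like modify_ldt -- is blocked
    f.load()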
Docker is not meant to be a sandbox. Docker is meant to be able to run any arbitrary Linux software. So Docker comes down on the side of compatibility, and does not use attack-surface-reduction techniques (unless you manually configure them, which no one does).
In contrast, Sandstorm.io (of which I am lead developer) prioritizes security over compatibility, and makes attack surface reduction mandatory for all apps. Some docs:
The second link is almost exactly a year old, but has proven true: we've seen a lot of kernel exploits in the last year that were non-events for Sandstorm. The above-mentioned CVE, for example, did not affect Sandstorm because we block the modify_ldt syscall.
Note that Google Chrome's sandbox pioneered these techniques -- they originally created seccomp-bpf.
- Not running as root.
- Enabling the no_new_privs prctl to prevent attacks on buggy setuid binaries (see the sketch after this list).
- Using an aggressive seccomp-bpf filter.
- Hiding /proc, /sys, most of /dev, etc.
- Other things I'm forgetting at the moment. :)
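The no_new_privs bit, for instance, is a single prctl call; a sketch using ctypes (PR_SET_NO_NEW_PRIVS is 38 in linux/prctl.h):

    import ctypes

    PR_SET_NO_NEW_PRIVS = 38  # from linux/prctl.h

    libc = ctypes.CDLL(None, use_errno=True)
    # after this, execve() can never grant more privileges than the process
    # already has, so setuid binaries become inert
    if libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_NO_NEW_PRIVS) failed")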
It's OK that Docker containers are not yet secure sandboxes, and it would be great if that changed.
But potentially breaking compatibility with existing apps is understandably not something Docker wants to do. (Whereas it's something sandstorm.io is happy to do, because apps already need to be tweaked in other ways for it.)