Do not assume all software you currently use gets these things right.
> Delegated trust can be revoked at any time by the delegating role signing new metadata that indicates the delegated role is no longer trusted.
This is a mistake, as it means that a client must receive the new metadata in order to be made aware of the revocation. The correct approach is either to delegate trust for a particular time period (determining how long is a risk-based decision), or to specify an online trust check (this fails safe).
I.e. in either case there are multiple time-based controls in place.
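A time-bounded delegation can be enforced with nothing more than an expiry check on the client. A minimal sketch in Python, assuming an ISO-8601 expiry field (the `delegation_valid` helper and field format are hypothetical, not TUF's actual metadata layout):

```python
from datetime import datetime, timezone
from typing import Optional

def delegation_valid(expires_iso: str, now: Optional[datetime] = None) -> bool:
    """Treat delegated metadata as trusted only inside its validity window,
    so trust lapses on its own even if the client never fetches the newer
    metadata that would announce a revocation."""
    now = now or datetime.now(timezone.utc)
    return now < datetime.fromisoformat(expires_iso)
```

This is the "fails safe" property: a client cut off from new metadata loses trust when the window closes, rather than trusting a revoked delegation forever.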
I kind of think it's an important part of secure management of libraries. As the Fosshub breach last week showed, software repos are tempting targets for attackers...
I really prefer the system package managers, but what if things don't get new packages in a timely manner? When I was on OpenStack security, we used the Canonical repos since they updated faster than the Debian repos, but some major exploits would be left out there for half a month (which led down the path of: do we want to run our own CI builds so we can deploy CVE patches immediately?).
This isn't even getting into the issues with Java/Scala/Groovy projects that can ship jars with exploits embedded within the project (most OS packages let projects keep their own jars instead of using a set of system jars; Gentoo tries to use system jars, but oh boy is that its own crazy mess). There are lots of programs that embed C/C++ libraries for stability/error reporting (I'm looking at you, Firefox/LibreOffice), and hence there are scanning tools for finding these embedded libraries with security issues (someone did a talk on this at Ruxcon 2012, I believe).
TL;DR It's turtles all the way down.
I'm writing a software updater right now, and security is one of the major concerns. The target platform, both client side and server side, is Linux, and I write most of the code in Python. That looks like a great opportunity to use TUF. But it is not. My updater has so many specific requirements that TUF is not nearly what I need. More than that, TUF is not a subset of what I need and cannot be a foundation for what I need, even from a security perspective.
Initially the client knows only its own hardware ID (which is just a sha256 of part serials, MAC addresses, etc.), and registers itself on the server, which is allowed only from trusted networks. During registration the client receives a unique HTTP client certificate. Later an administrator approves the registration and assigns the client a specific stream. A stream may include sensitive information, like unique credentials to access some other services. Some overlays that are not secret, like executables, 3rd-party libraries, and OSS stuff, are just rsynced to save time and bandwidth, but others, like configuration and credentials, are downloaded over HTTP with client-certificate authentication. Overlays are merged on the client side in memory and applied in a semi-atomic way (a series of file moves within a partition, with no writes or other I/O in between).
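The "series of file moves" trick works because a rename within one filesystem is atomic. A minimal Python sketch of that commit pattern (a hypothetical helper, not the commenter's actual code; flat file names only, for brevity):

```python
import os
import tempfile

def apply_overlay(staged: dict[str, bytes], target_dir: str) -> None:
    """Write every file to a temporary name on the same partition as its
    destination, then commit with a series of renames. Each rename is
    atomic, and no other I/O happens between them -- roughly the
    'semi-atomic' apply described above."""
    renames = []
    for name, data in staged.items():
        final_path = os.path.join(target_dir, name)
        # Stage inside target_dir so the later rename stays on one filesystem.
        fd, tmp_path = tempfile.mkstemp(dir=target_dir)
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())
        renames.append((tmp_path, final_path))
    # Commit phase: nothing but renames from here on.
    for tmp_path, final_path in renames:
        os.replace(tmp_path, final_path)
```

Note `os.replace` is only atomic within a single filesystem, which is why the staging files go in the target directory itself rather than in `/tmp`.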
There are a few other requirements: log if a device tries to register a second time, log if a device tries to register from a non-trusted network, log every update attempt, successful or failed, and so on. And this is of course just a brief overview. For instance, the administrator approving a registration is not a single person but a group of administrators, each with different permissions on streams. A client cannot download the stream of some other client, and cannot get any information on the overlays used by other clients, etc.
I do not see any way for a generic updater [framework] (not only TUF) to fulfill my requirements without causing more problems than it solves.
A couple things you pointed out are definitely hard with TUF right now:
> For instance, administrator approving registration is not a single person, but group of administrators, each with different permissions on streams
TUF is being actively developed and this type of approval pattern is planned: https://github.com/theupdateframework/tuf/issues/346
> Client cannot download stream of some other client, and cannot get any information on overlays used by other clients, etc.
This is currently a limitation of TUF, but it is known and there are some tentative plans to address it. If you have specific use cases you'd like to contribute to the discussion, there are GitHub issues and the mailing list.
I think it's worth pointing out that even if you don't use TUF, you need to be thinking about the vulnerabilities it's designed to protect against. Once you start doing that you'll probably end up with something similar modulo formats and transport.
"Those threats" in this case are things which major package managers you rely on currently get dangerously wrong. (I'd rather leave this sans naming of names and as a "think critically" thing, but seriously: for one example, look at apt and how it handles rollback prevention.)
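Rollback prevention, for what it's worth, is easy to state even if it's often gotten wrong: a client must refuse metadata older than anything it has already accepted. A toy sketch of the invariant (a hypothetical helper illustrating the check, not apt's or TUF's actual logic):

```python
def accept_version(seen_version: int, offered_version: int) -> int:
    """Monotonicity check: a mirror (or attacker) replaying old,
    once-validly-signed metadata must be rejected, not installed.
    Returns the new high-water mark to persist on the client."""
    if offered_version < seen_version:
        raise ValueError("rollback detected: offered metadata is older than cached state")
    return offered_version
```

The subtle part in practice is not the comparison but durably persisting `seen_version` across runs, since an attacker who can reset that state defeats the check.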
TUF is not a pointless competing new standard. It contains useful information and constructive thoughts. If you think the 15-standards XKCD is a relevant dismissal... try giving it another read in the morning.
(I'm sorry, but this XKCD really raises my hackles when used wantonly. I've never seen such a good description of threats to update systems as the TUF spec. I link to it constantly as an educational material. It's really not constructive to dismiss it with a webcomic link.)
Seriously. I can't think why this wouldn't "just work".
Git protects against:

* Arbitrary installation attacks. (Because metadata is signed.)
* Extraneous dependencies attacks. (Because metadata is signed.)
* Mix-and-match attacks. (Because a git commit hash depends on its parents.)
* Rollback attacks, assuming you make sure you only fast-forward when pulling.
* Wrong software installation.

Git does not protect against:

* Indefinite freeze attacks. An attacker could just replay the same signed git repository for a long time.
* Key compromise. Everyone can sign any commit; there is no separation of roles.
So I don't see how `git` is a valid solution to the problems that TUF wants to solve.
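For what it's worth, the fast-forward caveat on rollback reduces to an ancestry check: a pull is a fast-forward only if the old tip is an ancestor of the new tip. A toy Python sketch over an explicit parent map (real code would just run `git merge-base --is-ancestor old new`; the `parents` dict here is a stand-in for git's commit DAG):

```python
def is_fast_forward(parents: dict[str, list[str]], old: str, new: str) -> bool:
    """True iff `old` is reachable from `new` by following parent links,
    i.e. updating from `old` to `new` only adds history and never
    discards it -- the property that blocks a replayed older branch tip."""
    stack, seen = [new], set()
    while stack:
        commit = stack.pop()
        if commit == old:
            return True
        if commit in seen:
            continue
        seen.add(commit)
        stack.extend(parents.get(commit, []))
    return False
```

A mirror serving a rolled-back tip fails this check, because the client's current tip is no longer an ancestor of what's offered.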