Reflections on Trusting Trust (1984) [pdf] (cmu.edu)
75 points by kalium-xyz 32 days ago | 19 comments



56 submissions: https://hn.algolia.com/?query=Reflections%20on%20trusting%20... (Reposts are ok on HN after a year or so: https://news.ycombinator.com/newsfaq.html)

These seem to be the threads and they're mostly small:

2017 https://news.ycombinator.com/item?id=13569275

2015 https://news.ycombinator.com/item?id=10698537

2015 https://news.ycombinator.com/item?id=9183106

2014 https://news.ycombinator.com/item?id=8662876

2011 https://news.ycombinator.com/item?id=2642486

2008 https://news.ycombinator.com/item?id=300350

Like a lot of classics, it seems to have actually been discussed less on HN than we would all assume.


"Reflections on trusting trust" is specifically called out as a motivation for the Bootstrappable Builds project[0], which I think is an interesting approach at regaining some of the trust in our software. Another, better known project is the work on Reproducible Builds[1], and of course it would be nice to have auditable hardware based on clearly documented designs.

[0] https://www.bootstrappable.org/

[1] https://reproducible-builds.org/


For some other projects that more or less explicitly try to approach this challenge from various angles, see e.g.:

https://dwheeler.com/trusting-trust/

https://github.com/akkartik/mu


There is also GNU Mes, which is able to build a GCC toolchain starting from nothing but a Scheme interpreter:

https://www.gnu.org/software/mes/

Practical implementation notes can be found on the Guix blog:

https://guix.gnu.org/blog/2019/guix-reduces-bootstrap-seed-b...

https://guix.gnu.org/blog/2020/guix-further-reduces-bootstra...


Ken Thompson is a giant in our field. This paper convinced me that pragmatically all computer security reduces to theater. There are too many moving parts and the likelihood of some TLA influencing one of them such that it has a backdoor is so high that you can comfortably assume that any information you enter into a computer network is publicly available. One of my ex-NSA buddies told me that they don't even bother attacking crypto algorithms, they just attack the implementation.


"Trusting Trust" and the halting problem are often misinterpreted in similar ways. Unbounded skepticism is not right take-away from either of those results.

I don't have a solution to the halting problem, but I can definitely tell you that `while(false) {}` terminates.

Similarly, I can't resolve the problem of trusting the people who write the libraries I use, but I can identify a trusted computing base and substantially reduce, or sometimes even eliminate, the risk of attacks that don't violate the integrity of the TCB.

> This paper convinced me that pragmatically all computer security reduces to theater.

I have yet to be bitten by a smashed stack using any language other than C/C++.

I have yet to be bitten by a SQL injection using any interface to a DB other than passing raw strings around.
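A minimal sketch of that point, using Python's built-in sqlite3 module (the users table and the inputs are made up for illustration): the raw-string query can be rewritten by its input, while the parameterized one passes the value out-of-band, so it can never be parsed as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Attacker-controlled input crafted to subvert a raw-string query.
user_input = "alice' OR '1'='1"

# Vulnerable: string interpolation lets the input rewrite the query itself.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % user_input
).fetchall()

# Safe: the driver binds the value as data, never as SQL text.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('admin',)] -- the injection succeeded
print(safe)        # []           -- the literal string matched no row
```

The point isn't that sqlite3 is special; any interface that binds parameters instead of concatenating strings rules out this whole class of bug by construction.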

As Ken notes, I still have to trust the developers of those libraries/languages (or audit the code myself). But that's okay. Nailing down a chain of trust is possible.

Ruling out certain classes of security vulnerabilities is possible.

> One of my ex-NSA buddies told me that they don't even bother attacking crypto algorithms, they just attack the implementation.

Again, a solvable problem[1]. Within the next 5-10 years, they might have to go back to attacking the algorithms.

[1] See e.g. https://www.wireguard.com/papers/zinzindohoue-bhargavan-prot... Note that the first sentence of Ken's paper still holds, but the number of people you have to trust can be reduced if you only need to trust the verifier's kernel instead of every line of code committed from every contributor.


It's possible to provably construct a program that will halt; that is the whole point of loop invariants. Similarly, it is possible to provably construct a "secure" program, up to physical limitations. In both cases, however, the semantics of the programming environment must be known to the programmer. Even without a malicious adversary, there are gotchas at every level (hardware bugs, compiler bugs, programming errors, incorrect configuration, and who knows what else) that impair the programmer's ability to understand the actual semantics of that environment. Add to that the potential for maliciously clever modifications such as the one in the Thompson paper, and the undeniably large benefit to any number of parties that might wish to make them, and the only rational conclusion is that no computing system deserves to be trusted in itself.
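To make the termination point concrete, here is a minimal sketch (Euclid's algorithm, chosen for illustration) where the invariant and variant are spelled out in comments; a loop whose body strictly decreases a nonnegative integer must halt.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm.

    Invariant: gcd(a, b) is unchanged by each iteration.
    Variant: b is a nonnegative integer that strictly decreases
    each pass (since a % b < b), so the loop provably terminates.
    """
    assert a >= 0 and b >= 0
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

This is the standard shape of a termination proof: exhibit a well-founded measure that every iteration decreases. It says nothing, of course, about whether the compiler running it is honest.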

I don't trust other levels of the alleged chain of trust either. For example, I consider it an absolute certainty that every competent globally active intelligence agency in the world has assets inside Facebook and Google, and most likely others. The cost and difficulty of inserting or recruiting an asset are so low, and the potential benefits so high, that it's certain to have happened; in fact, it already has[1].

By the way, while my ex-NSA buddy didn't elaborate on what he meant by attacking the implementation, I'm pretty sure that includes subverting an implementer.

Edit: I don't think we really substantially disagree, the above is meant to be elaboration not rebuttal.

[1] https://www.washingtonpost.com/world/national-security/nsa-i...


The difference between security thinking and free-floating paranoia is the threat model. Are you protecting against an actor with goals and capabilities, or are you trying to make yourself feel better? In the first case, you can make it harder for that actor to use their capabilities to get what they want than they're capable of justifying. In the second case, eat some chocolate. It won't make you any more secure, but it might help you feel better.


> Are you protecting against an actor with goals and capabilities, or are you trying to make yourself feel better?

It is difficult to protect against state actors.

But a lot of attackers are not state actors. They are private actors (whether lone individuals or groups) going after the low-hanging fruit. Protecting against this threat is a lot easier.

And for a lot of enterprises, the second category is a bigger threat than the first. Even if the NSA steals all your data, they are unlikely to release it to the general public or use it for extortion. The latter group is much more likely to do things like that. If the NSA hacks you, you quite possibly will never know they've done it. If the latter group hacks you, they'll make sure you know it by the pain they cause.

(Of course, I realise how for companies in certain sensitive industries, such as defence, aerospace, semiconductor fabrication, etc, the threat of state-sponsored cyber-espionage from competing international powers is a real threat – however, I think in those cases one's own country's three-letter agencies are often keen to be helpful.)


That's what he wants you to think.


Worth looking at David A. Wheeler’s 2009 work for countering trusting trust attacks: https://dwheeler.com/trusting-trust/


Thank you so much for the reference! I'm that David A. Wheeler. If anyone has questions about my work, just let me know.

Here's how to contact me by email if you prefer: https://dwheeler.com/contactme.html


Every time this comes up, I'm reminded of this answer to the Quora question "What is a coder's worst nightmare?"

https://qr.ae/pNKf3M


If you enjoy Reflections on Trusting Trust, you will absolutely love Coding Machines by Lawrence Kesteloot. The same concept is explored further as fiction.

(If you like, don't forget to buy it) https://www.teamten.com/lawrence/writings/coding-machines/


For anyone who's put off by the visual quality of the pdf, I made a nicer copy of it: https://nerfsoftware.com/reflections.html


Made a short explainer video covering this: https://youtu.be/Ow9yMxJ8ez4


Thank you for sharing and making it.


I've always wanted to write something about how this is a more general philosophical problem that goes beyond computer security. Even Descartes failed to be skeptical enough to account for it https://en.m.wikipedia.org/wiki/Cartesian_circle


A classic paper. The general struggle in security is getting to a point where you minimize the trust required by your application.



