
Reflections on Trusting Trust (1984) [pdf] - jdnc
https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf
======
dwheeler
"Reflections on Trusting Trust" is really awesome; it's a very cogent
discussion of the problems of subverted tools (like compilers).

If you read that, you should also take a look at my PhD dissertation which
discusses how to counter it. It's "Fully Countering Trusting Trust Through
Diverse Double-Compiling" (2009), available at:
[https://www.dwheeler.com/trusting-trust](https://www.dwheeler.com/trusting-trust). It was discussed on Hacker News in 2016; here's the link:
[https://news.ycombinator.com/item?id=12666923](https://news.ycombinator.com/item?id=12666923)
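
The diverse double-compiling idea can be modeled with toy "compilers" in a few lines of Python. Everything here (the source string, the backdoor marker, the function names) is illustrative, not taken from the dissertation; a real check compares actual compiler binaries, assuming deterministic builds:

```python
# Toy model of diverse double-compiling (DDC). A "compiler" here is a
# function from source text to "binary" text.

COMPILER_SRC = "COMPILER: translate source to binary"

def trusted_compile(source):
    # An independent, clean compiler: a straightforward translation.
    return "BIN[" + source + "]"

def suspect_compile(source):
    # A Thompson-style subverted compiler: it compiles normally,
    # except it inserts a backdoor whenever it compiles the compiler.
    out = "BIN[" + source + "]"
    if source == COMPILER_SRC:
        out += "+BACKDOOR"
    return out

def run_binary(binary):
    # Return the compile function a "binary" implements: a backdoored
    # binary propagates the backdoor; a clean one compiles faithfully.
    return suspect_compile if "+BACKDOOR" in binary else trusted_compile

def ddc_check(suspect_binary):
    # Stage 1: compile the compiler's source with the trusted compiler.
    stage1 = trusted_compile(COMPILER_SRC)
    # Stage 2: recompile the same source with the stage-1 binary.
    stage2 = run_binary(stage1)(COMPILER_SRC)
    # Self-regeneration: the suspect binary rebuilds itself from source.
    self_regen = run_binary(suspect_binary)(COMPILER_SRC)
    # With deterministic builds, a clean compiler makes these identical.
    return stage2 == self_regen

clean = trusted_compile(COMPILER_SRC)
dirty = suspect_compile(COMPILER_SRC)
print(ddc_check(clean))   # True: stage 2 matches self-regeneration
print(ddc_check(dirty))   # False: the backdoor shows up in the diff
```

The point of the comparison is that the trusted compiler never touches the deployed binary directly; it only has to compile the (readable) source once, and the bit-for-bit comparison does the rest.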

Another related effort is reproducible builds; a good starting point is here:
[https://reproducible-builds.org/](https://reproducible-builds.org/)

~~~
jdnc
Yes, it's one of my favorite papers. Thanks for the pointers; very useful.

------
nickpsecurity
This is a nice elaboration of the attacks and issues discovered by Paul Karger,
a co-founder of INFOSEC, during his pentest of MULTICS in the early 1970s. That
included a proposed PL/I compiler that inserted vulnerabilities into itself
when compiled. Thompson received the report while on the MULTICS team. The
solution was called high-assurance security, which culminated in the Orange
Book of the TCSEC. It came out in SCOMP about the same time Thompson started
writing on the original problem.

[https://www.acsac.org/2002/papers/classic-multics.pdf](https://www.acsac.org/2002/papers/classic-multics.pdf)

Paul helped invent much of INFOSEC from scratch, with many of his lessons
rediscovered over time by mainstream INFOSEC, which largely ignores its
predecessors' work. Here's Paul's other work for anyone interested:

[https://www.semanticscholar.org/author/Paul-A-Karger/2467751](https://www.semanticscholar.org/author/Paul-A-Karger/2467751)

Back to this topic: the definitive solution is a high-assurance compiler
combined with the SCM techniques presented best by David Wheeler:

[https://www.dwheeler.com/essays/scm-security.html](https://www.dwheeler.com/essays/scm-security.html)

~~~
dwheeler
Paul Karger is well-known and widely cited by people who know about computer
security / information security. Yes, there are a lot of snake-oil salesmen
and people who think reading one book makes them experts. However, since they
generally don't know very much, it's unsurprising that they wouldn't know
about Karger either. Karger's work has been influential.

Oh, and thanks for the citation about my paper on SCM security!

~~~
nickpsecurity
It's true in CompSci but not in popular INFOSEC. Most of them cite Thompson as
the original, definitive work, as he likely intended when he didn't credit
Karger in the first paper. The people discussing these things also don't know
about all the attacks and solutions Karger found. If you need evidence, just
compare most security-oriented VMMs to Karger's for the VAX. It's as if nothing
got passed down. They even rediscovered covert channels in cloud VMMs around
2010 or so. So, I keep referencing him on topics like these.

"Oh, and thanks for the citation about my paper on SCM security!"

You earned it. :) One thing I'm still curious about is who originated the
high-security techniques for SCM. Back in Karger's day, they just put it on
paper in safes. I'm wondering who did the fundamental work on making it
electronic and secure, though. I figure it's in your collection, but I'm asking
in case you know offhand.

~~~
dwheeler
I don't think Ken Thompson meant to omit the names of Paul Karger and Roger
Schell. At the time it was often extremely difficult to find a paper unless you
already knew where it was, whereas today you can do a search and instantly find
it and read it: "Multics Security Evaluation: Vulnerability Analysis" by
Paul A. Karger and Roger R. Schell, June 1974,
[http://seclab.cs.ucdavis.edu/projects/history/papers/karg74.pdf](http://seclab.cs.ucdavis.edu/projects/history/papers/karg74.pdf)

I've had trouble finding some of the earlier work on high-security SCM; I do
reference what I found. If you find something important I'm missing, send me
an email:
[https://www.dwheeler.com/contactme.html](https://www.dwheeler.com/contactme.html)

~~~
nickpsecurity
The paper from the NPS in 2002 referenced the Air Force as the inventors of
configuration management. As a military organization, they might have built
security implications into it. Found some pay dirt.

[http://www.dtic.mil/dtic/tr/fulltext/u2/650214.pdf](http://www.dtic.mil/dtic/tr/fulltext/u2/650214.pdf)

[https://www.computer.org/csdl/proceedings/afips/1967/5069/00/50690045.pdf](https://www.computer.org/csdl/proceedings/afips/1967/5069/00/50690045.pdf)

The first is a retrospective on the original document. It referenced accounting
procedures for the artifacts. It makes sense they'd look at it from an
accounting standpoint, as that's what they did with other things; the Ware
Report did that, too, for early INFOSEC. Lacking details from the manual, I
found a follow-up from 1967 that describes key details on p. 4 under procedural
data, especially concerning changes. It sounds like an early form of SCM
security. I'll have to try to find the computerized one later on, as I have a
feeling it happened independently in mainframes or minicomputers outside the
INFOSEC field.

Btw, your link to Zeigenhagen was dead when I tried it. DTIC to the rescue:

[http://www.dtic.mil/dtic/tr/fulltext/u2/a417577.pdf](http://www.dtic.mil/dtic/tr/fulltext/u2/a417577.pdf)

------
noobermin
You clearly can't take this argument ad nauseam, because then no one could do
anything without starting from scratch. Most Linux distros distribute binaries
plus source, and even source-based distros like Gentoo have you download
something like a tarball for bootstrapping. Of course, there are things like
checksums, but you still have to trust the source.
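
The checksum step can be sketched in Python. The file name and the idea of a distro-published digest here are hypothetical, and as the comment says, a checksum only moves the trust to whoever published the digest:

```python
# Verify a downloaded tarball against a published SHA-256 digest before
# unpacking it. Reads in chunks so large files don't fill memory.
import hashlib

def sha256_of(path, chunk=1 << 16):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Hypothetical usage; the digest would come from the distro over a
# separately trusted channel (signed release notes, HTTPS site, etc.):
# expected = "..."  # published digest
# if sha256_of("stage3.tar.xz") != expected:
#     raise SystemExit("checksum mismatch: do not unpack")
```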

Yet the world still turns, absent some terrible exceptions. It's almost as if
no system is truly safe and secure, which I think cultures have known through
the ages.

~~~
nickpsecurity
Sure you can. I've proposed how to do it here several times. You can start like
Niklaus Wirth with a simple interpreter (e.g. P-code) plus basic routines for
the hardware. You write the compiler as a series of small passes (no
optimization) in that interpreter. Compile it with itself. Add optimizations
and recompile.
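
The kind of simple interpreter this bootstrap starts from can be tiny. Here's a hand-checkable stack-machine sketch in Python; the opcodes are made up for illustration, not actual P-code:

```python
# A minimal stack-machine interpreter in the spirit of Wirth's P-code,
# small enough to audit by hand and eye.

def run(program):
    """Execute a list of (opcode, operand) pairs on a value stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":        # push a constant
            stack.append(arg)
        elif op == "ADD":       # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":       # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError("unknown opcode: " + op)
    return stack

# (2 + 3) * 4
prog = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
        ("PUSH", 4), ("MUL", None)]
assert run(prog) == [20]
```

The point of keeping the core this small is that one person can verify every line of it, then use it to run the first (unoptimized) compiler pass and work upward from there.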

For hardware, you start with a simplified CPU like a Forth or ZPU processor,
one you can check by hand and eye on an older process node. Reverify it with
itself. Then run the above step for the software on it. Optionally build better
EDA tools, too.

The diversity method ports the simple interpreter or compiler to a number of
CPU architectures or dev tools. Pick whichever you want. If you aim for the
same binary, you'll need a compiler designed to do that or reproducible builds
like Wheeler links to upthread.

~~~
noobermin
"There exists" is not the same as "it is practical." My sentiment doesn't
concern possibility, but feasibility for the average user.

~~~
nickpsecurity
It's quite feasible. You just pay a programmer or smart student to do it for
you using the published literature. Done. Even stronger if you trust them. Use
multiple, mutually distrusting pros on the same project if more trustworthiness
is needed. What you can't afford, you crowdfund or request grants for.

It's a question of priorities rather than feasibility. Most users or customers
don't care enough to invest the time or effort needed. End of story. Same one
as usual for strong INFOSEC.

I'll also add that there already exist certifying compilers for C and Standard
ML that both extract to ML. The extracted ML is simple enough to hand-compile
to ASM following one of two guides. So, not only can a human verify there's no
subversion, there's already a production compiler with a machine-checked proof
that's also been checked by humans [again].

------
bub_davos
I had never heard about this lecture until a few months back. I read the
annotated version on Fermat's Library, which really helped me understand it:
[http://fermatslibrary.com/s/reflections-on-trusting-trust](http://fermatslibrary.com/s/reflections-on-trusting-trust)

~~~
jdnc
Never really knew about Fermat's Library. From skimming, it looks like a really
great idea, especially for reading papers in areas you may not be that familiar
with.

