The paper on Trusting Trust is as much a thought experiment as anything. It's not really about compilers, and thinking of it as a "compiler hack" misses the point somewhat. The bigger point of that hack is that if a component of a system is integral and implicitly trusted, then compromising that component means the entire system can no longer be trusted.
It seems sort of obvious when stated that way, but Ken's demonstration pointed out a few significant details. First, such an attack didn't have to be large: only two or three changes to the compiler. Second, such an attack was practical to implement, not just theoretical. Third, once a system is compromised, it can't even be relied upon to tell you that it is compromised (his follow-up included a third compiler tweak to alter the disassembler), which was probably the scariest point of all.
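To make the mechanism concrete, here is a minimal sketch (in Go, purely for illustration; Thompson's original was a C compiler) of the two-part recognizer he described: one pattern injects a backdoor into anything that looks like the login program, and the other re-injects the whole trick into anything that looks like the compiler itself, so it survives a rebuild from clean sources. All of the function names and marker strings here are hypothetical.

```go
// Toy sketch of the trusting-trust pattern (not Thompson's actual code; the
// source markers and backdoor string are made up). The "compiler" is just a
// string-to-string pass: it compiles most input honestly, but it recognizes
// two special programs and patches them on the way through.
package main

import (
	"fmt"
	"strings"
)

// Patch 1: the backdoor slipped into anything that looks like login.
const backdoor = `if password == "magic-backdoor-word" { return true } // injected`

// Patch 2: a stand-in for this whole recognizer. In the real attack it is
// produced with a quine-style trick so the compiler binary can re-insert it
// when it compiles a clean copy of its own source.
const selfPatch = `/* recognizer and both patches re-inserted here */`

func compile(source string) string {
	switch {
	case strings.Contains(source, "func checkPassword("):
		// Looks like the login program: inject the backdoor after the signature.
		return strings.Replace(source,
			"func checkPassword(user, password string) bool {",
			"func checkPassword(user, password string) bool {\n\t"+backdoor,
			1)
	case strings.Contains(source, "func compile("):
		// Looks like the compiler itself: propagate the trick into the new binary.
		return source + "\n" + selfPatch
	default:
		return source // everything else is compiled honestly
	}
}

func main() {
	cleanLogin := "func checkPassword(user, password string) bool {\n\t// real check\n}"
	fmt.Println(compile(cleanLogin)) // the backdoor appears even though the source is clean
}
```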
The type of attack that Ken described is still relevant, even if it's not the compiler that's the target. For example, if you could compromise Windows Update, you could do basically the exact same thing. That's why Windows Update requires crypto signatures.
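As a rough illustration of why signing matters (a sketch only, not Microsoft's actual update scheme): the updater ships with a trusted public key and refuses any payload whose signature does not verify, so compromising the distribution channel alone is not enough. The key handling and names below are made up.

```go
// Minimal sketch of signature-checked updates using Go's standard library.
// The vendor signs each payload with its private key; the client holds only
// the public key and rejects anything that does not verify.
package main

import (
	"crypto/ed25519"
	"errors"
	"fmt"
)

func applyUpdate(vendorKey ed25519.PublicKey, payload, sig []byte) error {
	if !ed25519.Verify(vendorKey, payload, sig) {
		return errors.New("update rejected: signature does not verify")
	}
	// A real updater would install the payload here; this point is only
	// reached for payloads signed by the vendor's private key.
	return nil
}

func main() {
	// Stand-in for the vendor's key pair; in practice only the public half
	// ships with the client.
	pub, priv, _ := ed25519.GenerateKey(nil)

	payload := []byte("patch bytes")
	sig := ed25519.Sign(priv, payload)

	fmt.Println(applyUpdate(pub, payload, sig))            // <nil>: accepted
	fmt.Println(applyUpdate(pub, []byte("tampered"), sig)) // rejected
}
```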
A variation of the threat very much exists: do you know (meaning, there are no unknown unknowns) that no piece, anywhere in your toolchain (OS, compiler, library, framework, build tool, CI, etc.), contains malicious code? At the time Thompson made this comment, he talked about the C compiler because it was the one bit of shared code everything else depended on. Today, the attack surface is much, much larger.
If I were running a foreign intelligence service (or a sophisticated mafia operation (potato, potato)), I'd target trusted members of the communities around popular open source projects. A high-ranking committer to a library that Rails depends on slips a bit of clever code into a fix for a legitimate problem, and I can compromise a large number of websites.
I think you vastly overestimate the difficulty of finding a code-execution bug in Rails relative to the resources of a nation-state. You don't need to do anything remotely suspicious; just hire vulnerability researchers. They're ridiculously cheaper than aircraft carriers.
Or, to put it another way, anybody with $20k can uncover an RCE in Rails without needing to suborn any community member.
The crude variation of the attack vector I'm describing is a backdoor, that's correct, but that's also what Thompson imagined his attack being used for.
Thompson's scenario is more devious, more difficult to detect, and a lot, lot harder to implement (near impossible, as some of the SO answers point out). But at that time, people didn't generally download millions of lines of code written by perfect strangers and incorporate them into their programs, so that was the only way to pull such a thing off.
"At some point, aboveboard modifications to the compiler or login program might get to the point where they defeat the backdoor's compiler-recognizer/login-recognizer, and subsequent iterations would lose the backdoor. This is analogous to a random biological mutation granting immunity to certain diseases."
It seemed very weird at the time--the AT&T lawyers making him deny what he had done.
And it is possible that the denial was about the general release of the exploit, rather than whether or not he really did it. My memory could be slightly off for events that far back.
It was a minor industry rag, and the story was published shortly after the hack was announced. It might have been the same paper in which Dennis Ritchie told of auditing the behavior of Coherent and said publicly that it was definitely not a clone. I think the lawyers were upset about that as well.
From: Ken Thompson <ken@google.com>
Date: Wed, Sep 28, 2011 at 6:27 PM
Subject: Re: Was compiler from "Reflections" ever built or distributed?
To: Ezra Lalonde <ezra@usefuliftrue.com>
I've read about this scenario before but I didn't realize it was Ken Thompson who proposed it. Very cool.
When I read the title I thought maybe Ken Thompson's compiler hack was Go (golang), which he co-created, and I was going to hear a bunch of Python or Node fans talk about why Go is not a threat :)
Did you mean that the compromised printer would be made to produce compromised output? If so, the downvotes may be because it's relatively easy to audit the output of a 3D printer, compared to the output of a compiler.
It's a common practice, for example, to x-ray a critical piece of structure to verify it's free of cracks. A similar inspection would generally show if the hidden internals of a printed object conform to the original design.
Auditing the output of a compiler, on the other hand, would be devilishly hard, compounded by the fact that every program it has ever produced is suspect. Given that this might include your kernel, filesystem, network drivers, etc., it's hard to find an adequate 3D printer parallel. Yes, a bit of crucial structure might be compromised by a pwned printer, but it would probably be relatively easy to spot.
So, the full argument my friends and I were having was that it would be impossible for a government to enforce printers that could not print guns. My assertion was that it was likely more doable than my friend would care to admit. My claim centered on this idea, as well as the identifying marks laser printers put on printouts and a belief that scanners already notice when money is being scanned. (No, I don't know the details of all that, so I fully concede my argument is not airtight.)
Actually, I was thinking more that you could target common ammunition types.
Regardless, I thought it keyed into this idea, because so long as the printer ensured that any printer it created also did this, it fits the description perfectly. (And yeah, I realize I just jumped from 3D printers to replicators. My knowledge in this area is as shallow as it seems. :) )
Edit: forgot to add, the idea was originally that the printer would make the gun in such a way that it would be identifiable as coming from that printer. Not that it would have to refuse to make them.