"Controlling bits like the Carry flag is essential to the security of all crypto algorithms (techniques like DPA and "timing attacks" try to discover this information by observing the operation of the CPU). If you have a hardware way to transfer just this ONE bit, then most crypto available today is useless."
He kept pointing out, probably from experience, that you could just modify a bit here, an MMU there, or add an RF circuit to bypass plenty of protections. Nobody would even notice analog additions because "their digital tools can't see it." Spotting them would take careful reverse engineering. An old risk, already deployed in production, re-invented in a neat new paper with a new technique.
Honestly, I originally got the subversion idea from the MULTICS Security Evaluation. Schell and Karger, not Thompson, should get credit for the first attack like this: they introduced software that kept poking at a memory location until the MMU experienced an intermittent failure. They got in because software people assumed the HW always worked. They also invented the basis of the "Thompson Attack" (see note below). So, I predicted HW trojans sitting on the MMU, IOMMU, PCI, TRNGs, and a few other things, using non-standard circuits that nonetheless preserve timing, etc. So, a few years ahead on this one.
Note: Karger and Schell, in the same project, also invented the idea of subverting a PL/I compiler to insert malicious code into anything compiled with it, including the OS. Thompson read that and expanded on it with Trusting Trust. Now the Karger and Schell attack is called the "Thompson Attack." Nah, the founders of INFOSEC thought of that one first, too. Take that, Thompson fanboys! :P
"It was noted above that while object code trap doors are invisible, they are vulnerable to recompilations. The compiler (or assembler) trap door is inserted to permit object code trap doors to survive even a complete recompilation of the entire system. In Multics, most of the ring 0 supervisor is written in PL/I. A penetrator could insert a trap door in the PL/I compiler to note when it is compiling a ring 0 module. Then the compiler would insert an object code trap door in the ring 0 module without listing the code in the listing. Since the PL/I compiler is itself written in PL/I, the trap door can maintain itself,
even when the compiler is recompiled."
Given that backdrop, it's hard to say whether they put the trapdoor in the source or the object code of the compiler. The key sentence is ambiguous: "since the PL/I compiler is itself written in PL/I." Either they have a backdoor in its source code, or the backdoored object code is the PL/I compiler that will be used to re-compile any PL/I source. The next paragraph indicates they insert the trapdoor into another routine using object code that closely matches what the compiler produces from PL/I source. So I'm assuming, with some uncertainty, that they bugged the object code of the PL/I compiler so it adds the trapdoor to itself and to target executables on every compile, with nothing left in the source.
Then, the Thompson paper simply says:
"First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere."
Sounds like they're doing the same thing, except the MULTICS attack uses assembly code directly. They might have coded it in PL/I first, then input the object code directly. That would make the two attacks equivalent. Who knows. That they each bug the compiler at the object level with no source-level evidence seems accurate. In that case, the Thompson attack is the MULTICS PL/I attack applied to C, with clear use of C itself for the subversion artifact.
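The mechanic both papers describe can be sketched in a few lines. This is a toy model, not either paper's actual code: all names here are hypothetical, "object code" is just a string, and a "compiler binary" is a closure, so you can watch the trapdoor survive a recompilation from clean source.

```python
# Toy model of the self-perpetuating compiler trapdoor (Karger/Schell's
# PL/I attack, later popularized as the "Thompson attack"). Hypothetical
# names throughout; this only illustrates the propagation logic.

LOGIN_SRC = "source: login"        # clean login source, no backdoor in it
COMPILER_SRC = "source: compiler"  # clean compiler source, no trapdoor in it
BACKDOOR = " +[accept penetrator's password]"

def make_compiler(has_trapdoor):
    """Return a 'compiler binary'. The trapdoor exists only here, in the
    'object code' (the closure), never in LOGIN_SRC or COMPILER_SRC."""
    def compile(src):
        if has_trapdoor and src == LOGIN_SRC:
            # Pattern 1: bug the login program's object code.
            return "object: login" + BACKDOOR
        if has_trapdoor and src == COMPILER_SRC:
            # Pattern 2: compiling the clean compiler source yields a new
            # binary that still carries the trapdoor -- it maintains itself.
            return make_compiler(True)
        return src.replace("source", "object")  # honest compilation
    return compile

bugged_cc = make_compiler(True)
new_cc = bugged_cc(COMPILER_SRC)      # recompile compiler from clean source
print(BACKDOOR in new_cc(LOGIN_SRC))  # True: login still bugged, no trace in source
```

The point of the closure trick is exactly the ambiguity discussed above: the clean source never changes, yet every binary descended from the bugged one keeps both trapdoor patterns.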