Is Ken Thompson's compiler hack still a threat? (programmers.stackexchange.com)
70 points by S4M on Jan 27, 2013 | hide | past | favorite | 27 comments


The paper on Trusting Trust is as much a thought experiment as anything. It's not really about compilers, and thinking of it as a "compiler hack" misses the point somewhat. The bigger point of that hack is that if a component of a system is integral and implicitly trusted, then compromising that component means the entire system can no longer be trusted.

It seems sort of obvious when stated that way, but Ken's demonstration pointed out a few significant details. First, that such an attack didn't have to be large: only two or three changes to the compiler. Second, that such an attack was practical to implement instead of just theoretical. Third, that once a system is compromised, it can't even be relied upon to tell you that it was compromised (his follow-up included a third compiler tweak to alter the decompiler), which was probably the scariest point of all.
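A toy sketch of those changes (hypothetical Python with invented names like `check_password`; nothing like Thompson's actual C patch, where a real compiler and /bin/login were the targets):

```python
# Hypothetical toy, not Thompson's code: a "compiler" whose "object code"
# is just the source text it would emit.

BACKDOOR = '    if password == "open-sesame":\n        return True  # injected\n'

def evil_compile(source: str) -> str:
    out = source
    # Change 1: recognize the login program and inject a backdoor.
    marker = "def check_password(password):\n"
    if marker in source:
        out = out.replace(marker, marker + BACKDOOR)
    # Change 2 (sketched, not implemented here): also recognize the
    # compiler's own source and re-inject this whole function, so that
    # recompiling a *clean* compiler still produces a compromised one.
    # That self-reproducing step is what lets the hack survive source
    # inspection.
    return out

clean_login = (
    "def check_password(password):\n"
    "    return hash(password) == stored_hash\n"
)
print(evil_compile(clean_login))  # output contains the injected backdoor
```

Any other input passes through untouched, which is why diffing the compiler's output against honest source only helps if your diffing tools weren't built by the same compiler.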

The type of attack that Ken described is still relevant, even if it's not the compiler that's the target. For example, if you could compromise Windows Update, you could do basically the exact same thing. That's why Windows Update requires crypto signatures.


And the crypto sigs on WU don't matter because you can compromise the boot process.


A variation of the threat very much exists: Do you know (meaning, there are no unknown unknowns) that no piece, anywhere in your toolchain (OS, compiler, library, framework, build tool, CI, etc.), contains malicious code? At the time Thompson made this comment, he talked about the C compiler because it was the one bit of shared code everything else depended on. Today, the attack surface is much, much larger.

If I were running a foreign intelligence service (or a sophisticated mafia operation (potato, potato)), I'd target trusted members of the communities around popular open source projects. A high-ranking committer to a library that Rails depends on slips in a bit of clever code in a fix for a legitimate problem, and I can compromise a large number of websites.


I think you vastly overestimate the difficulty of getting a code execution bug on Rails relative to the resources of a nation-state. You don't need to do anything remotely suspicious; just hire vulnerability researchers. They're ridiculously cheaper than aircraft carriers.

Or, to put it another way, anybody with $20k can uncover an RCE in Rails without needing to suborn any community member.


$20k is too low, but your point stands.


Sure, but it sounds like you're talking about a standard hidden backdoor. This is a pretty straightforward and obvious threat.

What Thompson describes is a whole lot more devious and difficult to detect.


The crude variation of the attack vector I'm describing is a backdoor, that is correct - but that's also what Thompson imagined his attack being used for.

Thompson's scenario is more devious, more difficult to detect, and a lot, lot harder to implement (near impossible, as some of the Stack Exchange answers point out). But at that time, people didn't generally download millions of lines of code written by perfect strangers and incorporate them into their programs, so that was the only way to pull such a thing off.


>I'd target trusted members of the communities around popular open source projects.

I'm sure everyone here is already aware, but the FBI may have already experimented with that:

http://arstechnica.com/information-technology/2010/12/fbi-ac...


Very much so. Note the malware hack against Delphi compilers: https://lists.owasp.org/pipermail/owasp-cincinnati/2009-Augu...


Basically a compiler "rootkit".

I found this comment interesting:

"At some point, aboveboard modifications to the compiler or login program might get to the point where they defeat the backdoor's compiler-recognizer/login-recognizer, and subsequent iterations would lose the backdoor. This is analogous to a random biological mutation granting immunity to certain diseases."
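The "mutation" analogy is easy to see in a sketch (hypothetical Python, invented names): the injected recognizer typically matches the target's source textually, so a harmless refactor can defeat it.

```python
# The compromised compiler spots its target by matching source text.
# An innocent rename breaks the match, and the backdoor silently
# disappears from the next build -- "immunity" acquired by accident.

MARKER = "def check_password(password):"

def recognizes_login(source: str) -> bool:
    return MARKER in source

original = "def check_password(password):\n    ..."
refactored = "def check_password(pw):\n    ..."  # harmless parameter rename

assert recognizes_login(original)
assert not recognizes_login(refactored)
```

A real attacker would match something more structural than a literal string, but the same principle applies: the recognizer is written against a snapshot of the target, and the target keeps evolving.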


I've seen this implemented in a Lisp compiler before. It's surprisingly easy to do.


Did he really do that?


Better: he announced the exploit in the talk he gave upon receiving the Turing Award:

http://cm.bell-labs.com/who/ken/trust.html


And, after he first mentioned the exploit, and before he received the award, the AT&T lawyers made him say publicly that he didn't really do that.


Where can we find records of him stating that this never happened?


I am unable to locate it now.

It seemed very weird at the time--the AT&T lawyers making him deny what he had done.

And it is possible that the denial was about the general release of the exploit, rather than whether or not he really did it. My memory could be slightly off for events that far back.

It was a minor industry rag, and the story was published shortly after the hack was announced. It might have been the same paper in which Dennis Ritchie told of auditing the behavior of Coherent and said publicly that it was definitely not a clone. I think the lawyers were upset about that as well.


See the email in the article http://skeptics.stackexchange.com/questions/6386/was-the-c-c...

marked with

From: Ken Thompson <ken@google.com>
Date: Wed, Sep 28, 2011 at 6:27 PM
Subject: Re: Was compiler from "Reflections" ever built or distributed?
To: Ezra Lalonde <ezra@usefuliftrue.com>


yes - look up Reflections on Trusting Trust - it's a classic paper.


I've read about this scenario before but I didn't realize it was Ken Thompson who proposed it. Very cool.

When I read the title I thought maybe Ken Thompson's compiler hack was Go (golang), which he co-created, and I was going to hear a bunch of Python or Node fans talk about why Go is not a threat :)


The other day, I was putting forth the idea that this threat vector exists in 3d printers. I don't know any obvious reason why I'm wrong on that.

edit: I hate having to ask, but instead of downvoting, are there actual reasons this fear is completely unwarranted?


Did you mean that the compromised printer would be made to produce compromised output? If so, the downvotes may be because it's relatively easy to audit the output of a 3D printer, compared to the output of a compiler.

It's a common practice, for example, to x-ray a critical piece of structure to verify it's free of cracks. A similar inspection would generally show if the hidden internals of a printed object conform to the original design.

Auditing the output of a compiler, on the other hand, would be devilishly hard, multiplied by the fact that every program it has ever produced is suspect. Given that this might include your kernel, filesystem, network drivers, etc., it's hard to find an adequate 3D printer parallel. Yes, a bit of crucial structure might be compromised by a pwned printer, but it would probably be relatively easy to spot.


So, the full argument my friends and I were having was that it would be impossible for a government to enforce printers that could not print guns. My assertion was that it was likely more doable than my friend would care to admit. My claim centered on this idea, as well as on the identifying marks laser printers put on printouts, and a belief that scanners already notice when money is being scanned. (No, I don't know the details of all that, so I fully concede my argument is not airtight.)


"Money" is a lot more of a known quantity than "gun." How would they detect a printer making a gun I'd designed myself?


I was thinking more of the common types of gun. Much more likely someone would try and make an AR-15/whatever than a self designed thing.

Again, I wasn't meaning this to be air tight. Just more doable than nothing.


Well, fair enough. If you hard-code in a few of the more common files that contain AR lowers, you'll probably catch 80% of this.


Actually, I was thinking more that you could target common ammunition types.

Regardless, I thought it keyed into this idea: as long as the printer were built so that any printer it created also did this, it fits the description perfectly. (And yeah, I realize I just jumped from 3d printers to replicators. My knowledge in this area is as shallow as it seems. :) )

Edit: forgot to add, the idea was originally that the printer would make the gun in such a way that it would be identifiable as coming from that printer. Not that it would have to refuse to make them.


How do you know an Intel CPU can be trusted?



