A2: Analog Malicious Hardware [pdf] (impedimenttoprogress.com)
315 points by ltcode on May 25, 2016 | 94 comments

This has some similarity to the rowhammer vulnerability. There, if you access some DRAM chips repeatedly in a specific way, some digital elements no longer behave in the idealized way that's expected, and there's cross-coupling between things that aren't supposed to be connected. This allows changing RAM to which you don't have access. That was accidental, rather than being designed in.

This new attack is deliberate, rather than accidental, and very explicit, being wired to the protected-mode bit. It points the way to even more subtle attacks, perhaps something that misbehaves slightly as power management is bringing some part of the CPU up or down. Maybe slightly more capacitance somewhere, so that right after a core comes out of power save, for the first few cycles some part of the protection hardware doesn't work right.

"Maybe slightly more capacitance somewhere, so that right after a core comes out of power save, for the first few cycles some part of the protection hardware doesn't work right."

That already happens in embedded systems (esp MCU's) in a different way. You're thinking along the right lines. That's all I can say.

You made some extremely good insights... ;)

Thanks for this, just added it to my Zotero backlog. I don't see what this has to do with Ken Thompson though. Did he believe that undetectable hardware backdoors would be possible in the future, or what exactly?

I applied for a PhD at UMich this year hoping to work at MICL[1] under Prof. Dennis Sylvester, who co-authored this paper, but I was sadly rejected[2]. MICL is one of the best places in the world to do IC design, and Prof. Sylvester is absolutely amazing.

[1]: http://www.eecs.umich.edu/micl/

[2]: I got a funded offer at GA Tech, so it's all good :D

The Ken Thompson thing is presumably in reference to his popular piece "Reflections on Trusting Trust" (https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...)

As the level of program gets lower, these bugs will be harder and harder to detect. A well installed microcode bug will be almost impossible to detect.

And this is even lower - in the hardware itself.

Really fascinating work.

Thompson's work built on, and was largely anticipated by, the MULTICS security evaluation, which demonstrated subverted compilers and the use of hardware failures to plant trapdoors. See the bottom two links in my comment here:


Founders of INFOSEC taught us most of what we needed to know. Mainstream security prefers to ignore it, calling it unnecessary, then reinvent it one technique at a time over several decades. At least the OP paper is doing what high-security work recommended, which is investigating risks at the transistor and analog level. Most data on that is trade secret, given it's exploited for competitive advantage and by intelligence agencies.

> Mainstream security prefers to ignore it, calling it unnecessary, then reinvent it one technique at a time over several decades.

That's not just true for security, that's true for the entire software industry.

Ken Thompson gave a lecture called "Reflections on Trusting Trust" [1] in 1984 that outlines a similar attack.

[1] https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...

It's not a similar attack. This doesn't modify future chip designs to include itself, which is what Ken's attack did with code.

His talk used the attack he implemented as an example of a broader family of attacks. In his wrapping-up section (morals) he lists other ways one could embed backdoors into systems, and he noted that the further down you go the harder it is to detect. From Ken Thompson's conclusion:

The moral is obvious. You can't trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.
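The self-propagating part of the attack he describes is easy to model. Here's a toy sketch of my own (not Thompson's code, and all names are made up): a compiled "binary" is modeled as a Python function, and the trojaned compiler binary reproduces itself whenever it's asked to compile the compiler's clean source.

```python
# Toy model of the "trusting trust" attack. A "binary" is a Python
# function; "compiling" turns source text into such a function.

def build(src, name):
    """An honest build step: turn source text into a callable."""
    ns = {}
    exec(src, ns)
    return ns[name]

LOGIN_SRC = '''
def login(user, pw):
    return pw == "secret"
'''

CLEAN_COMPILER_SRC = '''
def compile(src, name):
    ns = {}
    exec(src, ns)
    return ns[name]
'''

def evil_compiler(src, name):
    """The trojaned compiler *binary*. Its own source is long gone."""
    if "def login" in src:
        # Target 1: plant a backdoor password in login.
        src = src.replace('pw == "secret"',
                          'pw == "secret" or pw == "joshua"')
        return build(src, name)
    if "def compile" in src:
        # Target 2: asked to compile the (perfectly clean) compiler
        # source, silently emit ourselves instead.
        return evil_compiler
    return build(src, name)

# Rebuilding the compiler from pristine source does not help:
next_gen = evil_compiler(CLEAN_COMPILER_SRC, "compile")
login = next_gen(LOGIN_SRC, "login")
print(login("ken", "joshua"))  # backdoor survives: prints True
print(login("ken", "wrong"))   # prints False
```

The point of the model: no amount of auditing LOGIN_SRC or CLEAN_COMPILER_SRC reveals anything, because the backdoor lives only in the binary.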

It really isn't. People just love that Thompson paper so much that they always bring it up. Meanwhile, I predicted this exact problem repeatedly, with people dismissing it. Especially for microcontrollers, where there's not much else to modify. And here we have the attack vector proven on a microcontroller. :)

I research as a leisure activity. Thanks for the Zotero mention. It fills a need of mine. I appreciate it!! :)

No problem man, awesome to hear it's useful! Be sure to try out Mendeley[1] too as you may find it better.

[1]: https://www.mendeley.com/

It looks like it potentially has some nice social features but...I like the control, and that seems to fit Zotero better. Thanks for the suggestion. Pretty neat that there is software specific to this field. As I said, it's a hobby of mine but...I do it very regularly. Many of my opinions are due to the research I do, and Zotero might be a great tool to say 'hey...I think this...due to these academic articles...' and nicely point people in a specific direction. Thanks again!

Indeed, the article title, per HN rules, should probably be "A2: Analog Malicious Hardware [pdf]". Maybe with " - undetectable CPU backdoor".

Yes. We've changed it from "Ken Thompson was right, proven definitively 32 years later".

Submitters: it's against HN's rules to rewrite titles like that. It's only ok to do so if they're misleading or linkbait to begin with (and definitely not to make them more so).


Can you add a (pdf) too, please?

Ah yes, missed that earlier. Added now.

> the article title, per HN rules, should probably be "A2: Analog Malicious Hardware [pdf]".

Absolutely it should, mods, dang, please fix the title.

Moreover, matthewmacleod (1) shows that Thompson didn't predict exactly this level of subversion or anything specific to this method, so the title is inappropriate even on those grounds.

1) https://news.ycombinator.com/item?id=11769190

Lighten up:

"And thirdly, the code is more what you'd call "guidelines" than actual rules."

HN actually doesn't even call it rules but Guidelines in the footer.

I don't call them rules either; I just don't like it when a link poster constructs his own inaccurate or factually wrong interpretation of the material behind the link, and I tend to point out the wrongness of the title only in those situations.

I like the "A2: Analog Malicious Hardware" work, but I'm quite sure even Thompson wouldn't claim that what's in the work is exactly what he presented in his talk. His talk is more nuanced and has one much more interesting consequence: unless you decompile the compiler binary with some very clever tools, a simple modification in one generation of the compiler can leave all following compilers carrying a Trojan payload, without the payload being visible in the source of any given compiler. "A2: Analog Malicious Hardware" doesn't have this "payload propagation through the generations" property.

Very different from what the person who gave the false title here believes. What I claim is that "ltcode" misunderstood the intention of Thompson's work, and therefore the ersatz title he gave here is quite inadequate.

Well, at least he didn't create the title that involves Einstein.

This isn't (just) a CPU backdoor though, it's a possible backdoor on any integrated circuit.

To be precise, what was implemented was a CPU backdoor, but the attack applies to all integrated circuits.

Ah, I get the confusion. I've added "just" to my earlier comment.

I am trying to understand the parsing of CPU/integrated circuit you are doing but I am not following. How are you conceptualizing the CPU as different than an integrated circuit?

It's a pars pro toto: a CPU is an example of an integrated circuit, but there are integrated circuits that are not CPUs.

To make this distinction more relevant: this kind of attack would be very useful in the context of EEPROMs or cryptographic chips (like TPMs): you could, for example, induce a cryptographic chip to dump its internal state (private keys), or you could overwrite a flash region that would under normal operation be non-writable.

thank you for the clarification.

This is a neat paper. One of the tricky parts of making a malicious circuit is the fact that you want your behavior to only be triggered on some unlikely condition.

In other words you want:

if (counter == 0x686e686e) { do evil } else counter++;

But that requires a lot of hardware.

What the authors realized is that essentially a simple capacitor can be used as a counter! Each time you reach the condition it adds a bit of charge to the capacitor, until at the appropriate moment it discharges and changes the state of the chip in some way.

This is a really clever bit of design, getting something general-purpose and malicious out of a single capacitor!

The paper uses two capacitors that share charge to get the same effect as your code. In effect Cunit (the smaller capacitor) is the increment (i.e., counter++), while Cmain (the larger capacitor) acts as the count-holding variable (i.e., counter). Another circuit (Schmitt Trigger) acts as the comparator.
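For intuition, here's a rough numerical sketch of that charge-sharing counter. This is my own toy model; the component values, leakage rate, and threshold below are made up for illustration and are not taken from the paper.

```python
# Toy model of the two-capacitor trigger: Cunit "increments" Cmain
# via charge sharing, leakage "decrements" it, and a Schmitt-trigger
# threshold fires the payload. All values are illustrative.

C_UNIT = 1.0     # small "increment" capacitor (arbitrary units)
C_MAIN = 20.0    # large charge-accumulating capacitor
VDD = 1.0        # supply voltage
V_TRIGGER = 0.8  # Schmitt-trigger switching threshold
LEAK = 0.995     # per-cycle leakage factor on Cmain

def step(v_main, toggled):
    """One clock cycle: optionally share charge from Cunit into Cmain."""
    if toggled:
        # Cunit charged to VDD dumps onto Cmain (charge conservation):
        v_main = (C_MAIN * v_main + C_UNIT * VDD) / (C_MAIN + C_UNIT)
    return v_main * LEAK  # leakage slowly drains the counter

# Hammering the trigger wire every cycle charges Cmain past threshold:
v = 0.0
cycles_to_fire = None
for cycle in range(1, 1000):
    v = step(v, toggled=True)
    if v >= V_TRIGGER:
        cycles_to_fire = cycle
        break
print("fires after", cycles_to_fire, "consecutive toggles")

# Occasional toggles in normal operation never accumulate enough:
v = 0.0
for cycle in range(1000):
    v = step(v, toggled=(cycle % 50 == 0))
assert v < V_TRIGGER
```

With these numbers the trigger needs dozens of back-to-back toggles, while occasional toggles from normal operation leak away before accumulating, which is the property that lets the trigger survive random testing.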

Thanks for the clarification! I'm definitely not a hardware expert and was going off of what I remembered from the talk on Monday.

Those tens-of-billions-of-dollars investments baffle me. I'm relatively well-versed in the technologies involved as well as the R&D efforts, and I understand construction parameters for fabs... but I still can't understand how setting up a production process can cost such an extraordinary amount of cash.

It also makes me wonder: are chips (at any smaller transistor scale, for an arbitrary definition of small) out of reach for DIY fabrication and smaller startups?

edit: I did stumble upon a few YouTubers doing their own chips with vacuum plasma chambers and DIY lithography and whatnot. It strikes me as odd that there aren't more 'hack' attempts at DIY chips. FPGAs and software are all fine and dandy, but this really seems like a fun and great challenge.

The top two answers here: http://electronics.stackexchange.com/questions/67598/how-are... are a good summary of why there isn't a useful DIY effort for actual manufacture in your basement.

The process for designing and producing chips is extremely waterfall, requiring various specialist skills and software that aren't particularly cheap. It's not impossible for smaller startups - I've actually been a consultant as part of a team for an outsourced mixed-signal chip design, under half a million dollars.

The problem is that now you've made hardware, with all the woes of a hardware startup, with the addition that you don't make any money until people incorporate your product into their product. It requires deep pockets and you're vulnerable to either straight cloning or existing companies pushing out competing products, while it doesn't really have the same unicorn potential upside.

There's plenty of failed processor startups around if you look, but only really one successful one: ARM.

And even ARM was a special case - it was created on a shoestring budget by a few clever engineers at Acorn and marketed outside w/VLSI later. It's an interesting story :)

(Acorn's 'exit strategy' in the late 90's was also interesting - they shut down the computer division, focused on DSL chip designs, and got bought by Broadcom a couple of years later for $1xx million...)

Capital investment. Each iteration pushes the cutting edge, requiring millions to billions in R&D, with enough Ph.D.'s and new findings that textbooks have to be rewritten regularly to show that what was impossible a few years ago is now a fielded procedure. That accounts for high costs to start with.

Next problem is that each iteration into deep submicron processes uses smaller tooling, fighting more physics and noise with ever more sensitive instruments. Figuring out each issue requires lots of expensive tooling and experiments. Those vendors gotta make their money back. So, each tool in a semiconductor process can cost millions to tens of millions of dollars. Each is custom built. The fab has to buy dozens to hundreds of these, plus recover its investment with profit. They do it mainly through wafers sold.

So, given each market converges on a few players, it's easy to see that only a few players will have the volume to sustain the investments into new tech. On top of it, the nodes seem to double in cost every so many steps, trimming off even more competition. It's gotten down to where the lowest nodes, which make high-end CPU's and mobile SOC's, have only five or six companies fabbing them. There's still plenty of work, esp for analog/RF, at higher nodes like 180-350nm. Yet, even an older fab might have to do $10-50mil a month in business just to keep its doors open.

Needless to say, people aren't going to do deep-submicron in their garages any time soon. Most companies with deep pockets gave up on fabs, too. Even IBM is selling theirs IIRC. So, most use a "fabless" model where they design hardware then send files that remaining fabs turn into hardware. There's a few companies that keep their own fabs on mostly obsolete nodes that are still useful to them. Yet, almost everyone else is fabless.

I know a guy doing deep submicron in, essentially, his garage; but he's doing it with direct FIB, not photolithography or even FIB lithography.

Yeah, you can do that. Shifts the risk to the expensive black boxes made in pro-espionage countries. ;) Plus, your volume is really, really low. When I'm home, I'll give you a link to a defunct fab in Europe that did theirs that way. Might be worth copying for some sectors.

Alright, this is just one of those topics where it's hard to get good Google results, since all the papers use the same terms. (sighs) I did find something to illustrate the problem, though.


This describes micromilling semiconductors with focused ion beams. Now, you said not FIB lithography. I can't be sure this is what you're talking about, but it's pretty direct. Regardless, all the schemes work similarly in that you work on one part at a time across a chip or wafer. In this one, you can see the precision work is so time consuming that it can take 8 hours for the first metal layer in a chip with quite a few of them. Another part says days. Whereas even old uses of masks with steppers were cranking out three wafers per minute. See the difference? That's why the direct technologies, even multiple beams, are usually used just for the photomask (most fabs) or for prototypes (eASIC).

I did find that fab I was telling you about that used tech like this:


It was doing it on nodes that are ancient by today's standards. Yet, you could do quite a bit with them. The lead time was only four weeks for quite a few chips at a reasonable price. Better capabilities can be had for less today. I keep saying target 0.35 micron initially, as it's inexpensive, many fabs are available, and it's the last cheap node where visual inspection is possible. Plus, OSS flows should work on it. Use one beam fab for the cheap ones, plus one for at least 90nm, with 65nm or 45nm high-end.

As far as recent companies go, the eASIC process is maskless at 90nm, 45nm, and 28nm for prototyping. They mainly do structured ASIC's. Here's an example of the prototyping cost:


By "not FIB lithography" I meant he isn't using the FIB machine to make shadow masks for photolithography; he's directly using them on the semiconductor, like Alacron. I'm not sure of the brand of his machine.

You'd think FIB milling (and deposition and implantation) should be able to do considerably better than the 28 nanometers you can do with photolithography these days (after investing a billion dollars in your fab, anyway). The de Broglie wavelength of a heavy ion isn't that big.

The eASIC link you give implies, without saying, that eASIC ASICs aren't "cell-based ASICs", but that's what "structured ASICs" are, and WP at least confirms that the Nextreme ASICs described in that link in particular are cell-based.

It's not totally clear to me that eASIC is in fact using FIBs for their prototypes; is that based on you talking to people at the company?

I agree that the per-unit cost for FIB is enormous. I hadn't thought about the possibility of using many beams at once to reduce that cost.

Re do better. I've skimmed enough papers on the stuff to say the physics is too complex for me to have an opinion either way. It's the implementation issues they're always fighting, though.

Re masks. I know. The reason I mention it is they're both essentially drawn by hand a piece at a time. Showing why one is too slow to use for significant volumes should enlighten you on the other. Yet, for tiny volumes, it might be acceptable, but the machines cost $$$.

Re eASIC. eASIC uses a multi-beam eBeam machine for theirs. They might have more than one. They directly write the stuff onto the material. That it's an S-ASIC defined by one custom layer is why it's so quick and cheap. A full ASIC has many layers, so price/time goes way up.

Btw, I'd be interested in an email from the guy to ask him a few questions. I'm especially interested in whether he needs a cleanroom, how he packages them, and what the software is like, to assess subversion counters at the interface level.

I do part time work packaging FIBbed gizmos, the 'machinist' has done at best a 50nm diameter aperture in metal film (from what I recall, maybe it was 100nm) using a beam diameter of 10nm. This is on a machine that was new 20 years ago (newer machines are probably 5 to 10X better in terms of hardware these days, maybe 50X in software). Also, ASML is researching multi-beam electron steppers and getting more traction lately: http://semiengineering.com/multi-beam-market-heats-up/ (from 2 months ago)

So, what's involved in packaging them? I'm thinking along the lines of me coming up with a custom circuit that I print onto silicon, package into a chip, and put that sucker on a PCB. I figured the packaging would take similarly specialized equipment, given it's so tiny. Is it built into FIB equipment? Extra? How easy is it to use?

I always see articles on the ebeams and FIB showing how they work. Nothing on packaging.

Oh, by 'package' I meant heat-seal in plastic bags. 'Packaging' a silicon chip for electrical use should be pretty similar regardless of the production technique, given chemical compatibility, stress/strain of the chip and how packaging would add/affect that. There's wirebonding which is basically soldering wires from the silicon to some larger package-scale traces/larger-wires (often embedded in the package structure). There are a few ways of getting the actual FIBbed gizmo onto something macro-scale. Sometimes the thing you start with is large enough to handle easily, sometimes you bring in a CNC manipulator, use FIB to solder your gizmo to the manipulator, move the manipulator elsewhere and then 'tack weld' down your gizmo there and mill away the connection to the manipulator. Some systems have micro/nano vacuum manipulators. I bet on the high-end piezos are used to move things, but I am sure at some relatively larger scale mechanical movement wouldn't be too hard to use (depending on how cheap you need things to be, and how many times you want to repeat doing such connections).

Thanks for details. All sounds pretty exotic. :)

I'm not an IC designer, but from what I understand MOSIS (https://www.mosis.com/what-is-mosis) is reasonably priced, for some definition of "reasonable". You don't get to use the latest technologies/feature sizes, but you do get to do small volume prototype runs for somewhere in the neighbourhood of $10,000.

Looking around a bit to verify the pricing landed me at: http://www.cmc.ca/en/WhatWeOffer/Make/FabPricing.aspx. Those prices are likely subsidized from somewhere, but they bring things into the "totally approachable" realm. Here in Canada, if you were going to do an IC fabrication startup, it'd probably make a lot of sense to go through NSERC anyway; they have a lot of programs where you can get a significant portion of your employment and R&D costs covered.

Edit: Looks like TSMC will also do things like this directly, but without specifying prices: http://www.tsmc.com/english/dedicatedFoundry/services/cyberS...

Yeah, they were talking about this on the Amp Hour a few episodes ago. There's some universities that actually fab a physical chip as part of their coursework using MOSIS. If I remember correctly there was an edu discount.

Very cool stuff.

It can get really cheap for small circuits on 350nm, etc. Also, Europeans have something similar with Europractice:


The equipment... when you spend $50 million or so on a litho scanner and the associated track, plumb in the chemicals, the support equipment for the chemicals, etc., it adds up fast. Remember, they aren't setting up one line, they are setting up many, because the economics only work at scale. If I need to do 40+ litho layers with 60k wsw, you need probably a dozen or more litho tool sets. That alone is over a billion dollars.

You're right. That's what I hadn't considered! Something like an offset printing press: you can do all four (or more) layers with only one machine/press, but most machines have 4 presses inside them in order to expedite production.

If you forgive the shameless self-promotion, I did a talk on the scale and economics and so forth a while back you might find interesting: https://www.youtube.com/watch?v=NGFhc8R_uO4 It has been posted on HN a few times before...the technology still puts a tingle in my spine

[insert criticism of how I need to be a better public speaker here]

By the powers bestowed upon me by Hacker News user account creation, I absolve you of your sin of shameless self-promotion. Seriously though, I am as remote as possible from that industry (film and TV content creation!), but it absolutely fascinates me. I devour each and every article and paper (that I can understand) about this and HPC as well. You did well in the video; we should do a documentary together on this theme!

email me...tmf7811 on gmail.

I attempted to Google what you meant by "...with 60k wsw", but I'm not versed in the terminology of chip fabrication. What does wsw mean?

wafer starts/week, I think.

correct, apologies for the shorthand

After all the advances that are on the way with ICs (5nm, 3D chips, different semiconductors), I wonder if cheap, small-batch production that can be done by hobbyists and small businesses will ever be a thing.

Getting to smaller feature sizes is pretty much an exponential curve. Jeri Ellsworth made transistors at home once, and I suspect very early IC's could be done in a garage... if you can deal with the chemicals ;)

My old university had a fairly large cleanroom, and startups would rent space/tools. Most of the equipment was very old, so it was limited to 100mm or 150mm wafers, and sub-micron features would be difficult.

This is really cool. It proves something that anyone with a reasonable amount of knowledge about hardware and software should understand intuitively: that the ultimate trust for any computing platform is put in the hands not of the hardware designers, but of the actual hardware manufacturer. That doesn't mean Apple, that means TSMC or some other foundry.

It's great that it's proven, though, and not just intuited. This looks like some stellar work by the team.

I agree that this is a really interesting exploit, but it requires a lot of expensive part-specific work to get it right, as well as the assumption that the foundry does their own masks. Of course, our attacker could work for the mask shop instead, but then he has to do it exactly right the first time, which makes it even trickier. This is all to say that, when we see this attack in the wild, somebody with very deep pockets will have been responsible.

If you have a bad nation state actor wanting these exploits in place I don't think it's beyond reason that they would go to any steps. Imagine they could get a backdoor into every smartphone on the market by getting TSMC to manufacture chips with exploits in them. You really think there's any feasible but difficult steps that would stop them from doing this?

If a graduate student and a postdoc can do it...

After reading the other front page article on a possible new Physics force...

Ancient people thought everyday objects had powerful spirits in them or controlling them, subject to whims. Modern science showed almost everything is emergent complexity of very simple rules.

Now, technologists are replacing the gods of old, creating powerful, nigh-invisible "spirits" that live inside everyday objects: radios and batteries and computer chips with microscopic logic, the tiniest pebbles or shreds of fabric that could be watching you and talking to you, controlled by an automated or remote malicious force.

Ken proved himself right 32 years ago, this is just another variation.

Thompson didn't invent or prove anything. He based his work off the MULTICS Security Evaluation, where Karger et al. invented the compiler attack and described it in the report. See p. 17:


They invented many other attacks and risk areas you see today, despite INFOSEC not existing back then. This was one of the 2 or 3 pentests that started the hacking part of our field.

I never said invented, but he did execute it successfully on a scale that may be larger than he admits.

I find that fascinating! I have a faint recollection that one of the bugs on the 80186 (the high-integration 8086 that Intel built) was due to cross-coupled noise from the metal layer to one of the register bits, and the fix was to reroute one of the signals in polysilicon instead. I would never have considered that sort of effect as being exploitable as a back door.

Time to start building discrete transistor CPUs? This discrete 6502 project was posted here recently:


It's far too slow for most uses, but that's mostly because NMOS logic doesn't handle the high capacitance well. NMOS logic uses MOSFETs as constantly enabled pull-up resistors, so they can't be very strong pull-ups or power consumption would be too high. I expect a CMOS design would be able to run much faster, especially with high voltage and the smallest transistors available.

Individual transistors can be sampled and destructively tested, and the order they are placed can be randomized to make it harder to subvert the circuit by replacing them with microcontrollers.

That leaves RAM, which is far too bulky to build from discrete transistors. But you could encrypt the RAM in hardware, mirror it across multiple chips each encrypted with a different key, and check that they all read back the same once decrypted. The same could be done with mass storage devices. EDIT -- on second thought, this will not defend against replay attacks within the storage device. I'm not sure reliable detection of a malicious storage device is even possible without having some known-good storage.
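A software toy of that mirror-and-compare idea, under loud assumptions: the class and function names below are mine, and the per-address keystream is a throwaway SHA-256 construction standing in for whatever cipher real hardware would use.

```python
# Toy model: the same plaintext is stored on N "chips", each under an
# independent keystream. A single lying chip decrypts to a different
# plaintext than the others and is caught on read.

import hashlib

def keystream(key: bytes, addr: int, length: int) -> bytes:
    """Toy per-address keystream; not a vetted cipher construction."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + addr.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class MirroredRAM:
    """Same plaintext stored on N chips, each under a different key."""
    def __init__(self, keys):
        self.keys = keys
        self.chips = [dict() for _ in keys]  # addr -> ciphertext

    def write(self, addr, data):
        for chip, key in zip(self.chips, self.keys):
            chip[addr] = xor(data, keystream(key, addr, len(data)))

    def read(self, addr):
        plains = [xor(chip[addr], keystream(key, addr, len(chip[addr])))
                  for chip, key in zip(self.chips, self.keys)]
        if any(p != plains[0] for p in plains):
            raise RuntimeError("mirror mismatch: a chip is lying")
        return plains[0]

ram = MirroredRAM([b"key-A", b"key-B", b"key-C"])
ram.write(0x1000, b"hello")
assert ram.read(0x1000) == b"hello"

# A malicious chip flipping a ciphertext bit is caught, because the
# same flip decrypts differently under different keys:
ram.chips[1][0x1000] = xor(ram.chips[1][0x1000], b"\x01\x00\x00\x00\x00")
try:
    ram.read(0x1000)
except RuntimeError as e:
    print(e)  # mirror mismatch: a chip is lying
```

As the EDIT notes, this catches a single chip returning modified data, but not all chips colluding to replay an older, mutually consistent snapshot.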

The hardware guru who taught me warned of another risk: you can't uninvent an advanced tech once it's invented. The point being that such techniques assume you can inspect what's going on because you're using components you know do only X. Yet, as chips go nanoscale, you can actually embed entire CPU's and RF systems in between larger components, invisible to visual inspection, that might not show up in black-box testing. You can try to act like those nodes and their risks don't exist, but subversives can still use them against you.

The simple method here might be swapping more discrete chips or components out for others that are those components plus entire SOC's. Then, once they know your configuration, they hit you. Or they tell each to leak on a different frequency or whatever all at once with them figuring it out later. All kinds of crazy stuff is possible.

So, just make sure you buy older stuff under different names with cash at unusual locations. Have proxies do it for you with legit excuses. Then, use multiple systems with voter logic. Tends to work out better than alternatives. Usability issues for sure, though. :)

Except... http://www.dwheeler.com/trusting-trust/

I think 'nickpsecurity has previously made some interesting remarks on the issue at lower levels...

In my main comment, I showed my flow already prevented this, as it relied on validated, open standard cells plus analog reviewed by multiple suspicious parties. Plus, it called the exact attack. Due to memory loss, I forgot that Thompson didn't actually invent the Thompson attack: Schell and Karger did, during the MULTICS evaluation. It's in the 2 papers at the bottom. So, you can add that to the counterpoints to the Thompson Attack legend. ;)


Also, I produced a set of links to drop on high assurance, subversion in general, and my verified-compiler technique. I still need to do a Pastebin or something for the latter, as right now it's embedded in a conversation with jeffreyrogers.



EDIT: While he was ripping off MULTICS, Thompson should've also tried to do prefixed strings, protected stacks, and safer-by-default languages like it had. Might have prevented many attacks on the UNIXen back in the day. Today. Ten years from now. ;)

Can you explain this a bit more? I've only read both the abstracts but if I understand right your link deals with compiler-level attacks but the OP deals with hardware-level attacks ("fabrication-time attacker").

It was basically a comment on the submission title -- Ken Thompson's famous attack has a counter that's been known about for a while. For the submission itself it's not as related as other commenters note they don't provide a way for the exploit to propagate like KT's attack. I'll have to read the full paper later but I wonder if it references https://www.usenix.org/legacy/event/leet08/tech/full_papers/...

For those who are also wondering, this is a preprint. [1]

Kaiyuan Yang, Matthew Hicks, Qing Dong, Todd Austin, and Dennis Sylvester, “A2: Analog Malicious Hardware”, Proceedings of the IEEE Symposium on Security and Privacy (Oakland), to appear May 2016.

[1] http://www.impedimenttoprogress.com/publications/

This isn't quite the same thing -- to be completely analogous, we'd need the fabricator to also recognize that it was fabricating another fabricator, and then change THAT generated fabricator to have the same intervention but only in the case that we care about compromising.

"Analog malicious hardware" made me think of https://en.wikipedia.org/wiki/The_Thing_(listening_device) that snooped on americans for seven years...

Could one defense be to design the chip to not have any empty space? In other words, fill in any empty area with test circuitry such that you couldn't tell which areas were actually used and which weren't.

It would be very complex to fill the entire chip with cells whose functionality mattered (otherwise the attacker could replace them without the defender noticing) and get them wired into the rest of the chip. There is a tradeoff between area utilization and routability of the design: it gets exponentially more difficult to route a design as its area utilization increases. This is why most commercial chips have 20% to 30% of free space in the layout.

Even worse, in many commercial chips, there are spare cells to allow for cheap low-level patching. The attacker can just swap out one of these cells with their own and have an attack that only modifies a single cell.

Called it! It was No. 7 in a low-ranked comment [1], the third option in the link at the bottom of another [2] for standard cells (I knew they'd get messed with), and mentioned repeatedly on Schneier's blog. The guy I learned risk from said he actively countered analog poisoning of 3rd-party I.P. his company licensed. He said he was constantly finding it, mostly for I.P. obfuscation but sometimes more nefarious. Here's one of his observations on subverting crypto processors with digital or analog additions:

"Controlling bits like the Carry flag is essential to the security of all crypto algorithms (techniques like DPA and "timing attacks" try to discover this information by observing the operation of the CPU). If you have a hardware way to transfer just this ONE bit, then most crypto available today is useless."

He kept pointing out, probably from experience, that you could just modify a bit here, an MMU there, or add an RF circuit to bypass plenty of protections. Nobody would even notice analog additions because "their digital tools can't see it." It would take careful reverse engineering. An old risk, already deployed into production, re-invented in a neat new paper with a new technique.
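To make his one-bit claim concrete, here's a toy sketch (all names and numbers invented for illustration) of left-to-right square-and-multiply modular exponentiation. The extra multiply fires exactly on the 1-bits of the exponent, so a hypothetical hardware tap exporting just one bit per step hands the attacker the entire secret:

```python
def modexp_with_tap(base, exponent, modulus):
    """Left-to-right square-and-multiply; `taps` records the leaked bit per step."""
    result, taps = 1, []
    for bit in bin(exponent)[2:]:                 # MSB first
        result = (result * result) % modulus      # always square
        if bit == '1':
            result = (result * base) % modulus    # multiply only on 1-bits
            taps.append(1)                        # the hypothetical hardware leak
        else:
            taps.append(0)
    return result, taps

secret = 0b101101                                 # a made-up 6-bit "key"
_, leaked = modexp_with_tap(7, secret, 1009)
recovered = int("".join(map(str, leaked)), 2)
print(recovered == secret)                        # the tap reveals the key: True
```

Real side-channel attacks have to infer that per-step bit statistically from power or timing traces; a malicious circuit that exports it directly skips all that work, which is the point being made.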

Honestly, I originally got the subversion idea from the MULTICS Security Evaluation [3] [4]. Schell and Karger, not Thompson, should get credit for the first attack like this: they introduced software that kept poking at a memory location until the MMU experienced an intermittent failure. They got in because software people assumed the hardware always worked. They also invented the basis of the "Thompson attack" (see note below). So, I predicted HW trojans sitting on the MMU, IOMMU, PCI, TRNG's, and some other things, using non-standard circuits that nonetheless preserve timing, etc. So, a few years ahead on this one.

Note: Karger and Schell also invented, in the same project, the idea of subverting a PL/I compiler to insert malicious code into stuff compiled with it, including the OS. Thompson read that and expanded on it with Trusting Trust. Now, the Karger and Schell attack is called the "Thompson attack." Nah, the founders of INFOSEC thought of that one first, too. Take that, Thompson fanboys! :P

[1] https://news.ycombinator.com/item?id=10906999

[2] https://news.ycombinator.com/item?id=10468624

[3] https://www.acsac.org/2002/papers/classic-multics-orig.pdf

[4] https://www.acsac.org/2002/papers/classic-multics.pdf

AFAIK Ken invented the procedure of quining the compiler backdoor to remove it from the compiler source code.

Ok, it's not clear, so I re-read each paper. Here's what each one says, Multics paper first. It already has a discussion of source vs object trapdoors: source is more visible but survives recompilations. That's the backdrop here. Here's the quote:

"It was noted above that while object code trap doors are invisible, they are vulnerable to recompilations. The compiler (or assembler) trap door is inserted to permit object code trap doors to survive even a complete recompilation of the entire system. In Multics, most of the ring 0 supervisor is written in PL/I. A penetrator could insert a trap door in the PL/I compiler to note when it is compiling a ring 0 module. Then the compiler would insert an object code trap door in the ring 0 module without listing the code in the listing. Since the PL/I compiler is itself written in PL/I, the trap door can maintain itself, even when the compiler is recompiled."

Given that backdrop, it's hard to say whether they put it in the source or object code of the compiler. It's ambiguous: "since the PL/I compiler is itself written in PL/I." Either that's because they have a backdoor in its source code, or because the backdoored object code is the PL/I compiler that will be used to re-compile any PL/I source. The next paragraph indicates they insert the trapdoor in another routine using object code that closely matches what's produced from PL/I source. So, I'm assuming... with some uncertainty... that they bugged the object code of the PL/I compiler to add the trapdoor to it and to all executables on compiles, with nothing left in the source.

Then, Thompson paper simply says:

"First we compile the modified source with the normal C compiler to produce a bugged binary. We install this binary as the official C. We can now remove the bugs from the source of the compiler and the new binary will reinsert the bugs whenever it is compiled. Of course, the login command will remain bugged with no trace in source anywhere."

Sounds like they're doing the same thing, except the MULTICS attack uses assembly code directly. They might have coded it in PL/I first, then directly input the code. That would make both attacks equal. Who knows. That they each bug the compiler at the object level with no source-level evidence seems accurate. In that case, the Thompson attack is the MULTICS PL/I attack applied to C, with clear use of C for the subversion artifact.
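For anyone who hasn't read either paper, here's a toy model of the compiler trap door both describe (Python, with invented names; the real attacks operate on object code, not exec'd strings). The "compiler" recognizes the login program's source and silently inserts a back door, leaving no trace in the login source itself:

```python
# Toy model of the Karger/Schell "compiler trap door" (later the "Thompson
# attack"). A "compiler" here is just a function from source text to an
# executable namespace. All names are illustrative, not from either paper.

BACKDOOR = "if password == 'magic': return True  # trap door"

def bugged_compile(source):
    """Compile `source`, silently inserting a trap door into login code."""
    if "def check_password" in source:
        # Compiling login: insert the object-code trap door.
        source = source.replace(
            "def check_password(user, password):",
            "def check_password(user, password):\n    " + BACKDOOR)
    # (A full attack would also detect compilation of the compiler's own
    # clean source and re-insert this very logic, so the trap door
    # survives a complete recompilation with no trace in any source file.)
    namespace = {}
    exec(source, namespace)
    return namespace

LOGIN_SOURCE = """
def check_password(user, password):
    return password == 'correct-horse'
"""

login = bugged_compile(LOGIN_SOURCE)
print(login["check_password"]("root", "magic"))   # True: the back door works
print("magic" in LOGIN_SOURCE)                    # False: no trace in source
```

The self-recognition step in the parenthesized comment is the part both papers hinge on: once the bugged binary is installed as the official compiler, the clean-looking compiler source keeps producing bugged binaries.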

You're right! I imagine Karger and Schell wrote their backdoor in PL/I too — it seems like it would just be a lot easier that way.

They could be his invention. I'll look at the papers again tonight to see if I can determine that.

Good for Ken!! Who? lol

You use things he worked on every day; one of them is UTF-8. He also designed and implemented UNIX. He was the creator of the B language, thanks to which we have C. He also worked on Go and Plan 9.

And even wider, almost everything in the modern world was invented at Bell Labs.

Or Xerox PARC.

Not even close to Bell Labs.

Sorry, downvoter, but you really have no idea how much more Bell Labs has done.

The simplex algorithm, the transistor, C and C++, the S programming language (predecessor of R), the CCD, the mobile phone...

Just some off the top of my head.

No need to be dismissive. This isn't a peeing contest.

OOP, Ethernet, the mouse, GUIs, and tablets were some of the things invented at Xerox PARC.

But I am dismissive. Xerox PARC, as influential as it's been, is still merely a drop in the ocean compared to the Labs.

> He was creator of B language, thanks to which we have C language.

That's not something I'd brag about... ;)

Didn't he start DEC?

That was Ken Olsen

This link (pdf) provides the missing context: https://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thomp...
