
A2: Analog Malicious Hardware [pdf] - ltcode
http://www.impedimenttoprogress.com/storage/publications/A2_SP_2016.pdf
======
Animats
This has some similarity to the rowhammer vulnerability. There, if you access
some DRAM chips repeatedly in a specific way, some digital elements no longer
behave in the idealized way that's expected, and there's cross-coupling
between things that aren't supposed to be connected. This allows changing RAM
to which you don't have access. That was accidental, rather than being
designed in.

This new attack is deliberate, rather than accidental, and very explicit,
being wired to the protected-mode bit. It points the way to even more
subtle attacks, perhaps something that misbehaves slightly as power management
is bringing some part of the CPU up or down. Maybe slightly more capacitance
somewhere, so that right after a core comes out of power save, for the first
few cycles some part of the protection hardware doesn't work right.

~~~
nickpsecurity
"Maybe slightly more capacitance somewhere, so that right after a core comes
out of power save, for the first few cycles some part of the protection
hardware doesn't work right."

That already happens in embedded systems (esp. MCUs) in a different way.
You're thinking on the right track. That's all I can say.

------
Cyph0n
Thanks for this, just added it to my Zotero backlog. I don't see what this has
to do with Ken Thompson though. Did he believe that undetectable hardware
backdoors would be possible in the future, or what exactly?

I applied for a PhD at UMich this year hoping to work at MICL[1] under Prof.
Dennis Sylvester, who co-authored this paper, but I was sadly rejected[2].
MICL is one of the best places in the world to do IC design, and Prof.
Sylvester is absolutely amazing.

[1]: [http://www.eecs.umich.edu/micl/](http://www.eecs.umich.edu/micl/)

[2]: I got a funded offer at GA Tech, so it's all good :D

~~~
ghusbands
Indeed, the article title, per HN rules, should probably be "A2: Analog
Malicious Hardware [pdf]". Maybe with " - undetectable CPU backdoor".

~~~
tremon
This isn't (just) a CPU backdoor though, it's a possible backdoor on any
integrated circuit.

~~~
impdmnt2Prgrss
To be precise, what was implemented was a CPU backdoor, but the attack applies
to all integrated circuits.

~~~
tremon
Ah, I get the confusion. I've added "just" to my earlier comment.

------
moyix
This is a neat paper. One of the tricky parts of making a malicious circuit
is that you want your behavior to be triggered only on some unlikely
condition.

In other words you want:

if (counter == 0x686e686e) { do evil } else counter++;

But that requires a lot of hardware.

What the authors realized is that essentially a simple capacitor can be used
as a counter! Each time you reach the condition it adds a bit of charge to the
capacitor, until at the appropriate moment it discharges and changes the state
of the chip in some way.

This is a really clever bit of design, getting something general-purpose and
malicious out of a single capacitor!

~~~
impdmnt2Prgrss
The paper uses two capacitors that share charge to get the same effect as your
code. In effect Cunit (the smaller capacitor) is the increment (i.e.,
counter++), while Cmain (the larger capacitor) acts as the count-holding
variable (i.e., counter). Another circuit (Schmitt Trigger) acts as the
comparator.
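The two-capacitor counter described above can be simulated in a few lines. This is a toy numerical sketch only; the capacitor values, leakage rate, and threshold below are illustrative guesses, not the paper's actual parameters.

```python
# Toy simulation of the A2-style charge-sharing trigger: each time the
# victim wire toggles, the small capacitor Cunit dumps charge into the
# large capacitor Cmain; a Schmitt trigger fires once Vmain crosses a
# threshold. All component values are illustrative, not from the paper.

C_UNIT = 1.0      # small "increment" capacitor (arbitrary units)
C_MAIN = 20.0     # large "counter" capacitor
VDD = 1.0         # supply voltage; Cunit charges to VDD each toggle
LEAK = 0.995      # per-cycle retention of Vmain (models leakage)
V_TRIG = 0.8      # Schmitt-trigger threshold

def step(v_main, toggled):
    """One clock cycle: optional charge-share, then leakage."""
    if toggled:
        # Charge conservation: Cunit (at VDD) shares with Cmain (at v_main).
        v_main = (C_MAIN * v_main + C_UNIT * VDD) / (C_MAIN + C_UNIT)
    return v_main * LEAK

def cycles_to_fire(toggle_every=1, max_cycles=10_000):
    """Cycles until the trigger fires when toggling every N cycles."""
    v = 0.0
    for cycle in range(1, max_cycles + 1):
        v = step(v, toggled=(cycle % toggle_every == 0))
        if v >= V_TRIG:
            return cycle
    return None  # leakage wins: the trigger never fires

print(cycles_to_fire(toggle_every=1))  # rapid toggling fires the trigger
print(cycles_to_fire(toggle_every=5))  # infrequent toggling never does (None)
```

The leakage is what makes the trigger condition "unlikely by construction": only a deliberate, rapid sequence of toggles outruns it.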

~~~
moyix
Thanks for the clarification! I'm definitely not a hardware expert and was
going off of what I remembered from the talk on Monday.

------
Keyframe
Those tens of billions of dollars in investment baffle me. I'm relatively
well-versed in the technologies involved as well as their R&D efforts, and I
understand construction parameters for fabs... but I still can't understand
how setting up a production process can cost such an extraordinary amount of
cash.

It also makes me wonder: are chips (at any smaller transistor scale, for an
arbitrary definition of small) out of reach for DIY fabrication and smaller
startups?

edit: I did stumble upon a few YouTubers doing their own chips with vacuum
plasma chambers and DIY lithography and whatnot. It strikes me as odd that
there aren't more 'hack' attempts at DIY chips. FPGAs and software are all
fine and dandy, but this really seems like a fun and great challenge.

~~~
avs733
The equipment... when you spend $50 million or so on a litho scanner and the
associated track, plumb in the chemicals, the support equipment for the
chemicals, etc., it adds up fast. Remember, they aren't setting up one line;
they are setting up many, because the economics only work at scale. If you
need to do 40+ litho layers at 60k wafer starts per week, you probably need a
dozen or more litho tool sets. That alone is over a billion dollars.

~~~
Keyframe
You're right. That's what I hadn't considered! It's something like an offset
printing press: you can do all four (or more) layers with only one
machine/press, but most machines have 4 layers/presses inside them in order
to expedite production.

~~~
avs733
If you'll forgive the shameless self-promotion, I did a talk on the scale and
economics and so forth a while back that you might find interesting:
[https://www.youtube.com/watch?v=NGFhc8R_uO4](https://www.youtube.com/watch?v=NGFhc8R_uO4)
It has been posted on HN a few times before... the technology still puts a
tingle in my spine.

[insert criticism of how I need to be a better public speaker here]

~~~
Keyframe
By the powers bestowed upon me by Hacker News user account creation, I
absolve you from your sin of shameless self-promotion. Seriously though, I am
as remote as possible from that industry (film and TV content creation!),
but it absolutely fascinates me. I devour each and every article and paper
(that I can understand) about this, and about HPC as well. You did well in
the video; we should do a documentary together on this theme!

~~~
avs733
email me...tmf7811 on gmail.

------
ebbv
This is really cool. It proves something that anyone with a reasonable amount
of knowledge about hardware and software should understand intuitively: that
the ultimate trust for any computing platform is put in the hands not of the
hardware designers, but of the actual hardware manufacturer. That doesn't
mean Apple; that means TSMC or some other foundry.

It's great that it's proven, though, and not just intuited. This looks like some
stellar work by the team.

~~~
sevensor
I agree that this is a really interesting exploit, but it requires a lot of
expensive part-specific work to get it right, as well as the assumption that
the foundry does their own masks. Of course, our attacker could work for the
mask shop instead, but then he has to do it exactly right the first time,
which makes it even trickier. This is all to say that, when we see this attack
in the wild, somebody with very deep pockets will have been responsible.

~~~
ebbv
If you have a bad nation-state actor wanting these exploits in place, I don't
think it's beyond reason that they would go to any lengths. Imagine they
could get a backdoor into every smartphone on the market by getting TSMC to
manufacture chips with exploits in them. Do you really think there are any
feasible-but-difficult steps that would stop them from doing this?

------
tamana
After reading the other front page article on a possible new Physics force...

Ancient people thought everyday objects had powerful spirits in them or
controlling them, subject to whims. Modern science showed almost everything is
emergent complexity of very simple rules.

Now, technologists are replacing the gods of old, creating powerful, nigh-
invisible "spirits" that live inside everyday objects: radios and batteries
and computer chips with microscopic logic. The tiniest pebbles or shreds of
fabric could be watching you and talking to you, controlled by an automated
or remote malicious force.

------
qwertyuiop924
Ken proved himself right 32 years ago, this is just another variation.

~~~
nickpsecurity
Thompson didn't invent or prove anything. He based his work off the MULTICS
Security Evaluation, where Karger et al. invented the compiler attack and
submitted it in the report. See p. 17:

[https://www.acsac.org/2002/papers/classic-multics-orig.pdf](https://www.acsac.org/2002/papers/classic-multics-orig.pdf)

They invented many other attacks and risk areas you see today, despite
INFOSEC not existing back then. This was one of the 2 or 3 pentests that
started the hacking part of our field.

~~~
qwertyuiop924
I never said he invented it, but he did execute it successfully, on a scale
that may be larger than he admits.

------
ChuckMcM
I find that fascinating! I have a faint recollection that one of the bugs on
the 80186 (the high-integration 8086 that Intel built) was due to cross-
coupled noise from the metal layer to one of the register bits, and the fix
was to reroute one of the signals in polysilicon instead. I would never have
considered that sort of effect as being exploitable as a back door.

------
mrob
Time to start building discrete transistor CPUs? This discrete 6502 project
was posted here recently:

[https://news.ycombinator.com/item?id=11703596](https://news.ycombinator.com/item?id=11703596)

It's far too slow for most uses, but that's mostly because NMOS logic doesn't
handle high capacitance well. NMOS logic uses MOSFETs as constantly enabled
pull-up resistors, so they can't be very strong pull-ups or power consumption
would be too high. I expect a CMOS design would be able to run much faster,
especially with high voltage and the smallest transistors available.
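The NMOS-versus-CMOS point above can be put in rough numbers with a back-of-envelope RC calculation. Every value here is an assumption chosen for illustration (a weak 10 kΩ NMOS-style pull-up, bounded by static power, against a 100 Ω CMOS on-resistance, driving a discrete-wiring node of 50 pF), not a measurement of any real design.

```python
import math

# Back-of-envelope 10%-90% rise times for a discrete logic node.
# All component values are illustrative assumptions.

VDD = 5.0          # supply (volts)
C_NODE = 50e-12    # assumed node capacitance for discrete wiring: 50 pF

# NMOS-style pull-up: a weak resistor, sized so static power per gate
# stays sane while the output is pulled low (P = VDD^2 / R = 2.5 mW here).
R_NMOS = 10e3      # 10 kOhm
# CMOS pull-up: the PMOS only conducts while driving high, so it can have
# a much lower on-resistance with no static power penalty.
R_CMOS = 100.0     # 100 Ohm assumed on-resistance

def rise_time_10_90(r, c):
    """10%-90% rise time of an RC charge-up: t = R * C * ln(0.9/0.1)."""
    return r * c * math.log(9)

print(f"NMOS pull-up: {rise_time_10_90(R_NMOS, C_NODE) * 1e9:.0f} ns")
print(f"CMOS pull-up: {rise_time_10_90(R_CMOS, C_NODE) * 1e9:.1f} ns")
```

With these assumed values the rise time scales directly with the pull-up resistance, so the CMOS node is about two orders of magnitude faster for the same capacitance, which is mrob's point about a discrete CMOS design.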

Individual transistors can be sampled and destructively tested, and the order
they are placed can be randomized to make it harder to subvert the circuit by
replacing them with microcontrollers.

That leaves RAM, which is far too bulky to build from discrete transistors.
But you could encrypt the RAM in hardware, mirror it across multiple chips
each encrypted with a different key, and check that they all read back the
same once decrypted. The same could be done with mass storage devices. EDIT:
on second thought, this will not defend against replay attacks within the
storage device. I'm not sure if reliable detection of a malicious storage
device is even possible without having some known-good storage.
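The mirror-and-check scheme sketched above can be modeled in software. This is a toy only: the hash-derived XOR keystream stands in for whatever cipher real hardware would use, and the "chips" are just dictionaries.

```python
import hashlib

# Toy model of mirrored, per-chip-encrypted RAM: the same word is stored
# in N simulated chips, each under a different key; a read decrypts all
# copies and checks they agree. The keystream is an illustrative stand-in
# for a real hardware cipher.

def keystream(key: bytes, addr: int, length: int) -> bytes:
    """Per-chip, per-address keystream derived from a hash (sketch only)."""
    return hashlib.sha256(key + addr.to_bytes(8, "big")).digest()[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class MirroredRAM:
    def __init__(self, keys):
        self.keys = keys
        self.chips = [{} for _ in keys]  # each "chip": addr -> ciphertext

    def write(self, addr: int, word: bytes):
        for chip, key in zip(self.chips, self.keys):
            chip[addr] = xor(word, keystream(key, addr, len(word)))

    def read(self, addr: int) -> bytes:
        plains = [xor(chip[addr], keystream(key, addr, len(chip[addr])))
                  for chip, key in zip(self.chips, self.keys)]
        if len(set(plains)) != 1:
            raise RuntimeError(f"mirror mismatch at {addr:#x}: tampering?")
        return plains[0]

ram = MirroredRAM(keys=[b"chip-A", b"chip-B", b"chip-C"])
ram.write(0x1000, b"secretpw")
assert ram.read(0x1000) == b"secretpw"

# A chip that flips ciphertext bits is caught on read...
ram.chips[1][0x1000] = xor(ram.chips[1][0x1000], b"\x01" * 8)
try:
    ram.read(0x1000)
except RuntimeError:
    print("tamper detected")
# ...but, as the EDIT above notes, replaying an *old* valid ciphertext at
# the same address would pass this check without versioning or nonces.
```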

~~~
nickpsecurity
The hardware guru who taught me warned of another risk: you can't uninvent a
tech advance once it's invented. The point being, such techniques assume you
can inspect what's going on because you're using components you know do only
X. Yet, as chips get nanoscale, you can actually embed entire CPUs and RF
systems in between larger components, invisible to visual inspection, and
they might not show up in black-box testing. You can try to act like those
nodes and their risks don't exist, but subversives can still use them against
you.

The simple method here might be swapping more discrete chips or components
out for others that are those components plus an entire SoC. Then, once they
know your configuration, they hit you. Or they tell each one to leak on a
different frequency or whatever, all at once, figuring it out later. All
kinds of crazy stuff is possible.

So, just make sure you buy older stuff under different names with cash at
unusual locations. Have proxies do it for you with legit excuses. Then, use
multiple systems with voter logic. Tends to work out better than alternatives.
Usability issues for sure, though. :)
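The "multiple systems with voter logic" suggestion above is N-modular redundancy applied to subversion rather than random faults. A minimal sketch, with made-up inputs:

```python
from collections import Counter

# Sketch of voter logic: run the same computation on multiple independently
# sourced systems and take the majority answer, so a single subverted
# machine can't silently alter results. Purely illustrative.

def vote(results):
    """Majority vote over results from independent systems."""
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many disagreeing systems")
    return winner

print(vote(["42", "42", "42"]))        # all agree
print(vote(["42", "42", "tampered"]))  # one subverted system is outvoted
```

The usual caveat applies: this only helps if the systems fail (or are subverted) independently, which is exactly why the comment suggests buying different hardware from different sources.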

------
Jach
Except... [http://www.dwheeler.com/trusting-trust/](http://www.dwheeler.com/trusting-trust/)

I think 'nickpsecurity has previously made some interesting remarks on the
issue at lower levels...

~~~
aabaker99
Can you explain this a bit more? I've only read the abstracts of both, but if
I understand right, your link deals with compiler-level attacks, while the OP
deals with hardware-level attacks ("fabrication-time attacker").

~~~
Jach
It was basically a comment on the submission title -- Ken Thompson's famous
attack has had a known counter for a while. For the submission itself, it's
not as related; as other commenters note, they don't provide a way for the
exploit to propagate like KT's attack does. I'll have to read the full paper
later, but I wonder if it references
[https://www.usenix.org/legacy/event/leet08/tech/full_papers/...](https://www.usenix.org/legacy/event/leet08/tech/full_papers/king/king_html/)

------
aabaker99
For those who are also wondering, this is a preprint. [1]

Kaiyuan Yang, Matthew Hicks, Qing Dong, Todd Austin, and Dennis Sylvester,
“A2: Analog Malicious Hardware”, Proceedings of the IEEE Symposium on Security
and Privacy (Oakland), to appear May 2016.

[1]
[http://www.impedimenttoprogress.com/publications/](http://www.impedimenttoprogress.com/publications/)

------
polarix
This isn't _quite_ the same thing -- to be completely analogous, we'd need the
fabricator to also recognize that it was fabricating another fabricator, and
then change _THAT_ generated fabricator to have the same intervention but only
in the case that we care about compromising.

------
mrb
"Analog malicious hardware" made me think of
[https://en.wikipedia.org/wiki/The_Thing_(listening_device)](https://en.wikipedia.org/wiki/The_Thing_\(listening_device\))
that snooped on Americans for seven years...

------
schultetwin1
Could one defense be to design the chip to not have any empty space? In other
words, fill in any empty area with test circuitry such that you couldn't tell
which areas were actually used and which weren't.

~~~
impdmnt2Prgrss
It would be very complex to fill the entire chip with cells whose
functionality mattered (otherwise the attacker could replace them without the
defender noticing) and get them wired into the rest of the chip. There is a
tradeoff between area utilization and routability of the design: it gets
exponentially more difficult to route a design as its area utilization
increases. This is why most commercial chips have 20% to 30% free space in
the layout.

Even worse, in many commercial chips, there are spare cells to allow for cheap
low-level patching. The attacker can just swap out one of these cells with
their own and have an attack that only modifies a single cell.

------
nickpsecurity
Called it! It was No. 7 in a low-ranked comment [1], the third option in the
link at the bottom of another [2] for standard cells (knew they'd get messed
with), and mentioned repeatedly on Schneier's blog. The guy I learned risk
from said he actively countered analog poisoning of 3rd-party I.P. his
company licensed. He said he was constantly finding it, mostly for I.P.
obfuscation but sometimes more nefarious. Here's one of his observations on
subverting crypto processors with digital or analog additions:

"Controlling bits like the Carry flag is essential to the security of all
crypto algorithms (techniques like DPA and "timing attacks" try to discover
this information by observing the operation of the CPU); if you have a
hardware way to transfer just this ONE bit, then most crypto available today
is useless."

He kept pointing out, probably from experience, that you could just modify a
bit here, an MMU there, or add an RF circuit to bypass plenty of protections.
Nobody would even notice analog additions because "their digital tools can't
see it." It would take careful reverse engineering. An old risk, already
deployed in production, re-invented in a neat new paper with a new technique.

Honestly, I originally got the subversion idea from the MULTICS Security
Evaluation [3] [4]. Schell and Karger, _not Thompson_, should get credit for
the first attack like this, as they introduced software that kept poking at a
memory location until the MMU experienced an intermittent failure. They got
in since software people assumed HW always worked. They also invented the
basis of the "Thompson Attack" (see note below). So, I predicted HW trojans
sitting on MMUs, IOMMUs, PCI, TRNGs, and some other things, using
non-standard circuits that nonetheless preserve timing, etc. So, a few years
ahead on this one.

Note: Karger and Schell also invented, in the same project, the idea of
subverting a PL/I compiler to insert malicious code into stuff compiled with
it, including the OS. Thompson read that and expanded on it with Trusting
Trust. Now, the Karger and Schell attack is called the "Thompson Attack."
Nah, the founders of INFOSEC thought of that one first, too. Take that,
Thompson fanboys! :P

[1]
[https://news.ycombinator.com/item?id=10906999](https://news.ycombinator.com/item?id=10906999)

[2]
[https://news.ycombinator.com/item?id=10468624](https://news.ycombinator.com/item?id=10468624)

[3] [https://www.acsac.org/2002/papers/classic-multics-orig.pdf](https://www.acsac.org/2002/papers/classic-multics-orig.pdf)

[4] [https://www.acsac.org/2002/papers/classic-multics.pdf](https://www.acsac.org/2002/papers/classic-multics.pdf)

~~~
kragen
AFAIK Ken invented the procedure of quining the compiler backdoor to remove it
from the compiler source code.

~~~
nickpsecurity
OK, it's not clear here as I read each paper. Here's what each one says; the
Multics paper first. It already has a discussion of source vs. object: source
is more visible but survives recompilations. That's the backdrop here. Here's
the quote:

"It was noted above that while object code trap doors are invisible, they are
vulnerable to recompilations. The compiler (or assembler) trap door is
inserted to permit object code trap doors to survive even a complete
recompilation of the entire system. In Multics, most of the ring 0 supervisor
is written in PL/I. A penetrator could insert a trap door in the PL/I compiler
to note when it is compiling a ring 0 module. Then the compiler would insert
an object code trap door in the ring 0 module without listing the code in the
listing. Since the PL/I compiler is itself written in PL/I, the trap door can
maintain itself, even when the compiler is recompiled."

Given that backdrop, it's hard to say whether they put it in the source or
object code of the compiler. It's ambiguous: "since the PL/I compiler is
itself written in PL/I." Either it's because they have a backdoor in its
source code, or because the backdoored object code is the PL/I compiler that
will be used to re-compile any PL/I source. The next paragraph indicates they
insert the trapdoor in another routine using object code that closely matches
that produced from PL/I source. So, I'm assuming... with some uncertainty...
that they bugged the object code of the PL/I compiler to add the trapdoor to
it and all executables on compiles, with nothing left in the source.

Then, Thompson paper simply says:

"First we compile the modified source with the normal C compiler to produce a
bugged binary. We install this binary as the official C. We can now remove
the bugs from the source of the compiler and the new binary will reinsert the
bugs whenever it is compiled. Of course, the login command will remain bugged
with no trace in source anywhere."

Sounds like they're doing the same thing, except the MULTICS attack uses
assembly code directly. They might have coded it in PL/I first, then directly
input the code. That would make both attacks equal. Who knows. That they each
bug the compiler at the object level with no source-level evidence seems
accurate. In that case, the Thompson attack is the MULTICS PL/I attack
applied to C, with clear use of C for the subversion artifact.
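The self-maintaining trap door both papers describe can be demonstrated as a toy string-rewriting "compiler." This is a sketch of the idea only, not either paper's actual code; the program names and back-door password are made up. The quine trick is that the bugged binary carries its own text and re-emits it whenever it sees the compiler being recompiled.

```python
# Toy of the Karger/Schell -> Thompson self-maintaining trap door. The
# bugged compiler binary recognizes two inputs: the login program
# (stage 1: insert a back-door password) and the compiler's own clean
# source (stage 2: propagate itself into the recompiled compiler).

TRAP = '''def compile(src):
    trap = {trap!r}
    if "def login" in src:
        # Stage 1: bug the login program with a back-door password.
        src = src.replace('return pw == "secret"',
                          'return pw == "secret" or pw == "joshua"')
    if "def compile" in src:
        # Stage 2: any recompiled compiler is the bugged binary again.
        return trap.format(trap=trap)
    return src
'''

# The installed compiler binary: the trap template applied to itself (quine).
bugged_binary = TRAP.format(trap=TRAP)
ns = {}
exec(bugged_binary, ns)
compile_fn = ns["compile"]

clean_compiler_src = "def compile(src):\n    return src\n"
login_src = 'def login(pw):\n    return pw == "secret"\n'

# Recompiling the *clean* compiler source still yields the bugged binary:
assert compile_fn(clean_compiler_src) == bugged_binary

# And login compiled by it accepts the back-door password, with no trace
# of the bug in any source file:
ns2 = {}
exec(compile_fn(login_src), ns2)
assert ns2["login"]("joshua") and not ns2["login"]("wrong")
```

Dropping the trap door text out of the compiler's visible source while keeping it alive in the binary is exactly the "no trace in source anywhere" property both quotes describe.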

~~~
kragen
You're right! I imagine Karger and Schell wrote their backdoor in PL/I too —
it seems like it would just be a lot easier that way.

------
Cozumel
Good for Ken!! Who? lol

~~~
lossolo
You use things he worked on every day; one of them is UTF-8. He also designed
and implemented UNIX. He was the creator of the B language, thanks to which
we have the C language. He also worked on Go and Plan 9.

~~~
SixSigma
And even wider, almost everything in the modern world was invented at Bell
Labs.

~~~
rubiquity
Or Xerox PARC.

~~~
SixSigma
Not even close to Bell Labs.

~~~
SixSigma
Sorry, downvoter, but you really have no idea how much more Bell Labs has
done.

The simplex algorithm, the transistor, C and C++, the S programming language,
the CCD, the mobile phone...

Just some off the top of my head.

