
On CPU backdoors - Trusting hardware - emillon
http://theinvisiblethings.blogspot.com/2009/03/trusting-hardware.html
======
nicholas_tuzzio
This touches on what I think is one of the more interesting technological
problems that we have to worry about right now. As a disclosure, most of the
reason I think that is because I'm a PhD student doing research on hardware
security. Anyways, there has been a lot of interesting discussion on the
topic. DARPA has had at least two programs dedicated to trying to solve this
problem, IRIS and TRUST [1]. Both of them seemed to be more interested in
tampering by third-parties, perhaps because it's not in their best interest to
accuse the people designing their ICs of attacking them.

In the long run, verifying the functionality and intentions of software and
hardware are probably roughly the same problem, with no clear solution to
either in the foreseeable future.

[1] <http://www.wired.com/dangerroom/2011/08/problem-from-hell/>

~~~
mindslight
> In the long run, verifying the functionality and intentions of software and
> hardware are probably roughly the same problem

Both require trusting the source code (a languages problem), as well as
trusting the translator. In the case of software, the translator is an end-
user accessible compiler/interpreter which is itself more software, thus
recursively auditable.

In the case of hardware, the translator is an entire _institution_ , which can
only be trusted if you have recourse against said institution. As an
individual end-user (uber alles) can then never fully trust their hardware, it
makes sense to draw a line in the sand and proceed from that assumption.

(and suuure, put a picture of a pic16f84, the chip that started the revolution
of microcontroller DIY, at the top of an article on dodgy hardware..)

------
conductor
Indeed, a Russian security specialist claims to have proof that there is a
backdoor in Intel's virtualization technology: <http://www.xakep.ru/post/58104/>
And here is the Google-translated version of the article:
[http://translate.google.com/translate?hl=en&ie=UTF8&...](http://translate.google.com/translate?hl=en&ie=UTF8&prev=_t&sl=ru&tl=en&u=http://www.xakep.ru/post/58104/)

~~~
jakeonthemove
That's pretty interesting - I wonder if there are any other people/companies
who've delved into this matter?

------
microarchitect
DARPA and others are concerned about this exact scenario and are funding
research into reverse-engineering chips to detect these types of backdoors.
There are two parts to this problem. One part is using electron microscopes and
lasers and whatnot to go from silicon to a netlist of gates. The second part,
which I'm a little more familiar with, is "decompiling" these gates into
higher-level structures like ALUs and multipliers. The hope is that we can
identify maybe 80% of the circuit to be good/recognized using purely
algorithmic techniques and then a human can dig in and look through the
remaining 20% for anything suspicious.
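To make the "decompiling" step concrete, here is a toy sketch of the recognition idea: claim the gates that match a known library structure and report what fraction of the circuit was recognized, leaving the rest for a human. The netlist format, gate names, and the type-count matcher are all invented for illustration; real tools match the actual connection graph, not just gate-type counts.

```python
# Netlist: gate name -> (gate type, input nets, output net)
netlist = {
    "g1": ("XOR", ("a", "b"), "s0"),
    "g2": ("AND", ("a", "b"), "c0"),
    "g3": ("XOR", ("s0", "cin"), "sum"),
    "g4": ("AND", ("s0", "cin"), "c1"),
    "g5": ("OR",  ("c0", "c1"), "cout"),
    "g6": ("NAND", ("x", "y"), "z"),   # stray gate: matches no known template
}

# A known "library" structure: the gate-type counts of a full adder.
FULL_ADDER = {"XOR": 2, "AND": 2, "OR": 1}

def recognize(netlist, template):
    """Greedily claim gates whose type counts fill out the template."""
    claimed, need = set(), dict(template)
    for name, (gtype, _, _) in netlist.items():
        if need.get(gtype, 0) > 0:
            claimed.add(name)
            need[gtype] -= 1
    # Only a complete match counts as recognized.
    return claimed if not any(need.values()) else set()

known = recognize(netlist, FULL_ADDER)
suspicious = set(netlist) - known
coverage = len(known) / len(netlist)
print(f"recognized {coverage:.0%}, left for human review: {sorted(suspicious)}")
# -> recognized 83%, left for human review: ['g6']
```

The 80/20 split mentioned above is exactly this `coverage` number: whatever the matcher can't explain is what the human digs through.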

They do seem to be more concerned about chips the US buys from certain other
countries than about the likes of Intel/AMD building in backdoors.

EDIT: I should also mention that this is not just a concern of American
defence. I'm aware of the Indian govt also funding this sort of research with
similar motivation. In that instance, however, the professor was trying to
attack the problem through the lens of formal techniques. I think the idea was
to prove that if the chip interacts with the outside world only through a
limited set of channels, then you can't sneak data out through some sort of
covert channel hiding in the "regular" communication. The specific concern
here was about routers, switches, and similar equipment sneaking sensitive
data out of a secure network.

~~~
sliverstorm
Doesn't the government already make use of IBM's manufacturing capabilities
for Top Secret+ chips to try and mitigate the risk of this scenario?

~~~
microarchitect
I'm not sure but I think you may be right because the researchers have been
granted access to some IBM cell libraries. (I was wondering why IBM agreed to
this, but this probably explains it.)

My understanding is that the main concern here is chips in COTS equipment
bought from countries that are considered by some to be untrustworthy.

~~~
sliverstorm
_I was wondering why IBM agreed to this_

Allegedly the government requiring a domestic fab for some chips is one of the
biggest reasons IBM's fab remains funded.

All hearsay though.

------
js2
2009. I'm surprised this hasn't been on HN before.

Related - <http://cm.bell-labs.com/who/ken/trust.html>

~~~
conductor
I just remembered another related case, the "Induc" virus, which infected a
library file in the Delphi distribution, so every program compiled afterwards
was infected. Several fairly popular programs were compiled on developers'
infected computers and spread around the world.

<http://delphi.about.com/od/humorandfun/f/w32-induc-a-delphi-virus.htm>

<https://www.f-secure.com/weblog/archives/00001752.html>

------
mistercow
> So, if we buy a laptop from vendor X, that might be based in some not-fully-
> democratic country

Like, say, the US...

~~~
knieveltech
I have no idea why this is getting downvoted. The current political climate in
the US is fucking abysmal, and bears little resemblance to a representative
democracy.

~~~
grannyg00se
Probably because the comment didn't contribute to the discussion at hand and
had a high troll potential. It's unfortunate that the author decided to throw
that comment into the article because it is naive sounding and seems quite out
of place there as well.

~~~
mistercow
I suppose it doesn't open up much possibility for discussion afterward, but I
wasn't intending to troll. I just think we need to remember that we don't have
to imagine some undemocratic international threat to understand why these
issues are important.

------
lukeschlather
Never mind us. Why should Intel trust Intel? Like any good computing company,
I would imagine they are mostly self-hosting. The chips they built last year
are the chips they use to run simulations and design the chips they put out
next year. Backdoors can be exploited by any employee who knows about them,
and it would be extraordinarily damaging for Intel to allow backdoors into
hardware they depend on.

Even if they built in some sort of a kill-switch, how could anyone confidently
say that a rogue engineer involved in the design couldn't bypass it and use
the chip against Intel? Ultimately, I think there's so much danger that I have
to assume Intel is competent enough not to do something so foolish as
introduce deliberate backdoors.

~~~
javert
But there are SO many chipsets they put out. e.g. I have a Core 2 Duo system,
but the exact chipset is T9400.

Intel could keep track of which chipsets are vulnerable and which are not, and
carefully pick which kind gets released to who.

Obviously, Intel employees aware of the strategy would only use the
invulnerable chipsets themselves.

------
lmm
I'm expecting someone to produce a fully open hardware stack sooner or later -
there's already a freely available sparc processor design, and I recall some
open-source people working on a fully open graphics card. (Of course you still
have to trust your fab, but that's not very different from trusting your
compiler).

~~~
andylei
how is trusting your fab any different than trusting Intel?

> that's not very different from trusting your compiler

if you were paranoid enough to be worrying about CPU backdoors, why would you
trust your compiler?

~~~
lmm
>how is trusting your fab any different than trusting Intel?

You increase the cost of an attack - it's harder to change a processor's
behavior by editing the mask than the VHDL. If you were super-paranoid you
could source to multiple different fabs and run the chips you get back in
parallel, with some sort of trap that goes off whenever you get different
results from one or other processor.
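The multi-fab lockstep idea above can be sketched in a few lines: run the same inputs through chips from different sources and trap on any divergence. The "chips" here are stand-in Python functions and the trigger value is contrived; a real rig would compare bus traffic or outputs in hardware.

```python
class DivergenceTrap(Exception):
    """Raised when chips from different fabs disagree on an input."""

def lockstep(inputs, *chips):
    """Run every chip on every input; trap if results ever differ."""
    outputs = []
    for x in inputs:
        results = {chip(x) for chip in chips}
        if len(results) != 1:
            raise DivergenceTrap(f"chips disagree on input {x!r}: {results}")
        outputs.append(results.pop())
    return outputs

# An honest chip vs. one with a (contrived) trigger-value backdoor.
honest = lambda x: x * x
backdoored = lambda x: 0 if x == 0xdead else x * x

print(lockstep(range(5), honest, backdoored))  # -> [0, 1, 4, 9, 16]
# lockstep([0xdead], honest, backdoored)       # would raise DivergenceTrap
```

Note the limitation raised in the reply below: the trap only fires if the trigger condition is ever exercised, so a backdoor keyed to a rare input can sit undetected through any amount of lockstep operation.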

>if you were paranoid enough to be worrying about CPU backdoors, why would you
>trust your compiler?

If you don't trust your compiler, why are you even bothering worrying about
CPU backdoors when you've got a much easier attack vector open?

~~~
javert
_run the chips you get back in parallel_

Who's to say you're going to trigger the condition that causes the backdoor?
Seems very unlikely. If you have ideas on this, though, I'd be interested.

 _If you don't trust your compiler, why are you even bothering worrying about
CPU backdoors when you've got a much easier attack vector open?_

You may not trust your compiler, and therefore do certain things in a VM where
e.g. access to network is limited. See [1].

[1] <http://qubes-os.org/Home.html>

------
javajosh
The irony is that electron microscopes run on computers, too. And they are
probably even networked.

So really you can only trust an analog, optical microscope, which, also
ironically, is not quite good enough to resolve individual transistors (being
limited to about 200 nm or so in green light).

Last but not least, our CPUs are always designed by other computers, so it's
theoretically possible that a backdoor could propagate itself forever.

------
zokier
One thing to note is that having good perimeter security makes exploiting
hardware backdoors much harder. I mean if you are monitoring all of your
internet traffic then even if somebody with an access to a hardware backdoor
tried to steal data or log your activities the traffic caused by those
attempts would be caught at the perimeter.
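The perimeter idea above amounts to an egress allowlist: flag any flow leaving for a destination you didn't expect. A minimal sketch, with invented hosts and a made-up flow format; a real deployment would sit on a firewall or netflow feed rather than a Python list.

```python
# Known-good destinations that machines on this network may talk to.
ALLOWED_DESTS = {"10.0.0.5", "10.0.0.9"}

def flag_egress(flows):
    """flows: iterable of (src, dst) pairs seen at the perimeter.
    Returns the flows headed somewhere off the allowlist."""
    return [(src, dst) for src, dst in flows if dst not in ALLOWED_DESTS]

flows = [
    ("10.0.1.20", "10.0.0.5"),      # normal internal traffic
    ("10.0.1.20", "203.0.113.44"),  # unexpected external host: suspicious
]
print(flag_egress(flows))  # -> [('10.0.1.20', '203.0.113.44')]
```

Of course this only catches a backdoor that phones home over the monitored link; a covert channel hidden inside "regular" traffic to an allowed host (the concern raised earlier in the thread) would slip through.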

------
robot
The problem is real, the solution not so much. With hardware memory protection,
drivers will be isolated, but you cannot protect yourself from backdoor logic
built into the hardware itself.

~~~
wmf
An IOMMU does protect against rogue peripheral devices, just not a rogue
processor.

~~~
simcop2387
Yes, a driver that uses the IOMMU/VT-d for FireWire devices will prevent
attacks over that bus from dumping memory and recovering keys, assuming the
memory isn't reused and doesn't already contain them, etc. Combining this with
a quick way to zero out whatever DMA region is being used would be about as
foolproof as you could expect anything to be for protecting you from this kind
of attack.

------
kabdib
Note that you also have to trust the /tools/ that generate the circuits.
Nobody's going to check every single gate on the chip against the source code;
it would be easy for a VHDL compiler to lay down extra stuff.

Shades of "Reflections on Trusting Trust," but in hardware. Doesn't have a
complete replication loop, though, which would have the compromised hardware
re-infecting the very VHDL compilers that generated the chip backdoor :-)

------
orblivion
I bet this is where Stallman does another 180 (like with cloud computing) and
will claim that open hardware is paramount.

~~~
SkyMarshal
What was his 180 with cloud computing?

~~~
orblivion
At one point I remember he said it doesn't matter that we can't see the source
code running on remote computers because they're not rightfully in your
control. It's just something you're connecting to with something you do
control. You have the potential to check on your safety because you can see
everything going in and out of your computer.

------
arnoooooo
Regarding open source, I think the point about security is not so much that
you will read the entire source yourself, but that the reading of the source
is, like its writing, a collective enterprise. If there's a backdoor, somebody
at some point will see it.

------
breakyerself
Aren't there laws against companies making backdoors like this? Not that I'm
naive enough to think that means it won't happen.

------
VMG
typo in headline

~~~
emillon
Fixed - thank you!

