
CPU Backdoors - 2510c39011c5
http://danluu.com/cpu-backdoors/
======
Animats
An obvious place for a backdoor is in remote management CPUs embedded in the
network card.

[http://www.ssi.gouv.fr/IMG/pdf/csw-
trustnetworkcard.pdf](http://www.ssi.gouv.fr/IMG/pdf/csw-trustnetworkcard.pdf)

Network cards which support the RMCP/IPMI protocol are obvious points of attack.
They can reboot machines, download boot images, install a new OS, patch
memory, emulate a local console, and control the entire machine. CERT has some
warnings:

[https://www.us-cert.gov/ncas/alerts/TA13-207A](https://www.us-
cert.gov/ncas/alerts/TA13-207A)

If there's a default password in a network card, that's a backdoor. Here's a
list of the default passwords for many common systems:

[https://community.rapid7.com/community/metasploit/blog/2013/...](https://community.rapid7.com/community/metasploit/blog/2013/07/02/a-penetration-
testers-guide-to-ipmi)

"admin/admin" is popular.

The network card stores passwords in non-volatile memory. If anyone in the
supply chain gets hold of the network card briefly, they can add a backdoor by
plugging the card into a chassis for power, connecting a network cable, and
adding an extra user/password of their own using Linux "ipmitool" running on
another machine. The card, when delivered to the end user, now has a backdoor
installed. If you have any servers you're responsible for, try connecting with
IPMI and do a "list" command to see what users are configured. If you find
any you didn't put there, that's a big problem.
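That audit is easy to script. A minimal Python sketch (the allowlist, the user names, and the sample text are all invented for illustration; real `ipmitool user list` output may need more careful column handling):

```python
# Flag BMC accounts that aren't on your own allowlist. The allowlist and
# the sample output below are invented for illustration.

EXPECTED = {"admin", "monitor"}

def unexpected_users(ipmitool_output, expected=EXPECTED):
    """Parse 'ipmitool user list'-style output; return names not in the allowlist."""
    found = set()
    for line in ipmitool_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[0].isdigit():
            found.add(fields[1])  # second column is the user name
    return sorted(found - set(expected))

sample = """ID  Name     Callin  Link Auth  IPMI Msg  Channel Priv Limit
1   admin    true    true       true      ADMINISTRATOR
2   monitor  true    false      false     USER
3   svc_x    true    true       true      ADMINISTRATOR"""

print(unexpected_users(sample))  # -> ['svc_x']: an account nobody admits to creating
```

Anything the script flags deserves the same treatment as a found rootkit: assume the whole board is compromised.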

CERT warns that, if you use the same userid/password for multiple machines in
your data center, discarded boards contain that password. So discarded boards
must be shredded.

~~~
AlyssaRowan
Absolutely. NICs in general are a very fruitful vector for persistence, and
have been extensively studied by the NSA.

Generally, anything with a microcontroller that might run firmware (BIOS or
UEFI), access DMA (via PCI, PCIe, FireWire) or be a storage peripheral that
might pass code to the boot process (HDD/SSD/CD/DVD/BD/Flash drive/memory card
firmware, including USB) or input (USB) is a potential problem.

That is a pretty damn big attack surface, and civilian researchers are able to
do this too (the only big advantages Nation State Adversaries really have are
funding and _occasionally_ vendor cooperation, although I'd expect that to be
rare in this case for operational security reasons - they might get datasheets
under false pretenses, but so could we; we just wouldn't get away with it if
caught <g>).

The TPM architecture isn't so much part of the problem here as an attempted
solution, but it falls short and has downsides too.

Supply chain integrity is a _huge_, possibly unsolvable problem. I'd be
interested, however, to see some solutions that massively complicate any such
attack, like an open trusted processor which boots ROM externally readable in
hardware with no override and keeps secure hash chains of the firmware that
loads - again, which would be externally verifiable with no way to override in
firmware. That would put a crimp in their day.
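The "secure hash chains of the firmware that loads" part of that wish is simple to model. A Python sketch of TPM-style measurement (stage contents here are placeholders):

```python
import hashlib

def extend(chain, blob):
    """TPM-style extend: new chain value = H(old chain || H(blob))."""
    return hashlib.sha256(chain + hashlib.sha256(blob).digest()).digest()

def measure_boot(stages):
    chain = b"\x00" * 32  # fixed starting value, known to every verifier
    for stage in stages:
        chain = extend(chain, stage)
    return chain

good = measure_boot([b"boot ROM", b"firmware", b"bootloader"])
bad = measure_boot([b"boot ROM", b"tampered firmware", b"bootloader"])
assert good != bad  # tampering with any stage changes every later value
```

The point of making the chain externally readable in hardware, with no firmware override, is that the verifier never has to trust the code being measured.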

~~~
sliverstorm
_Supply chain integrity is a huge, possibly unsolvable problem._

One of the arguments for domestic manufacture of at least some keystone parts,
and one of the reasons IBM is still in the fab business.

~~~
AlyssaRowan
Since those who say "domestic" usually mean USA, I'm guessing you live there.

Bad news: your government is one of the attackers. (So is mine,
unfortunately.) I take it you've seen the NSA interdiction guys taking
discreet hacksaws to Cisco parcels _en route_ to 'implant' (backdoor) them by
now?

Did you think that was something that only happens abroad?

~~~
sliverstorm
It's not something that only happens abroad, but to my (admittedly limited)
knowledge the US is a less severe problem for a US company.

That is, they snoop on you, yes. But they don't sell you shoddy knock-offs
instead of real parts, they don't give intel on you to your competitors, and
they don't actually attack you Stuxnet-style. (that I know of)

I could be way off base, but as a US company I'd much prefer a US gov't
backdoor to a Chinese gov't backdoor, and a supply chain contaminated by
knock-offs is a nightmare unto itself.

P.S. Yes, I'm in the US, and yes, you're right about "domestic"

P.P.S. IBM's fab (to my knowledge) mostly exists to serve the US gov't anyway.
At least for the NSA themselves, the advantages to domestic production are
there :)

------
tomerv
While the main point of the article is interesting, some of the details don't
really make sense.

For example, it would be difficult to make an instruction like fyl2x or fadd
cause a privilege level change. The reason is that floating point instructions
are executed on a separate unit (the FPU), with a separate decoder. This unit
would not have the means to communicate back information such as "change
privilege level" (normally it can only signal floating point exceptions, and
other than that its only output is on the floating point registers). It would
make more sense to encode the backdoor on an illegal opcode, i.e. an opcode
that under normal conditions would generate a #UD exception, but with the
correct values in the registers would trigger some undocumented behavior.
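A toy software model of that trigger (the opcode and magic constants are invented, purely to show why such a thing would survive casual fuzzing):

```python
# Toy model, not real microcode: an opcode that normally raises #UD,
# unless the registers hold magic values. Opcode and constants invented.

MAGIC_EAX, MAGIC_EBX = 0x1337C0DE, 0x0BADF00D

def execute(opcode, regs):
    if opcode == 0xF00F:                       # our invented "illegal" opcode
        if regs.get("eax") == MAGIC_EAX and regs.get("ebx") == MAGIC_EBX:
            regs["cpl"] = 0                    # undocumented: drop to ring 0
            return "backdoor"
        raise ValueError("#UD")                # the documented behavior
    return "ok"
```

A fuzzer that sweeps opcodes but not the full 64-bit register cross-product will only ever see the #UD, which is exactly what makes this kind of trigger hard to find from the outside.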

Another question is how to hide this backdoor in the microcode. Presumably, at
some point someone might stumble upon the backdoor and ask around about it. If
the backdoor depends on some "magic values", it would be relatively easy to
spot just by looking at the microcode.

There's also the point that the author mentioned of "fixing" the processor at
some point during the production process. I don't think that the author
understands the way mass production of microchips works. It's very much not
possible to do something like this while keeping the production price at the
same level, or without someone noticing the extra step in the production
process.

All in all, it sounds much easier to find security bugs in other parts of the
system.

~~~
nhaehnle
_The reason is that floating point instructions are executed on a separate
unit (the FPU), with a separate decoder._

I don't think that has been true for a very long time.

 _If the backdoor depends on some "magic values", it would be relatively easy
to spot just by looking at the microcode._

The problem with both your theory and the article's theory is that nobody
outside the chip companies themselves really knows how the microcode works.
This reduces both the people who could pull off such a backdoor and the people
who could discover it to a very small number.

A similar thing applies to your point about changes during manufacturing.

Overall, this means that CPU backdoors are a thing to be concerned about,
keeping in mind that it's probably a technique that will, for a long time, be
limited to the kind of people who were responsible for Stuxnet.

~~~
tomerv
There are so many people involved in the design and manufacturing of a
processor, that I don't see how it's possible to hide a backdoor, either in
the microcode or during manufacturing. We're not talking about some secret
government agency, we're talking about a place with many workers around the
world, with different agendas. Eventually someone will find out about the
backdoor and leak information about its existence.

~~~
Tuna-Fish
> There are so many people involved in the design and manufacturing of a
> processor, that I don't see how it's possible to hide a backdoor, either in
> the microcode or during manufacturing. We're not talking about some secret
> government agency, we're talking about a place with many workers around the
> world, with different agendas.

However, the end result of both CPU design and the microcode team is
essentially unreadable.

No-one outside Intel can read their microcode updates, as they are obfuscated
in some way, possibly encrypted. This means that compromising just the last
step, the people or tools doing the obfuscating, means you can output whatever
you want with no-one on the team being able to find it out.

The same is true for the CPU design. Created masks are generally not looked
at, other than to verify small spots if it seems there are bugs. Because of
this, compromising the last step between the model and the mask would allow
you to output whatever you want, with none of the thousands of people working
on it ever finding out.

------
higherpurpose
Who needs dirty, traceable CPU backdoors when Intel's SGX technology will
allow them perfect plausible deniability to give the NSA (or China, if they
force them by law) the key to all "secure apps" that will be using the SGX
technology:

> _Finally, a problem that is hard to ignore today, in the post-Snowden world,
> is the ease of backdooring this technology by Intel itself. In fact Intel
> doesn't need to add anything to their processors – all they need to do is
> to give away the private signing keys used by SGX for remote attestation.
> This makes for a perfectly deniable backdoor – nobody could catch Intel on
> this, even if the processor was analyzed transistor-by-transistor, HDL line-
> by-line._

[http://theinvisiblethings.blogspot.com/2013_09_01_archive.ht...](http://theinvisiblethings.blogspot.com/2013_09_01_archive.html)

------
agumonkey
The Novena laptop seems almost devoid of backdoors.
[http://www.wired.co.uk/news/archive/2014-01/20/open-
source-l...](http://www.wired.co.uk/news/archive/2014-01/20/open-source-
laptop)

~~~
crucini
Cool. What about tempest? If you're buying a heavy, slow laptop for security,
it would be nice to know it's tempest-safe.

~~~
agumonkey
I wouldn't bet a dollar that it is tempest safe.

ps: for the ignorant, like I was two minutes ago:
[http://www.webopedia.com/TERM/T/Tempest.html](http://www.webopedia.com/TERM/T/Tempest.html)
tempest is about reading a device's electromagnetic emissions to intercept data.

------
ce4
A serious flaw in AMD's System Management Unit firmware was very recently
discovered:

[http://media.ccc.de/browse/congress/2014/31c3_-_6103_-_en_-_...](http://media.ccc.de/browse/congress/2014/31c3_-_6103_-_en_-
_saal_2_-_201412272145_-_amd_x86_smu_firmware_analysis_-
_rudolf_marek.html#video)

------
rdl
Wow... light involved in the lithography process causes wear on the lenses? To
what degree?

~~~
rdl
This video, mentioned here, from HOPE, is amazing:
[https://m.youtube.com/watch?v=NGFhc8R_uO4](https://m.youtube.com/watch?v=NGFhc8R_uO4)

~~~
spyder
and here is the direct link to the part about the wear on the lenses:

[http://youtu.be/NGFhc8R_uO4?t=12m32s](http://youtu.be/NGFhc8R_uO4?t=12m32s)

------
crucini
Cool article. I didn't understand how the privilege escalation would be
exploited. Obviously if the attacker already has access to the box, he can get
root with this exploit.

I think a chip backdoor could also be based on information leaking rather than
executing arbitrary code.

The steps would be: 1. Identify critical info, like crypto keys, from
heuristics. This means keeping a special buffer, since you don't know at the
beginning of an RSA operation that it's an RSA operation. The heuristics are
not perfect, of course, but work with standard apps like Firefox, GPG and
Outlook.

2. Exfiltrate the info. Via spread-spectrum RF, timing jitter in packets, or
replacing random numbers in crypto. The article implies that since OSes and
apps mix the hardware RNG with other sources, there's no point in subverting
it. But the CPU can recognize common mix patterns, like in the Linux kernel,
and subvert the final output.

In this case the output entropy is good, but also leaks some secret to a
listener who has the right keys.
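One way such a subverted-but-good-looking output could work is a keyed PRF over hidden state; a hypothetical sketch (the key and construction are invented):

```python
import hashlib
import hmac

BACKDOOR_KEY = b"known only to the listener"  # invented for illustration

def subverted_output(counter, nbytes=32):
    """Uniform-looking bytes that whoever holds BACKDOOR_KEY can regenerate
    exactly, given the (hidden) counter value."""
    msg = counter.to_bytes(8, "big")
    return hmac.new(BACKDOOR_KEY, msg, hashlib.sha256).digest()[:nbytes]

victim_nonce = subverted_output(7)    # passes every statistical test
listener_guess = subverted_output(7)  # but the key holder derives it too
assert victim_nonce == listener_guess
```

No black-box entropy test can distinguish this from honest randomness; only knowledge of the key (or the hardware) reveals the leak.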

~~~
treewash
1. You design your CPU so whenever you execute an add instruction with $r1 =
x, $r2 = y (say these are the add inputs), the next add instruction will
switch to ring-0 mode and run code at address which is the result of the add.

2. You don't need access to the box. You just get the target to load a site
with JS that sets x and y to those specific values and adds them, and then adds
zero to some address you want to execute (which you can aim to be shellcode in
a JS string or something, but even if not there are a million tricks you can
use to execute arbitrary code if you can run code at an arbitrary address).

3. Assuming the JS engine compiles sanely, you now have a way to control any
computer and make it do anything via some JS on any web site. Ring-0 can
totally bypass all virtualization and even the OS itself.
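The two-stage trigger in steps 1-3 fits in a few lines as a toy model (magic operands invented; real hardware would bury this in the execute path, not software):

```python
# Toy model of steps 1-3: an add with magic operands arms the trap,
# and the *next* add is hijacked to run at its result address in ring 0.

MAGIC = (0x1337, 0xBEEF)  # invented trigger operands

class BackdooredALU:
    def __init__(self):
        self.armed = False

    def add(self, x, y):
        if self.armed:                    # second add: hijack control flow
            self.armed = False
            return ("JUMP_RING0", x + y)  # "execute" at the sum, as ring 0
        if (x, y) == MAGIC:               # first add: arm; output looks normal
            self.armed = True
        return ("RESULT", x + y)
```

JS that computes `0x1337 + 0xBEEF` and then `shellcode_addr + 0` would, if compiled down to two machine adds, trip exactly this.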

And this exploit is so simple and powerful that everything else is a waste of
time. No need to use statistical and entropy tricks to leak keys. You can own
any computer with JS on a web site and steal anything you want from memory,
including keys. And you can probably do this without the target noticing
anything.

~~~
TheLoneWolfling
If you're really being nasty you can potentially do this even without JS -
say, by using CSS layout.

~~~
crucini
But can one predict how CSS will look to the CPU? Or do you assume a memcpy
will happen at some point, and catch it then?

~~~
TheLoneWolfling
memcpy is one route.

But, given that (almost) everyone runs browsers for which the binaries, at
least, are available, it's relatively straightforward to come up with
something that triggers the vulnerability.

And even if you don't know what browser they are using, you can make guesses.
For example - if you have an image that's 102px wide and immediately to the
right of an image that's 57px wide, at some point the CPU will probably add 57
and 102. Things like that.

------
bizarref00l
Another recent article on HN
[https://news.ycombinator.com/item?id=8813029](https://news.ycombinator.com/item?id=8813029)
on Intel Management Engine.

------
gaius
A CPU backdoor is impossible only in the sense that, say, sending a submarine
to tap an undersea cable is impossible...

------
dracolytch
CPU backdoors are a very real concern, not only in the CPU itself but also in
the growing complexity of the motherboard chipset. For example, a malicious
memory controller could manipulate data on the way to the CPU, causing a
faithful CPU to do malicious things.

For highly secured systems, this is of growing concern. With the amount of
hardware made in China, the supply chain is treated as a considerable attack
surface that has to be accounted for when sourcing electronics.

~~~
anonbanker
I abandoned x86 a few years back, because I'm far less concerned about China
knowing my secrets than I am with the Five Eyes countries violating my
privacy. The likelihood of the NSA or GCHQ tampering with an Allwinner or
Freescale chip en route is much lower than with Intel or AMD. And far more
resources would be involved than would be financially reasonable to tailor an
operation for a small-potatoes corporation running an all-ARM setup like mine.

So I'm seeing it as a decrease in attack surface, overall.

------
GigabyteCoin
Given the fact that the NSA targets Linux users [0], is it really that far-
fetched that they could be adding backdoors to CPUs ordered by certain NSA
targets?

I'm assuming most Linux enthusiasts build their own rigs, as do I.

[0] [http://www.linuxjournal.com/content/nsa-linux-journal-
extrem...](http://www.linuxjournal.com/content/nsa-linux-journal-extremist-
forum-and-its-readers-get-flagged-extra-surveillance)

~~~
gedrap
Well, to be realistic, almost all Linux users are of no actual interest to
the NSA. All it means is that statistically, someone reading about encryption is
more likely to be thinking about committing a crime. If you count people who
go the extra mile to do heavy encryption for privacy reasons, and people who
do that to hide crimes... That could be interesting.

The Linux Journal is, in some way, an extremist forum - people who are
extremely technically advanced. Being extremist doesn't mean that you are a
terrorist.

Imagine, you have browsing history for every convict for some specific crime
and are tasked to derive a scoring formula. You'd probably see that there's a
positive correlation between hasBrowsedLinuxJournal and isConvicted.
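The correlation-only point can be seen with a toy calculation (every number below is made up):

```python
# Made-up data: (hasBrowsedLinuxJournal, isConvicted) pairs.
rows = [
    (1, 1), (1, 0), (1, 1), (1, 0),
    (0, 1), (0, 0), (0, 0), (0, 0),
]
with_feature = [c for f, c in rows if f == 1]
without_feature = [c for f, c in rows if f == 0]
rate_with = sum(with_feature) / len(with_feature)           # 0.5
rate_without = sum(without_feature) / len(without_feature)  # 0.25
# a naive scoring formula would weight the feature positively,
# with no causal link anywhere in sight
assert rate_with > rate_without
```

The feature "scores" above baseline purely by correlation, which is the whole problem with this kind of profiling.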

Using Linux doesn't put you on the kill list. It just means you share
something in your behavior with people who are of interest to national
security. There probably are many more factors like that - shopping patterns,
movement patterns, etc. It's just that Linux made the headlines and media took
chance to generate some hype.

------
justcommenting
for many modern desktops/laptops (including recent Apple machines, which i
don't think was the case even just a few product cycles ago), Intel's vPro
appears capable of many forms of surveillance/subversion.

in terms of understanding/mitigating these types of threats, i wish an open,
crowdfunded project to reverse engineer the contents of intel's microcode
updates existed, to the point that the results were understandable by the tech
press.

i also wish an easy-to-use package for blacklisting cpu-based and crypto-
related kernel modules (like aes-ni) existed for a broad range of processors.

and of course only somewhat relatedly, i continue to wish the man page for
random(4) would be rewritten in light of the risk of these types of backdoors.

~~~
nezza-_-

        to reverse engineer the contents of intel's microcode updates
    

I don't think that's (sanely) possible. The amount of information needed about
the silicon is very high, and modern x86 processors are (for now) pretty much
impossible to reverse engineer by delayering and taking pictures of the
insides of the chip (14nm = ~60 atoms)... Also, the cost of people who are
able to reverse engineer such stuff would be very, very high.

------
2510c39011c5
here is another article about CPU backdoors,

[http://theinvisiblethings.blogspot.com/2009/03/trusting-
hard...](http://theinvisiblethings.blogspot.com/2009/03/trusting-
hardware.html)

and the discussion in the comment section of that one is good and contains
some interesting pointers for further sources on this topic...

Also, here is a phrack article "System Management Mode Hack" on how to exploit
Intel system management mode (with code at the end of the article).

[http://phrack.org/issues/65/7.html](http://phrack.org/issues/65/7.html)

------
stephenmm
It seems very unlikely that someone would be able to "apply the edit to a
partially finished chip". Adding a fix like this is probably one of the most
scrutinized processes in hardware design. After spending years designing and
verifying chip functionality and getting the timing exactly right before
production starts, there is a very high bar for getting these fixes into the
production flow, because if the fix screws anything else up you are FUBARed.
Given that, it is probably the hardest place you could ever try to put in a
back door.

