
Expert Says NSA Have Backdoors Built Into Intel And AMD Processors - foolrush
http://www.eteknix.com/expert-says-nsa-have-backdoors-built-into-intel-and-amd-processors/
======
jameshart
Steve Blank, "recognised as one of Silicon Valleys leading experts". I'll be
sure to come to him next time I need to figure out why my car is making that
noise. Or when I'm looking for a heart surgeon.

Wikipedia tells me Steve Blank is "recognized for developing the Customer
Development methodology, which launched the Lean Startup movement". That
doesn't sound to me like the kind of expertise that would be useful for
determining whether silicon chips contain backdoors.

If Steve Blank figured there must be something funny going on at Area 51,
would the headline be "Expert Says NASA Have Aliens in Nevada"?

~~~
larrys
"I'll be sure to come to him next time I need to figure out why my car is
making that noise. Or when I'm looking for a heart surgeon."

Do I ever agree with you on this one. I laughed when I read that as well.

I worked with Steve at a company he was VP of Marketing at. He is primarily a
creative marketing and business guy and is good with the BS (I say that as a
compliment by the way). While he does have technical abilities (just like I
can put together some code) that is almost certainly not his area of
expertise.

Of course what he is saying is possible. Sure. But anyone could have said that
and trying to enhance the statement by referring to him as "one of Silicon
Valleys leading experts" as if "he was on a high level team at Intel" or
something is really just lazy you know what at work.

Next we will hear what Woz thinks about all of this.

~~~
fnordfnordfnord
[http://steveblank.com/about/](http://steveblank.com/about/)

You'd have to read the whole series, but his posts here give a good idea of
his early background. It's pretty interesting, and it left me with the
impression that he knows a thing or two about hardware.
[http://steveblank.com/secret-history/](http://steveblank.com/secret-history/)

------
andreigheorghe
Title: "Expert Says NSA Have Backdoors Built Into Intel And AMD Processors"

First sentence: "Experts think the NSA has hardware level backdoors build into
Intel and AMD processors".

Second sentence: "one of Silicon Valleys leading experts, says that he would
be extremely surprised if the American NSA does not have backdoors built into
Intel and AMD chips".

Top notch journalism right here.

------
mikemoka
Steve Blank started by working in signals intelligence actually:

[http://steveblank.com/tag/signals-intelligence/](http://steveblank.com/tag/signals-intelligence/)

------
abecedarius
Re how this kind of thing can work:
[https://www.usenix.org/legacy/event/leet08/tech/full_papers/...](https://www.usenix.org/legacy/event/leet08/tech/full_papers/king/king_html/)

"There is a substantial design space in malicious circuitry; we show that an
attacker, rather than designing one specific attack, can instead design
hardware to support attacks. Such flexible hardware allows powerful, general
purpose attacks, while remaining surprisingly low in the amount of additional
hardware. We show two such hardware designs, and implement them in a real
system. Further, we show three powerful attacks using this hardware, including
a login backdoor that gives an attacker complete and high-level access to the
machine. This login attack requires only 1341 additional gates: gates that can
be used for other attacks as well. Malicious processors are more practical,
more flexible, and harder to detect than an initial analysis would suggest."

------
dguido
This entire article is based on an unsubstantiated rumor from Steve Blank,
someone who is not a security expert at all.

~~~
fnordfnordfnord
You might want to do a little more background on Steve Blank.

------
luu
There are a lot of comments saying this isn't feasible. I disagree. I don't
mean this as an appeal to authority, but, since people are attacking Steve
Blank's background, I used to design CPUs, and I've done pretty much
everything except low-level stuff like layout (which I only did in classes).

I have no opinion on whether or not there is a backdoor, but, here are some
possible mechanisms.

1. Periodic SMI on steroids. Intel used to have a debug mode based on an
SMI++ like mode, where the chip would periodically dump the entire state of
the machine out to memory. That's not nearly as useful for debugging now as it
was 15 years ago, but it could dump out compromising information to some
buffer that your network card DMAs out.

2. RNG weakness.

3. Put the machine into ring 0, with no other changes.

4. Put the machine into ring 0, while transferring control to some address.

5. Access to the microcode patch mechanism.

It took me about 15 seconds to come up with those ideas (I thought of 4 when I
wrote down 3). Regardless of what you think of the NSA, they have some of the
best security people in the world. They can probably figure something out.

Any of these things could easily be triggered by a sequence of obscure
instructions. There are plenty of userland instructions that are never used
today. An arbitrary sequence of, say, 20 of them is likely to never be
discovered even by brute force attack. If you're really worried, you can load
up a few registers with some specific values, and now you've got a 192-bit
keysize (or more, if you want).
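As a toy illustration of that kind of trigger (Python; the opcode values and the key are invented for this sketch and bear no relation to real Intel/AMD internals):

```python
# Toy model of an instruction-sequence trigger: the "backdoor" fires only
# when three rare opcodes execute in order while three 64-bit registers
# hold secret values -- roughly the 192-bit key described above.

TRIGGER_OPCODES = [0xF30F1E, 0x0F01D0, 0x660F38]  # invented placeholder opcodes
SECRET_REGS = (0x0123456789ABCDEF, 0xFEDCBA9876543210, 0xDEADBEEFCAFEF00D)

class ToyCPU:
    def __init__(self):
        self.recent = []          # sliding window of recently executed opcodes
        self.regs = [0, 0, 0]
        self.backdoor_fired = False

    def execute(self, opcode):
        self.recent = (self.recent + [opcode])[-len(TRIGGER_OPCODES):]
        if self.recent == TRIGGER_OPCODES and tuple(self.regs) == SECRET_REGS:
            self.backdoor_fired = True  # stand-in for e.g. silently entering ring 0

cpu = ToyCPU()
cpu.regs = list(SECRET_REGS)
for op in TRIGGER_OPCODES:
    cpu.execute(op)
assert cpu.backdoor_fired  # fires only with both the sequence and the key
```

An attacker probing blindly would have to guess the opcode sequence and all three register values at once, which is why brute-force discovery is hopeless.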

If you want to keep it secret from the companies themselves, you're probably
better off using a secret register key in some microcode instruction, since
that would be relatively easy to surreptitiously sneak in after the official
tapeout. Looking for a sequence of instructions wouldn't be technically
difficult (I doubt the decoder/translator is on the critical path, but,
Intel's might be custom, in which case it would be a lot of work to find spare
space), but, it would be more work to sneak it in seamlessly. Then again, they
almost certainly have free gates lying around so that post-silicon bugs don't
require a full-layer tapeout to fix. You could write a program that edits the
right mask layers to access those and patch your change in. But, if those
actually get used for debug purposes, you'd lose the ability to make the
change seamlessly, and, it's much more work than the first approach.

I suspect '5' would require restarting the machine, although you could design
a mechanism that lets you hot-swap microcode. Doesn't seem worth it, though,
considering how easy it is to compromise a machine if you control the
hardware.

~~~
methehack
Can someone else with the proper background confirm or deny the idea _in
theory_?

If it's possible, it would be a shame to lose the thread here.

Whether or not Steve Blank is qualified to make the observation seems like a
red herring to me.

~~~
duaneb
I've designed some processors, albeit not at the complexity of Intel, AMD, or
IBM. However, in theory it's definitely possible via any number of mechanisms.
Such a backdoor could be (provably) undetectable from software state alone,
though you might catch something unusual by measuring, e.g., deviations from
the average clock cycles of certain instructions. For example, this is almost
certainly similar to what would be used:
[http://www.intel.com/content/www/us/en/architecture-and-tech...](http://www.intel.com/content/www/us/en/architecture-and-technology/intel-active-management-technology.html)

To me, this is the boring part—I'm much more interested in a) what they would
collect, b) how they would identify it from the processor (if they didn't load
software into memory), and c) how they expect to retrieve it. If they even
attempted to use IP to communicate they would be caught immediately.

~~~
Canada
Data could be leaked from the system by encoding it in the timing of
legitimate packet transmission.
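A minimal simulation of that idea (Python, no real network; the gap sizes are arbitrary choices):

```python
# Leak bits by modulating the gaps between otherwise-legitimate packets:
# a '1' bit adds a small extra delay, a '0' bit sends at the normal pace.

BASE_GAP_MS = 10.0   # normal inter-packet gap
EXTRA_MS = 2.0       # added delay that encodes a '1'

def encode(bits):
    """Inter-packet gaps a compromised sender would schedule."""
    return [BASE_GAP_MS + (EXTRA_MS if b else 0.0) for b in bits]

def decode(gaps, threshold=BASE_GAP_MS + EXTRA_MS / 2):
    """An eavesdropper recovering bits from observed packet timing."""
    return [1 if g > threshold else 0 for g in gaps]

secret = [1, 0, 1, 1, 0, 0, 1, 0]
assert decode(encode(secret)) == secret
```

In practice, queueing jitter along the path degrades the timing signal, so the channel would need redundancy or an intercept point close to the sender.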

~~~
mansr
This encoding would have to be maintained by every router until it reaches an
intercept point.

~~~
duaneb
Sure, and how many processor architectures do you think you'd need to backdoor
before a full route was likely to be covered? This might be a good place to
start:
[http://en.wikipedia.org/wiki/List_of_Internet_exchange_point...](http://en.wikipedia.org/wiki/List_of_Internet_exchange_points).
I'm sure this is exposing a weakness in my graph theory-fu.

------
pasbesoin
Purely speculatively, I find it interesting that:

- The described timeframe for the transition to update-able microcode
corresponds roughly with my vague memory of when the U.S. government started
to give up (in practice, if not on paper) on "keeping the lid on" strong
cryptography in the commercial and microcomputer worlds.

- We read that Lenovo is "untrusted". Are they piggybacking on such a feature
-- which I've little doubt their more competent scientists and researchers
would have thoroughly explored -- trace by trace, if and as necessary? If so,
are Lenovo products then untrusted because a Chinese firm has control over
their update process -- via BIOS or however else?

In other words, did... "Western" agencies provide part or all of the mechanism
by which the Chinese are now supposedly compromising Lenovo PC's?

----

P.S. I'm now further put in mind of all the industrial et al. espionage that
is now fairly well attributed to the Chinese government and agencies. Were
details of this functionality one or more of the prizes they obtained?

------
deedubaya
What horse shit.

I'm not saying this isn't possible. I'm just saying that there is absolutely
no evidence provided that this has happened.

Shit claims like this discredit actual claims backed by evidence.

------
mylorse
No one should doubt that these devices are FCC compliant; if they weren't,
these companies could not sell in the USA:

[http://www.arrl.org/part-15-radio-frequency-devices](http://www.arrl.org/part-15-radio-frequency-devices)

> Note: Computer terminals and peripherals that are intended to be connected
> to a computer are digital devices.

However, to dismiss the possibility that these proprietary CPUs have
additional opcodes or techniques to read their users' work is ludicrous. Here
are some prime examples from Intel:

[http://software.intel.com/sites/manageability/AMT_Implementa...](http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/DOCS/Implementation%20and%20Reference%20Guide/default.htm?turl=WordDocuments%2Fdetectingwhethertheplatformisinsideoroutsidetheenterprise.htm)

[http://blogs.intel.com/technology/2011/01/intel_insider_-_wh...](http://blogs.intel.com/technology/2011/01/intel_insider_-_what_is_it_no/)

[http://www.intel.com/content/www/us/en/architecture-and-tech...](http://www.intel.com/content/www/us/en/architecture-and-technology/anti-theft/anti-theft-general-technology.html)

For AMD I have no evidence either way, but I would not doubt it.

------
mindslight
At least we have a rich body of theoretical work about how to defend against
dragnet snooping, and getting it implemented and adopted is really just a
social problem (hint: any 'web service' you create is ultimately part of the
surveillance system).

Is there even literature that formalizes the possible types of CPU backdoors
and attempts to lay out means of defense? Let's assume there's some high-level
rootkit above any virtualization/signed code/etc. It seems like there's
probably a continuum of how much effort is required to utilize this rootkit:

1. Sandboxed user code can control the rootkit (through a sequence of
instructions or whatnot)

2. Raw network packets can control the rootkit (what looked like a weirdly-
fragmented HTTP request contained extra data that instructed the rootkit)

3. Nondeterministic crypto primitives are actually deterministic (anything
encrypted with them looks scrambled, but is easy to decrypt)

4. Anything that appears to be a crypto instruction sequence is side-
channeled into tiny correlated delays on network DMA.

In addition, it seems like there are bounds on the complexity of this rootkit
(it has to survive audits), and bounds on what detectable changes it can
actually make and when (corrupting deterministic crypto functions on every CPU
would be a non-starter).

I'm rambling on this because even assuming widespread microprocessor
backdoors, it should be possible to work our way toward creating things that
are actually trustable in certain situations. For example, much slower
auditable processors could handle all network communication and check results
from the faster, possibly-backdoored microprocessor, which computes only
deterministic functions.
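A sketch of that last arrangement (Python; the function being checked is an arbitrary stand-in for real deterministic work):

```python
import random

# A slow, auditable checker spot-verifies deterministic results produced
# by a fast, possibly-backdoored unit; any divergence exposes tampering.

M = 2**61 - 1  # a Mersenne prime, used only to build a cheap deterministic function

def fast_untrusted(x):
    return pow(x, 65537, M)   # stands in for work done on the fast CPU

def slow_trusted(x):
    return pow(x, 65537, M)   # same function, recomputed on the auditable CPU

def spot_check(inputs, results, fraction=0.1):
    """Recompute a random sample of results on the trusted processor."""
    k = max(1, int(len(inputs) * fraction))
    sample = random.sample(range(len(inputs)), k)
    return all(slow_trusted(inputs[i]) == results[i] for i in sample)

xs = list(range(1, 100))
ys = [fast_untrusted(x) for x in xs]
assert spot_check(xs, ys)

ys[42] = 0                                   # tamper with one result
assert not spot_check(xs, ys, fraction=1.0)  # a full recheck catches it
```

Random spot-checking keeps the trusted processor's workload small while making persistent tampering risky; only fully deterministic functions can be verified this way.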

------
beagle3
Many of the "how would this work" theories on this thread assume it's using
microcode or some weird instruction sequence to trigger something. That's
extremely bad design if that's the case.

A much better way to do that would be an entirely independent CPU that has a
copy of the bus, and can tristate it (so that when needed, it can feed the CPU
any code/data it needs instead of main memory). If you do that, you don't have
to run code on the target machine - you just have to make it see some data -
e.g. by sending an email to that machine; no need for user interaction or
code. You might even use public key infrastructure to make it impossible to
tell which data it needs to see to activate.

And best of all -- only one layout guy needs to be any the wiser, not the
opcode people. Such an additional CPU, depending on complexity, might fit in a
couple of thousand transistors (the 8080, a general-purpose 8-bit CPU that ran
CP/M, was all of 4500 transistors -- virtually invisible among the 1-2 billion
transistors of a modern chip).

------
lucgommans
I actually wrote an article about this for the Dutch website security.nl two
weeks ago. Looking at the facts alone, I find it far-fetched but most
certainly feasible, and it should be considered a possibility. So I disagree
with anyone saying it's totally infeasible, but I'm skeptical about whether
they really did go through the effort of doing this.

Dutch article:
[https://www.security.nl/artikel/47135/1/De_onzichtbare_backd...](https://www.security.nl/artikel/47135/1/De_onzichtbare_backdoor%3A_een_voorspelbare_PRNG_in_de_CPU.html)

English summary:

Intel included a new CPU instruction called RdRand as of their Ivy Bridge
architecture. This RdRand instruction produces random numbers generated by the
chip itself. However, the issue with this kind of random number generation,
especially since Intel says it's cryptographically secure, is that it cannot
be audited or verified. Sure, we can run statistical analysis on it, but we
cannot tell the difference between true randomness and the output of AES-CBC.
Nowadays the output is mixed with the existing entropy sources in Linux, but
in closed-source systems such as Windows and OS X we don't know how RdRand's
output is used.

I also mention the potential consequences of such a bug, which are rather
wide-ranging. If RdRand's output is used directly, then we can assume: if you
have an Ivy Bridge CPU or newer, all your SSL/TLS traffic can be decrypted (it
needs randomness to set up a session). If the server that generated the
private key was Ivy Bridge or newer, all traffic to and from that server can
be forged. Furthermore, things like TCP sequence numbers or DNS source port
numbers could be predicted, allowing offensive capabilities as well as passive
cracking of our traffic.

Again, I'm personally skeptical about all of this, but the potential
consequences are quite bad, and it would not be all that hard for them to hide
their tracks... I can provide a theoretical proof of concept on demand. The
only thing I have to guess at is whether the CPU can maintain state between
power cycles; that question is best left for others to answer. It would not
have to be large, though: a few bytes is all it takes.
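A toy model of the scenario (Python, using a hash-based counter construction as a stand-in for whatever a real chip would use; the key is invented): the output looks uniformly random to any observer, yet is fully reproducible by whoever holds the embedded key.

```python
import hashlib

SECRET_KEY = b"hypothetical-baked-in-key"   # invented for this sketch

class BackdooredRNG:
    """Looks like a CSPRNG from the outside, but is really a keyed counter."""
    def __init__(self):
        self.counter = 0

    def rdrand(self):
        out = hashlib.sha256(SECRET_KEY + self.counter.to_bytes(8, "big")).digest()
        self.counter += 1
        return out   # passes statistical tests, yet is fully deterministic

# The victim draws "random" values...
victim = BackdooredRNG()
values = [victim.rdrand() for _ in range(3)]

# ...and anyone with the key and counter state reproduces them exactly.
attacker = BackdooredRNG()
assert [attacker.rdrand() for _ in range(3)] == values
```

This is also why mixing RdRand into an entropy pool alongside other sources, as Linux does, is a meaningful defense: predicting the pool then requires predicting every input, not just this one.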

------
mansr
Supposing there is some kind of "backdoor" in my CPU, how do they access it?
Is there hidden code in my router hardware too, letting packets to/from the
NSA through while hiding them from any monitoring?

------
greenyoda
_" This is all made possible by the fact Intel and AMD can update the
microcode on the small reprogrammable part of the CPU which gets updated every
time a Microsoft update is installed."_

Do Microsoft (i.e., Windows) updates really update the microcode? That sounds
dubious to me, since it's a dangerous operation that could completely disable
the user's CPU if it goes wrong (and I've seen Microsoft push buggy updates
before).

Does anyone have any citations that would confirm that this statement is true?

~~~
pgeorgi
Microcode updates are volatile -- after a reboot, the CPU is back to its
factory-programmed state. The BIOS installs updates (which is why some CPU
bugs are fixable by BIOS updates), and the OS can do this too: Linux has
drivers to update microcode on CPUs. The microcode files come from Intel and
are mostly incomprehensible blobs (see
[http://inertiawar.com/microcode/](http://inertiawar.com/microcode/))

------
optymizer
What's the definition of a 'backdoor' here? As someone who implemented a
version of ARM and MIPS in VHDL, I'm having a hard time picturing a 'backdoor'
in a CPU.

In fact, I'm tempted to classify this article as 'complete nonsense', only
believable by someone who has no idea what a CPU is. Given some of the
comments about the Linux 'kernal' (sic), I'm not surprised this article made
it to the front page of HN.

~~~
Tuna-Fish
CPU backdoors that work by exploiting known software are entirely imaginable.
Think of a situation where the bad guys target the network stack. They make a
CPU that always executes instructions in exactly the correct way, except that
if certain registers are filled with precisely specific data, the CPU turns
off protection and suddenly jumps to whatever is pointed to by one of the
registers. Then target a network packet handling routine with that -- so if a
specific malformed packet is received, the CPU jumps into the data payload. If
your magic data is long enough (4 32-bit registers would be enough), no one
will ever trigger it by accident.

Doing this would be trivial with the microcode in modern x86 CPUs, and while
it would break (no longer trigger) if the netcode is updated (or even
recompiled), that's rather rare, and CPU microcode can be updated too.

------
kghose
While this may be true, I wonder if intelligence agencies use feinting
strategies.

Say a popular chip A is well designed, with few vulnerabilities. Say a less
popular chip B actually has exploitable bugs.

In order to get a rival to use B, you would spread a rumor that you knew how
to exploit A and see if you can nudge your rival to switch to using B.

------
diydsp
One way to detect whether your processor is bugged would be to run benchmarks.
If the microcode is different, e.g. performing backdoor operations between
register accesses, your benchmark results will be different.

A battery of tedious, slightly-differing benchmarks would be necessary to
probe individual instructions, but I don't think it would be possible to evade
this detection technique. It's simply too hard to perform backdoor operations
on top of regular operations without padding the original instruction
execution.
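The shape of such a test, as a rough Python sketch (real detection would use cycle counters like RDTSC, pinned cores, and controlled frequency scaling; this only shows the structure):

```python
import time

def bench(op, iterations=100_000):
    """Wall-clock time for many repetitions of a single operation."""
    start = time.perf_counter()
    for _ in range(iterations):
        op()
    return time.perf_counter() - start

def looks_padded(op, reference_seconds, tolerance=2.0):
    """Flag an operation that runs far slower than the reference timing."""
    return bench(op) > reference_seconds * (1 + tolerance)

multiply = lambda: 12345 * 6789
reference = bench(multiply)            # baseline from a machine you trust
assert not looks_padded(multiply, reference)
```

Note the limitation: this only catches backdoors that actually add cycles to the normal path; trigger-detection circuitry running in parallel would add none.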

~~~
Tuna-Fish
AFAIK, Intel's microcode patching mechanism works in parallel with normal
instruction execution. That is, the instructions are decoded and executed
normally, and in parallel different circuitry looks for known fault cases and
traps if necessary. As long as you don't hit the special case, the execution
takes exactly as long as usual.

And there is not enough time in the universe to iterate through a 128-bit
number, so you won't find the special case.
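A quick back-of-envelope check of that claim (Python; the guess rate is an arbitrary, generous assumption):

```python
# Exhausting a 128-bit trigger space is hopeless even at absurd guess rates.

guesses_per_second = 10**12            # a trillion trigger attempts per second
seconds_per_year = 365 * 24 * 3600

years_to_exhaust = 2**128 / (guesses_per_second * seconds_per_year)
assert years_to_exhaust > 10**18       # more than a billion billion years
```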

------
runn1ng
This is a terrible and purely speculative article.

------
cliveowen
How would such a thing work?

~~~
ancarda
My guess is it's a rigged random number generator. When the OS calls RdRand to
get a seed value, it could be predictable by the NSA in some way. That allows
them to more easily break crypto done on the machine.

~~~
dobbsbob
This is what I was thinking too, but it would also render the entire US
infrastructure insecure, and I don't see that happening -- unless government
agencies and contractors buy a special non-crippled CPU, and that would
eventually raise red flags or produce whistleblowers. The FBI likes to force
backdoors, but I think the NSA realizes it could be turned against them if
they purposely sabotage their own country... unless just the exports have
been backdoored. /tinfoil

Then again, NIST did once recommend a feeble RNG, presumably so the NSA could
break it with an RNG skeleton key.

This is probably BS, but hopefully the additional fear mongering promotes more
open hardware projects. If you want to see terrifying insecurity, reverse
engineer any mobile baseband stack and processor: it runs in supervisor mode
and everything is executable, with no NX bit.

------
cnbeuiwx
These days, the reasons to use Linux instead of Windows are overwhelming.

~~~
Myrth
[http://www.eteknix.com/nsa-has-code-running-in-the-linux-ker...](http://www.eteknix.com/nsa-has-code-running-in-the-linux-kernel-and-android/)

~~~
ThatGeoGuy
Oh please that's complete garbage.

The SELinux code that's in both GNU/Linux distributions and Android is all
open source, and anybody can go and review the code or change it.

The existence of Linux on the desktop doesn't remove the threat of having NSA
code built into your processor, but if you honestly believe SELinux is a
backdoor, then feel free to point out where in the source code the backdoor
is located instead of spreading FUD for no reason. SELinux has nothing to do
with what the article is talking about, that is, having malicious firmware
baked into the processor at the hardware level.

~~~
WizzleKake
Have you read the code? What makes you think that they're above inserting
subtle bugs?

~~~
gcr
Because Linus signed off on it. To get that complex change into the kernel,
the NSA had to convince Linus that it's a good idea, which can be a next-to-
impossible task. I trust him to review the code more than I trust myself.

~~~
NegativeK
Linus doesn't review everything. He delegates and trusts.

That's tangential, though. More importantly, I expect NSA contributions to be
pored over because the NSA isn't highly trusted, and it would make a great
mailing list post to say "The NSA has a backdoor in our code here, here, and
here."

Many eyes and a suspect contributor make all backdoors shallow.

