
This is the most demonically clever computer security attack I've seen in years - T-A
https://plus.google.com/+YonatanZunger/posts/ayXVWrFpQus
======
danbruc
Previous discussion [1] of the paper.

[1]
[https://news.ycombinator.com/item?id=11768980](https://news.ycombinator.com/item?id=11768980)

------
maxander
So I suppose if we're going to be _exceptionally_ paranoid, we would run
everything on virtual machines. Unless I'm missing something, even if a
process in a VM managed to trigger a physical nonideality on the physical
machine granting access to supervisor mode, it would still be in user-mode on
the VM. Then at least _other_ clever things would have to be done to get out
of the VM sandbox and into exposed physical-machine supervisor space.

Of course, if we're going to be paranoid, we should probably assume that the
CIA and their friends have already come up with something even more nefarious
that goes through these countermeasures like tissue-paper.

~~~
Jabbles
Once you have control of the supervisor you could just bitflip something
within the VM to give you root there, if you needed it. Or you could just
read/write arbitrary memory.
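To illustrate the "bitflip for root" idea: with supervisor-level access to guest memory, overwriting the uid field of a process's credential record is enough. This is a toy sketch; the memory layout, offset, and field width here are invented for illustration and don't correspond to any real kernel.

```python
import struct

# Hypothetical illustration: if you can write guest physical memory,
# setting the uid field of a process's credential record to 0 makes the
# kernel treat that process as root. The offset and layout are invented
# for this sketch, not any real kernel's.

guest_ram = bytearray(64)
CRED_OFFSET = 16                                      # invented offset
struct.pack_into("<I", guest_ram, CRED_OFFSET, 1000)  # uid 1000: normal user

def uid():
    return struct.unpack_from("<I", guest_ram, CRED_OFFSET)[0]

print(uid())                          # 1000
# An attacker with arbitrary-write access just overwrites the field:
struct.pack_into("<I", guest_ram, CRED_OFFSET, 0)
print(uid())                          # 0 == root
```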

~~~
maxander
But you _don't_ have control of the supervisor if the attacking process is
confined to the VM, do you?

You certainly don't have access to the VM's supervisor mode, since its "chip"
wouldn't have this sort of physical vulnerability.

~~~
Avernar
Depends on the type of VM. A VM emulating the same type of processor just runs
the client code directly on the processor. What stops the client code from
messing with the rest of the system, which includes the VM supervisor code, is
the processor's protection circuitry. If client code uses the exploit to kick
the processor into supervisor or hypervisor mode, it will be running client
code in that elevated mode. Privileged instructions would no longer trap to
the VM's supervisor or host OS.

Now if the VM is emulating the client's individual instructions, which is the
usual approach when running code for a different type of CPU, then it's a
different story. Under pure emulation the exploit wouldn't work directly. It
would have to know what the host CPU architecture was and get the emulator
code to trip the extra gate. As you said, all that would do is put the VM
supervisor code into the CPU's supervisor mode, which it probably is already
in if the VM is running bare metal without a host OS.

But if the VM uses JIT conversion of the instructions for speed, then the
exploit becomes possible again.
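The key distinction above is that under pure interpretation, the guest's privilege level is just ordinary data inside the emulator; no host-CPU protection circuitry is consulted when the emulator decides whether to trap. A toy sketch (not a real ISA, all names invented):

```python
# Toy illustration: in a pure interpreter, guest privilege mode is a plain
# variable, and the trap check is ordinary emulator code. A physical
# exploit on the host CPU can't flip this the way it flips real
# supervisor-mode state.

USER, SUPERVISOR = 0, 1

class ToyEmulator:
    def __init__(self):
        self.mode = USER          # guest privilege level: just Python data
        self.trapped = []         # privileged ops the emulator intercepted

    def execute(self, instr):
        if instr == "priv_op":
            if self.mode != SUPERVISOR:
                # The emulator itself performs this check in software --
                # no host protection circuitry is involved.
                self.trapped.append(instr)
                return "trap"
            return "allowed"
        return "ok"

emu = ToyEmulator()
print(emu.execute("priv_op"))   # guest is in user mode -> "trap"
emu.mode = SUPERVISOR           # only the emulator's own code can do this
print(emu.execute("priv_op"))   # -> "allowed"
```

A JIT blurs this picture because translated guest code runs natively on the host CPU, which is why the exploit can come back into play there.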

------
dsfyu404ed
Makes perfect sense, but implementing it at any sort of scale seems
cumbersome/expensive with current tech. Implementing it without scale is
meaningless; you could just send a beautiful woman to seduce someone important
and have far fewer loose ends, fewer things to screw up, etc.

~~~
underwires
As a beautiful man I take exception to this. Why can't I be sent to seduce
someone important?

~~~
kleer001
You can, there's just a slightly smaller demographic.

~~~
asimuvPR
I mean, this goes against community rules, but gotta say it: bravo! Perfect
setup and delivery. :)

~~~
kleer001
Thanks, I thought I would be downvoted into oblivion, but instead I got 13
points and you got downvoted. Sorry mate, what a fickle audience.

------
esmi
This totally ignores the non-functional test modes and DFT modes of any modern
IC. There are many modes which work down at the gate level, and sometimes even
the transistor level, to identify and isolate logic errors, badly designed
circuits, and manufacturing defects. This can be done in an automated way on
every IC. I would be surprised if this attack couldn't be thwarted by a series
of cleverly designed scan vectors. And since the attacker doesn't know what
I'll run when designing his circuit, hiding should be very hard. For example,
the attack in the paper is basically just a crosstalk bug, and that is exactly
the kind of thing low-level testing chases out.

[http://anysilicon.com/overview-and-dynamics-of-scan-
testing/](http://anysilicon.com/overview-and-dynamics-of-scan-testing/)

~~~
impdmnt2Prgrss
You should read the paper; all of this was considered.

~~~
esmi
I agree it's unfair to say modern testing techniques were totally ignored but
I think many standard techniques were not considered.

For example, they dismissed the idea of filling empty space. I think this is
easily achieved. Post place-and-route, go find all empty space and fill it
with scan flops. If you need something smaller, put inverters between them.
Tap into the chain in the vicinity of the void to minimize route impact. At
test, scan through a random bit pattern. This will ensure all flops are
present, since I can determine the chain depth and all bits of the pattern
must be retained. I am also critical of the claimed need to test against
golden references, as I can just compare against the CAD. I've personally
designed scan TD vectors which found one weak buffer incorrectly selected by
the mapper, and I didn't know the problem was a weak buffer when I started. I
think this is equivalent to loading the net with extra gates. On process
attacks: process-monitoring circuits exist, and I can verify them with
external equipment on an analog test bus, which is how we DV them.
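The fill-with-scan-flops check described above can be sketched in software: shift a random pattern through a chain of known depth and verify that the whole pattern re-emerges after exactly that many shifts, so a missing or replaced flop changes the observed depth or corrupts the pattern. Everything here is a simplified model, not real scan-test tooling.

```python
import random

# Hypothetical model of a scan chain: a list of flop states. Shifting one
# bit in pushes every stored bit one stage along; the last stage's bit
# falls out. Depth and retention checks follow from this behavior.

def scan_shift(chain, bit_in):
    """Shift one bit into the chain; return the bit that falls out."""
    bit_out = chain[-1]
    chain[:] = [bit_in] + chain[:-1]
    return bit_out

def verify_chain(depth, trials=4):
    """Shift random patterns through; fail if depth or contents mismatch."""
    for _ in range(trials):
        chain = [0] * depth                      # scan flops, reset state
        pattern = [random.randint(0, 1) for _ in range(depth)]
        for b in pattern:                        # shift the pattern in
            scan_shift(chain, b)
        out = [scan_shift(chain, 0) for _ in range(depth)]  # shift it out
        if out != pattern:                       # missing flop or bad depth
            return False
    return True

print(verify_chain(depth=16))   # True for an intact chain
```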

I definitely think the paper is very good and I will go through it in more
detail when I have time but I also think they are too quick to dismiss the
defenses and I don't think their defense list is exhaustive.

I should also mention, this is my opinion on detecting circuits inserted in
parallel to a given design. If the attacker is allowed to modify the design
directly... Then all bets are off. :)

------
joshumax
Forget going in and "hiding" a backdoor; the people over at Libreboot have
been warning about the dangers of Intel's Management Engine for years now
-- [https://libreboot.org/faq/#intelme](https://libreboot.org/faq/#intelme)

------
RoutinePlayer
Even more "demonically clever" than the cryogenically frozen RAM attack?
[http://www.zdnet.com/article/cryogenically-frozen-ram-
bypass...](http://www.zdnet.com/article/cryogenically-frozen-ram-bypasses-all-
disk-encryption-methods/)

~~~
impdmnt2Prgrss
Yes, A2 doesn't require physical access.

~~~
smitherfield
Physical access to a semiconductor fab isn't exactly easy pickings.

~~~
vermilingua
It is easier for a large agency with an interest in controlling computers to
install a backdoor at the manufacturing stage than to individually access
"suspects'" computers. Compared to the RAM attack, physical access to a
semiconductor fab IS easy pickings for a clandestine government agency.

------
King-Aaron
Billions of transistors on a modern chip wafer... Anyone who claims to know
exactly what all of them are there for is either very smart, or a little
naive.

------
ams6110
There are likely enough undisclosed OS kernel vulnerabilities known to state
actors that this kind of attack would hardly be necessary.

~~~
mtgx
And yet people are still seriously considering online voting.

~~~
ViViDboarder
Because paper is incorruptible? There is plenty of room for tampering with
physical votes too.

~~~
MatthaeusHarris
Yes, but it's much harder to hide a conspiracy to tamper with paper votes if
it comes right down to it, since they're not all counted at the same place.

------
kmiroslav
> I don't know if I want to guess how many three-letter agencies have already
> had the same idea

As of right now... all of them?

------
canada_dry
I guess this is for "state actors" that aren't privy to this backdoor.

[http://hackaday.com/2016/01/22/the-trouble-with-intels-
manag...](http://hackaday.com/2016/01/22/the-trouble-with-intels-management-
engine/)

------
qwertyuiop924
Trusting trust attacks! Yay!

~~~
batistuta
Indeed. I think it's a pretty good idea to reread the Trusting Trust paper by
Ken Thompson from time to time. Here is an annotated version
[http://fermatslibrary.com/s/reflections-on-trusting-
trust](http://fermatslibrary.com/s/reflections-on-trusting-trust)

------
flyinglizard
Sorry, but that's a clickbait headline. I clicked it expecting to see a real
attack dissected, not someone's idea for attacking a hypothetical platform.

(cute idea though)

~~~
jMyles
A proof-of-concept is provided. If you are saying that the only way to gauge
that an attack is "clever" (an adjective I'm not sure I'd apply here anyway)
is to find it in the wild, I don't think that's fair.

~~~
flyinglizard
Proof of concept on your own platform doesn't count. Security-wise, it gets
interesting when you attack an existing product, not when you set the rules
for the "attack" that you'll carry out later.

Obviously with this attack it's a bit of a problem to provide a PoC -- and
that is exactly where the post is mislabeled. It should have been called "a
clever attack concept".

~~~
lsb
The attack is carried out on an existing product: the OR1200, an open-source
chip whose schematics are generally available.

~~~
flyinglizard
Calling it an "attack" is no less hyperbole than changing the sources of
nginx to introduce a remote exploit.

What this team did is to create a compromised version of an open source
product. They engineered it to be defective in the first place. This is
nothing beyond a thought experiment.

~~~
impdmnt2Prgrss
There is a crucial difference between hardware development and software
development that you are missing: hardware has several stages of
development/implementation that span several parties connected only by
business contracts.

If you want to squeeze the attack into your analogy, it would be as if a
compiler writer were malicious and added an attack to any/all nginx binaries
without modifying the original source code.
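The compiler analogy above (echoing Thompson's "Reflections on Trusting Trust") can be sketched as a malicious build stage that injects a payload into every artifact while the source stays clean. Everything here, including the "backdoor", is invented for illustration.

```python
# Hypothetical sketch: a malicious "compiler" with the same interface as
# the honest one. Auditing the source finds nothing, because the injection
# happens in the toolchain, not in the program being compiled.

BACKDOOR = "\n# injected: pretend remote-shell payload\n"

def honest_compile(source: str) -> str:
    return source + "\n# compiled output\n"

def malicious_compile(source: str) -> str:
    # Same source in -- but every "binary" gains the payload.
    return honest_compile(source) + BACKDOOR

src = "def serve(): ..."
clean = honest_compile(src)
tainted = malicious_compile(src)

print(BACKDOOR.strip() in clean)     # False: source and honest tool are fine
print(tainted.startswith(clean))     # True: output looks like the clean one
print(BACKDOOR.strip() in tainted)   # True: backdoor present anyway
```

The hardware version is arguably worse: the "toolchain" spans fab steps owned by parties the designer never audits at all.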

