[dupe] This is the most demonically clever computer security attack I've seen in years (plus.google.com)
199 points by T-A on June 7, 2016 | hide | past | favorite | 68 comments



Previous discussion [1] of the paper.

[1] https://news.ycombinator.com/item?id=11768980


So I suppose if we're going to be exceptionally paranoid, we would run everything on virtual machines. Unless I'm missing something, even if a process in a VM managed to trigger a physical nonideality on the physical machine granting access to supervisor mode, it would still be in user-mode on the VM. Then at least other clever things would have to be done to get out of the VM sandbox and into exposed physical-machine supervisor space.

Of course, if we're going to be paranoid, we should probably assume that the CIA and their friends have already come up with something even more nefarious that goes through these countermeasures like tissue-paper.


Once you have control of the supervisor you could just bitflip something within the VM to give you root there, if you needed it. Or you could just read/write arbitrary memory.


But you don't have control of the supervisor if the attacking process is confined to the VM, do you?

You certainly don't have access to the VM's supervisor mode, since its "chip" wouldn't have this sort of physical vulnerability.


Depends on the type of VM. A VM emulating the same type of processor just runs the client code directly on the processor. What stops the client code from messing with the rest of the system, which includes the VM supervisor code, is the processor's protection circuitry. If client code uses the exploit to kick the processor into supervisor or hypervisor mode, that client code will be running in the elevated mode. Privileged instructions would no longer trap to the VM's supervisor or host OS.

Now if the VM is emulating the client's individual instructions, which is the usual approach for running code built for a different type of CPU, then it's a different story. Under pure emulation it wouldn't work. The exploit would have to know what the host CPU architecture was and get the emulator code to trip the extra gate. As you said, all that would do is put the VM supervisor code into the CPU's supervisor mode, which it probably is already in if the VM is running bare metal without a host OS.

But if the VM uses JIT conversion of the instructions for speed, then the exploit becomes possible again.
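
To make the pure-emulation case concrete, here's a toy instruction-by-instruction interpreter (a made-up mini-ISA in Python, purely illustrative). The client's opcodes are only data to the host; the real CPU only ever executes the interpreter's own instructions, so a trigger keyed to a specific host instruction sequence never sees the client's pattern:

    # Toy interpreter for a hypothetical mini-ISA (illustration only, not a real emulator).
    # Privileged operations trap to the emulator itself; nothing the client supplies is ever
    # executed natively, so an analog trigger wired to host instructions can't be charged
    # up by client code.

    def run(program, registers=None):
        registers = registers or {"r0": 0}
        pc = 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "LOAD":        # load immediate into r0
                registers["r0"] = arg
            elif op == "ADD":       # r0 += arg
                registers["r0"] += arg
            elif op == "PRIV":      # "privileged" op: handled by the emulator, never the CPU
                raise PermissionError("client attempted a privileged instruction")
            pc += 1
        return registers

    print(run([("LOAD", 1), ("ADD", 2)]))   # {'r0': 3}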


Host OS being compromised is one thing. Hardware compromised? VM won't help you


Makes perfect sense, but implementing it at any sort of scale seems cumbersome/expensive with current tech. Implementing it without scale is meaningless; you could just send a beautiful woman to seduce someone important and have far fewer loose ends, fewer things to screw up, etc.


I'm pretty sure this attack consists of a slight alteration to the design of the chip (namely, adding that extra logic gate), so it automatically scales to precisely the number of chips made with that design.


I think it could be pulled off by, for example, hacking a computer containing the design, say at a fab in China.

We already know the NSA hacked the specialized computers of the Iranian centrifuges.


As a beautiful man I take exception to this. Why can't I be sent to seduce someone important?


You can, there's just a slightly smaller demographic.


I mean, this goes against community rules, but gotta say it: bravo! Perfect setup and delivery. :)


Thanks, I thought I would be down faded into oblivion, but instead I get 13 points and you get down faded. Sorry mate, what a fickle audience.


Probably quite a cheap route for a state actor. The problem is that if the secret gets out then anyone can do the same attack.


>you could just send a beautiful woman to seduce someone important and have a ton less loose ends

and they're even advertising these days: http://motherboard.vice.com/read/a-qa-with-the-woman-who-des...


Eh, for a state actor, I don't think there's such thing as "too cumbersome or expensive."


There's certainly "more cumbersome and expensive than the alternatives" though.


Can something like this be used to tamper with the black-box-type voting machines used in India?

If so, why has there not been much of a hue and cry about it?


This totally ignores the non-functional test modes and DFT modes of any modern IC. There are many modes which work down at the gate level, and sometimes even the transistor level, to identify and isolate logic errors, badly designed circuits, and manufacturing defects. This can be done in an automated way on every IC. I would be surprised if this attack couldn't be thwarted by a series of cleverly designed scan vectors. And since the attacker doesn't know what I'll run when designing his circuit, hiding should be very hard. For example, the attack in the paper is basically just a crosstalk bug, and that is exactly the kind of thing low-level testing chases out.

http://anysilicon.com/overview-and-dynamics-of-scan-testing/


You should read the paper, all of this was considered.


I agree it's unfair to say modern testing techniques were totally ignored but I think many standard techniques were not considered.

For example, they dismissed the idea of filling empty space. I think this is easily achieved. Post place-and-route, go find all empty space and fill it with scan flops. If you need something smaller, put inverters between them. Tap into the chain in the vicinity of the void to minimize route impact. At test, scan through a random bit pattern. This will ensure all flops are present, since I can determine the chain depth and every bit of the string must be retained. I'm also critical of the claimed need to test against golden references, as I can just compare against the CAD. I've personally designed scan TD vectors which found one weak buffer incorrectly selected by the mapper, and I didn't know the problem was a weak buffer when I started; I think that is equivalent to loading the net with extra gates. On process attacks, process monitoring circuits exist. I can verify them with external equipment on an analog test bus, which is how we DV them.
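
A rough sketch of the chain-depth check I'm describing (a toy Python model, not real ATE tooling; the depth and pattern length are made up):

    import random

    # Toy model of verifying the "filler" scan chain: shift a random pattern in and check
    # that it reappears at scan-out delayed by exactly the expected number of flops. A
    # missing or bypassed flop changes the chain depth, so the delayed pattern no longer
    # lines up (with a long random pattern, the odds of a false pass are negligible).

    EXPECTED_DEPTH = 1024                       # flop count taken from the CAD netlist

    def scan_shift(chain, bit):
        """Shift one bit in at scan-in; return the bit that falls out of scan-out."""
        chain.insert(0, bit)
        return chain.pop()

    def chain_intact(actual_depth, expected_depth=EXPECTED_DEPTH):
        chain = [0] * actual_depth              # the silicon under test (depth unknown to us)
        pattern = [random.randint(0, 1) for _ in range(expected_depth)]
        out = [scan_shift(chain, b) for b in pattern + [0] * expected_depth]
        return out[expected_depth:expected_depth + len(pattern)] == pattern

    print(chain_intact(1024))   # True  -> every filler flop is present
    print(chain_intact(1023))   # False -> the chain is one flop short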

I definitely think the paper is very good and I will go through it in more detail when I have time but I also think they are too quick to dismiss the defenses and I don't think their defense list is exhaustive.

I should also mention, this is my opinion on detecting circuits inserted in parallel to a given design. If the attacker is allowed to modify the design directly... Then all bets are off. :)


Forget going in and "hiding" a backdoor; the people over at Libreboot have been warning about the dangers of Intel's Management Engine for years now -- https://libreboot.org/faq/#intelme


Even "most demonically clever" than the cryogenically frozen RAM attack? http://www.zdnet.com/article/cryogenically-frozen-ram-bypass...


Yes, A2 doesn't require physical access.


Physical access to a semiconductor fab isn't exactly easy pickings.


It is easier for a large agency with an interest in controlling computers to install a backdoor at the manufacturing stage than to individually access "suspects'" computers. Compared to the RAM attack, physical access to a semiconductor fab IS easy pickings for a clandestine government agency.


I wouldn't say physical access is as clever, though.


Billions of transistors on a modern chip wafer... Anyone who claims to know exactly what all of them are there for is either very smart, or a little naive.


There are likely enough undisclosed OS kernel vulnerabilities known to state actors that this kind of attack wouldn't really be necessary.


And yet people are still seriously considering online voting.


Because paper is incorruptible? There is plenty of room for tampering with physical votes too.


Yes, but it's much harder to hide a conspiracy to tamper with paper votes if it comes right down to it, since they're not all counted at the same place.


cost-benefit.

if you can get to the same amount of tampering risk for cheaper, you should switch to that process.

and you obviously can. you could do, let's say, multi-factor voting: use multiple devices to record your intent, and different kinds of hardware to assess the votes.


Or building a currency that relies on keeping your computer secure


cost-benefit.

if you can get to the same amount of tampering risk for cheaper, you should switch to that process.

we already spend billions on fraud detection, prevention, etc each year. and probably a significant portion goes undetected too.


Estonian online voting relies on the smartcard chip, not the CPU.


> I don't know if I want to guess how many three-letter agencies have already had the same idea

As of right now... all of them?


I guess it is for "state actors" that aren't privy to this backdoor.

http://hackaday.com/2016/01/22/the-trouble-with-intels-manag...


Trusting trust attacks! Yay!


Indeed. I think it's a pretty good idea to reread the Trusting Trust paper by Ken Thompson from time to time. Here is an annotated version http://fermatslibrary.com/s/reflections-on-trusting-trust


This was brought up the last time it was posted, to the extent of the post itself being given a confusing, clickbaity, editorializing title about Ken Thompson. But I don't see it as the same thing.

Ken Thompson showed that he could hide a backdoor in a compiler where you'd think you could verify the compiler as safe, by reading its source code and even compiling from that source code, but in fact the compiler binary was sneakily re-inserting the backdoor whenever it compiled that clean source.
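
A toy sketch of the shape of Thompson's trick (hypothetical names, string-tagging standing in for code generation; not the real mechanism): the backdoor lives only in the compiler binary and reappears whenever that binary compiles a clean compiler source, so reading the source proves nothing.

    # Illustrative only: "compiling" here just wraps a string, so we can see where the
    # backdoor ends up. The source being compiled is never modified.

    def evil_compile(source: str) -> str:
        binary = f"binary({source})"                 # stand-in for honest code generation
        if "login" in source:
            binary += " +login-backdoor"             # backdoor the product
        if "compiler" in source:
            binary += " +reinserts-these-checks"     # propagate the trick into new compilers
        return binary

    print(evil_compile("clean login source"))        # binary(clean login source) +login-backdoor
    print(evil_compile("clean compiler source"))     # the new compiler carries the trick forward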

This demonstration, on the other hand, is saying that they can hide a backdoor in a microchip, where you can't verify it as safe.

The chip is not, for example, making the sneaky capacitor disappear when you use the computer to view schematics of the chip.

I don't see where "trusting trust" is involved, just "trust".


This work is in no way related to trusting trust. In Ken Thompson's work, you have a booby trapped compiler injecting code into programs and newly compiled compilers. Here you have a hardware attack that you can trigger using a magic sequence of instructions and go into kernel mode.


Sorry, but that's a clickbait headline. I clicked it expecting to see a real attack dissected, not someone's idea for attacking a hypothetical platform.

(cute idea though)


A proof-of-concept is provided. If you are saying that the only way to gauge that an attack is "clever" (an adjective I'm not sure I'd apply here anyway) is to find it in the wild, I don't think that's fair.


Proof of concept on your own platform doesn't count. Security-wise, it gets interesting when you attack an existing product, not when you set the rules for the "attack" that you'll carry out later.

Obviously with this attack it's a bit of a problem to provide a PoC - but at this point comes the mislabeling of the post. Should have been called "a clever attack concept".


The attack is carried out on an existing product, the OR1200, an open-source chip whose schematics are generally available.


Calling it an "attack" is no less of a hyperbole than changing the sources of nginx to introduce a remote exploit.

What this team did is to create a compromised version of an open source product. They engineered it to be defective in the first place. This is nothing beyond a thought experiment.


There is a crucial difference between hardware development and software development that you are missing: hardware has several stages of development/implementation that span several parties connected only by business contracts.

If you want to squeeze the attack into your analogy, it would be as if a compiler writer were malicious and added an attack to any/all nginx binaries without modifying the original source code.


> They engineered it to be invisibly defective in the first place.

That's the difference and the interesting part. Unless most people who aren't me decap their chips and go over them with an electron microscope before use...


A compromised version that isn't noticeably different from any other version, which could be surreptitiously deployed on a large scale.


Nope, they built one.

> As +Andreas Schou said in his share, "Okay. That's it. I give up. Security is impossible."


Just because you clicked it thinking it would be something else doesn't mean it's clickbait.


It's hyperbolic though.


Demonstrated attacks with physical PoCs aren't hyperbolic.


The magic of hacking is cleverly exploiting a given situation, not setting the rules by modifying the product to be compromised. The "attack" here would be obtaining access to the ASIC design computers, while what the article discusses is simply a backdoor to be installed by the attacker.


No, I don't think you're seeing the proposed attack here. As I understand it:

1. A company tapes out their chip to be fabbed, as most chips are, off-shore. The design is clean.

2. The backdoor is added after it is out of the company's hands -- say, by the fab. A single register and some wiring are not that difficult to add by hand to the design masks -- effectively, it's just an ECO.

Post tape-out tampering is a mounting concern for our clients. [EDIT: I work for an ASIC Design & Verification consultancy]


Oh come on! As if security were a sport with rules. Your irrelevant pigeon-holing of the threat surface is just the sort of thinking that the black-hats like to see.


I was referring to the wording of the headline. Not the attack itself.

"Most demonically clever". And the old "this" curiosity trick I often see in belly fat ads.


I agree with that downvoted comment. This is not an attack. It's a backdoor.


That's a distinction without a difference imo. "Oh, thank goodness I was compromised by a hardware backdoor not a software flaw. What a relief!"


"backdoor" and "attack" refer to entirely distinct concepts: backdoor is, given administrative access to equipment or code, installing an hidden component that will be used later by an adversary.

"attack" is the operational word, sometimes using a backdoor but mostly just exploiting a given system to your benefit.

Coming to read this article, I'd have no problem if it stated "the most sophisticated backdoor I've ever seen", but it implied that it exposes an unprecedentedly sophisticated attack - I was expecting something operational of the Stuxnet variety, and instead I got a research paper.

If this article showed how this backdoor was embedded into a real (commercial and widely used) chip - unbeknownst to the chip maker - and later used by an actual adversary, then that would be an attack.


You do realize that Stuxnet required a failure of physical security. Either an agent inserted it intentionally via USB or an employee inserted a compromised USB drive unwittingly. Neither should happen in a secure facility.

The paper shows a simple yet sophisticated PoC. The simplicity is the scary part. BTW, do you think the University of Michigan spends more on research than the NSA, GCHQ, BND, DGSE, 3PLA, etc.?


Looking at my pay check tells me that Michigan does not pay more than said agencies :).


There is some value to precision in terminology, but here it seems to me to be merely pedantic. Are there any important practical consequences at issue here?


It's a big difference, especially in a legal sense, and especially if the attacker is a government player.

Walking in a back door which you have a key for, if argued correctly in a court, is a lot different than breaking in through a window.


No. If your job is to secure systems there is zero difference. If anyone other than the primary holder has a key, it's not secure. If you can break in, it's not secure. Legal arguments are irrelevant.


I am not convinced that the legal difference will carry much weight with all the potential exploiters, but if it does, there is always the question of how you acquired the key.


I really hope you're not involved with securing sensitive systems. This is the nightmare right here. Who cares about a hardened OS when the hardware is compromised in a way that is virtually undetectable? Very few have the resources to verify every item on the board and its purpose. The fact that this is even possible is a problem.



