So I suppose if we're going to be exceptionally paranoid, we would run everything on virtual machines. Unless I'm missing something, even if a process in a VM managed to trigger a physical nonideality on the host machine that grants access to supervisor mode, it would still be in user mode on the VM. At least then other clever things would have to be done to get out of the VM sandbox and into the exposed physical-machine supervisor space.
Of course, if we're going to be paranoid, we should probably assume that the CIA and their friends have already come up with something even more nefarious that goes through these countermeasures like tissue-paper.
Once you have control of the supervisor you could just bitflip something within the VM to give you root there, if you needed it. Or you could just read/write arbitrary memory.
Depends on the type of VM. A VM emulating the same type of processor just runs the client code directly on the processor. What stops the client code from messing with the rest of the system, which includes the VM supervisor code, is the processor's protection circuitry. If client code performs the exploit to kick the processor into supervisor or hypervisor mode, it will be client code running in that elevated mode, and privileged instructions would no longer trap to the VM's supervisor or the host OS.
Now if the VM is emulating the client's individual instructions, which is the usual approach for running code built for a different type of CPU, then it's a different story. Under pure emulation the exploit wouldn't work. It would have to know what the host CPU architecture was and get the emulator code itself to trip the extra gate. As you said, all that would do is put the VM supervisor code into the CPU's supervisor mode, which it probably already is in if the VM is running on bare metal without a host OS.
But if the VM uses JIT conversion of the instructions for speed, then the exploit becomes possible again.
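To make the pure-emulation case concrete, here is a minimal sketch of an interpreter-style dispatch loop (a made-up toy ISA for illustration only, not any real emulator or the paper's setup). The guest's privilege level is just a field in a struct in host memory and the guest's instructions never execute natively, so a trigger sequence in guest code can't toggle the hidden hardware, and even a host-level escalation wouldn't change the guest's emulated mode:

    /* Toy interpreter: the guest's "privilege" is only software state. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_NOP = 0, OP_PRIV_OP = 1, OP_SYSCALL = 2, OP_HALT = 3 };

    typedef struct {
        uint32_t pc;
        int      guest_supervisor;   /* emulated privilege bit, lives in host RAM */
        uint8_t  mem[256];           /* emulated guest memory */
    } vcpu_t;

    static void run(vcpu_t *cpu)
    {
        for (;;) {
            uint8_t op = cpu->mem[cpu->pc++];
            switch (op) {
            case OP_NOP:
                break;
            case OP_PRIV_OP:
                /* Privileged guest instruction: checked in software, never
                 * executed natively, so it can't trip the host's hardware. */
                if (!cpu->guest_supervisor) {
                    printf("guest fault: privileged op in user mode\n");
                    return;
                }
                break;
            case OP_SYSCALL:
                cpu->guest_supervisor = 1;   /* emulated mode switch only */
                break;
            case OP_HALT:
                return;
            default:
                printf("guest fault: illegal opcode %u\n", (unsigned)op);
                return;
            }
        }
    }

    int main(void)
    {
        vcpu_t cpu = { .pc = 0, .guest_supervisor = 0 };
        cpu.mem[0] = OP_PRIV_OP;   /* guest attempts a privileged op in user mode */
        cpu.mem[1] = OP_HALT;
        run(&cpu);
        return 0;
    }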
Makes perfect sense, but implementing it at any sort of scale seems cumbersome/expensive with current tech. Implementing it without scale is meaningless: you could just send a beautiful woman to seduce someone important and have far fewer loose ends, fewer things to screw up, etc.
I'm pretty sure this attack consists of a slight alteration to the design of the chip (namely, adding that extra logic gate), so it automatically scales to precisely the number of chips made with that design.
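For intuition on what that extra bit of logic does, here is a rough behavioral model of a charge-pump style trigger (my own made-up constants, not the paper's actual circuit): every toggle of a rarely-used victim wire adds a little charge to a hidden capacitor, the capacitor leaks continuously, and only a deliberately long burst of toggles pushes it over the threshold that overrides a privilege bit:

    /* Toy charge-pump trigger model; pump/leak/threshold values are invented. */
    #include <stdio.h>

    int main(void)
    {
        double charge = 0.0;
        const double pump = 0.05;        /* charge added per toggle of the victim wire */
        const double leak = 0.99;        /* fraction of charge retained each cycle     */
        const double threshold = 1.0;    /* level at which the hidden gate fires       */
        int privileged = 0;

        for (int cycle = 0; cycle < 200; cycle++) {
            int toggling = (cycle >= 100);           /* attacker's trigger burst starts here */
            charge = charge * leak + (toggling ? pump : 0.0);
            if (charge >= threshold)
                privileged = 1;                      /* hidden logic overrides the mode bit  */
        }
        printf("privileged = %d, final charge = %.3f\n", privileged, charge);
        return 0;
    }

In this sketch the leak is what keeps ordinary workloads, which never sustain the toggle burst, from firing the trigger by accident.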
This totally ignores the non-functional test modes and DFT modes of any modern IC. There are many modes which work down at the gate level, and sometimes even the transistor level, to identify and isolate logic errors, badly designed circuits, and manufacturing defects. This can be done in an automated way on every IC. I would be surprised if this attack couldn't be thwarted by a series of cleverly designed scan vectors. And since the attacker doesn't know what vectors I'll run when designing his circuit, hiding should be very hard. For example, the attack in the paper is basically just a crosstalk bug, and that is exactly the kind of thing low-level testing is meant to chase out.
I agree it's unfair to say modern testing techniques were totally ignored but I think many standard techniques were not considered.
For example, they dismissed the idea of filling empty space. I think this is easily achieved. Post place-and-route, go find all empty space and fill it with scan flops; if you need something smaller, put inverters between them. Tap into the chain in the vicinity of the void to minimize route impact. At test, scan a random bit pattern through. This will ensure all flops are present, since I can determine the chain depth and all bits of the string must be retained. I am also critical of the claimed need to test against golden references, as I can just compare against the CAD. I've personally designed scan TD vectors which found one weak buffer incorrectly selected by the mapper, and I didn't know the problem was a weak buffer when I started; I think that is equivalent to loading the net with extra gates. On process attacks, process monitoring circuits exist, and I can verify them with external equipment on an analog test bus, which is how we DV them.
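To make the filler-flop check concrete, here is a toy software model of the scan shift described above (my own illustration with invented chain and pattern lengths, nothing from the paper): shift a random bit string through the filler chain and confirm every bit reappears intact after exactly the expected chain depth, so a missing, added, or repurposed flop shows up as a depth or retention mismatch:

    /* Toy scan-chain shift check; CHAIN_LEN and PATTERN_LEN are invented. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHAIN_LEN   64     /* expected number of filler scan flops */
    #define PATTERN_LEN 128    /* length of the random stimulus        */

    /* One shift cycle: scan_in enters the head, the tail bit falls out. */
    static int shift(uint8_t chain[], int len, int scan_in)
    {
        int scan_out = chain[len - 1];
        memmove(&chain[1], &chain[0], (size_t)(len - 1));
        chain[0] = (uint8_t)scan_in;
        return scan_out;
    }

    int main(void)
    {
        uint8_t chain[CHAIN_LEN] = { 0 };
        uint8_t stimulus[PATTERN_LEN];
        int errors = 0;

        srand(12345);
        for (int i = 0; i < PATTERN_LEN; i++)
            stimulus[i] = (uint8_t)(rand() & 1);

        /* Bit i must reappear at the tail exactly CHAIN_LEN cycles later. */
        for (int cycle = 0; cycle < PATTERN_LEN + CHAIN_LEN; cycle++) {
            int in  = (cycle < PATTERN_LEN) ? stimulus[cycle] : 0;
            int out = shift(chain, CHAIN_LEN, in);
            if (cycle >= CHAIN_LEN && out != stimulus[cycle - CHAIN_LEN])
                errors++;
        }
        if (errors)
            printf("chain FAIL: %d mismatches\n", errors);
        else
            printf("chain OK: depth and retention as expected\n");
        return 0;
    }

On real silicon the shift would be driven through the scan ports by the tester, but the depth-and-retention idea is the same.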
I definitely think the paper is very good and I will go through it in more detail when I have time, but I also think they are too quick to dismiss the defenses, and I don't think their defense list is exhaustive.
I should also mention, this is my opinion on detecting circuits inserted in parallel to a given design. If the attacker is allowed to modify the design directly... Then all bets are off. :)
Forget going in and "hiding" a backdoor: the people over at Libreboot have been warning about the dangers of Intel's Management Engine for years now -- https://libreboot.org/faq/#intelme
It is easier for a large agency with an interest in controlling computers to install a backdoor at the manufacturing stage than to individually access the computers of "suspects". Compared to the RAM attack, physical access to a semiconductor fab IS easy pickings for a clandestine government agency.
Billions of transistors on a modern chip wafer... Anyone who claims to know exactly what all of them are there for is either very smart, or a little naive.
Yes, but it's much harder to hide a conspiracy to tamper with paper votes if it comes right down to it, since they're not all counted at the same place.
if you can get to the same amount of tampering risk for cheaper, you should switch to that process.
and you obviously can: you can do, let's say, multi-factor voting, where you use multiple devices to record your intent and different kinds of hardware to tally the votes.
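As a sketch of what I mean by assessing the same intent on independent hardware (a toy protocol of my own, not any real voting system): record each ballot on several devices built on different hardware and only accept it when every record agrees, escalating any mismatch to a manual paper recount:

    /* Toy multi-device ballot reconciliation; DEVICES and the sample ballots are invented. */
    #include <stdio.h>

    #define DEVICES 3

    /* Returns the agreed choice, or -1 if the independent records disagree. */
    static int reconcile(const int records[DEVICES])
    {
        for (int i = 1; i < DEVICES; i++)
            if (records[i] != records[0])
                return -1;
        return records[0];
    }

    int main(void)
    {
        int ballot_ok[DEVICES]  = { 2, 2, 2 };   /* all devices agree: candidate 2     */
        int ballot_bad[DEVICES] = { 2, 2, 1 };   /* one device (or backdoor) disagrees */

        printf("ballot 1 -> %d\n", reconcile(ballot_ok));    /* accepted as candidate 2    */
        printf("ballot 2 -> %d\n", reconcile(ballot_bad));   /* -1: goes to a hand recount */
        return 0;
    }

A hardware backdoor would then have to be present, and coordinated, in every kind of device to flip a ballot without tripping the mismatch path.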
This was brought up the last time it was posted, to the extent of the post itself being given a confusing, clickbaity, editorializing title about Ken Thompson. But I don't see it as the same thing.
Ken Thompson showed that he could hide a backdoor in a compiler where you'd think you could verify the compiler as safe by reading its source code and even recompiling from that source code, but in fact the compiler binary was sneakily re-inserting the backdoor into its output whenever it recognized that it was compiling its own source.
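To make the trick concrete, here is a toy model of it (my own runnable sketch, nothing like Thompson's actual code): the "compiler" recognizes two kinds of source, the login program and the compiler itself, and quietly re-inserts its payload into both outputs, so the clean source never shows anything:

    /* Toy "trusting trust" model: the backdoor lives only in the binary. */
    #include <stdio.h>
    #include <string.h>

    static void compile(const char *source, char *binary, size_t cap)
    {
        /* "Compilation" here is just stamping the source name into the output. */
        snprintf(binary, cap, "[binary of: %s]", source);

        if (strstr(source, "login"))
            strncat(binary, " + hidden root backdoor", cap - strlen(binary) - 1);

        if (strstr(source, "compiler"))
            strncat(binary, " + self-reinserting payload", cap - strlen(binary) - 1);
    }

    int main(void)
    {
        char out[256];

        compile("clean login source", out, sizeof out);
        printf("%s\n", out);   /* backdoored output, though the source is clean       */

        compile("clean compiler source", out, sizeof out);
        printf("%s\n", out);   /* the payload survives a rebuild from pristine source */

        return 0;
    }

The second branch is the self-perpetuating part: rebuild the compiler from pristine source with this binary and the payload comes right back.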
This demonstration, on the other hand, is saying that they can hide a backdoor in a microchip, where you can't verify it as safe.
The chip is not, for example, making the sneaky capacitor disappear when you use the computer to view schematics of the chip.
I don't see where "trusting trust" is involved, just "trust".
This work is in no way related to trusting trust. In Ken Thompson's work, you have a booby trapped compiler injecting code into programs and newly compiled compilers. Here you have a hardware attack that you can trigger using a magic sequence of instructions and go into kernel mode.
Sorry, but that's a clickbait headline. I clicked it expecting to see a real attack dissected, not someone's idea for attacking a hypothetical platform.
A proof-of-concept is provided. If you are saying that the only way to gauge that an attack is "clever" (an adjective I'm not sure I'd apply here anyway) is to find it in the wild, I don't think that's fair.
Proof of concept on your own platform doesn't count. Security-wise, it gets interesting when you attack an existing product, not when you set the rules for the "attack" that you'll carry out later.
Obviously with this attack it's a bit of a problem to provide a PoC - but that's where the mislabeling of the post comes in. It should have been called "a clever attack concept".
Calling it an "attack" is no less hyperbole than changing the sources of nginx to introduce a remote exploit.
What this team did is to create a compromised version of an open source product. They engineered it to be defective in the first place. This is nothing beyond a thought experiment.
There is a crucial difference between hardware development and software development that you are missing: hardware has several stages of development/implementation that span several parties connected only by business contracts.
If you want to squeeze the attack into your analogy, it would be as if a compiler writer were malicious and added an attack to any/all nginx binaries without modifying the original source code.
> They engineered it to be *invisibly* defective in the first place.
That's the difference and the interesting part. Unless most people who aren't me decap their chips and go over them with an electron microscope before use...
The magic of hacking is cleverly exploiting a given situation, not setting the rules by modifying the product to be compromised. The "attack" here would be obtaining access to the ASIC design computers, while what the article discusses is simply a backdoor to be installed by the attacker.
No, I don't think you're seeing the proposed attack here. As I understand it:
1. A company tapes out their chip to be fabbed, as most chips are, off-shore. The design is clean.
2. The backdoor is added after it is out of the company's hands -- say, by the fab. A single register and some wiring are not that difficult to add by hand to the design masks -- effectively, it's just an ECO.
Post tape-out tampering is a mounting concern for our clients. [EDIT: I work for an ASIC Design & Verification consultancy]
Oh come on! As if security were a sport with rules. Your irrelevant pigeon-holing of the threat surface is just the sort of thinking that the black-hats like to see.
"backdoor" and "attack" refer to entirely distinct concepts: backdoor is, given administrative access to equipment or code, installing an hidden component that will be used later by an adversary.
"attack" is the operational word, sometimes using a backdoor but mostly just exploiting a given system to your benefit.
Coming to read this article, I'd have no problem if it stated "the most sophisticated backdoor I've ever seen", but it implied that it exposes an unprecedentedly sophisticated attack - I was expecting something operational of the Stuxnet variety, and instead I got a research paper.
If this article showed how this backdoor was embedded into a real (commercial and widely used) chip - unbeknownst to the chip maker - and later used by an actual adversary, then that would be an attack.
You do realize that Stuxnet required a failure of physical security: either an agent intentionally inserted it via USB, or an employee unwittingly plugged in a compromised USB drive. Neither should happen in a secure facility.
The paper shows a simple yet sophisticated PoC. The simplicity is the scary part. BTW, do you think the University of Michigan spends more on research than the NSA, GCHQ, BND, DGSE, 3PLA, etc.?
There is some value to precision in terminology, but here it seems to me to be merely pedantic. Are there any important practical consequences at issue here?
No. If your job is to secure systems, there is zero difference. If anyone other than the primary owner has the key, it's not secure. If you can break in, it's not secure. Legal arguments are irrelevant.
I am not convinced that the legal difference will carry much weight with all the potential exploiters, but if it does, there is always the question of how you acquired the key.
I really hope you're not involved with securing sensitive systems. This is the nightmare right here. Who cares about a hardened OS when the hardware is compromised in a way that is virtually undetectable? Very few have the resources to verify every item on the board and its purpose. The fact that this is even possible is a problem.