V0LTpwn: Attacking x86 Processor Integrity from Software (arxiv.org)



Intel SGX is really a big gift to security researchers, second only to speculative execution and friends. Intel is selling it as a way to keep secrets safe inside the processor against attackers with root/hypervisor software access or even physical access. Of course, a bevy of attacks in recent months has demonstrated that this isn’t really achievable given the extremely large attack surface.

SGX has opened Intel up to physical attacks on the chip in a way that really hasn’t been interesting in the past. Previously, attacking a CPU physically wouldn’t give you any more capabilities than you already had (given unlimited ability to tamper with I/O and memory). Now, physically attacking the CPU can be used to reveal SGX secrets or mess with SGX computations. Expect a lot more attacks like this to come out in the future!


> Intel is selling it as a way to keep secrets safe inside the processor against attackers with root/hypervisor software access or even physical access. Of course, a bevy of attacks in recent months has demonstrated that this isn’t really achievable given the extremely large attack surface.

As a layman I have to wonder, should we expect similar attacks on Apple's Secure Enclave in the future?


It greatly helps Apple that the T2 is a separate chip specially designed to do one function well: crypto in a secure way, even in the presence of physical attacks. How to do that has been known for quite some time. For example, modern SIM cards or cards for satellite TV are very secure, and a physical attack is possible only if one is willing to spend something like over $100K per card.

What Intel is trying to do is allow general-purpose secure computing at minimal extra cost. This is relatively new and, as various bugs demonstrate, may not even be achievable. That is, it may be possible to create a provably secure chip, but its cost would make it a niche product.


> a physical attack is possible only if one is willing to spend something like over $100K per card

Firmware recovery from "hardened" microcontrollers costs $15-25k here, and even that's most likely a "special foreigner price".


It’s not about firmware recovery: it’s about tampering with it in a non-intrusive way, or extracting keys from its secured non-volatile memory.


Yes, MCUs with intentionally hardened flash blocks are what those firmware-recovery shops specialize in. They handle things like Gemalto SIM and credit-card chips.


The firmware shouldn't be in the internal flash where the keys are, though.


It looks to me like having a standalone chip is not great in general, due to hardware attacks: you can easily MITM the system bus, for example. A number of attacks become much harder once you use an integrated secure element.

The form factor of the iPhone, of course, almost makes the T2 secure enclave an integrated secure module. I also don’t think hardware attacks are really considered anyway (and, as we see, most researchers focus on software attacks).


Apple's Secure Enclave is a coprocessor designed specifically to reduce attack surface, and minimize the surface area of untrusted code.

It physically separates the ephemeral secret storage (Touch/Face ID) and the hardcoded crypto keys (not even the SE firmware has access to the key material; it's just allowed to run the circuits).

Check out the iOS Security Guide whitepaper.


Interesting question; I'd love to read some insights about that too. From my really basic understanding, Apple's Secure Enclave is a coprocessor, so different rules should apply, but I'm also a poor layman in hardware design.


AMD SEV has the same problem, with an even larger attack surface. But at least it's compatible with the rest of the world, since it runs ordinary VMs, and can be used as a drop-in defense-in-depth measure.

Intel SGX, on the other hand, is a severely constrained alien environment, and porting stuff to run in SGX is a massive undertaking requiring special compilers and SDKs. The cost-benefit calculation does not add up, given that it does not provide the level of security it claims to.

I wouldn't be surprised if SEV is eventually adopted as the industry standard while SGX disappears into irrelevance.


> Expect a lot more attacks like this to come out in the future!

Hopefully SGX will be so utterly broken they'll never make anything like it ever again.


Every time someone has uncovered a defect in Intel’s most authoritative security component, the answer has been to add yet another, even more privileged component on top of it, to fix things “once and for all”.

The current state is an overly complex Rube Goldberg contraption, obviously not fit for the task at hand. And Intel can’t or won’t fix it because of compatibility.

And since each new component is even more privileged, it automatically becomes an even more valuable target to hack.

So expect the contraption to get worse, not better. Even if everyone (probably Intel engineers too) agrees it’s the wrong thing to do.


I feel dumb for asking this, but is this really something that's not supposed to be possible from software? My assumption was always that if you overclock, undervolt, etc., then (a) you already have high-privileged code running, and (b) anything can go wrong, from crashes to physically breaking your CPU, so I'm not really shocked (not sure if pun intended) to hear undervolting can corrupt software state. Should I be?


It's not a dumb question, not at all.

Processor vendors have been trying to tell us that they can protect parts of the CPU from its user. Intel's version of that is called SGX.

There are very few good use cases of this. Also it doesn't really work.

But there are a gazillion ways to attack it, so plenty of papers can be written about it.


To add to this: imagine that your enclave computes a wrong result and then signs that result along with an attestation that it ran the correct code.

This could be fine if the computation first goes through a consensus mechanism that tolerates faults, but could be devastating otherwise.
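
A minimal sketch of that fault-tolerance idea (hypothetical helper, not from any particular system): run the same computation in several independent enclaves and accept a result only when a quorum of attested answers agree, so a single faulted enclave cannot sneak a wrong-but-signed result through.

    from collections import Counter

    # Hypothetical sketch: accept an attested result only if a quorum
    # of independently run enclaves produced the same value.
    def accept_result(attested_results, quorum):
        value, votes = Counter(attested_results).most_common(1)[0]
        return value if votes >= quorum else None

    # Two healthy enclaves outvote one voltage-faulted enclave:
    print(accept_result([42, 42, 41], quorum=2))  # -> 42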


Sounds like a security Perpetual Motion Machine.


> you already have high-privileged code running

I just glanced at the abstract, but they seem to be attacking the SGX enclave, which is supposed to be isolated even from privileged code.


...in other words, they attacked the horribly user-hostile anti-feature whose real practical purpose is DRM, and so I certainly think nothing of value was lost.

https://www.gnu.org/philosophy/right-to-read.en.html


There are many interesting uses for enclaves, most of which have nothing to do with DRM. A good example is Signal’s secure value recovery mechanism: https://signal.org/blog/secure-value-recovery/


It is interesting and useful as a workaround for the existing situation, i.e. running under an untrusted OS. It’s not useful as a proper solution (as opposed to a hack/workaround). Except for DRM.


Why not? An OS is a large beast; it’s a massive TCB. It’s much easier to audit a small amount of code running in an enclave with few dependencies.


Because, as shown by SGX, it doesn’t work. And if it did work, it would lead to a whole new magnitude of malware problem: malware protected from you by hardware.

The right way to do it is the other way round: have a trusted hypervisor and run your untrusted OS in it. See TrustZone for example.


I’m not sure why you think that SGX shows hardware enclaves “don’t work”. I also don’t see why you think enclaves “protect the malware from you”. Enclaves are created and started from host code, which can interrupt or terminate them at any time.

The scheme you suggest, which isn’t typically how TrustZone is used, gives zero integrity and confidentiality guarantees for applications. I don’t know if it’s “the right way” for some threat model, but for the most typical TEE use cases which are trying to establish strong integrity and confidentiality guarantees in the presence of an untrusted host, it’s absolutely not right nor useful.


Critical flaws in SGX are being found all the time. It's a failed experiment that someone put into commercial products. At the same time, there are no examples of an SGX-like architecture that actually works.

Enclaves are started from host code, but host code can't see into them. In other words you have no way of telling whether the enclave you've started is what you wanted, or if it includes malware.

The scheme I've described does give integrity; after all, that's precisely how the actual trusted components (Secure Enclave, TEEs, etc.) work: you have a trusted hypervisor, and you run your secure components outside the untrusted OS, in a trusted environment within that trusted hypervisor. It gives you everything SGX could, without its fundamental design problems.


What you’re describing is a different threat model: your application goes rogue. SGX, and TEEs in general, attempt to solve the reverse: your host goes rogue.

Research has shown that it is not a panacea, but we already knew that. It’s hardware, not a foolproof cryptographic solution. Some solutions have enclaves gather their results in a fault-tolerant way to increase security even further.

So we could say that Intel and hardware vendors in general are looking for a solution that doesn’t exist. Or we could say that this greatly improves your options when you are really scared of host compromises in your product.


I don’t think it’s helpful to conflate side-channel or micro-architectural attacks with attacks on SGX itself. Stating that hardware enclaves don’t work and don’t ship is absurd; they are present in virtually every modern phone, for one thing.

Code running in an SGX enclave is measured and absolutely known at enclave launch. The fact that enclave memory is encrypted for confidentiality is unrelated.
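
As a rough illustration of what “measured at launch” means, here is a simplified sketch; the real MRENCLAVE value is built from SHA-256 over the EADD/EEXTEND operations that load the enclave, so this only captures the idea:

    import hashlib

    # Simplified sketch: the measurement is a running hash over the
    # enclave's initial code and data pages, so both the host and a
    # remote verifier know exactly what was launched.
    def measure(pages):
        h = hashlib.sha256()
        for page in pages:
            h.update(page)
        return h.hexdigest()

    expected = measure([b"enclave code v1.0", b"initial data"])
    actual = measure([b"enclave code v1.0", b"initial data"])
    assert actual == expected  # remote attestation proves this measurement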

I don’t understand why you think trusting the hypervisor helps anything. You are still open to this attack, and to all side-channel attacks, as soon as you run any untrusted code.


Which phone uses an enclave on top of an untrusted OS, as opposed to running the enclave alongside the untrusted OS?


Every Android phone with an ARM chip that has TrustZone.


Which doesn't run on top of an untrusted OS, like SGX does, but rather alongside it, which is what I'm describing.


Both SGX and TrustZone are TEEs, which are segregated from the rich execution environment (the untrusted OS).


Really interesting use of SGX. At the time of writing, it says they hadn't decided whether they'll use it in Signal yet; are there any updates?


Not that I know of; I suspect a limiting factor is that there aren’t many cloud providers with good support for SGX at the moment.


There have been a couple of non-DRM uses of SGX, which I’m sure you’ve seen. And regardless of what it’s being used for, I think you know it’d be more productive to talk about the attack rather than the potential uses of what they’re attacking.


Not sure why you're being downvoted. Software running on the user's machine has no business keeping secrets from the user. This "feature" is the antithesis of computing freedom as we know it.

Trusted computing is at least salvageable. The issue is not the technology but who owns the keys. If users can install their own keys, the technology will empower them with increased security. SGX cannot be used to empower the user like this. It's specifically designed to protect software from the user.


No, the user is still in control of what they execute on the machine, whether it runs in an enclave or not. If anything, because it is deliberately unable to patch itself, software running in an enclave gives more control and auditability to a user, who can know exactly what code they are running.

Importantly, a user who does not fully trust the machine administrator can still maintain integrity and confidentiality over their computation.

SGX memory encryption keys are ephemeral: they are generated at boot, and they do not need to be owned by anyone to be useful. On the contrary!


Is SGX actually used for DRM? I read that TPMs were designed for DRM as well and ended up being used for entirely different purposes (secure boot, disk encryption).


I read somewhere that Netflix's DRM runs in SGX, and that they're basically the only ones.


I read that Netflix doesn't use SGX.


I might be misreading the paper, but it seems like this is due to an unfaithful implementation of the hardware that allows undervolting to occur with the right mix of instructions, P-states, and software, rather than a fundamental design flaw? Basically the equivalent of shipping a chip with the operating voltage set so aggressively that the right sequence of instructions trips it up.


How do security researchers manage to find stuff like this? Do they just run some sort of fuzzer until something interesting happens and then try to reproduce it? Do they scan Intel manuals from top to bottom and are intelligent enough to read vulnerabilities between the lines? I am incredibly fascinated by this stuff, but reading things like:

> The hardware interfaces to adjust the voltage (Section 2.2) are undocumented. To use them, we had to rely on third-party reverse-engineered partial documentation and piece it together to develop a real-world setup running on our systems, which required substantial effort on our part.

is so strange to me. I have no idea how people manage to, or even decide to, take on tasks like that. I have trouble finding that sort of stuff even when I know exactly what I'm looking for.
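
For the curious, the reverse-engineered interface the authors mention boils down to writes to the undocumented MSR 0x150. The bit layout in this sketch comes from third-party write-ups around Plundervolt/V0LTpwn, so treat it as an assumption rather than official documentation, and note that careless offsets can hang or damage a machine. It requires root and the msr kernel module:

    import os
    import struct

    MSR_VOLTAGE = 0x150  # undocumented voltage-control MSR (reverse-engineered)

    def write_voltage_offset(cpu, plane, offset_mv):
        # 11-bit signed offset in units of ~1/1024 V, per reverse-engineered docs
        units = round(offset_mv * 1.024)
        value = (0x8000001100000000          # bit 63 plus "write" command bits
                 | (plane << 40)             # plane 0 = CPU core, 2 = cache, ...
                 | ((units & 0x7FF) << 21))  # offset field at bits 21-31
        fd = os.open("/dev/cpu/%d/msr" % cpu, os.O_WRONLY)
        try:
            # writing 8 bytes at the MSR's address performs a WRMSR
            os.pwrite(fd, struct.pack("<Q", value), MSR_VOLTAGE)
        finally:
            os.close(fd)

    # e.g. write_voltage_offset(0, plane=0, offset_mv=-50)  # undervolt core ~50 mV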


Usually they read a lot, have a general idea of how things might be organized and where vulnerabilities might lie, and then they try a bunch of things to see what works.


This. After having read a lot of papers, you can figure out how to use that knowledge to produce another one.


Is there a good site or journal that specializes in these things, other than what occasionally gets aggregated to HN or Reddit? I'm always intrigued by processor-level exploits, especially more obscure ones like this.


Having spent the last month reading about this:

1. Google keywords like smart card, TPM, HSM, secure element, TEE, SGX, secure enclave, TrustZone, etc.

2. Then add keywords like attack, threat model, etc.

3. Read all the papers.


Worth noting that Spectre (or was it Meltdown?) was found by just reading the processor manuals.


> the only software-based fault exploit

Plundervolt came out around the same time and is similar; the authors may not have known about each other.



I only read a little bit of V0LTpwn, but Plundervolt is really cool. In one of the attacks, a fault is caused in the RSA scheme initialization, leading to reduced strength that can be broken. I guess a similar attack is possible with AES, but I don't understand it as well.
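
The classic illustration of why a single fault is devastating for RSA is the Boneh-DeMillo-Lipton attack on RSA-CRT. This toy sketch (textbook parameters, not necessarily the exact variant Plundervolt uses) shows that if one CRT half of a signature is corrupted, gcd(sig^e - m, N) leaks a prime factor of N:

    from math import gcd

    # Toy textbook RSA-CRT parameters (illustrative sizes only)
    p, q = 61, 53
    N = p * q
    e = 17
    d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse, Python 3.8+

    def sign_crt(m, fault=False):
        sp = pow(m, d % (p - 1), p)
        sq = pow(m, d % (q - 1), q)
        if fault:
            sp ^= 1  # a single bit flip, e.g. induced by undervolting
        # Garner recombination of the two CRT halves
        h = ((sq - sp) * pow(p, -1, q)) % q
        return sp + p * h

    m = 42
    assert pow(sign_crt(m), e, N) == m  # an unfaulted signature verifies
    s_bad = sign_crt(m, fault=True)     # faulted in the mod-p half
    factor = gcd(pow(s_bad, e, N) - m, N)
    print(factor, N % factor)           # -> 53 0: the modulus is factored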


It seems it requires root privileges to start with. Am I misunderstanding?


SGX is supposed to be able to hide stuff from root.


Extending the other answer: root on, e.g., a hypervisor guest also shouldn't give you access to secrets hidden even from the bare-metal root account.



