SGX has opened Intel up to physical attacks on the chip in a way that really hasn’t been interesting in the past. Previously, attacking a CPU physically wouldn’t give you any more capabilities than you already had (with unlimited ability to tamper with I/O and memory). Now, physically attacking the CPU can be used to reveal SGX secrets or mess with SGX computations. Expect a lot more attacks like this to come out in the future!
As a layman I have to wonder, should we expect similar attacks on Apple's Secure Enclave in the future?
What Intel is trying to do is to allow general-purpose secure computing at minimal extra cost. This is relatively new and, as various bugs demonstrate, may not even be achievable. I.e. it may be possible to create a provably secure chip, but its cost will make it a niche product.
Firmware recovery from "hardened" microcontrollers costs $15-25k here, and even that's most likely a "special foreigner price"
The form factor of the iPhone, of course, almost makes the T2 secure enclave an integrated secure module. I also don’t think hardware attacks are really part of its threat model anyway (and, as we see, most researchers focus on software attacks).
It physically separates the ephemeral secret storage (Touch/Face ID) from the hardcoded crypto keys (not even the SE firmware has access to the key material; it's just allowed to run the circuits).
Check out the iOS Security Guide whitepaper.
Intel SGX, on the other hand, is a severely constrained alien environment and porting stuff to run in SGX is a massive undertaking, requiring special compilers and SDKs. The cost-benefit calculation does not add up, given that it does not provide the level of security it claims to have.
I wouldn't be surprised if SEV is eventually adopted as the industry standard, while SGX disappears into irrelevance.
Hopefully SGX will be so utterly broken they'll never make anything like it ever again.
The current design is an overly complex Rube Goldberg contraption, obviously not fit for the task at hand. And Intel can’t or won’t fix it because of compatibility.
And since each new component is even more privileged, it automatically becomes an even more valuable target to hack.
So expect the contraption to get worse, not better. Even if everyone (probably Intel engineers too) agrees it’s the wrong thing to do.
Processor vendors have been trying to tell us that they can protect parts of the CPU from its user. Intel's version of that is called SGX.
There are very few good use cases for this. Also, it doesn't really work.
But there are a gazillion ways to attack it, so plenty of papers can be written about it.
This could be fine if the computation first goes through a consensus mechanism that tolerates faults, but could be devastating otherwise.
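To make the consensus idea concrete, here is a toy sketch (entirely illustrative, not from any real deployment): the same computation runs on several independent enclave replicas, and a simple majority vote masks a replica whose result was corrupted by a fault such as a voltage glitch.

```python
from collections import Counter

def fault_tolerant_result(replica_outputs):
    """Toy majority vote over results from independent enclave replicas.

    A faulted (e.g. glitched) replica can return a corrupted result, but
    as long as a strict majority computes honestly, the vote masks it.
    """
    counts = Counter(replica_outputs)
    value, votes = counts.most_common(1)[0]
    if votes <= len(replica_outputs) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

# Three replicas run the same computation; one is glitched.
print(fault_tolerant_result([42, 42, 41]))  # → 42, the faulty 41 is masked
```

Of course this only tolerates a minority of faulty replicas; an attacker who can glitch all of them at once defeats it, which is why it helps in the "consensus" setting and not in the single-enclave one.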
I just glanced at the abstract, but they seem to be attacking the SGX enclave, which is supposed to be isolated even from privileged code.
The right way to do it is the other way round: have a trusted hypervisor and run your untrusted OS in it. See TrustZone for example.
The scheme you suggest, which isn’t typically how TrustZone is used, gives zero integrity and confidentiality guarantees for applications. I don’t know if it’s “the right way” for some threat model, but for the most typical TEE use cases, which try to establish strong integrity and confidentiality guarantees in the presence of an untrusted host, it’s absolutely neither right nor useful.
Enclaves are started from host code, but host code can't see into them. In other words you have no way of telling whether the enclave you've started is what you wanted, or if it includes malware.
The scheme I've described does give integrity - after all, that's precisely how the actual trusted components (Secure Enclave, TEEs, etc.) work: you have a trusted hypervisor, and then run your secure components outside the untrusted OS, in a trusted environment within that hypervisor. It gives you everything SGX could, without its fundamental design problems.
Research has shown that it is not a panacea, but we already knew that. It’s hardware, not a foolproof cryptographic solution. Some solutions have enclaves gather their results in a fault-tolerant way to increase security even more.
So we could say that Intel and hardware vendors in general are looking for a solution that doesn’t exist. Or we could say that this greatly improves your options when you are really worried about host compromise in your product.
Code running in an SGX enclave is measured and absolutely known at enclave launch. The fact that enclave memory is encrypted for confidentiality is unrelated.
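A toy model of that measured launch (illustrative only; real SGX hashes an update log that also covers page offsets and permissions, and the measurement is reported in a CPU-signed attestation quote, all of which is omitted here): the enclave's identity is a hash over its initial contents, so the host cannot substitute different code without the measurement changing.

```python
import hashlib

def measure_enclave(pages):
    """Toy stand-in for SGX's MRENCLAVE: a hash accumulated over the
    initial code/data pages as they are added at enclave build time."""
    h = hashlib.sha256()
    for page in pages:
        h.update(page)
    return h.hexdigest()

EXPECTED = measure_enclave([b"trusted code", b"initial data"])

# The (untrusted) host launches the enclave; a verifier compares measurements.
tampered = measure_enclave([b"trusted code + malware", b"initial data"])
assert tampered != EXPECTED  # any modification changes the enclave's identity
```

So the host indeed can't see into the enclave, but a remote party can still tell whether the enclave that launched is the one it expected.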
I don’t understand why you think trusting the hypervisor helps anything. You are still open to this attack, and to all side-channel attacks, as soon as you run any untrusted code.
Trusted computing is at least salvageable. The issue is not the technology but who owns the keys. If users can install their own keys, the technology will empower them with increased security. SGX cannot be used to empower the user like this. It's specifically designed to protect software from the user.
Importantly, a user who does not fully trust the machine administrator can still maintain integrity and confidentiality over their computation.
SGX memory encryption keys are ephemeral: they are generated at boot, and they do not need to be owned by anyone to be useful - on the contrary!
> The hardware interfaces to adjust the voltage (Section 2.2) are undocumented. To use them, we had to rely on third-party reverse-engineered partial documentation and piece it together to develop a real-world setup running on our systems, which required substantial effort on our part.
It's so strange to me. I have no idea how people manage to, or even decide to, take on tasks like that. I have trouble finding that sort of stuff even when I know exactly what I'm looking for.
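For a sense of what "reverse-engineered partial documentation" looks like in this area: even the software-visible undervolting interface (MSR 0x150, the one the earlier Plundervolt attack used) is undocumented, and its layout is known only from third-party tools such as intel-undervolt. A sketch of that encoding, assuming the community-derived layout is accurate (Intel has never published it):

```python
def undervolt_msr_value(plane, offset_mv):
    """Encode a voltage-offset command for the undocumented MSR 0x150,
    per third-party reverse engineering (NOT officially documented).

    plane:     voltage plane index (0 = CPU core, 2 = cache, ...).
    offset_mv: offset in millivolts; negative values undervolt.
    """
    # The offset is an 11-bit two's-complement value in ~1/1.024 mV
    # units, placed at bits 31:21.
    units = round(offset_mv * 1.024)
    offset_bits = (units & 0x7FF) << 21
    # Bit 63 selects the interface, bits 42:40 the plane, bit 36 = write.
    return 0x8000001100000000 | (plane << 40) | offset_bits

print(hex(undervolt_msr_value(0, -50)))  # → 0x80000011f9a00000
```

Actually applying such a value means writing it to MSR 0x150 as root (e.g. via `/dev/cpu/*/msr` on Linux) on real hardware; the paper's setup goes further still, driving the voltage regulator hardware directly.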
1. google keywords like smart card, tpm, hsm, secure element, tee, sgx, secure enclave, trustzone, etc.
2. then add the keywords attack, threat model, etc.
3. Read all the papers