Any enclave technology will be reliant on the underlying security of the processor itself. Someone was going to have to go first. Intel happened to take greater risks in the name of performance, and all of their technologies (including their first-to-market enclave technology) are suffering reputational hits as a result.
I'll also just mention that CrossTalk is the more interesting vulnerability affecting SGX that was disclosed today.
Oh huh, I see. Thanks for the papers. "Someone was going to have to go first. Intel happened to take greater risks in the name of performance, and all of their technologies (including their first-to-market enclave technology) are suffering reputational hits as a result." Very true, and a point worth making. Just curious, do you work closely with SGX/SEV? You were quick with the links!
SEV is exciting because it has a much better cost-to-benefit ratio.
It provides useful defense in depth without requiring any changes to the application stack - you can run regular VMs with syscalls, plenty of memory and high-bandwidth IO.
SGX, on the other hand, is extremely limited and notoriously hard to target. It's even harder these days - you need specialized compilers and coding techniques to mitigate a number of attacks that can't be fixed by a microcode update.
I reckon it's almost impossible to do serious SGX work these days without being under NDA with Intel such that you can work on mitigations during embargoes for the never-ending stream of vulnerabilities.
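To give a flavour of what "coding techniques" means here (a generic illustration, not a mitigation for any particular CVE): enclave code has to avoid secret-dependent branches and memory accesses, so even something as mundane as comparing a MAC tag is written without early exits:

    #include <stddef.h>
    #include <stdint.h>

    /* Constant-time comparison: the loop always visits every byte and has
     * no secret-dependent branches or early exit, so the time it takes
     * doesn't reveal where the first mismatch occurred. */
    int ct_memcmp(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];
        return diff != 0;   /* 0 if equal, 1 otherwise */
    }

A stock memcmp() bails out at the first mismatching byte, and that data-dependent early exit is exactly the kind of thing the SGX threat model forces you to care about.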
"SEV is exciting because it has a much better cost-to-benefit ratio."
I think that's not actually true.
The problem is that if you believe SGX "needs" these sorts of defences/mitigations, then so does SEV. SEV VMs are not magically immune to side channel attacks, and in fact they suffer from far more than just micro-architectural side channels, because they also leak all disk and memory access patterns, network access patterns and so on. These sorts of side channels aren't the responsibility of any CPU to fix, but they are remarkably powerful.
Sometimes it feels like SGX gets placed under a rather nasty double standard. Enclaves are "hard" because you "must" mitigate side channel attacks. SEV VMs are "easy" because nobody even tries at all. Indeed they cannot try - normal software isn't built to eliminate app level side channels. That's why enclaves are special programs to begin with.
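To make the access-pattern point concrete, here's a toy C sketch (nothing SEV- or SGX-specific): a secret-indexed table lookup leaks through which cache line or page gets touched, and the "fix" is to touch everything, which is exactly the kind of rewrite ordinary software has never had:

    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 256

    /* Leaky: which entry gets touched depends on the secret, so anyone who
     * can observe cache lines or page faults learns something about it. */
    uint32_t lookup_leaky(const uint32_t table[TABLE_SIZE], uint8_t secret)
    {
        return table[secret];
    }

    /* Oblivious: touch every entry and select the wanted one with masking
     * instead of a data-dependent address. Much slower, and exactly the
     * kind of rewrite that stock software has never had. */
    uint32_t lookup_oblivious(const uint32_t table[TABLE_SIZE], uint8_t secret)
    {
        uint32_t result = 0;
        for (size_t i = 0; i < TABLE_SIZE; i++) {
            /* mask is all-ones only when i == secret */
            uint32_t mask = 0u - (uint32_t)(i == secret);
            result |= table[i] & mask;
        }
        return result;
    }

Scale that up to every lookup a database, filesystem or crypto library does and you get a sense of why "just run the existing VM encrypted" doesn't make these channels go away.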
If you are happy to use existing non-hardened software though and just take the defence in depth argument, well, no problem - you can use SGX too. There are things like SCONE that let you run existing software inside enclaves. Unlike SEV, SGX is actually fixable when security bugs are found, so it's meaningful to talk about using it. SEV has been theatre so far: it's not renewable, there's no equivalent of TCB recovery, and nobody bothers trying to attack it any more because it's already been broken in unfixable ways.
You are right about the double standard but using enclaves means restructuring your application. Even SCONE requires porting. SEV gives you the warm and fuzzy feeling that you are doing something to improve security without having to do a lot of work, assuming that your favorite OS version has been ported to run in a "secure" VM.
SEV is useful for defense in depth, but it is absolutely not comparable to a secure enclave like SGX or TrustZone.
However, it results in significant security gains with almost no extra effort.
Secure enclaves like SGX are designed with much stronger security guarantees and are therefore hard and expensive to use, and they're regularly broken due to design issues that may be impossible to fix, making them a bad investment.
The problem is, it's not really clear it results in significant security gains.
SEV (a hypothetical version that wasn't broken) would give you encrypted memory and some basic remote attestation protocols.
The point of this is to stop a cloud vendor from peeking into your VM.
But the problem is, none of the software inside your VM is designed to resist attacks by a malicious hypervisor. That isn't just side channel attacks but also so-called Iago attacks, where a piece of software that used to be implicitly trusted by virtue of being more privileged (the kernel, or here the hypervisor) tries to break into the software it services by manipulating API responses.
For SGX it was shown that running a whole standard Linux process inside an enclave and relaying system calls outside (i.e. a similar arrangement to what SEV does) could lead to the process being hacked the moment it did something trivial like call malloc(). The reason was that the software didn't expect the kernel to manipulate it via syscall return values, because normally there is no point.
In SEV we have the same issue. Hypervisors have always sat above kernels in the privilege hierarchy, so kernels and apps simply aren't designed on the assumption of a malicious hypervisor. It's highly likely that not only side channel attacks but also Iago attacks apply there too.
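As a rough sketch of what defending against this looks like (the function and symbol names below are invented for illustration; this is not any real SDK's API), a hardened runtime can't simply hand the untrusted side's answer back to malloc() - it has to sanity-check the returned address first, which stock libc and kernels never bother to do:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical relay: asks the untrusted kernel/hypervisor to map `len`
     * bytes and returns whatever address it claims to have used. The name
     * is made up for this sketch; it is not a real SDK call. */
    extern void *untrusted_mmap(size_t len);

    /* Bounds of the memory we consider trusted (enclave pages, or the SEV
     * guest's private memory) - again just placeholder symbols. */
    extern uint8_t trusted_base[];
    extern size_t  trusted_size;

    /* The Iago failure mode: if we believe the returned pointer blindly,
     * a malicious kernel can hand back an address overlapping our own
     * stack or heap, so the next write through it corrupts trusted state.
     * A hardened runtime has to validate the answer before using it. */
    void *guarded_mmap(size_t len)
    {
        void *p = untrusted_mmap(len);
        uintptr_t lo = (uintptr_t)p;
        uintptr_t hi = lo + len;
        uintptr_t tb = (uintptr_t)trusted_base;

        if (p == NULL || hi < lo)                  /* failure or overflow */
            return NULL;
        if (lo < tb + trusted_size && hi > tb)     /* overlaps trusted memory */
            return NULL;
        return p;
    }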
Now you could say, well, one step at a time, maybe it doesn't matter, better than nothing etc. Sure. All true. At the very least it complicates the attack process and is more likely to generate an audit trail inside the cloud vendor.
But Intel has actually had a somewhat comparable solution for a long time, called TXT. It lets you start a trusted/measured hypervisor that can then supervise VMs on the box. This is in some ways stronger, as you can actually check that the hypervisor won't attack the VMs. But it's hardly used, because cloud vendors use proprietary hypervisors, and because the threat model people actually care about is "malicious cloud vendor trying to break into the VM", not "incremental security improvement of unknown utility".
I suspect Intel will implement encrypted memory for VMs at some point anyway, because the viewpoint you raise here is quite common: "I'll encrypt the RAM for free and then I'm done". Of course it needs tooling integration as well, so it's not actually quite free.
But I guess if this takes off then AMD will start to see a lot of research papers where people break into the encrypted memory space in various clever ways, and of course you're also vulnerable to any exploits in the software stack of the VM itself. That was the main reason the security community ended up with the SGX-type model: when your trusted computing base is an entire virtual machine it doesn't buy you much, because VMs have so much software in them that they get hacked all the time. The idea of enclaves is that you design your app so the bulk of the software stack is untrusted, and only small parts get to directly touch sensitive data.
I fully agree with all of what you said. There's a big difference between "hey, can you dump this VM for me please" and "hey, please implement this paper to dump this customer's VM", and it provides cloud providers with plausible deniability when law enforcement comes knocking. It's certainly useless once your threat model includes grad students with lots of time on their hands.
The "trusted hypervisor" approach is actually what my company is working on, with SEV just as a convenient attestation mechanism and nice defense in depth.
Yes, with SEV the guest OS has to assume that the emulated hardware is untrusted, and I'm quite sure that no OS was built with this threat model in mind. It's not as bad as proxying syscalls from SGX to the host kernel, because the attack surface is a lot smaller, but I bet you could find a hundred ways to compromise a stock Linux kernel running in SEV.
I'm more optimistic about unikernels written in, say, Rust, which would still present a much friendlier programming model than SGX.
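"The emulated hardware is untrusted" means, concretely, things like the toy virtio-flavoured sketch below (not real kernel code): every field the device model writes into shared memory has to be treated as attacker-controlled, read exactly once and bounds-checked, which is not how most existing drivers were written:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define RX_BUF_SIZE 2048

    /* Toy model of a descriptor that the (untrusted) device model fills in.
     * In an ordinary VM a driver can mostly assume these fields are sane;
     * under SEV the hypervisor behind them is the adversary. */
    struct shared_desc {
        volatile uint32_t len;            /* length claimed by the host */
        uint8_t           data[RX_BUF_SIZE];
    };

    /* Copy a received frame into private guest memory, refusing to believe
     * a length bigger than the buffer that is actually shared. Note the
     * single snapshot of `len`: re-reading a field the host can change
     * underneath us (a TOCTOU bug) is another classic mistake here. */
    int rx_copy(const struct shared_desc *d, uint8_t *dst, size_t dst_size)
    {
        uint32_t len = d->len;                    /* read exactly once */
        if (len > RX_BUF_SIZE || len > dst_size)
            return -1;                            /* host is lying; drop it */
        memcpy(dst, d->data, len);
        return (int)len;
    }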
Ah, my company is working with SGX. Perhaps that explains our difference of view ;)
The hypervisor ABI is quite large - perhaps not as large as a kernel's, but it doesn't really make sense to expose the same API to secure code anyway. For instance, an encrypted VM that then uploads its data somewhere else via raw TCP sockets doesn't make sense conceptually, even though the API allows it; you have to use SSL. Likewise, an encrypted VM that downloads software updates from the repositories of a cloud provider doesn't make much sense either, even though nothing in the tech stops it.
The nice thing about an enclave is that you can understand completely the data flows in and out just by examining the interface. That does mean existing software compiled for it will die with unimplemented functions, but those errors are probably telling you something meaningful - i.e. if the code attempts to call socket() or open() you could just relay those calls outside the enclave, but it makes more sense to think about what you're really trying to do.
It's a more purist approach with worse compatibility, I agree. It's really focused on finding the minimal TCB of your program and excluding all else, like how Chrome moves the rendering engine out of its TCB. I suspect many apps can be designed in such a way that almost all of the code is untrusted, but it's a bit of a frontier and takes quite a bit of skill.
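For what it's worth, the "examine the interface" point can be made concrete with a sketch like this (the boundary functions are invented for illustration; this is not the SGX SDK's EDL): instead of relaying socket() and open() one-to-one, the only thing that ever crosses the boundary is an opaque, already-encrypted blob, so the data flow is auditable from a couple of signatures:

    #include <stddef.h>
    #include <stdint.h>

    /* Implemented by untrusted host code: ships opaque bytes to the peer. */
    extern int ocall_send_blob(const uint8_t *blob, size_t len);

    /* Placeholder for whatever AEAD/session scheme the enclave uses to
     * protect data for its intended recipient - not a real library call. */
    extern size_t encrypt_for_peer(const uint8_t *msg, size_t len,
                                   uint8_t *out, size_t out_max);

    /* Runs inside the enclave: the plaintext never reaches untrusted code,
     * because encryption happens before the boundary is crossed. */
    int enclave_publish(const uint8_t *secret_msg, size_t len)
    {
        uint8_t sealed[4096];
        size_t n = encrypt_for_peer(secret_msg, len, sealed, sizeof(sealed));
        if (n == 0)
            return -1;
        return ocall_send_blob(sealed, n);   /* only ciphertext leaves */
    }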
SEV is fundamentally less secure than SGX because it only provides memory encryption but no integrity protection. Enclaves are a challenging problem given the much more aggressive threat model, but SGX is the better security model of the two IMO.
Yes - in a recent paper by Wilke et al[0], they nicely demonstrate how the lack of integrity checking can be exploited.
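The actual attack in the paper exploits the details of SEV's address-tweaked encryption mode, but the underlying problem can be shown with a much cruder toy model: if memory is only encrypted and nothing authenticates the ciphertext, an attacker who can write physical memory can change guest state without the guest ever noticing. The "cipher" below is deliberately fake and is only there to illustrate the malleability argument:

    #include <stdint.h>

    /* Deliberately fake "memory encryption": plaintext XOR a per-address
     * keystream. Real SEV uses AES with an address-based tweak; this toy
     * exists only to show that encryption without integrity is malleable. */
    static uint8_t keystream(uint64_t paddr) { return (uint8_t)(paddr * 0x9D); }
    static uint8_t enc(uint8_t pt, uint64_t paddr) { return pt ^ keystream(paddr); }
    static uint8_t dec(uint8_t ct, uint64_t paddr) { return ct ^ keystream(paddr); }

    int main(void)
    {
        uint64_t paddr = 0x1000;
        uint8_t flag = 0;                  /* guest state: "is_admin = false" */
        uint8_t ct = enc(flag, paddr);     /* what actually sits in DRAM */

        /* The host can't read the plaintext, but it can write the ciphertext.
         * With nothing authenticating memory, the guest just decrypts the
         * tampered value and carries on as if nothing happened. */
        ct ^= 0x01;

        uint8_t seen_by_guest = dec(ct, paddr);   /* now 1: "is_admin = true" */
        return seen_by_guest;                     /* tampering goes undetected */
    }

With memory integrity, or with the host-write protection that SEV-SNP is supposed to add, that ciphertext write would be detected or refused instead of silently flipping guest state.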
SEV is a very new technology, and its current (and previous) iterations have known weaknesses. The next generation of SEV will likely have SEV-SNP[1], which will prevent the host from writing guest memory or messing with the guest's page mappings.
Will probably take a few more iterations to stabilize. At that point, it should provide decent security guarantees.
Current-gen SGX has much stronger guarantees (conceptually, at least) with full memory integrity checking and less attack surface, but it suffers from CPU vulnerabilities, most of which AMD didn't have, and the integrity checks and architecture come at a large performance and development cost.
SEV has different tradeoffs that make it much more useful for real-world use cases, while still providing strong security guarantees.
I'm hesitantly excited for AMD's SEV enclave to roll out. Anyone know if it's shaping up to be any better?