a) How the hell am I supposed to do so? It seems fairly arcane and in need of some higher-level abstractions. fortanix seems like it could be good here?
b) What the implications are. What's it like to maintain an enclave? What do I lose in terms of portability, debug-ability, etc?
It reminds me of the GPU to some extent - it's this thing in my computer that's super cool, and every time I think about using it, it looks like a pain in the ass.
The first part is what most people fixate on when they first look at Conclave. But an equally important thing is actually the second part - remote attestation.
The thing a lot of people seem to miss is that for most non-mobile-phone use-cases, running code inside an enclave is only really valuable if there is a _user_ somewhere who needs to interact with it and who needs to be able to reason about what will happen to their information when they send it to the enclave.
So it's not enough to write an enclave, you also have to "wire" it to the users, who will typically be different people/organisations from the organisation that is hosting the enclave. And there needs to be an intuitive way for them to encode their preferences - e.g. "I will only connect to an enclave that is running this specific code (that I have audited)" or "I will only connect to enclaves that have been signed by three of the following five firms whom I trust to have verified the enclave's behaviour"... that sort of thing.
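To make that concrete, the kind of "preference" I mean is basically a predicate like this toy Python sketch (the measurement values, firm names and the three-of-five rule are all made up for illustration - a real attestation report is of course far richer than this):

```python
# Toy attestation policy: connect only if the enclave's code measurement is
# one we've audited ourselves, OR enough trusted firms have endorsed it.
AUDITED_MEASUREMENTS = {"measurement-of-build-i-audited"}
TRUSTED_FIRMS = {"firm-a", "firm-b", "firm-c", "firm-d", "firm-e"}
REQUIRED_ENDORSEMENTS = 3   # "signed by three of the following five firms"

def should_connect(measurement: str, endorsed_by: set[str]) -> bool:
    """Decide whether we're willing to send our data to this enclave."""
    if measurement in AUDITED_MEASUREMENTS:
        return True
    return len(endorsed_by & TRUSTED_FIRMS) >= REQUIRED_ENDORSEMENTS

# e.g. should_connect("unknown-build", {"firm-a", "firm-c", "firm-e"}) -> True
```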
Like I have service A and service B. A is going to talk to B, and has some secret that identifies it (maybe a private key for mTLS). I'd like for A to be able to talk to B without having access to that secret - so it would pass a message into the enclave, get a signed message out of it, and then proceed as normal.
Would that not be reasonable? Or I guess maybe I'd want to attest that the signing service is what I expect?
Exactly. If you have a threat-model where you want to limit access to your secrets from a limited code path, you need to attest that only specific, signed code is running within the enclave that can access the secrets. You might only need this to satisfy your own curiosity, but in practice it probably is something you need to prove to your internal security team, third-party auditor, or even direct to a customer.
Thanks for clearing that up.
And the idea, therefore, is that A sends the secret to an enclave, which inspects the secret and, if correct, signs a message to say "I, the enclave, have verified that A does indeed know the secret". (Apologies if I've oversimplified or got this wrong).
But assuming the above is roughly correct then, without remote attestation, you have a problem, and it comes down to the question of who's running the verification code, I think.
If A is running the checker, why should B believe what it says? If A is running the code, they can just change it so that it signs the statement irrespective of whether it's true.
But if B is running the checker, then A will have just sent their secret to a service run by B, violating the requirement that A doesn't send the secret to B!
You could ask a third party to run it of course. But if you don't want to introduce that third party then this is where remote attestation comes in:
If A is running the checker in an enclave then RA allows B to verify that the "A knows the secret" message really did come from a codepath that has actually done the right thing. In this scenario, B is the "user" of the enclave from the perspective of reliance on Remote Attestation. (Aside: I know it's weird to think of an actor that doesn't interact with a system to be a user of it, so I'm probably using poor terminology when I say 'user'... it's more that, in this scenario, B is _relying_ on properties of the enclave such as its attestation)
And if B is running the checker then RA allows A to verify that it won't just turn around and reveal the secret to B.
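To make the first case concrete, B's side of that check ends up looking roughly like this sketch (Python; a plain dict stands in for the attestation report and an HMAC stands in for the enclave's signature - in reality the report comes from the attestation infrastructure and verification is considerably more involved):

```python
import hashlib, hmac

EXPECTED_MEASUREMENT = "measurement-of-the-checker-code-B-audited"

def b_accepts_claim(report: dict, claim: bytes, claim_sig: bytes,
                    enclave_key: bytes) -> bool:
    """B only believes "A knows the secret" if (1) the attestation report says
    the expected checker code produced the signing key, and (2) the claim was
    really signed with that key. (HMAC stands in for a public-key signature.)"""
    if report.get("measurement") != EXPECTED_MEASUREMENT:
        return False                                    # wrong / unaudited code
    if report.get("key_fingerprint") != hashlib.sha256(enclave_key).hexdigest():
        return False                                    # key not bound to that code
    expected = hmac.new(enclave_key, claim, hashlib.sha256).digest()
    return hmac.compare_digest(expected, claim_sig)     # claim really from the enclave
```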
The secret never leaves the enclave.
The goal here is that if an attacker can execute code within A's operating system, they cannot exfiltrate the secret. They might be able to get the enclave to sign on their behalf, but that's significantly better than an exposed secret - simply removing the attacker from the box would be sufficient to remediate, versus having to rotate the secret.
To mitigate impersonation, I suppose one could do a number of things involving a second key, but I think that this simple version demonstrates the value of having a signing oracle. This is actually not an atypical approach, just not using SGX - I know companies that keep their signing keys in separate processes, which are mutually seccomp'd such that they can only pipe messages to each other, e.g. to sign apps before publishing. But in the SGX case you have a much stronger guarantee than just seccomp.
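That pattern looks roughly like this (a POSIX-only Python sketch using fork and a pipe, skipping the actual seccomp filter, with HMAC standing in for a real signature scheme - the point is just that the key only ever exists in the oracle process and the app can only ask it for signatures):

```python
import hashlib, hmac, os, secrets

def signing_oracle(req_fd: int, resp_fd: int) -> None:
    """Runs in the child process: holds the key and answers signing requests
    over a pipe. In the setups described above this process would additionally
    be seccomp'd down to read/write on these fds and nothing else."""
    key = secrets.token_bytes(32)                       # only ever exists in this process
    with os.fdopen(req_fd, "rb") as req, os.fdopen(resp_fd, "wb") as resp:
        while (hdr := req.read(2)):                     # 2-byte length prefix, then message
            msg = req.read(int.from_bytes(hdr, "big"))
            resp.write(hmac.new(key, msg, hashlib.sha256).digest())  # the "signature"
            resp.flush()

if __name__ == "__main__":
    req_r, req_w = os.pipe()
    resp_r, resp_w = os.pipe()
    if os.fork() == 0:                                  # child: the signing oracle
        os.close(req_w); os.close(resp_r)
        signing_oracle(req_r, resp_w)
        os._exit(0)
    os.close(req_r); os.close(resp_w)                   # parent: the app that wants signatures
    with os.fdopen(req_w, "wb") as to_oracle, os.fdopen(resp_r, "rb") as from_oracle:
        msg = b"release v1.2.3"
        to_oracle.write(len(msg).to_bytes(2, "big") + msg)
        to_oracle.flush()
        print("signature:", from_oracle.read(32).hex()) # parent never saw the key
    os.wait()
```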
So to me, the only problem attestation solves here is the case where the attacker is somehow inside the SGX enclave. But the much more likely scenario is that they aren't, and that they just ask the oracle to sign on their behalf - because B can't verify that A is the one asking to sign. At least given that a single entity deploys both the software in the enclave and the service that interacts with it, that seems to be the case to me - in this scenario A, B, and the software in the enclave are all deployed by me, barring malicious action to interfere with that - but again, the most likely scenario is that the attacker just owned the box and has a regular user on there.
But also, A can prevent an attacker from impersonating it to the oracle by having another keypair shared between it and the enclave - and then it becomes a matter of protecting that key in memory from an attacker who can almost certainly scrape your memory, which is a hard thing to do.
So you end up with:
System 0, A: Key 0
System 0, Enclave: Key 1, Key 0
System 1, B: Key 1
Key 0 is used to 'auth' A to Enclave. Key 1 is used to auth A to B (via enclave oracle).
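In code, that scheme is roughly the following (Python sketch, with HMAC standing in for whatever real authentication/signature primitives you'd actually use; the key names match the list above):

```python
import hashlib, hmac

KEY_0 = b"shared between A and the enclave"   # auths A -> enclave
KEY_1 = b"shared between the enclave and B"   # auths (A, via the enclave) -> B

def a_make_request(msg: bytes):
    """A: tag the request so the enclave knows it really came from A (Key 0)."""
    return msg, hmac.new(KEY_0, msg, hashlib.sha256).digest()

def enclave_sign(msg: bytes, a_tag: bytes):
    """Enclave: refuse to sign for anyone who can't produce a valid Key 0 tag."""
    if not hmac.compare_digest(a_tag, hmac.new(KEY_0, msg, hashlib.sha256).digest()):
        return None                            # attacker on the box without Key 0: no signature
    return hmac.new(KEY_1, msg, hashlib.sha256).digest()

def b_verify(msg: bytes, tag: bytes) -> bool:
    """B: accept the message only if it carries a valid Key 1 tag (held only by the enclave)."""
    return hmac.compare_digest(tag, hmac.new(KEY_1, msg, hashlib.sha256).digest())

msg, a_tag = a_make_request(b"hello B, this is A")
assert b_verify(msg, enclave_sign(msg, a_tag))
```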
This is just my perspective on it.
We don't presently support it in Conclave but SGX (which we use) does, I believe, support the idea of, in effect, packaging up a secret in a program and then encrypting it so it can only run in an enclave, and hence keeping the secret safe even when running on malicious hosts. I'd need to check, but I suspect you're right that there are situations where RA isn't required.
But to take your specific example (and maybe I'm still misunderstanding), does your scheme actually work in practice? Let's assume a simple model where the enclave runs on A's machine, so we can assume that requests to sign something come from A. This avoids us having to worry about A authenticating to the enclave, which would just lead us to a circularity (how does A protect the key it uses for authentication, etc.).
And now we introduce the attacker, as in your scenario. As you say, eliminating the attacker removes their ongoing ability to interact with the enclave, since it expects to communicate only with locally running processes.
Except... if the attacker is on your box, they could simply take a copy of the enclave! And simply run it on their own machine. It's possible SGX contains the ability to lock an enclave to a specific CPU, in which case your scheme seems like it should work (to my untutored eye... I lead the Conclave team but am by no means an expert)... but I'm not actually sure it works that way. I'll look into it.
Yeah, this is the part I'm assuming isn't possible, perhaps out of ignorance. I believe that, at least in SGX's case, locking to a CPU is possible because SGX exposes per-CPU keys, and the ability to derive secrets from those keys. So if you moved the enclave (I actually have no idea how moving an enclave works either, fwiw) it would no longer be valid.
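My (quite possibly wrong) mental model of that is a sealing key derived from a per-CPU secret plus the enclave's measurement, something like this sketch (Python, HMAC as the derivation function - the real SGX EGETKEY derivation takes more inputs, and the per-CPU secret is fused into the die rather than being visible to software):

```python
import hashlib, hmac

def sealing_key(cpu_fused_secret: bytes, enclave_measurement: bytes) -> bytes:
    """Sketch of a sealing key bound to both this CPU and this enclave build."""
    return hmac.new(cpu_fused_secret, b"seal" + enclave_measurement,
                    hashlib.sha256).digest()

# Seal on CPU 1; the identical enclave on CPU 2 derives a different key and
# cannot unseal the blob - which is why "just copy the enclave to my own
# machine" shouldn't get the attacker the secret.
k1 = sealing_key(b"cpu-1-fused-secret", b"measurement-of-this-enclave")
k2 = sealing_key(b"cpu-2-fused-secret", b"measurement-of-this-enclave")
assert k1 != k2
```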
But yeah, this all kinda goes to "I have no idea what I'm doing with enclaves" lol - this is just the use case I have: keeping a secret stored in one so that an attacker cannot exfiltrate it.
> While Secretive uses the Secure Enclave for key storage, it still relies on Keychain APIs to access them. Keychain restricts reads of keys to the app (and specifically, the bundle ID) that created them.
Doesn't that mean the keys leave the enclave?
"When you store a private key in the Secure Enclave, you never actually handle the key, making it difficult for the key to become compromised. Instead, you instruct the Secure Enclave to create the key, securely store it, and perform operations with it. You receive only the output of these operations, such as encrypted data or a cryptographic signature verification outcome."
Thus I find most analysis and comments like this, based only on motivation and incentives, very lacking.
Add to that, to be more specific to this particular topic: secure enclaves are designed not only for DRM but for many other critical applications (that are actually far more important than DRM and are/were the key motivating use cases). The enclave, or the general concept, is the basis of the security guarantee for the iPhone's fingerprint or Face ID, and of the confidentiality of key material in various end-to-end encryption schemes, which allows things like the phone-as-a-security-key.
You might want to check out Parsec (also from Arm).
Imagine websites that required proprietary Google Chrome on Win/OSX on bare metal - no VM, no extensions, no third party clients, no automation, etc. Imagine mandatory ads, unsaveable images, client-side form verification, completely unaccountable code, etc.
Protocols are the proper manner of allowing unrelated parties to transact at arm's length, and technologies such as remote attestation would completely destroy that.
No need to imagine, we already have that: it's called fingerprinting. Mainly via WebGL.
This paper from LowRISC outlines the possibilities pretty well: https://riscv.org/wp-content/uploads/2017/05/Wed0930riscv201...
Referencing that not to plug RISC-V, but solely because it's a good explanation. I would guess Intel, AMD, and ARM have plans or some existing work going on.
As far as ARM is concerned, all Android 11 and later versions are going to support it.
Unfortunately Intel's MPX was a failure, for various reasons; however, the rest of the CPU vendors seem to be going down the path of supporting the C machine model, as the only way left to fix the language.
This is the opposite of respecting privacy (unless you mean e.g. the privacy of DRM code); all of this stuff is "trusted" as in TCG.
Right, but it can order you to turn over your private keys, like it did to Lavabit. Intel might instead voluntarily hand over a signed backdoored firmware, targeted at a specific set of CPUs, rather than the keys themselves, which could do more damage if the government lost control of them.
Similarly I don't suppose it would be much harder for the NSA to get access to the physical servers that Signal is running its software on. Perhaps you could say that merely having the access credentials wouldn't be enough information for the NSA to take control of Signal's servers without them noticing, but any cloud provider that wants to do business with the government surely has special backdoors that make this easy, when presented with an NSL.
Honestly it's not bad until or unless we get actually fast FHE chips. I think we're a ways away from that.
HE currently looks like a dead end anyway. A slowdown of more than 10^6 is going to be hard to overcome.
In order for this to be "perverted as a way to grow DRM", it would need some other main function.
The main purpose of this is to allow vendors, rather than users, to control personal computers.
It will be great for finally allowing truly unskippable ads, ads that track your eyeballs to make sure you are looking, text that cannot be copied, etc.
To resist this situation a device manufacturer could emerge with a privacy-first experience and solutions to proxy just the right amount of legitimacy to the outside systems (PSTN, payment, etc.) with an international legal structure optimised for privacy, akin to what wealthy people do already. A sort of anti walled-garden shared infrastructure. Technically I could see an offering providing mobile # to VOIP forwarding, data-only phone with mesh networking > wifi connectivity preference, MAC shuffling, darknet/VPN mixing, payment proxy, global snailmail proxy network, curated (well tested for pop sites) lightweight browser with reduced attack surface and a mobile network only as a last-resort connectivity paradigm (with dynamic software SIM and network operator proxying). Open source of course. Issue being, infrastructure would be expensive to offer and if it were ever popular there'd be political pushback. I guess privacy will increasingly be the exclusive domain of the rich.
We've already lost.
It's possible right now to have an FPGA running an open-source RISC-V implementation. The software isn't there yet, but I expect this to change as more RISC-V boards get on the market. God knows how useful this 'bootleg' computer will be, but it's a foundation to build on, at least. There's already work on porting Debian to RISC-V.
And with a free, open ISA, it's not impossible for smaller, independent manufacturers to crop up and produce their own chips, especially given that the US is now investing in chip-fab. (if there is demand. If nobody cares that they don't own their machines, we were screwed from the start.)
I see a lot more hardware content on YouTube nowadays. Channels like Ben Eater are educating people on how computers work, and more accessible FPGAs may help create a more flexible environment for hardware.
In the end, you still have to trust the root domain, which means you still need to trust system firmware and the secure boot chain. I don't see how this is any different from the existing TrustZone stuff other than having more flexible memory management.
In other words, it just makes it viable to run random third-party code in "secure" mode, but it doesn't do much to increase trust. You still need to trust the device manufacturer and boot chain. You could achieve a similar level of trust on any system with a regular hypervisor part of the device firmware, and which has standard memory encryption (which we should all be using for everything by now, it's a travesty we aren't).
So e.g. if you're thinking of using this to run VMs in the cloud without having to trust the provider, you have to trust that the device manufacturer has implemented all this properly in the root monitor code, and that they have working secure boot, and that the person who physically owns the machine can't break this security.
Personally, I think this is a dead-end model. Real secure compute in the cloud isn't going to happen. If someone else owns the machine, they own the machine. For non-general-compute use cases, like securing portable devices against theft or seizure, or DRM, the solution is to put the secure stuff in a separate CPU. There's a reason Apple is using a dedicated Secure Enclave Processor to implement all the critical device security stuff, and why every game console DRM scheme has a security coprocessor handling the crypto.
You need to trust the hardware, and you also need the firmware.
However, there is no need to trust the firmware, because the trust is provided by hardware and not by firmware.
And there are certificates that allow you to check whether the components are trustworthy.
This means that a malicious user can affect the availability of the realm, but not its integrity or the confidentiality of your data (assuming the hardware is trustworthy).
That said, there is not much information in the article, so I may be wrong - this is just my interpretation.
No, the trust is provided by the firmware. That's what runs in the root domain. Monitor firmware. It's right there in the article. The Root domain is the ultimate trust level.
> And there are certificates that allow to check if the components are trustworthy.
Unless there are very specific hardware-backed attestation features baked into the chips - none of which have been mentioned, and which would also rely on the inability to compromise the system through hardware mechanisms, plus a bug-free monitor, which is itself an entire difficult-to-solve problem - those certificates only allow you to check that the certificate owner has, at one point, provided key material to be used by the system, not that the software running on it continues to be what was intended.
You break into the monitor, you steal the attestation keys, and then you get to keep "proving" your "trustworthiness" to everyone else. This is how on-line DRM is broken every time.
It is unclear what AMD intends to do here for their TSV stacked mega-cache thing. Perhaps they'll declare the TSVs as not practically snoopable, similar to how on-die metal layer wires are treated now...
Intel had an SGX bug where the GPU could access the last 16 bytes of every 64-byte cache line. If you need the GPU enabled, you have no choice but to design your enclave to use only the first 48 bytes of each cache line. Fortunately, if you don't need the GPU, whether or not the GPU is enabled (among other things) is measured and included in the attestation quote, so clients can simply choose to refuse to talk to servers with the GPU enabled...
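The workaround is essentially a data-layout trick: spread your secrets so nothing sensitive ever lands in the last 16 bytes of a line. A conceptual Python sketch of that packing (in a real enclave this would live in the allocator/struct layout, not in application code):

```python
LINE = 64      # cache line size
USABLE = 48    # first 48 bytes per line; the last 16 were GPU-readable in the buggy case

def pack_secret(secret: bytes) -> bytearray:
    """Spread secret bytes so none land in the last 16 bytes of any cache line."""
    lines = -(-len(secret) // USABLE)            # ceiling division
    buf = bytearray(lines * LINE)                # padding bytes stay zero
    for i in range(lines):
        chunk = secret[i * USABLE:(i + 1) * USABLE]
        buf[i * LINE:i * LINE + len(chunk)] = chunk
    return buf

def unpack_secret(buf: bytes, length: int) -> bytes:
    out = b"".join(buf[i:i + USABLE] for i in range(0, len(buf), LINE))
    return out[:length]

s = b"x" * 100
assert unpack_secret(pack_secret(s), len(s)) == s
```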
Stuff like this makes me nauseous. This is not a good thing. Stop it.
That indeed can't be good.