Intel SGX Fuse Key0, a.k.a. Root Provisioning Key Was Extracted by Researchers (twitter.com/_markel___)
234 points by tiagod 22 days ago | 71 comments



Great! Always gives me a little more hope for computing when these sorts of things are compromised. I know at some point in the future I'll be able to break/reverse some annoying piece of proprietary garbage because of this, and I'm grateful. Each security barrier compromised is one less thing I have to deal with trying to run my software on my computers. I only have one life to live and I don't want to spend it fighting onerous security restrictions. (How many minutes have I lost waiting for 2FA or figuring out how to run unsigned drivers?) I vastly prefer the world where the king of my computer is me and maybe also some hackers vs. the one where it's Intel/Apple etc.


Insecurity is freedom, as usual.

Intel already angered tons of enthusiasts who have no use for SGX by releasing a microcode update to lock out undervolting, due to that one exploit that relied on undervolting. Now everyone suffers from increased heat and power usage to protect against something they never needed protection from.


> Now everyone suffers from increased heat and power usage

Why?

And wasn't sgx removed from newer cpus anyways? At least in desktop/mobile space...


> Why?

because undervolting usually results in lower power draw/higher power efficiency


But they said "everyone" - I imagine undervolting would only be done by people trying to hack SGX, or maybe trying to save a bit of power. So I'd assume 99.99% of users would be unaffected? Or did the CPU undervolt automatically before the update?


FWIW I have undervolted every laptop I've ever had. It's a huge win in terms of battery life and often performance as well because you can avoid ever hitting the thermal envelope. It can be a 30% boost if you have good silicon.


Yes, and that also basically killed Blu-ray support on desktop, because DRM couldn't be implemented in an acceptable manner otherwise. The fun thing, of course, is that you can probably crack any Samsung or $random_cheap_brand Blu-ray player, but there are so many of them that no group focuses on a single "big fish to fry".


Interesting how "security" these days almost always means securing corporations' bottom lines.


These days?

Intel, 1999: “The actual user of the PC — someone who can do anything they want — is the enemy.”

https://www.zdnet.com/article/the-biggest-security-threat-yo...


For some reason I didn't really get your comment at first, and here is the quote from the article you linked that cleared it up:

> Aucsmith said that more and more, software companies and content creators are targeting users as a major threat to security. The reason? With a few keystrokes, users could freely distribute "bits that have value," said Aucsmith -- copying such content as software, DVD video and other valuable data.

Ugh.

And that's not security. Nobody is compromised when someone copies those bits.


this is obviously entertaining, but the entire history of SGX is just getting owned over and over and over: https://en.wikipedia.org/wiki/Software_Guard_Extensions#List...


the failure mode of having a single provisioning key, instead of limited batch keys derived from ones held in an HSM, is that it creates an incentive structure where this outcome is inevitable.

if one key gets you all chips, then someone is going to get that key, because the payout is total dominion. I've seen this trade-off in similar protocols, and every time it's like "thanks for the advice, we're going to go with just the one, it's easier." There is one OEM/protocol I suspect actually uses diversified batch keys as its root provisioning secret, and really it's the way to go if you do business globally now.
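
roughly, the diversified approach looks like this (an illustrative Python sketch; the derivation labels and names are made up, not any vendor's actual scheme):

    import hashlib
    import hmac

    def derive_batch_key(root_key: bytes, batch_id: int) -> bytes:
        """Derive a per-batch provisioning key from a root that never
        leaves the HSM. Extracting one batch key then compromises only
        that batch of chips, not every chip ever made."""
        info = b"provisioning/batch/" + str(batch_id).encode()
        return hmac.new(root_key, info, hashlib.sha256).digest()

    # at manufacturing time, each wafer batch gets its own fused key:
    # batch_key = derive_batch_key(hsm_root_key, 42)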

What's more interesting is what the consequences will be; if there are none (it's not like the stock will suffer), then the market is saying crypto can be sabotaged with impunity.


> There is one OEM/protocol I suspect actually uses diversified batch keys as their root provisioning secret, and really it's the way to go if you do business globally now.

seems like apple, since it’s actually an important factor to them and not just to third parties?


Is there a trade-off wrt attestation privacy here - i.e. using a single root key is better for privacy?


Ideally, at least for headless server hardware, if you have physical access, you could wipe the provisioning key and replace it with your own.

Doing so should blow away the disk keys, etc.

I’ve always assumed systems that relied on a key and didn’t support that were backdoored.


Which systems support that?


with batch sizes in the millions, less so, but agreed, it's still a differentiating attribute. I would solve that higher up in the protocols that use it, as the initialization keys shouldn't be related to post-provisioning keys in a way that is verifiable to anyone except the OEM. again arguable, but it's a higher-level protocol question imo.


Not surprised at all to see where the researcher is from. I've noticed people from Eastern Bloc countries tend to have a certain mindset and skills that leads to being able to do such things.


Because they can be content and get by with what they've got without wasting time complaining. Also the mindset that there is no difference between use and misuse - intended purpose is meaningless, creative reuse is a virtue.


This ethos should be standard everywhere.


Because they managed to survive the communists and not get killed by Stalin


Or because their math education system was excellent and broadly accessible.


The testing part of math education in Russia is anything but excellent in my own experience. You have to do EVERYTHING from memory. So it tests your memory first, your actual understanding second. I've always had serious problems with this stupid memorization part.

And it's not just math. Most of the education is reading the book and then reciting the book at an exam. Nothing else. No original thinking required. Just dumb information retrieval. It's humiliating, really.

But maybe the actual teaching is better than in other countries. I don't know. It's the testing that traumatized me.


I got my education in Russia and I don't recall such a practice. Maybe some professors do that, but that's more of an exception; most test your way of thinking. They even allow you to use textbooks when preparing your answer, though usually only the books they bring to the exam.


I don't know the Russian education system specifically, but I have the impression that critics of rote memorization and pencil-and-paper testing are rarely from less successful countries, or are themselves low achievers. I think it's just sausage making. Sausage is fine.


probably also because of piracy and the cracking scene. you're too poor to buy software, so you learn to crack it. you learn how to read assembly and use a debugger, a ton of obfuscation techniques and how to circumvent them, cryptanalysis through writing keygens. I'm not from the Eastern Bloc but I picked up some serious reverse engineering skills as a teenager from the warez scene.


Same here, a couple decades ago. But along with that, being poor also limits your alternative forms of entertainment, as well as your access to the latest high-performance HW. That leads to both the boredom required to sit in front of one's computer for days on end hacking, and the motivation to figure out how to hack some game to run slightly better.

I frequently think that my parents' best parenting choice was not having a TV in our house when I was growing up. It both saved them money and removed a very common time-wasting/entertainment outlet. My own kids went from playing with Legos, creating art, or messing with science kits to playing on their phones as soon as they were given that choice. And I'm still of the opinion that providing that choice was a big mistake; they are experts on the latest social media fad, and quite ignorant about things I wish they had learned.

So, yes, environment matters. Some kids will hack even when given an Xbox and a pile of games, but I think more of them will try it if they aren't given those choices.


I'm under the impression enclave keys have been extracted before, and Intel was able to mitigate by essentially publishing a key revocation update so that models with the extracted keys were no longer trusted for remote attestation. Is that also the case with these keys?


That doesn't solve the greater issue. Let's say you bought an Intel CPU because your company requires remote attestation. Then a researcher publishes an exploit. Then Intel pushes an update that revokes keys from your model of CPU. What would you do, go happily spend $500 on a new one? Should we landfill millions of CPUs every time the mouse pulls ahead of the cat?


I disagree; this seems like fantasy. First, I don't think Intel has even done this -- "pushes an update that revokes keys from your model of CPU". If they did, there would be an enormous class-action lawsuit. Remember that most CPUs are bought by large corporations with extremely deep pockets. Even if Intel were to miraculously win the case, surely their reputation would be irreparably harmed.


If that were the case, Spectre, Meltdown, and similar vulnerabilities would surely have drawn similar class actions?


This is pretty common FWIW.

Google revokes attestation keys for Android hardware a lot, especially Widevine Level 1 keys.

Ten years into that, the public doesn't seem very exercised about it.


IIRC this is the first time a fused key has leaked; keys have leaked before, but those were firmware keys, which were encrypted with the root fused key, which, as its name suggests, is literally fused into the silicon through programmable fuses.

It's unclear whether Intel has enough spare fuses to push a new key, and whether there is a mechanism to do it in software without a specialized programming station.

If both are possible, and they can fix the leak vector with a ucode update, then they can likely revoke the key and patch this over.


I thought microcode was encrypted; how do they have its disassembly? https://xcancel.com/pic/orig/media%2FGV7BLfeXsAEJ6HC.png




Are there any details on the attack? Does it require physical access in any way?


What's the impact of this?


Some of Signal's designs for contact privacy, including in the new usernames feature, rely on trust in SGX.

If anyone (including Signal) can pretend to be a secure SGX environment, you're back to trusting Signal's personnel/operations, rather than Intel/SGX, for some of the metadata/contact privacy they've historically touted.

More info (2020): https://medium.com/@maniacbolts/signal-increases-their-relia...


Just to expand on this, since it wasn't originally clear to me from reading your post: the contact privacy feature is about using SGX enclaves to populate your known contacts on Signal. When you log into Signal for the first time, your phone locally has all of your known contacts, and the Signal app wants to know which of them already have Signal accounts. The secure enclave is a mechanism where you publish your entire contact list from your phone to the Signal servers, and they can send back the subset of those contacts that actually have Signal accounts. The point of the enclave is that this is all done in a way where Signal can't see which contacts you sent them, nor can they determine which contacts were matched and sent back to you.
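
In toy Python terms, the enclave is trusted to run something like the following (illustrative only; the real service hashes and truncates numbers and uses oblivious data structures so even memory access patterns leak nothing):

    import hashlib

    def hash_number(e164: str) -> bytes:
        # the raw number list only ever exists inside encrypted
        # enclave memory; hashing here is just for compact lookup
        return hashlib.sha256(e164.encode()).digest()

    def discover(client_contacts: list[str], registered: set[bytes]) -> list[str]:
        """Runs inside the enclave: returns which of the client's
        contacts are registered users, without the host OS seeing
        either input or the result."""
        return [c for c in client_contacts if hash_number(c) in registered]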


[flagged]


It wasn't In-Q-Tel, but it was still essentially the CIA. Signal received millions in funding from the Open Technology Fund, a program of Radio Free Asia, an organization originally founded as a CIA propaganda front.

https://web.archive.org/web/20191013092540/https://www.opent...


Citation needed?


[flagged]


Agreed. It's a fact that no US corporation will be allowed to operate unless it gives the government access to the data it holds (cf. Lavabit). I've never really understood the trust given to Signal in the tech community when a hard identity is required (phone #) and it immediately asks you to send your whole contact list to them on first run.

We know from Snowden that metadata about who is communicating with whom, and when, is one of their most valuable data streams. While Signal may not be able to turn over the contents of your messages, they absolutely retain a rich stream of metadata.


You mean the telephone contacts / contact list stored in iCloud and Google's cloud? It's probably a hard problem to solve, and this, using SGX, is Signal's best guess at an acceptable approach. I think the NSA has simpler ways to access the data in question than through Signal.


> they absolutely retain a rich stream of metadata.

* they absolutely receive a rich stream of metadata.


You raise an interesting point. In the case of Intel's SGX feature, can you propose how they might do this?


Users could generate key pairs themselves, once, and those keys could then be used to sign architectural enclaves post factum. Each enclave's cryptographic hash of its contents would only be generated then.

This way, users are only as secure as they want to be. The code would need to be signed under each user's own key, but we're talking about specialised software here.
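
A hypothetical sketch of that flow (this is not an existing Intel mechanism; Ed25519PrivateKey comes from the Python "cryptography" package):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    owner_key = Ed25519PrivateKey.generate()  # generated once, held by the user

    def sign_enclave(enclave_blob: bytes) -> bytes:
        # hash the enclave contents (an MRENCLAVE-style measurement)
        # and sign it with the owner's key; a CPU honoring this scheme
        # would launch only owner-signed enclaves
        measurement = hashlib.sha256(enclave_blob).digest()
        return owner_key.sign(measurement)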

Compare this to the de facto standard, where US corporations hold the private keys to everything hardware (off the top of my head, processor, UEFI) and everything software (SSL root keys, IP addresses, DNS).

We already have libpairip and Play Integrity on Android; let's not bring that over to desktop processors.


> Please don't use Hacker News for political or ideological battle. That tramples curiosity.

https://news.ycombinator.com/newsguidelines.html


You can't prove the reverse


it's used quite a bit in finance for things like transaction signing. the keys used to create signatures only ever exist within the SGX enclave, similar to what YubiKeys and HSMs do.

compromising SGX wouldn't suddenly open up all of these transactions to exploitation, though, since the attacker would need (presumably root) access to the machine, and the keys could always be rolled.

I'm no expert, but I suspect it would mean urgent firmware updates for anyone relying on SGX
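
the pattern is roughly this (a Python sketch where a class boundary stands in for the SGX trust boundary; uses the "cryptography" package):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    class SigningEnclave:
        """Stand-in for an SGX enclave: the signing key is created
        inside and is never serialized out."""

        def __init__(self) -> None:
            self._key = Ed25519PrivateKey.generate()

        def public_key_bytes(self) -> bytes:
            return self._key.public_key().public_bytes(
                Encoding.Raw, PublicFormat.Raw)

        def sign(self, tx: bytes) -> bytes:
            # the host passes a transaction in and gets a signature
            # back; even root on the host shouldn't expose the key
            return self._key.sign(tx)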


May help break DRM? But not sure


PC Blu-ray players all require SGX. Now that's permanently broken.


I think it was already broken?

37C3 - Full AACSess: Exposing and Exploiting AACSv2 Uhd Drm for Your Viewing Pleasure https://www.youtube.com/watch?v=SEBuiecLZGg


Apparently that's been on the way out for a while now: https://news.ycombinator.com/item?id=32442894

Now I'm curious, though: have there really never been any software Blu-ray players supporting AMD?


Blu-ray is fine; it's Ultra HD Blu-ray (aka 4K Blu-ray) that needs SGX for officially licensed playback. No AMD support there. But non-licensed playback is fine.


For people who feel happy about this because "DRM bad, piracy good, capitalism evil", this is actually more complicated than it seems.

Yes, features like this can be used by corporations to enforce DRM and ensure that you aren't running unapproved or modified versions of their code. However, this works both ways, because it can also be used by customers to make sure that corporations are actually running the code they're claiming to be running, and that's pretty useful.

Signal is a prominent example of this. There is no way to securely implement a way to find people by their phone number without a solution like SGX. You can either remove the feature completely, making your app hard to use and making users more likely to choose closed-source solutions, or you can implement it insecurely, becoming susceptible to hackers and law enforcement actions. With SGX, users can just verify that the code you're running does not, in fact, send all their data to the NSA behind their backs.


> With SGX, users can just verify that the code you're running does not, in fact, send all their data to the NSA behind their backs

This is a big claim which needs extraordinary proof if we're going to rely on that assumption for security. Remember, we're talking about an organization that did things like sneak in backdoors in encryption standards or infiltrate Google's internal network to passively extract user data en masse. We should just assume the NSA has gotten the keys from Intel a long time ago, voluntarily or not.

The NSA doesn't respect the rules. Nor do they think about the wider consequences of their actions. They're a reckless and irresponsible organization with an enormous budget. If they want something from Intel, they will have it.


There are firms that sell phone-scanning utilities that pull Signal messages off (unlocked?) iOS and Android devices (and presumably desktops).

Even if Signal used an enclave key to encrypt the local chat database, SGX doesn't protect the enclave from the keyboard, mouse, or display drivers, so someone could simply write a screen scraper that displayed and captured each message of each thread.


I think you're misunderstanding what SGX is used for in the Signal context: It's only used server-side. The clients have no real use for it, because they don't have to attest the software they're running, only the keys they possess.

The server however is inherently untrusted, and the users of the server can benefit from some form of attestation of the software it's running. SGX tries to provide this, as the siblings in this thread explain.


> With SGX, users can just verify that the code you're running does not, in fact, send all their data to the NSA behind their backs.

I'm genuinely unclear. How exactly does a user accomplish this? What role does SGX play in this?


Obviously the remote code could send the data to whatever three-letter agency the operator wants, so the remote operator needs to publish the server's code. But how do you prove that the remote operator is running the code they claim they are?

That's what SGX does: it lets remote systems provide cryptographic proof that they are running certain code, including the ability to have a private key protected by the enclave, so you can public-key encrypt your data, send it to the remote server, and know that only the code they've already published is processing your data.
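
Client-side, the flow looks roughly like this (a sketch with hypothetical helpers; a real verifier walks the quote's signature chain up to Intel's attestation root, which is exactly the trust anchor this leak undermines):

    def verify_quote_signature(quote: dict) -> bool:
        # stand-in for a real quote verifier that checks the
        # signature chain up to Intel's root key
        return quote.get("sig_ok", False)

    def encrypt_to(pubkey: bytes, data: bytes) -> bytes:
        # stand-in for authenticated public-key encryption to the
        # enclave's attested key
        return b"ct:" + pubkey + data

    def attest_then_send(quote: dict, expected_mrenclave: bytes, data: bytes) -> bytes:
        if not verify_quote_signature(quote):
            raise ValueError("quote not signed by genuine SGX hardware")
        if quote["mrenclave"] != expected_mrenclave:
            raise ValueError("enclave is not running the published code")
        # the enclave's public key is bound into the attested report,
        # so only the measured code can decrypt what we send
        return encrypt_to(quote["report_pubkey"], data)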


But presumably, now that the SGX root key is published, anyone can still do all of the above, but in a simulated machine rather than on legit Intel hardware, which means they see everything that SGX is supposed to hide from them.


The theory is that when you send your contact list to the service, the service can prove to you that it's running a specified set-intersection application (and nothing else!) in the environment that will have access to that data, using this attestation mechanism.

If it modified the application to make it log or leak your data somehow, it could no longer pass that attestation step.

This should work as long as there is no hardware or side channel attack that lets the service operator (or someone it rents hardware from) defeat the SGX security guarantees, as long as there's no backdoor in the enclave implementation, and as long as the signing keys are not leaked or extracted, and are only used in accordance with the published policies.


Signal users should be happy about this, because it exposes SGX enclaves as the false security they are. It's not like this is the first SGX exploit.

You should assume that Signal engineers, if they chose to, could access the user data protected by SGX, just like they could log metadata about your message sending and receiving patterns. You only have their word that they don't. The NSA could certainly bypass SGX given access to the server.

I suppose there may be some legal benefit in putting the data "out of reach" - it would be hard to prove in court that you were capable of leveraging an SGX exploit to provide the requested data. But the NSA/others will happily take possession of the hardware and do it themselves.


The difference in the number of situations where it's a corporation enforcing its will on you vs. you being able to trust them is massive. If Signal had used a different method, I don't think there would be a single example of "customers making sure that corporations are actually running the code they claim to be running".


Here's a comment from some number of years ago suggesting an implementation of private set intersection for Signal's phone number / contact search without the use of SGX: https://news.ycombinator.com/item?id=11289223
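
For reference, the linked idea is a private set intersection protocol. The core trick, in a toy Python sketch (insecure toy parameters; in the real protocol each party keeps its exponent secret and only masked values cross the wire):

    import hashlib

    P = 2**127 - 1  # toy prime modulus; real PSI uses elliptic curves

    def h(item: str) -> int:
        return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

    def intersect(client_items, server_items, a, b):
        # each side exponentiates hashed items with its own secret;
        # since (h^a)^b == (h^b)^a mod P, doubly-masked values can be
        # matched without either side revealing its raw set
        server_masked = {pow(pow(h(y), b, P), a, P) for y in server_items}
        client_masked = {pow(pow(h(x), a, P), b, P): x for x in client_items}
        return [v for k, v in client_masked.items() if k in server_masked]

    # intersect(["+15551234"], ["+15551234", "+15550000"], a=123457, b=765431)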


> However, this works both ways, because it can also be used by customers to make sure that corporations are actually running the code they're claiming to be running, and that's pretty useful.

This still requires trusting Intel.


> it can also be used by customers to make sure that corporations are actually running the code they're claiming to be running, and that's pretty useful.

Has any company beyond Signal ever done this? Ever?


Apple is leaning into this for cloud AI: https://security.apple.com/blog/private-cloud-compute/

Probably enough to be genuinely hard for an APT to backdoor even with insider help. Still, ultimately the security relies on Apple hardware signed with Apple keys.


Dual authentication is common in the defense space.


[flagged]


Which step?



