
> One option is to release a malicious software update, sign it, publish the signature on the public chain,

In this option it would be Apple releasing a malicious software update?

> If they can still create new hardware, it seems likely whoever is making that hardware must still have access to the keys...

This option reads like the keys are stored in apple-keys.txt

> Both of these attacks are outside the "threat model" proposed, because they are broad compromises against the entire PCC infrastructure

They mentioned that an in-depth write-up will be shared later; might they still address this concern there? Your wording sounds so certain, but this is just a broad overview. How are you so sure?




> In this option it would be Apple releasing a malicious software update?

Yes, compelled by something like the all writs act (if the US is the one doing the compelling).

> This option reads like the keys are stored in apple-keys.txt

They probably are. That file might live on a CD in a safe that requires two people to open, but ultimately it's a short chunk of binary data that exists somewhere (until it is destroyed)...

> might they still address this concern in writing?

Can I say beyond all doubt that this won't happen? Of course not.

On the first approach I'm quite confident though, because it's both the type of attack they discuss in their initial press release, and pretty fundamental to and explicitly allowed by their model of updating the software.

On the second approach I'm reasonably confident. Like the first issue, it's the type of issue they were discussing in their initial press release. Unlike the first issue, it's not something that is explicitly allowed in the model. If Apple could find a way to make the attestation keys irretrievable while still allowing themselves to manufacture hardware, I believe they'd do it - I just don't see a method, and I think it would have warranted a mention if they had one. I tried to convey a level of uncertainty in my original writing on this one because I could be missing a way to solve it.

Ultimately I'd rather over-correct now than have people start thinking this is going to be more secure than it is, and then have some fraction of them miss the extremely likely follow-up of "and we could be compelled to work around our security".



I'm well aware of these. They don't solve the problem at hand. You need a way to put keys into new hardware. Thus you need a way to get keys out of wherever you've stored your cryptographic material, so it can't live in an HSM (or it can, if the HSM holds a master key that signs child keys - but in that case the attack only needs a signed child key).
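That child-key point can be sketched as a toy model. All the names here are made up, and HMAC stands in for a real asymmetric signature scheme - the point is only the trust-chain structure, not the cryptography:

```python
import hashlib
import hmac


def sign(key: bytes, message: bytes) -> bytes:
    # Toy "signature": HMAC stands in for a real asymmetric signature.
    return hmac.new(key, message, hashlib.sha256).digest()


def verify(key: bytes, message: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), signature)


# The master key never leaves the "HSM"...
master_key = b"master-key-inside-hsm"

# ...but manufacturing needs a child key it can actually use,
# along with the master's signature over it.
child_key = b"child-key-on-factory-floor"
child_cert = sign(master_key, child_key)

# Provisioning a legitimate device's identity:
device_uid = b"new-secure-enclave-uid"
device_cert = sign(child_key, device_uid)

# A verifier trusting the master accepts the whole chain:
assert verify(master_key, child_key, child_cert)
assert verify(child_key, device_uid, device_cert)

# The attack: whoever holds the signed child key can vouch for a
# fake enclave exactly the same way - no master key required.
evil_uid = b"simulated-enclave-uid"
evil_cert = sign(child_key, evil_uid)
assert verify(child_key, evil_uid, evil_cert)
```

Locking the master key in an HSM doesn't help here, because the extracted material only needs to be one rung down the chain.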


From: https://support.apple.com/guide/security/secure-enclave-sec5...

“A randomly generated UID is fused into the SoC at manufacturing time. Starting with A9 SoCs, the UID is generated by the Secure Enclave TRNG during manufacturing and written to the fuses using a software process that runs entirely in the Secure Enclave. This process protects the UID from being visible outside the device during manufacturing and therefore isn’t available for access or storage by Apple or any of its suppliers.“


Sure, and even Apple can't imitate a different server that they made.

They're making new servers though. Take the keys that are used to vouch for the UIDs in actual secure enclaves, and use them to vouch for the UID in your evil simulated "secure" enclave. Your simulated secure enclave doesn't present as any particular real secure enclave, it just presents as a newly made secure enclave that Apple has vouched for as being a secure enclave.
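From the remote client's side, the gap looks like this toy sketch (all names hypothetical, HMAC standing in for Apple's real vouching signature): the check proves "Apple vouched for this UID", not "this UID lives inside genuine secure-enclave silicon".

```python
import hashlib
import hmac

# Hypothetical stand-in for the key Apple uses to vouch for
# newly manufactured secure enclaves.
APPLE_VOUCHING_KEY = b"apple-manufacturing-key"


def vouch(uid: bytes) -> bytes:
    # Toy signature: HMAC stands in for a real asymmetric signature.
    return hmac.new(APPLE_VOUCHING_KEY, uid, hashlib.sha256).digest()


def client_verifies(uid: bytes, voucher: bytes) -> bool:
    # All a remote client can check is that Apple vouched for this UID.
    return hmac.compare_digest(vouch(uid), voucher)


real_uid = b"uid-fused-in-real-silicon"
fake_uid = b"uid-of-simulated-enclave"

# Both pass, because nothing in the check distinguishes a UID fused
# into real silicon from one invented for a simulated enclave that
# Apple (compelled or otherwise) chose to vouch for.
assert client_verifies(real_uid, vouch(real_uid))
assert client_verifies(fake_uid, vouch(fake_uid))
```

The UID being unextractable from real devices is beside the point; the simulated enclave never claims to be any particular real device.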



