Hacker News

Unfortunately that's not the case.

All remote attestation technology is rooted in a PKI (the DCA certificate authority, in this case). There's some data somewhere that simply asserts that a particular key was generated inside a CPU, and everything is chained off that. There's currently no good way to prove this step, so you just have to take it on faith. Forge such an assertion and you can sign statements that device X is actually a Y; it's game over, and it's not detectable remotely.
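To make the "chained off that" point concrete, here's a toy model of such a chain. This is purely illustrative: HMAC stands in for the real asymmetric signatures, and all names are made up. The point it shows is that a verifier can only check that an endorsement chains to a root it already trusts; it cannot check how the endorsed key was actually generated.

```python
# Toy model of an attestation chain (illustration only, NOT real crypto:
# HMAC stands in for asymmetric signatures, names are hypothetical).
import hmac
import hashlib

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, msg), sig)

# The CA asserts "this device key was generated inside a genuine CPU".
ca_key = b"root-of-trust"              # held by the vendor's PKI
device_key = b"device-key-material"    # actually generated on-die
endorsement = sign(ca_key, b"genuine-cpu:" + device_key)

# The verifier checks the chain terminates at the root it trusts...
assert verify(ca_key, b"genuine-cpu:" + device_key, endorsement)

# ...but whoever holds the CA key can endorse a key generated anywhere,
# and the result is indistinguishable to a remote verifier:
forged_key = b"attacker-key-made-outside-any-cpu"
forged_endorsement = sign(ca_key, b"genuine-cpu:" + forged_key)
assert verify(ca_key, b"genuine-cpu:" + forged_key, forged_endorsement)
```

Both assertions pass; nothing in the verification step distinguishes the genuine endorsement from the forged one.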

Therefore, you must take on faith the organization providing the root of trust, i.e. the CPU vendor. There's no way around it. Apple does the best it can within this constraint by involving numerous employees, and there's a third-party auditor they hired, but that auditor is ultimately engaging in a process controlled by Apple. It's a good start, but the whole thing assumes either that Apple employees will become whistleblowers if given a sufficiently powerful order, or that the third-party auditor will be willing and able to shut down Apple Intelligence if they aren't satisfied with the audit. Given Apple's legal resources and famously leak-proof operation, is that a convincing proposition?

Conventional confidential computing conceptually works because the people designing and selling the CPUs are different from the people deploying them to run confidential workloads. The deployers can't forge an attestation (assuming an absence of bugs) because they don't have access to the root signing keys. The CPU makers could, theoretically, but they have no reason to, because they aren't running any confidential workloads, so there's no data to steal. And in practice they are constrained by basic problems like not knowing which CPUs the deployers actually have, not being able to force changes to other people's hardware, not being able to intercept the network connections, and so on.

So you need a higher authority that can force them to conspire, which in practice means only the US government.

In this case, Apple is doing everything right except that the root of trust for everything is Apple itself. They can publish in their log an entry that claims to be an Apple CPU but whose key was generated outside the manufacturing process, and that's all it takes to dismantle the entire architecture. Apple knows this and is doing the best it can within the "don't team up with competitors" constraint it is obviously placed under. But trust is ultimately a human thing, and the purpose of corporations is to let us abstract, and to some extent anthropomorphize, large groups. So I'm not totally sure this works, socially.






Hi Mike! Long time no see.

> simply asserts that a particular key was generated inside a CPU ... There's currently no good way to prove this step

Yes, but there are better and worse ways to do it. Here's how I think about it. I know you know some of this but I'll write it out for other HN readers as well.

Let's start with the supply chain for an SoC's master key. A master key that only uses entropy from an on-die PUF is vulnerable to mistakes and attacks on the chip design as well as on the process technology. An on-die master key memory that is provisioned by the fab, during packaging, or by the eventual buyer of the SoC is vulnerable to mistakes and attacks during that provisioning step.

I think state-of-the-art would be something like:

- an on-die key memory, where the storage is in the vias, using antifuse technology that prevents readout of the bits via X-ray,

- provisioned using multiple entropy sources controlled by different supply chains, such as (1) an on-die PUF, (2) an on-die TRNG, and (3) an off-die TRNG controlled by the eventual buyer,

- provisioned by the eventual buyer and not earlier.
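The multiple-entropy-source idea above can be sketched as a key derivation that mixes all sources, so that an attacker who controls any single supply chain still cannot predict the master key. This is a minimal sketch under assumed sizes and names; a real provisioning flow would use a proper KDF (e.g. HKDF) inside the device.

```python
# Sketch: derive a master key from independent entropy sources so that
# compromise of any one supply chain does not reveal the key.
# Function name and 32-byte sizes are illustrative assumptions.
import hashlib
import secrets

def derive_master_key(puf_bits: bytes, ondie_trng: bytes, buyer_trng: bytes) -> bytes:
    # Extract step: hash the length-prefixed concatenation, so all three
    # inputs must be known to reconstruct the output.
    h = hashlib.sha256()
    for src in (puf_bits, ondie_trng, buyer_trng):
        h.update(len(src).to_bytes(4, "big"))  # length prefix avoids ambiguity
        h.update(src)
    return h.digest()

mk = derive_master_key(
    secrets.token_bytes(32),  # (1) on-die PUF response
    secrets.token_bytes(32),  # (2) on-die TRNG output
    secrets.token_bytes(32),  # (3) off-die TRNG, controlled by the buyer
)
assert len(mk) == 32
```

Because the sources come from different supply chains, an attacker would have to subvert all three to know the resulting key.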

As for the cryptographic remote attestation claim itself, such as a TPM Quote, it doesn't have to be backed by only one signature.
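One way to read "doesn't have to be only one signature" is a quorum scheme: the claim is accepted only if k of n independent parties endorse it. A hedged sketch, again with HMAC standing in for real signatures and entirely hypothetical signer names:

```python
# Sketch: a claim accepted only with a quorum (k-of-n) of endorsements
# from independent signers. HMAC models signatures; names are hypothetical.
import hmac
import hashlib

SIGNER_KEYS = {"fab": b"k1", "packaging": b"k2", "buyer": b"k3"}

def sign(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def quorum_verify(msg: bytes, sigs: dict, threshold: int) -> bool:
    # Count how many known signers produced a valid signature over msg.
    good = sum(
        1 for name, sig in sigs.items()
        if name in SIGNER_KEYS
        and hmac.compare_digest(sign(SIGNER_KEYS[name], msg), sig)
    )
    return good >= threshold

claim = b"key K was provisioned into SoC serial 1234"
sigs = {name: sign(key, claim) for name, key in SIGNER_KEYS.items()}

assert quorum_verify(claim, sigs, threshold=2)                    # quorum met
assert not quorum_verify(claim, {"fab": sigs["fab"]}, threshold=2)  # one signer is not enough
```

With a threshold of two, no single party in the chain can unilaterally forge an accepted claim.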

As for detectability, discoverability, and deterrence: transparency logs make targeted attacks discoverable. If all relevant cryptographic claims are tlogged, including claims related to the inventory and provisioning of master keys, an attacker has to circumvent quite a lot of safeguards to remain undetected.
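The mechanism behind a tlog is a Merkle tree over the logged claims: a client checks an inclusion proof against a published root, so a claim can't later be quietly dropped or altered. A simplified sketch (no domain separation or consistency proofs, as real logs such as Certificate Transparency use):

```python
# Sketch: Merkle tree inclusion proofs over logged claims (simplified).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _pad(level):
    # Duplicate the last node when the level has odd length.
    return level + [level[-1]] if len(level) % 2 else level

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _pad(level)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    proof, level = [], [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _pad(level)
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, sibling-is-left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf, proof, root):
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

claims = [b"provisioned key A", b"provisioned key B", b"provisioned key C"]
root = merkle_root(claims)          # published widely, e.g. to monitors
proof = inclusion_proof(claims, 1)  # proof that claim B is in the log
assert verify_inclusion(b"provisioned key B", proof, root)
```

Because the root is published, serving a different view of the log to a targeted victim becomes detectable by anyone comparing roots.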

Finally, if we assume that the attacker is actually at Apple - management, a team, a disgruntled employee, saboteurs employed by competitors - what this type of architecture does is force the attacker to make explicit claims that are more easily falsifiable than they would be without such an architecture. And multiple people need to conspire in order for an attack to succeed.


Hello! I'm afraid I don't recognize the username but glad to know we've met :) Feel free to email me if you'd like to greet under another name.

Let's agree that Apple is doing state-of-the-art work in terms of internal manufacturing controls and making those auditable. I think the more interesting and tricky part is actually how to manage software evolution. This is something I've brought up with [potential] customers in the past when working with them on SGX-related projects: for this to make sense, socially, there has to be a third-party audit not only of the software in the abstract but of each version of the software. And that really needs to be enforced by the client, which means every change to the software needs to be audited. This is usually a non-starter for most companies because they're afraid it'd kill velocity, so for my own experiments I looked at in-process sandboxing and the like to try to restrict the TCB even within the remotely attested address space.
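"Enforced by the client" means something like the following sketch: the client pins an allowlist of audited build measurements and refuses to talk to an enclave reporting anything else. All names and the measurement format are illustrative assumptions; real systems carry the measurement inside a signed quote structure.

```python
# Sketch: client-side enforcement of per-version audits. The client only
# accepts an enclave whose reported code measurement appears on a list of
# independently audited builds. Names and formats are hypothetical.
import hashlib

# Published by the third-party auditor, one entry per audited release.
AUDITED_MEASUREMENTS = {
    hashlib.sha256(b"inference-server v1.2.0 binary").hexdigest(),
    hashlib.sha256(b"inference-server v1.2.1 binary").hexdigest(),
}

def client_accepts(reported_measurement: str) -> bool:
    # Enforcement lives in the client: an unaudited build is rejected
    # even if it carries a valid vendor signature.
    return reported_measurement in AUDITED_MEASUREMENTS

audited = hashlib.sha256(b"inference-server v1.2.1 binary").hexdigest()
unaudited = hashlib.sha256(b"inference-server v1.3.0 binary").hexdigest()
assert client_accepts(audited)
assert not client_accepts(unaudited)
```

The velocity problem is visible right in the sketch: every release blocks on the auditor updating the allowlist before clients will accept it.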

In this case Apple may have an advantage because the software is "just" doing inference, I guess, which isn't likely to be advantageous to keep secret, and inference logic is fairly stable, small, and inherently sandboxable. It should be easy to get it audited. For more general applications of confidential/private computing, though, it's definitely an issue.

The issue of multiple Apple devs conspiring isn't so unlikely in my view. Bear in mind that end-to-end encryption made similar sorts of promises that tech firm employees can't read your messages, but the moment WhatsApp decided that combating "rumors" was the progressive thing to do, they added a forwarding counter to messages so they could stop forwarding chains. Cryptography 101: your adversary should not be able to detect that you're repeating yourself; failed, just like that. The more plausible failure mode here is therefore not spies or saboteurs but rather a deliberate weakening of the software boundary to leak data to Apple, because executives decide they have a moral duty to do so. This doesn't even necessarily have to be kept secret: WhatsApp's E2E forwarding policy is documented on their website, and they announced it in a blog post. My experience is that 99% of even tech workers believe WhatsApp gives you normal cryptographic guarantees and is un-censorable as a consequence, which just isn't the case.

Still, all this does lay the foundations for much stronger and more trustworthy systems, even if not every problem is addressed right away.




