
> Well, your messages have to be congruent with the expected messages from the real hardware,

Yes, which is why you need the keys that are used to make real hardware. Provided you have those very secret and well-protected keys (you are Apple, being compelled by the government), that's not an issue.

> and your fake hardware has to register with the real load balancers to receive user requests.

Absolutely, but we're Apple in this scenario, so that's "easy".




I think I misunderstood your point -- I took it to mean someone impersonating a server, but you're saying it's Apple. So the part you're attacking (as Apple) is:

> The process involves multiple Apple teams that cross-check data from independent sources, and the process is further monitored by a third-party observer not affiliated with Apple. At the end, a certificate is issued for keys rooted in the Secure Enclave UID for each PCC node.

So, in your scenario, the in-house certificate issuer is compelled to provide certificates for unverified hardware, which will then be loaded with a parallel software stack that is malicious but reports the attestation ID of a verified stack.
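
To make that failure mode concrete, here's a minimal Python sketch (hypothetical names and data structures, not Apple's actual protocol) of why client-side attestation checks can't detect this on their own: the client can only verify that a certificate chains to the issuer it trusts and that the claimed measurement matches a published release, so a compelled issuer signing for rogue hardware that reports a genuine measurement passes both checks.

    # Minimal sketch, not Apple's real protocol: all names here are hypothetical.
    from dataclasses import dataclass

    # Measurements the client trusts because they appear in the public
    # transparency log of released PCC builds.
    PUBLISHED_MEASUREMENTS = {"sha256:release-build-1234"}

    @dataclass
    class NodeCertificate:
        node_public_key: str
        attested_measurement: str  # what the node *claims* to be running
        issuer: str                # who signed this certificate

    def client_accepts(cert: NodeCertificate, trusted_issuer: str) -> bool:
        # The client can only check that (1) the cert chains to the issuer it
        # trusts and (2) the claimed measurement matches a published release.
        # It cannot observe what software the node is actually running.
        return (cert.issuer == trusted_issuer
                and cert.attested_measurement in PUBLISHED_MEASUREMENTS)

    # Honest node: verified hardware running a genuine release.
    honest = NodeCertificate("pk-honest", "sha256:release-build-1234", "AppleAttestationCA")

    # Compelled scenario: the issuer signs a cert for unverified hardware that
    # runs a parallel (malicious) stack but reports a genuine measurement.
    rogue = NodeCertificate("pk-rogue", "sha256:release-build-1234", "AppleAttestationCA")

    print(client_accepts(honest, "AppleAttestationCA"))  # True
    print(client_accepts(rogue, "AppleAttestationCA"))   # True -- indistinguishable to the client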

So far, so good. Seems like a lot of people involved, but probably still just tens of people, so maybe possible.

Are you envisioning this being done on every server, so there are no real ones in use? Or a subset? Just for sampling, or also with a way to circumvent user diffusion so you can target specific users?

It's an interesting thought exercise, but the likelihood of getting anything of real value from this without leaks or errors that expose the program seems pretty small.


Well, the broader context of the proposal is as an alternative to the original comment in this HN thread:

> Well, a 89-day "update-and-revert" schedule will take care of those pesky auditors asking too many questions about NSA's backdoor or CCP's backdoor and all that.

As a backdoor, I'm taking it to mean they can compel assistance from inside Apple; it's not a hack where they have to break in and hide it from everyone (though certainly they would want to keep it to as few people as possible).

At least in the NSA's case, I think it would be reasonable to imagine that they are limited to compromising a subset of users' data: specific users they've gotten court orders against, or something like that. So yes, a subset of nodes, and also circumventing user diffusion (which sounds like traffic analysis, right up the NSA's alley, or a court order to whatever third party Apple has providing the service).


> So yes, a subset of nodes, and also circumventing user diffusion (which sounds like traffic analysis, right up the NSA's alley, or a court order to whatever third party Apple has providing the service).

How does traffic analysis help? The client picks the server to send the query to, and encrypts with that particular server's public key. I guess maybe you have the load balancer identify the target and only provide compromised servers to it? But then every single load balancer has to have the list of targeted individuals and compromised servers, which seems problematic for secrecy at scale.
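
For reference, here's roughly the flow I have in mind, as a rough Python sketch (invented names and stand-in crypto, not Apple's real API): the client picks a node from the candidate set it is offered and encrypts the query to that node's public key, so the routing layer can neither read nor silently redirect it. Targeting a user would mean skewing the candidate set that particular user is offered toward compromised nodes.

    # Toy sketch of the client-side flow (hypothetical names, stand-in crypto).
    import hashlib
    import random

    def encrypt_to_node(node_public_key: str, plaintext: str) -> bytes:
        # Stand-in for HPKE-style encryption to the key attested in the node's
        # certificate; only that node can decrypt.
        keystream = hashlib.sha256(node_public_key.encode()).digest() * 8
        return bytes(p ^ k for p, k in zip(plaintext.encode(), keystream))

    def send_query(candidate_nodes: dict, query: str):
        # candidate_nodes maps node id -> attested node public key.
        node_id = random.choice(list(candidate_nodes))
        ciphertext = encrypt_to_node(candidate_nodes[node_id], query)
        # Only node_id and the ciphertext are visible to the routing layer.
        return node_id, ciphertext

    honest_pool = {"node-a": "pk-a", "node-b": "pk-b", "node-c": "pk-c"}
    # Targeting a specific user means only ever offering them compromised nodes:
    targeted_pool = {"node-x": "pk-compromised"}

    print(send_query(honest_pool, "summarize my email"))
    print(send_query(targeted_pool, "summarize my email"))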


The load balancer is blind to which client sent a request via OHTTP. You need to do something to bypass that (traffic analysis, or ordering the OHTTP relay provider to help).
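
As a toy illustration (not the OHTTP spec; names invented), the split works like this: the third-party relay sees who the client is but only an opaque encrypted blob, while the Apple-side gateway sees the request but only the relay as its source. Linking a query to a targeted user therefore needs traffic analysis or the relay's cooperation.

    # Toy model of the OHTTP-style split, not the actual spec.
    def client_sends(client_ip: str, encrypted_request: bytes) -> dict:
        # What leaves the client: its IP (at the network layer) plus a blob
        # encrypted to the gateway/node, which the relay cannot open.
        return {"source": client_ip, "payload": encrypted_request}

    def relay_view(msg: dict) -> dict:
        # Third-party relay: knows the client identity, not the contents.
        return {"sees_client": msg["source"], "sees_payload": "<opaque ciphertext>"}

    def gateway_view(msg: dict, relay_ip: str) -> dict:
        # Apple-side gateway/load balancer: can decapsulate the request, but
        # the source it observes is the relay, not the client.
        return {"sees_client": relay_ip, "sees_payload": msg["payload"]}

    msg = client_sends("203.0.113.7", b"opaque-ciphertext")
    print(relay_view(msg))
    print(gateway_view(msg, relay_ip="198.51.100.1"))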

> But then every single load balancer has to have the list of targeted individuals and compromised servers, which seems problematic for secrecy at scale.

It really doesn't. This seems well within the realm of what you could achieve with a court order without it becoming public.



