
It attests the server, not the client.



Why does this matter? Are you saying that creates legitimate uses? If so, what are they?


Imagine you're connecting to a VPN server. The provider says they don't store logs - today you just have to trust them, plus whatever audits they may have commissioned in the past. Remote attestation gives you information about the actual VM running the service: the provider publishes the machine image (or the steps to build it), and what you get back is cryptographic proof, backed by the hardware, that the machine you're talking to is really running that software and not something else the provider has deployed.
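Roughly, the verification on the client side looks something like this (the report layout and signing key here are made up purely to illustrate the two checks; it's not any specific vendor's protocol):

    # Minimal sketch: check the hardware-rooted signature over the attestation
    # report, then compare the reported measurement against the hash of the
    # image the provider published. Report format and key are hypothetical.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_attestation(report: bytes, signature: bytes,
                           vendor_key: Ed25519PublicKey,
                           expected_image: bytes) -> bool:
        # 1. The report must be signed by a key rooted in the vendor's hardware.
        try:
            vendor_key.verify(signature, report)
        except InvalidSignature:
            return False
        # 2. The measurement in the report must equal the hash of the image
        #    (or reproducible build output) that the provider gave you.
        reported = report[:32]  # assumed layout: measurement in first 32 bytes
        return reported == hashlib.sha256(expected_image).digest()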

And this is just a consumer-facing use; most remote attestation use cases have nothing to do with the end consumer. The goal is usually to make sure that some workload or device a company owns (say, a VM in the cloud, or an IoT device running in a field somewhere) is what the company expects it to be.


> cryptographic proof backed by the hardware that the machine you're talking to is indeed running that software

Assuming both the software and hardware are secure, where hardware security includes resistance to attacks like fault injection and so on. This is fascinating to me, because it sounds like you could break the system by flipping a single bit (a load?) somewhere inside the SoC. So I guess this topic is about to get more popular.


Yes, assuming some degree of security of the full TCB (both the software and hardware). Of course, no system is bulletproof, but it's slowly going in the right direction - and hopefully since Spectre/Meltdown people are taking more care :)

At the end of the day, it doesn't have to be perfect, just another layer in the Swiss cheese model.

> able to flip one single bit (load?) somewhere inside the SoC

TBH, attacks at that level (i.e., within the SoC) are fairly difficult to pull off, so they're probably not in most threat models.


Yes, but I just wanted to highlight that there's a completely new attack surface here - hardware-level hacking, which is not the same as attacking hardware via software (like Spectre/Meltdown) :)


True! Advanced systems already have some degree of fault tolerance built in, and the encryption of enclave memory is there to guard against cold boot attacks and the like.


That sure sounds great, but it trivially fails: the VPN provider can just stick a second machine on the network port that correlates input/output packets and builds logs from that. So at best it protects against incompetence, not malice.


And how exactly would that second machine decrypt the packets, given that the keys are only available to the valid one?


"correlates input/output packets"

Timing and size will get you very far, especially since smaller packets are used for TCP setup. This is a worry even with good-faith Tor servers and malicious upstreams, even across multiple hops. It's certainly doable for a single machine whose network card you're watching directly.
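To make that concrete: once you can see both sides of the box, the matching itself is almost trivial. Toy sketch with made-up packet records - real attacks are statistical, this is just the intuition:

    # Match packets entering and leaving the attested machine by size and by
    # nearly identical timestamps. Field layout is hypothetical.
    def correlate(inbound, outbound, window=0.05, slack=64):
        # inbound/outbound: lists of (timestamp_s, size_bytes, flow_id)
        matches = []
        for t_in, size_in, flow_in in inbound:
            for t_out, size_out, flow_out in outbound:
                # An encrypted relay mostly preserves sizes and adds little
                # delay, so similar-sized, near-simultaneous packets link the
                # client-side flow to the upstream flow.
                if abs(size_out - size_in) <= slack and 0 <= t_out - t_in <= window:
                    matches.append((flow_in, flow_out))
        return matches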


Sure, but I'm not proposing a mitigation for that. It doesn't have to be a silver bullet to be useful. I'm arguing that the ability to know that your peer is running some expected code is useful as an extra layer of security for some use cases.


But it's already generally accepted that RA is useful as an extra layer of security in many cases.

The problem is that this layer of "security" steps over the traditional demarcation point of the protocol, destroying the customary separation of authority. So examples of "good things" that could be done with it aren't particularly relevant to the larger discussion about the threat posed by its widespread adoption with manufacturer-escrowed keys.

If owners controlled their devices' keys, we could still have things like auditing organizations that enroll the servers of VPN providers, so that you could verify a remote computer was running specific code, relying on your trust in the auditor. But with the current design, those auditors are the device manufacturers themselves, and the ability to inspect is applied universally across every device. This will inevitably be abused to make less powerful parties less secure and to undermine their own interests. That is the problem.


Signal uses SGX in a similar fashion to ensure that the contact hashes are only uploaded to trusted servers running good code.

https://signal.org/blog/private-contact-discovery/
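The hashing alone doesn't protect much, which is exactly why the attested enclave matters: phone numbers live in such a small space that the hashes can be brute-forced offline. Rough sketch of the problem (hypothetical helper, not Signal's actual code):

    import hashlib

    # Hypothetical helper: a truncated hash of a contact's phone number.
    def contact_hash(phone_number: str) -> bytes:
        return hashlib.sha256(phone_number.encode()).digest()[:10]

    # A dishonest server could precompute contact_hash(n) for every plausible
    # number and invert uploads by table lookup. Remote attestation is the
    # extra layer: the client only uploads after verifying it is talking to an
    # SGX enclave running the published contact-discovery code.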


That's not any better.



