
I wanted to draw attention to this since it has strong "Web Integrity API" vibes, but operates at a level that imho might be even more worrisome than what goes on between your browser and a web server: the TLS connection layer.

The standard aims to integrate "platform attestation tokens" (like those provided by Arm CPU platforms, for instance: https://www.ietf.org/archive/id/draft-tschofenig-rats-psa-to...) into TLS handshake client metadata, which would enable remote services to refuse to serve clients that do not pass SafetyNet/Play Integrity/Web Integrity-like attestation schemes before any application-layer data has been exchanged at all.

It is being drafted here: https://datatracker.ietf.org/doc/draft-fossati-tls-attestati...
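
Roughly, the gate this enables looks like the sketch below. This is my own illustration, not the draft's actual mechanism - as I understand it the real proposal carries signed attestation tokens (like the PSA/EAT tokens above) in the handshake and involves a separate verifier; here a bare signature over a nonce stands in for the token, and handshake_gate / TRUSTED_ATTESTER_KEYS are made-up names:

  # Illustrative only: a plain signature stands in for the attestation token.
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric import ed25519

  # Public keys of the attesters (i.e. device vendors) this server trusts.
  TRUSTED_ATTESTER_KEYS: list[ed25519.Ed25519PublicKey] = []

  def handshake_gate(evidence: bytes, nonce: bytes) -> bool:
      """Decide, during the TLS handshake, whether to keep talking to the
      client - before a single byte of HTTP has been exchanged."""
      for attester in TRUSTED_ATTESTER_KEYS:
          try:
              attester.verify(evidence, nonce)
              return True
          except InvalidSignature:
              continue
      return False  # server sends a fatal alert and closes the connection

The refusal happens below the application layer, so there's nothing for an extension, proxy or user script to even see, let alone work around.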




This might be less bad - being part of the protocol, browsers might implement it with clunky, obtrusive UIs that hinder adoption, causing it to fail and the ecosystem to move on. Whereas putting the capabilities under flexible software control, as Google's treacherous proposal does, allows for the slowly-boiling-frog dynamic we've already experienced with IP- and browser-leak-based discrimination. (These days even Amazon is hitting me with CAPTCHAs just to search for products. Sigh.)

But every remote attestation scheme is fundamentally an evil attack on personal computing, couched in legitimate-sounding language like "security". The actual dynamic is that of a villain in a story arc who starts off with good intentions but keeps wanting more power to implement their desires, and thereby ends up actually evil.


Enclaves and remote attestation aren't bad per se; the problem is the schemes where the user of the device is considered an "untrusted party".

It makes more sense to consider your cloud provider an untrusted party; however, I'd not rely on these technologies anyway - in practice they might not be as secure as advertised.


In line with my second point, we can always see "legitimate" uses for more power. But novel centralized power generally accrues to those who already have power - hence your hedging about cloud providers ("however I'd not rely on these technologies anyway").

But yes, I've also put forth the argument that RA could be a neutral feature if it were under the control of the owner of the computer. Either prohibit manufacturers from bundling privileged keys with devices, or require that the chip be able to import and export all attestation keys through an appropriate maintenance mode. Then an end user would always be free to generate new keys or export bundled ones and run the attestation protocols in software, thus preserving their freedom to run whatever software they'd like.
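
To make that concrete, here's a rough sketch of what owner-controlled attestation amounts to, assuming a toy signed-nonce protocol rather than any particular TPM/PSA format (owner_key, claimed_state etc. are made-up names):

  import os
  from cryptography.hazmat.primitives.asymmetric import ed25519

  # Key generated by (or exported to) the owner - not burned in by the vendor.
  owner_key = ed25519.Ed25519PrivateKey.generate()

  verifier_nonce = os.urandom(16)          # challenge from the relying party
  claimed_state = b"whatever software I choose to run"

  # "Evidence" produced entirely in software, under the owner's control.
  evidence = owner_key.sign(verifier_nonce + claimed_state)

  # The relying party can verify the signature, but it only proves possession
  # of the key - not what is actually running. Which is the point: the owner,
  # not the manufacturer, decides what gets attested.
  owner_key.public_key().verify(evidence, verifier_nonce + claimed_state)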

But the owner-controlled dynamic described above would not result in widely used protocols for doing attestation; rather, it would remain a bespoke thing for specific use cases - say, an internal corporate website that can only be accessed by corporate laptops. That's why I think it's correct to judge every attempt at remote attestation for the general web and/or consumer machines as "evil".

Furthermore, there's a funny thing that wasn't part of these debates back when both remote attestation and secure boot were only abstract threats: secure boot already covers most of the legitimate desires for remote attestation. For example, when deploying a server at a datacenter, remote attestation would be nice for knowing it hasn't been tampered with - but secure boot already provides that assurance. The main difference seems to be that if a software bug allows arbitrary code execution, secure boot falls down immediately, while remote attestation is supposed to catch that (although who knows whether implementations would actually live up to it).



