Remote Attestation TLS (RA-TLS) (confidentialcomputing.io)
47 points by c0l0 10 months ago | 34 comments



I wanted to draw attention to this since it has strong "Web Integrity API" vibes, but works at a level that imho might be even more worrisome than what goes on between your browser and a web server - the TLS connection layer.

The standard aims to integrate "platform attestation tokens" (like those provided by Arm CPU platforms, for instance: https://www.ietf.org/archive/id/draft-tschofenig-rats-psa-to...) into TLS handshake client metadata, which will enable remote services to deny serving clients that do not pass SafetyNet/Play Integrity/Web Integrity-like attestation schemes before any application layer data has been exchanged at all.

It is being drafted here: https://datatracker.ietf.org/doc/draft-fossati-tls-attestati...
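
To make the gate concrete, here's a rough Python sketch of the verifier-side logic such a handshake extension enables. The token format and field names are my own invention, not the draft's, and a real scheme would use asymmetric platform keys rather than the shared MAC key that keeps this short:

    import hashlib, hmac, os

    DEVICE_KEY = os.urandom(32)  # stand-in for a hardware-held attestation key

    def make_evidence(sw_measurement: bytes, nonce: bytes) -> dict:
        """Client side: bind (measurement, nonce) to the platform key."""
        mac = hmac.new(DEVICE_KEY, sw_measurement + nonce, hashlib.sha256)
        return {"measurement": sw_measurement.hex(),
                "nonce": nonce.hex(),
                "mac": mac.hexdigest()}

    def verifier_accepts(evidence: dict, nonce: bytes, allowed: set) -> bool:
        """Server side: check freshness, integrity, and an allow-list policy
        before any application data is exchanged."""
        payload = bytes.fromhex(evidence["measurement"]) + nonce
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return (evidence["nonce"] == nonce.hex()
                and hmac.compare_digest(evidence["mac"], expected)
                and evidence["measurement"] in allowed)  # the discrimination step

    nonce = os.urandom(16)  # server-chosen, per handshake
    build = hashlib.sha256(b"approved-browser-build").digest()
    ev = make_evidence(build, nonce)
    print(verifier_accepts(ev, nonce, {build.hex()}))  # True only for allow-listed builds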


This might be less bad - being part of the protocol, browsers might implement this with much more obtuse UIs that hinder adoption, causing it to fail and the ecosystem to move on. Whereas putting the capabilities under flexible software control like Google's treacherous proposal allows for this slowly-boiling-frog dynamic we've already experienced with IP and browser leak based discrimination. (Like these days even Amazon is hitting me with CAPTCHAs to search products. sigh)

But every remote attestation scheme is fundamentally an evil attack on personal computing, couched in legitimate sounding language like "security". The actual dynamic is that of a villain in a story arc who starts off with good intentions, but just wants increasingly more power to implement their desires, causing them to actually be evil.


Enclaves and remote attestation aren't bad per se; what's bad are the schemes where the user of the device is considered an "untrusted party".

It makes more sense to consider your cloud provider an untrusted party. However, I'd not rely on these technologies anyway; in practice they might not be as secure as advertised.


In line with my second point, we can always see "legitimate" uses for more power. But novel centralized power generally accrues to those who already have power - hence your hedging about cloud providers ("however I'd not rely on these technologies anyway").

But yes, I've also put forth the argument that RA could be a neutral feature if it were under the control of the owner of the computer. Either prohibit manufacturers from bundling privileged keys with devices, or require that the chip be able to import and export all attestation keys through an appropriate maintenance mode. Then an end user would always be free to generate new keys or export bundled ones and run the attestation protocols in software, thus preserving their freedom to run whatever software they'd like.
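
As a rough sketch of that owner-controlled flow (using the third-party 'cryptography' package; the key handling and names here are hypothetical, just to show the shape of it):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    # The owner generates a fresh attestation key; nothing comes from the vendor.
    owner_key = ec.generate_private_key(ec.SECP256R1())

    # Exportable, as the proposal requires: the owner can move or replace it.
    exported_pem = owner_key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption())

    def attest(key, software_image: bytes):
        """'Measure' the software and sign the measurement."""
        measurement = hashlib.sha256(software_image).digest()
        return measurement, key.sign(measurement, ec.ECDSA(hashes.SHA256()))

    def verify(pub, measurement: bytes, sig: bytes, expected: bytes) -> bool:
        """A verifier the owner chose checks signature and measurement."""
        try:
            pub.verify(sig, measurement, ec.ECDSA(hashes.SHA256()))
        except InvalidSignature:
            return False
        return measurement == expected

    image = b"whatever software the owner chooses to run"
    m, sig = attest(owner_key, image)
    print(verify(owner_key.public_key(), m, sig, hashlib.sha256(image).digest()))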

But the above dynamic would not result in widely used protocols for doing attestations, rather they would be a bespoke thing for specific use cases - like say an internal corporate website that can only be accessed by corporate laptops. Hence why I think it's correct to judge every attempt at remote attestation for the general web and/or consumer machines as "evil".

Furthermore, here's the funny thing that wasn't part of these debates back when both secure attestation and secure boot were only abstract threats: secure boot already covers most of the legitimate desires for secure attestation. For example, deploying a server at a datacenter - secure attestation would be nice for knowing it's not been tampered with, but secure boot already provides that assurance. The main difference would seem to be that if a software bug allows arbitrary code execution, secure boot falls down immediately while secure attestation is supposed to catch that (although who knows if implementations would actually live up to that).
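
For reference, the measured-boot mechanism that provides that assurance is just a hash chain - a toy version, with illustrative stage names:

    import hashlib

    def extend(pcr: bytes, component: bytes) -> bytes:
        # TPM semantics: PCR_new = SHA-256(PCR_old || SHA-256(component))
        return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

    pcr = bytes(32)  # PCRs start at all zeros
    for stage in (b"firmware", b"bootloader", b"kernel", b"initrd"):
        pcr = extend(pcr, stage)

    print(pcr.hex())  # deterministic: tampering with any stage changes this value
    # The gap noted above: a runtime code-execution bug after boot leaves this
    # value untouched, which is what runtime attestation claims to address.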


Note that in this case, the "untrusted" party is the cloud provider, not the user. It's meant to attest servers. Fortunately, newer Intel consumer-class processors don't support SGX anymore.


Unfortunately, that is only correct for the example cited in the article I linked to. The draft contains attestation possibilities for both clients and servers, per sections 4.1 (TLS Client as Attester) and 4.2 (TLS Server as Attester).

Nothing in this prevents servers from discriminating against clients that fail to produce TPM-backed attestation that they are running exclusively the exact software builds, cryptographically signed by a central arbiter of trust, that the server expects/requires.


That's why I mentioned Intel deprecating SGX: newer desktop CPUs don't support it, so it'd be impractical for websites to discriminate based on that.

Also note that servers running in the cloud can naturally be clients. It's only bad if this is meant to restrict the power of the consumer over their own (non-rented) devices.


I'm sorry to say that I am not convinced. In recent years, Microsoft has introduced [Virtualisation-based Security](https://learn.microsoft.com/en-us/windows-hardware/design/de...) - funnily enough while recycling one of security experts' most-loathed acronyms, "VBS" - for OS components that are deemed critical, such as Windows Defender Credential Guard. If your OS, which is now also a hypervisor, shards off some "critical" computation - such as computing RA-TLS nonces - into a privileged domain that your regular user domain may not access, and that privileged, TPM-attested domain is protected by all the secure/measured boot shenanigans that are going on with Windows 11 and its mandatory TPM, it's as game over as it was with SGX directly in your CPU (barring some grave implementation bugs that would make everything vulnerable to state-level actors regardless).

Also, Arm silicon, for example, has TrustZone, which afaiui serves a purpose similar to SGX's.


What's the advantage (I can't see any) of this scheme over using client certificates where appropriate?

It seems to me that the "attestation" doesn't add anything to the TLS session setup process (other than the ability to positively identify a device across multiple sites, which is a downside rather than an advantage).

If a site (e.g., a corporate VPN) needs to ensure that a particular client is authorized to access it, client certificates and user authentication already provide such confirmation.
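
For the record, that existing mechanism is a few lines of setup - a minimal sketch with Python's ssl module, file names being placeholders:

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.pem", "server.key")  # server's own identity
    ctx.verify_mode = ssl.CERT_REQUIRED              # demand a client certificate
    ctx.load_verify_locations("corp-ca.pem")         # trust only the corporate CA

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        conn, _addr = srv.accept()
        with ctx.wrap_socket(conn, server_side=True) as tls:
            # The handshake already failed for clients without a corp-issued
            # cert; no attestation or device fingerprint was needed to get here.
            print(tls.getpeercert()["subject"])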

I don't see why this is necessary for sites that, while they may wish to encrypt sessions with TLS, don't have any reason to track (and potentially block) devices that don't possess the properties (e.g., no ad blocker) they want to see.

ISTM that this is antithetical to the peer-to-peer nature of the Internet and is just another way to restrict access for those who are deemed to be acting against the financial interests of large corporations.

I have no interest in donating my CPU cycles and bandwidth to those who want to identify/categorize/advertise/restrict my access to the open internet.

This could also kill off tools like curl, yt-dlp and other non-browser tools.

Altogether a terrible idea. Or am I missing something important?


These APIs can be combined quite nicely with 'secure enclave' processing technologies like Intel SGX. The idea is that (at the very least) a processor can attest that the process communicating with your server is an unmodified copy of your binary. A further version might be that the data associated with that process remains encrypted and other processes are unable to read it. Apparently it mostly works! But as is usual with modern processors, there are many side channels.

This has some cool use cases! For example, Microsoft SQL Server can already use SGX to implement additional data security. A user can run SQL queries, including pushdown filters, on data the administrator of the server can never access, because certain columns of the data are encrypted and never held unencrypted in memory. If you're at a tech company and have ever worried about rogue administrators accessing the data of your users, these enclaves are great for that (in theory)!
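
The trust split looks roughly like this - a toy sketch using the third-party 'cryptography' package; real systems like SQL Server's enclave-backed encrypted columns are far more involved:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    client_key = AESGCM.generate_key(bit_length=256)  # never leaves the client
    aead = AESGCM(client_key)

    def encrypt_cell(value: str) -> bytes:
        nonce = os.urandom(12)
        return nonce + aead.encrypt(nonce, value.encode(), None)

    def decrypt_cell(blob: bytes) -> str:
        return aead.decrypt(blob[:12], blob[12:], None).decode()

    row = {"name": encrypt_cell("Alice"), "ssn": encrypt_cell("123-45-6789")}
    # The server stores only ciphertext blobs; its admin cannot read them.
    print(decrypt_cell(row["ssn"]))  # only the key holder recovers the plaintext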

Right now, people have to build their own transport layers, which interact with the attestation APIs. These folks are trying to build something that is as easy to set up as TLS.

A problem with all this tech is that, to the extent it makes business problems easier to solve, it also makes it much harder to introspect what running software is doing, which tends to raise hackles from a software freedom perspective. My hope would be that in general this ends up more like 'corporate TLS interception', in that your personal device doesn't do it, but I'd expect mobile device vendors to use it before too long.


>A problem with all this tech is that, to the extent it makes business problems easier to solve, it also makes it much harder to introspect what running software is doing, which tends to raise hackles from a software freedom perspective. My hope would be that in general this ends up more like 'corporate TLS interception', in that your personal device doesn't do it, but I'd expect mobile device vendors to use it before too long.

A fair point. Although, given that corporate environments have been doing this sort of thing securely (in a variety of contexts beyond just browsing) for a long time without such remote "attestation," I'm skeptical that this provides much value other than for tracking/spying.


> Or am I missing something important?

The fact that the internet != traffic going to your laptop/PC/phone.

There are a lot of non-consumer use-cases (think, processing of corporate data in the cloud, or deployment of industrial IoT devices) where you want very strong guarantees about the state of the entity you're talking to, stronger than a static X.509 certificate gives you.

> over using client certificates

The use-cases are also not restricted to client authentication (and "client" doesn't always mean outside of the cloud).


>The fact that the internet != traffic going to your laptop/PC/phone.

>There are a lot of non-consumer use-cases (think, processing of corporate data in the cloud, or deployment of industrial IoT devices) where you want very strong guarantees about the state of the entity you're talking to, stronger than a static X.509 certificate gives you.

I am aware that the Internet is not limited to websites and personal devices. In fact, I have implemented/secured/managed corporate networks with multiple secured data channels (both within the organization and with its employees, as well as in B2B and B2C contexts) many times.

Securing such data streams (whether within a cloud infrastructure or across the open internet) doesn't require such remote "attestation." In fact, it adds an additional layer of complexity that, at least AFAICT, primarily adds the ability to track personal devices and usage across the wider internet.

>> over using client certificates

>The use-cases are also not restricted to client authentication (and "client" doesn't always mean outside of the cloud).

Fair enough. Call such certificates whatever you like. However, in the corporate context, I'd much prefer to be able to verify authorized connections via a process within my control (e.g., issuing my own certificates to network-based devices, servers/VMs/containers/whatever) rather than relying upon some third party (whether that's an SGX store or a remote "attestation" authority) for such authentication/authorization processes.

I'll add that this has been successfully accomplished (by me, repeatedly, as well as by many thousands of others) without building a mechanism to uniquely identify arbitrary devices on the 'net (as opposed to the specific devices involved in sharing data streams). Which is, AFAICT, what this Internet Draft appears to be advocating.


(Disclosure: I work at Anjuna, a member of the Confidential Computing Consortium. Opinions my own.)

The value of remote attestation in general (setting aside the implementation detail of RA-TLS) is that you have a form of software identity which is tied directly to the exact code that you're running. For example, if someone steals your service's API key, token, or cert, then they can impersonate your service. But if you are using remote attestation, they will not be able to generate the proper "credential" (a hardware-backed attestation document) to do it.
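
A toy model of the difference (all details invented for illustration; this isn't our actual scheme):

    import hashlib, hmac, os

    HW_ROOT = os.urandom(32)  # stands in for a key fused into the hardware

    def attestation_credential(code: bytes, challenge: bytes) -> bytes:
        # The hardware binds (measurement, challenge) to HW_ROOT; software
        # never sees HW_ROOT, so it can't lie about what code is running.
        measurement = hashlib.sha256(code).digest()
        return hmac.new(HW_ROOT, measurement + challenge, hashlib.sha256).digest()

    challenge = os.urandom(16)
    genuine = attestation_credential(b"audited service build", challenge)
    attacker = attestation_credential(b"attacker's modified build", challenge)

    want = hashlib.sha256(b"audited service build").digest()
    expected = hmac.new(HW_ROOT, want + challenge, hashlib.sha256).digest()
    print(hmac.compare_digest(genuine, expected))   # True: right code, valid credential
    print(hmac.compare_digest(attacker, expected))  # False: different code, useless credential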

Here on HN lately, I've seen a lot of focus on end user devices for things like Web Integrity. But at work I'm seeing mainly customer use cases which are purely internal and server-side - think reducing risk of insider threats, or simplifying cert management with SPIFFE/SPIRE.

For some more publicly-visible use cases, MobileCoin's Fog system https://mobilecoin.com/learn/fog uses remote attestation to ensure validator nodes are coordinating using the same code (rather than some malicious version).

Or for a non-blockchain use case, Dashlane's Confidential SSO https://www.dashlane.com/blog/streamline-logins-dashlane-con... uses remote attestation so that encryption keys can only be accessed by approved code - reducing risk from insider threats.

I just wrote a blog post on another use case we are beginning to see: truly verifiable services. Say you want to operate a managed instance of an open-source project, to save users the operational hassle. How would a user know you're really running that OSS project, rather than a malicious version that steals their data? Using remote attestation, you can actually prove that: https://www.anjuna.io/blog/remote-attestation-enhancing-iden...


>The value of remote attestation in general (setting aside the implementation detail of RA-TLS) is that you have a form of software identity which is tied directly to the exact code that you're running. For example, if someone steals your service's API key, token, or cert, then they can impersonate your service. But if you are using remote attestation, they will not be able to generate the proper "credential" (a hardware-backed attestation document) to do it.

That's an excellent point and I agree it's important to ensure strong authentication/authorization for such applications.

However, as suggested in the Internet Draft, this would go far beyond simply ensuring that a specific peer is authorized to access another specific peer.

Rather, it would essentially assign (based on keys, presumably in a secure enclave) a globally unique identifier that can then be tracked across the entire internet, instead of just ensuring that a particular set of peers is authorized to communicate securely.

This is not a new issue and, for the most part, is a solved problem that absolutely does not require that arbitrary devices be permanently fingerprinted/identified.

I'm sure that many folks may disagree, especially since "remote attestation" is the new shiny, but that doesn't mean there aren't perfectly functional methods already in use that provide similar functionality -- without the potential for spying/tracking.


I see, it seems your objection is to the ARM PSA document - I'm not familiar with it but it seems that it is targeted towards IoT devices. I also don't want permanent fingerprinting, but I think it is a question orthogonal to remote attestation in general. All the tech is already out there and we can push back on specific uses; for example, the Estonian e-ID provides similar permanent fingerprinting _and association with a human_, but seems to be pretty popular among their citizens.

On the other hand, the RATS architecture doc has several reference use cases which are useful and limited: https://www.rfc-editor.org/rfc/rfc9334.html#name-reference-u... . I think these are worth pursuing.


>All the tech is already out there and we can push back on specific uses;

Yes, we can. And this proposal has privacy implications big enough to drive a column of tanks through.

I have no issue with folks confirming the validity of software installed on devices they own, or with businesses vetting specific configurations for their employees/contractors/partners/cloud instances/devices and even customers to access their networked environments, where that's appropriate.

In fact, that's been pretty much de rigueur in corporate environments for decades, and I've helped implement such systems more than once.

The issue I have with this proposal is that given the current surveillance capitalism[0] environment, it's likely that a variety of greedy scum will use the tools provided to deprecate privacy on a broad scale.

As I said, securing networked environments is critical in many contexts, and we've been doing so for a long time. But building remote attestation (especially when it's not clear who, exactly, might do such attestation and what properties would make a device "untrustworthy") without clear boundaries in the current environment is a privacy disaster waiting to happen, IMHO.

>On the other hand, the RATS architecture doc has several reference use cases which are useful and limited: https://www.rfc-editor.org/rfc/rfc9334.html#name-reference-u.... I think these are worth pursuing.

It's not the architecture per se I object to, rather it's the almost certain abuse (in the absence of any regulation or limitations) of such an architecture that's the problem as I see it.

[0] https://en.wikipedia.org/wiki/Surveillance_capitalism

Edit: Added the missing link.


Allows them to whitelist clients. Kills off adversarial interoperability. Now you can't make your own compatible technology stack anymore; you have to use their approved ones, which only ever do their bidding at your expense.


Which is a great feature for a locked-down intranet, cloud cluster, or IoT network. Terrible for public-facing websites, but given the slow adoption of TLS 1.3 (and of disabling the outdated TLS 1.1 and 1.2) and the uptake of HTTP/3, I don't think any important party is even going to try to put this into production for any customer-facing system.


Does this have a single legitimate use? I can't think of any. It seems like it's only useful for things like DRM.


It attests the server, not the client.


Why does this matter? Are you saying that does create legitimate uses? If so, what are they?


Imagine you're connecting to a VPN server. The service provider says they don't store logs - nowadays you have to trust them, and perhaps any audits they might have done in the past. Remote attestation allows you to get information about the actual VM that's running the service, so the provider can give you the image for the machine (or build steps for it), and what you get via remote attestation is cryptographic proof backed by the hardware that the machine you're talking to is indeed running that software, and not something else that the provider has deployed.
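
The client-side check is conceptually simple - a sketch with invented values, eliding verification of the report's signature chain against the CPU vendor's root:

    import hashlib

    def measure(image: bytes) -> str:
        return hashlib.sha256(image).hexdigest()

    # The client computes this locally from the provider's published artifact:
    published_image = b"vpn-server-image-built-from-public-sources"
    expected = measure(published_image)

    # These values arrive inside the hardware-signed attestation report:
    attested_honest = measure(published_image)
    attested_evil = measure(b"same image plus a logging patch")

    print(attested_honest == expected)  # True: running exactly the published code
    print(attested_evil == expected)    # False: any swapped-in build is caught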

And this is just a consumer-centric use; most use cases related to remote attestation have nothing to do with the end consumer. The goal is usually to make sure that some workload or device that a company owns (say, a VM in the cloud, or an IoT device running in a field somewhere) is what the company expects it to be.


> cryptographic proof backed by the hardware that the machine you're talking to is indeed running that software

Assuming both the software and hardware are secure, where hardware security includes resistance to attacks like fault injection and so on. This is fascinating to me, because it's like you could break the system if you were able to flip one single bit (a load?) somewhere inside the SoC. So I guess this topic is about to get more popular.


Yes, assuming some degree of security of the full TCB (both the software and hardware). Of course, no system is bulletproof, but it's slowly going in the right direction - and hopefully since Spectre/Meltdown people are taking more care :)

At the end of the day, it doesn't have to be perfect, just another layer in the Swiss cheese model.

> able to flip one single bit (load?) somewhere inside the SoC

TBH, capabilities at that level (i.e., within the SoC) are fairly difficult to pull off, so probably not in most threat models.


Yes, but I just wanted to highlight that here we have a completely new attack surface - hardware-level hacking, which is not the same as hacking hardware via software (like Spectre/Meltdown) :)


True! Advanced systems already have some degree of fault tolerance built-in, and the encryption of enclave memory is there against cold boot attacks and such.


That sure sounds great, but it trivially fails. The VPN provider can just stick a second machine on the network port that correlates input/output packets and creates logs based on that. So at best it's protecting against incompetence, but not maliciousness.


And how exactly would that second machine decrypt the packets, given that the keys are only available to the valid one?


"correlates input/output packets"

Timing and size will get you very far, especially as smaller packets are used for TCP setup. This is a worry with good-faith Tor servers and malicious upstreams, even with multiple hops. It's certainly doable for a single machine where you're directly watching its network card.
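
A toy correlator shows how little is needed (packet traces fabricated):

    def correlate(inbound, outbound, max_delay=0.05, tunnel_overhead=40):
        """Match each inbound (time, size) to an outbound packet that leaves
        shortly after with roughly size + tunnel overhead."""
        matches = []
        for t_in, s_in in inbound:
            for t_out, s_out in outbound:
                if (0 < t_out - t_in <= max_delay
                        and abs(s_out - (s_in + tunnel_overhead)) <= 8):
                    matches.append(((t_in, s_in), (t_out, s_out)))
                    break
        return matches

    inbound = [(0.000, 120), (0.310, 1500), (0.620, 90)]
    outbound = [(0.012, 160), (0.322, 1540), (0.633, 130)]
    print(correlate(inbound, outbound))  # every flow pairs up: logs by another name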


Sure, but I'm not proposing a mitigation for that. It doesn't have to be a silver bullet to be useful. I'm arguing that the ability to know that your peer is running some expected code is useful as an extra layer of security for some use cases.


But it's already generally accepted that RA is useful as an extra layer of security in many cases.

The problem is that this layer of "security" steps over the traditional demarcation point of the protocol, destroying the customary separation of authority. So examples of "good things" that could be done with it aren't particularly relevant to the larger discussion about the threat posed by its widespread adoption with manufacturer-escrowed keys.

If owners controlled their devices' keys, we could still have things like auditing organizations that enrolled the servers of VPN providers. So that you could verify a remote computer was running specific code, reliant on your trust in the auditor. But with the current design, those auditors are the device manufacturers themselves and the ability to inspect is applied universally across every device. This will inevitably be abused to make less powerful parties less secure and undermine their own interests. That is the problem.


Signal uses SGX in a similar fashion to ensure that the contact hashes are only uploaded to trusted servers running good code.

https://signal.org/blog/private-contact-discovery/


That's not any better.




