
0-RTT sounds nice, until you get to appendix E.5. Everyone should read this:

    E.5.  Replay Attacks on 0-RTT

    Replayable 0-RTT data presents a number of security threats to TLS-
    using applications, unless those applications are specifically
    engineered to be safe under replay (minimally, this means idempotent,
    but in many cases may also require other stronger conditions, such as
    constant-time response).  Potential attacks include:

    -  Duplication of actions which cause side effects (e.g., purchasing
       an item or transferring money) to be duplicated, thus harming the
       site or the user.

    -  Attackers can store and replay 0-RTT messages in order to re-order
       them with respect to other messages (e.g., moving a delete to
       after a create).

    -  Exploiting cache timing behavior to discover the content of 0-RTT
       messages by replaying a 0-RTT message to a different cache node
       and then using a separate connection to measure request latency,
       to see if the two requests address the same resource.

    Ultimately, servers have the responsibility to protect themselves
    against attacks employing 0-RTT data replication.  The mechanisms
    described in Section 8 are intended to prevent replay at the TLS
    layer but do not provide complete protection against receiving
    multiple copies of client data.  

It seems practically guaranteed that a lot of devs will enable it without understanding the ramifications. I hope servers like Nginx add a configuration interface along the lines of "enable_0rtt YES_I_UNDERSTAND_THIS_MIGHT_BE_INSANE;" or similar. Meanwhile I wonder if concentrators like Cloudflare will ever be able to support it without knowing a lot more about the apps they are fronting.

I guess e.g. Nginx could also insert an artificial header to mark requests received as 0-RTT, and frameworks like Django could use that header to require that views be explicitly marked with a decorator indicating support, or something like that.




Cloudflare only supports 0-RTT for GET requests with no query parameters, in an attempt to limit the attack surface [0]. It is enabled by default for all free accounts.

[0] https://blog.cloudflare.com/introducing-0-rtt/


> I guess e.g. Nginx could also insert an artificial header to mark requests received as 0-RTT, and frameworks like Django could use that header to require views be explicitly marked with a decorator to indicate support, or something like that

There is an Internet Draft for that [1]. It is co-authored by Willy Tarreau of haproxy and implemented within haproxy 1.8 [2].

[1] https://tools.ietf.org/id/draft-thomson-http-replay-01.html

[2] https://www.mail-archive.com/haproxy@formilux.org/msg28004.h... (Ctrl+F 'Early-Data') https://www.mail-archive.com/haproxy@formilux.org/msg27653.h... (Ctrl+F '0-RTT')
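
A rough sketch of how the framework side of that could look, assuming the reverse proxy forwards the draft's "Early-Data: 1" request header and using Django purely as an example (the decorator and middleware names here are made up for illustration):

    # Sketch only: assumes the reverse proxy sets "Early-Data: 1" on requests
    # that arrived as 0-RTT early data, per draft-thomson-http-replay.
    from django.http import HttpResponse

    def allow_early_data(view):
        """Mark a view as safe to execute on replayable 0-RTT requests."""
        view._allow_early_data = True
        return view

    class RejectEarlyDataMiddleware:
        """Answer 425 (Too Early) unless the target view explicitly opted in."""
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            return self.get_response(request)

        def process_view(self, request, view_func, view_args, view_kwargs):
            if (request.headers.get("Early-Data") == "1"
                    and not getattr(view_func, "_allow_early_data", False)):
                # 425 Too Early (also defined in the draft) tells the client
                # to retry once the full handshake has completed.
                return HttpResponse(status=425)
            return None

Views that are genuinely idempotent could then opt in with @allow_early_data; everything else gets retried after the full handshake.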


That would certainly require a lot of coupling between the proxy and the application code, though. The nicest thing about TLS is that it's just a transparent dumb pipe that provides confidentiality and integrity (and less often, client authentication).

Having to write your app to understand that TLS 1.3 was used, and 0RTT was used, seems like a really really bad idea. The longer I'm in engineering, the more I realize that the number of people who understand the ramifications here is much smaller than the number who can throw together a TLS 1.3 listening HTTP server by following some dude's tutorial.

Framework support is not going to be enough. This seems like a bad, bad move.


Eh, it doesn't sound too bad. There's already a myriad of metadata (HTTP request headers) you need to process today if you want to be a good HTTP citizen, such as acknowledging Accept-Encoding, HTTP byte ranges, If-Modified-Since, X-Forwarded-For, X-Forwarded-Proto, etc., and sending correct response headers such as Cache-Control, Vary, and so on.


Which is why it should have never been implemented in TLS 1.3.

I believe the argument for doing it was that some companies would just implement their own protocols instead. Eh, I think the chances of that happening were pretty slim. Now most of the problems we'll see with TLS 1.3 will likely be related to 0-RTT.

Also, wasn't that basically the same argument for implementing MITM in TLS 1.3? That if they didn't do it, the banks and middlebox guys would just stick to TLS 1.2 or whatever?

And who cares about a little bit of extra HTTPS delay, when just adding Google Analytics and Facebook Pixel to your site can increase the delay by over 400 ms? Some poor performance-tracking scripts add 800 ms on their own.


0rtt is still useful for static assets, and generally everything that is public. I have a handful of static websites (literally static, as in consisting of just HTML and CSS files), for those 0rtt is awesome. TLS is no longer used to only protect private pages (eg. access to your private emails, the admin section of a CMS). It's also used for privacy reasons on completely public websites.


Wouldn't that be a bit of a privacy leak? If 0-RTT works, it was a request for a static asset.

Of course, TLS is not only for privacy reasons, but also for integrity reasons (preventing injection of malicious Javascript and similar attacks). For that purpose, 0rtt for static assets works fine.


> Wouldn't that be a bit of a privacy leak? If 0-RTT works, it was a request for a static asset.

Response size and timing probably already leak this.


This honestly really bothers me.

We're encrypting everything, we have "Let's Encrypt", we have browsers telling users that their connections are "secure".

Meanwhile your DNS lookups are public (which leaks what site you're accessing) and size+timing analysis leaks which static assets you've retrieved. Which gives away for example what article you're reading on what news site. Which the site itself is telling google, facebook and other malicious third-parties anyway...

How is anyone supposed to understand digital privacy? Everything sucks, and I'm not even sure what could be done to make it suck less.


I think for the average user, the authentication part is a lot more important than the encryption part unless they're entering passwords. I want to be relatively sure that the site I'm visiting hasn't been replaced by something serving malware. I don't care as much about people knowing which articles I read.


For DNS lookups, people are testing out DNS-over-HTTPS, which would solve this entirely: lookups would be opaque to anyone but the DNS server involved.
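
For a rough idea of what that looks like in practice, here is a lookup against Cloudflare's public DoH JSON endpoint using the requests library (endpoint and JSON shape as publicly documented; treat the details as an assumption, and note other resolvers differ):

    # Sketch: resolve a name over HTTPS instead of plaintext UDP port 53.
    import requests

    def doh_lookup(name, record_type="A"):
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": record_type},
            headers={"Accept": "application/dns-json"},
            timeout=5,
        )
        resp.raise_for_status()
        # Only the resolver sees the query; on-path observers just see TLS.
        return [answer["data"] for answer in resp.json().get("Answer", [])]

    print(doh_lookup("example.com"))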

For timing and size you can usually do something about it as a site owner (HTTP/2, for example, multiplexes many requests over a single connection, which makes timing and size comparisons much harder).


Even without encrypted DNS lookups, HTTPS leaks the FQDN (i.e. exact subdomain) of what you are connecting to through Server Name Indication.

SNI was added to let servers know which SSL certificate to send to the browser; previously you needed one IP address per SSL certificate.


Just to be clear, 0rtt is only for "revisits", and for revisits of static assets the client is likely to have the asset cached still. So the only benefit is if the "static" asset has changed, or the client's cache is cleared. Which seems less useful.


The content of your page may be encrypted but the DNS lookup isn't


There's work now with DNS-over-HTTPS to prevent that.


As well as SNI


TLS connection reuse already works for that. And pipelining in HTTP.


> And who cares about a little bit of an extra HTTPS delay, when just adding Google analytics and Facebook Pixel to your site can increase the delay by over 400 ms?

That's how you might feel if you live with fast broadband internet. A lot of the world is stuck with high latency, low-bandwidth connections. Applications that are incredibly lean will still be slow if the server is in San Jose and the client is in, say, Uganda.


I run a very large number of web performance tests, and GA and the Facebook pixel do not add 400 ms of user-perceptible slowdown, either alone or together. They take some small amount of main-thread time to parse and compile, but they are loaded async and are generally not a performance problem on the hundreds of pages I have profiled.

Intercom and other live chat solutions are typically the biggest offenders on modern pages. They serve 10-20x the script as GA and the Facebook pixel.


GA and FB-P both affect the document complete time; but you're right, there are way, way worse scripts out there.


The other thing I don't like about 0-RTT is that the client reveals that they've been to the server before, i.e. it removes a plausible case for anonymity. Just another implicit "cookie" that needs to be washed, I suppose.

I would love if instead the pre-shared secret enabling 0RTT could be something obtained through DNS instead, if that's possible. But that would require a secure DNS, which we don't have.


But that problem is just session resumption, and that isn't new or specific to 1.3. Another way to do this would be session tickets too (also not new with 1.3). Your client can remove support for both, and always connect as a new connection.


If you're concerned about it, couldn't you turn it off clientside?


Yes. And like so many other behaviours in the web-stack, I feel like I'm in a constant fight with my client software to please choose privacy over convenience. So it's worth being aware of where these tradeoffs exist. Especially when I'm writing that client software.
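
For what it's worth, here is roughly what that control looks like with Python's ssl module; this is just a sketch, and note that with TLS 1.3 session tickets arrive after the handshake, so the exact behaviour depends on the OpenSSL and Python versions in play:

    # Sketch: with Python's ssl module, client-side resumption only happens if
    # you explicitly pass a previously saved session back in. A client that
    # never does so always performs a full handshake and looks like a newcomer.
    import socket, ssl

    ctx = ssl.create_default_context()

    def fetch(host, session=None):
        with socket.create_connection((host, 443)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host, session=session) as tls:
                req = "HEAD / HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(host)
                tls.sendall(req.encode())
                tls.recv(4096)  # read a bit so any post-handshake ticket is processed
                print("resumed:", tls.session_reused)
                return tls.session  # keep this only if you *want* to be linkable later

    first = fetch("example.com")           # full handshake
    fetch("example.com", session=first)    # resumption: the server can tell we're back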


If you're that paranoid about your privacy, then I recommend you choose a user agent whose philosophy on privacy more closely represents your own.

Tor Browser, for example, is highly likely to "choose privacy over convenience" whenever possible with its default settings.


How exactly does the client reveal that?


From the spec: "When clients and servers share a PSK (either obtained externally or via a previous handshake), TLS 1.3 allows clients to send data on the first flight (“early data”). The client uses the PSK to authenticate the server and to encrypt the early data."

The client initiating 0-RTT presents the identity of a pre-shared key (e.g. a session ticket from an earlier visit), thus revealing to the server that it's not a newcomer. I don't know exactly how many bits of that PSK identity could be used by the server to identify specific clients. For QUIC I think it's a 15-bit identifier. Browsers will need to clear the PSK (and with it the ability to do 0-RTT) when they clear cookies or in a "private browsing" mode.


Is there any way to do a 0RTT request for a completely new connection/session?

I mean, if I want to get weather data from let's say NOAA, so a simple GET / HTTP/2, why would I want to send any PSK? Let the server send the response and the Server Cert and the client can decide whether to trust the reply or not.

CloudFlare only "allows" 0RTT for GETs, for example. Is that different, or they also need the PSK?


0-RTT is defined with a PSK (pre-shared key). There are two ways you might have a PSK. The only one that would come up in a web browser as they're constructed today is a "resumption" PSK, agreed between the two parties during a previous connection.

For the Internet of Things it's also envisioned that some devices might know a PSK at the outset to use TLS rather than some custom protocol to secure their traffic. Maybe your lightbulb controller knows a PSK for the lightbulbs baked in at the factory. But it's not expected that web browsers will care about this case.


I'm pretty happy with the strong confidentiality guarantees offered by TLS 1.3, and a finished standard is better than more draft and committee turns, but the simple use case of securely accessing "public" information with 0-RTT seems to have been left out.

Even just serving static content faster would have been a nice few-percent efficiency gain.


For the nontechnical, what does 0rtt do?


"Zero round trip time," i.e., if your web browser previously had an encrypted session with the server and cached the cryptographic keys involved, the next time you visit the website, it can immediately encrypt an HTTP request to that public key and send it in the first packet.

Normally there's a handshake involved: your browser and the server send packets to each other to set up an encrypted channel, then the server uses its certificate to prove that it's in control of its end of that channel, and only then can you send a request. So if you and the server are, say, 50 ms apart, the handshake costs an extra round trip or two (100-200 ms, depending on the TLS version), which 0-RTT can save you.

The danger is that because your browser isn't setting up an encrypted channel but just sending a request and hoping for the best, someone who can capture the packet can just re-send it to trigger the request twice. Duplicating the request is fine for, say, the HN home page, but annoying for a comment reply and a real problem for an online purchase.


> Normally there's a handshake involved: your browser and the server send packets to each other to set up an encrypted channel, then the server uses its certificate to prove that it's in control of its end of the private channel, then you can send a request.

Not an expert on this but this seems a little bit wrong or at least very misleading when I reason through it? I don't imagine the server needs to prove anything before it is sent data encrypted with its public key... if it doesn't have the private key then it simply can't decrypt; it wouldn't need a certificate for that. Rather I expect this is because the server & client want to generate ephemeral keys (for forward secrecy), which fundamentally requires a round-trip. Is that correct?


A few things:

Yes, normal setup for TLS 1.3 always does ephemeral keys for forward secrecy first.

If some alternate protocol started by sending data encrypted with a remote server's public key, that data could be replayed by attackers just like 0-RTT in TLS 1.3; this problem is unavoidable for 0-RTT protocols.

But where should we get a public key from anyway? If it came from a previous session, the resumption PSK is better. If we got it by guessing, maybe by checking a central store of known public keys, then it might be wrong, and we'd have to start over whenever it was.

We have to wait to see the certificate (and transcript signature) in the normal case because until we see the certificate (and signature) we have no proof we're talking to whoever we wanted to talk to, and even if the wrong person can't decrypt the message they can replay it at their leisure.

Note that "waiting" for these is an exaggeration, in TLS 1.3 the server sends both its half of the key exchange AND the certificate with the transcript signature AND any extra metadata in a single message, it's just conceptually separate because the latter part of this message is encrypted while the first part agreed the keys for encryption, so the client needs to think about them separately.


The server needs to transmit its certificate to the client. Before that the client generally doesn't know the server's public key.

For PFS suites with ephemeral-ephemeral DH/ECDH (DHE/ECDHE in TLS parlance) the client generates a DH key pair for each connection and so does the server; both public keys need to be exchanged before secrecy can commence.

EE-DH-based handshakes have innate entropy (due to the ephemeral keys), but TLS was initially built without EE-DH. For the historic RSA key exchange, client and server random nonces supply the handshake entropy and liveness proof, again necessitating transmission of both nonces to the other party. RSA-KEX was removed in TLS 1.3. The nonces are still there, mostly for PSK and PSK-only handshakes (otherwise you could use a PSK only once).

TLS 1.3 resumption essentially uses a previously negotiated shared secret (PSK) which allows both parties to forego authentication-by-signature, because knowledge of the PSK authenticates them. Forward secrecy is added back in by EE-DH, but can actually be disabled.

TLS 1.3 0-RTT extends session resumption. Essentially, the early data is encrypted only under the PSK. It has neither forward secrecy (relative to the session under negotiation) nor liveness [I think it might be hypothetically possible to reject replays server-side by rejecting duplicate ClientHello.random values but this is hugely out of spec and completely negates any performance benefits 0RTT might have had].

(It's important to realize that TLS is and always has been a meta-protocol with a lot of knobs you can tweak. Now, for use in HTTPS/FTPS/STARTTLS the set of parameters is relatively restricted, because e.g. browsers simply won't support PSK-only handshakes. For general discussions of TLS properties this is something to keep in mind, however.)
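
To make the EE-DH step above a bit more concrete, here is a sketch of an ephemeral X25519 exchange using the pyca/cryptography package; this is just the raw primitive, not the TLS 1.3 key schedule, which additionally mixes the handshake transcript (and, when resuming, the PSK) into the derived keys:

    # Ephemeral-ephemeral Diffie-Hellman ("EE-DH") with X25519. Both sides
    # generate fresh key pairs per connection, which is where forward secrecy
    # comes from; only the public halves cross the wire.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    client_eph = X25519PrivateKey.generate()   # fresh per connection
    server_eph = X25519PrivateKey.generate()   # fresh per connection

    client_shared = client_eph.exchange(server_eph.public_key())
    server_shared = server_eph.exchange(client_eph.public_key())

    assert client_shared == server_shared      # both derive the same secret
    # Discarding the private keys afterwards is what keeps past traffic safe
    # even if long-term keys leak later (forward secrecy).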


Rejecting replays by remembering forever the conversations you had previously is permissible in the standard, and is even called out as something an application can do. It's not required because, whilst it's trivial for a toy web server, nobody operating at scale can do it.


The first time you connect to a server, a "handshake" needs to be performed in order to generate a shared secret key. If you've already performed the handshake with a given server in the past, 0-RTT allows you to skip it and use the key you generated before.


Great presentation by a couple of Cloudflare employees:

"Deploying TLS 1.3: the great, the good and the bad”—https://www.youtube.com/watch?v=0opakLwtPWk


By the looks of it, Cloudflare already does support 0-RTT as a (generally available) beta feature in the crypto tab of a website. Maybe TLS 1.3 not being enabled on the origin machine protects the origin from this type of attack.


A quick thought is that the protocol could require a sequence number on 0-RTT and only accept newer ones.


Section 8 of the Draft lists three plausible ways to prevent replay attacks. There isn't a shortage of ways to prevent them, but it takes actual effort by the application software, because you need to store state. How much state do you want to store - and where?

This looks trivial when you are running one Apache httpd on a Raspberry Pi on your desk. Why not just build any of these approaches right into the standard? And then you try to figure out how to make it work for Netflix or Google, who have thousands of clusters of servers - and your brain explodes.

So that's why the standard doesn't specify one solution and require everybody to use it. It does say that if you want 0-RTT then you need to figure out what you're going to do about this, and it spells out some of the nastier surprises, such as attacks that shuffle message order and change which servers get which messages.
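
As a toy illustration of why the single-server case is easy, here is a sketch of a "remember what you've seen" anti-replay check of the kind Section 8 describes (the class and method names are invented for illustration); the hard part is making a store like this fast and consistent across thousands of servers:

    # Toy single-server anti-replay cache: remember every PSK/ticket identity
    # seen during its validity window and reject early data that reuses one.
    import time

    class AntiReplayCache:
        def __init__(self, window_seconds):
            self.window = window_seconds
            self.seen = {}   # ticket identity -> expiry time

        def accept_early_data(self, ticket_identity):
            now = time.monotonic()
            # Drop expired entries so the cache stays bounded by the window.
            self.seen = {t: exp for t, exp in self.seen.items() if exp > now}
            if ticket_identity in self.seen:
                return False                 # replay or reuse: fall back to 1-RTT
            self.seen[ticket_identity] = now + self.window
            return True

    cache = AntiReplayCache(window_seconds=10.0)
    assert cache.accept_early_data(b"ticket-123") is True
    assert cache.accept_early_data(b"ticket-123") is False   # second use rejected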

Example: let's say you think you're clever, you have two servers A and B, load balanced so they usually take distinct clients but can fail into a state where either takes all clients. You might figure you can just track the PSKs in each server, offer 0-RTT and if a client tries to 0-RTT with a PSK from the other server (somehow) it'll just fail to 1-RTT, no big deal.

Er, nope. Bad guys arrange for a client to get the "wrong" server and capture the 0-RTT from that client. The "wrong" server says "Nope, do 1-RTT", so the client tries again, and in parallel the bad guys replay the captured 0-RTT packets to the "right" server, which happily accepts and executes them. The same request has now been carried out twice - the replay has succeeded.



