In fact, it's possible it isn't talking about the proposed IETF protocol QUIC at all, but about Google's QUIC ("gQUIC" in modern parlance), in which case this might as well be a paper saying the iPhone is vulnerable to an attack, when it turns out it means a 1980s device named "iPhone", not the Apple product.
It certainly references a bunch of gQUIC papers, which could mean those are blind references from a hasty Google search by researchers who didn't read their own references, but could equally mean they really did do this work on gQUIC.
More like it means the iPhone X instead of the newer iPhone 11
Just checked, mine used version Q050 just now.
Of course, there are many versions of gQUIC, some more like IETF QUIC than others as they are transitioning parts of the protocol in stages. Version Q050 is the pre-TLS version.
As it's used between Chrome and YouTube, a lot of the world's internet traffic is over gQUIC at the moment.
On the other hand, since this set of vulnerabilities seems to be about passive fingerprinting by intermediaries, what actually matters is how much traffic is still running gQUIC, right? And I think most servers have moved over to HTTP/3?
(Disclosure: I work for Google, speaking only for myself)
Ah, ok, that kind of fingerprinting. I suppose then this might be where our local ISPs find a way to replace their now-threatened DNS query sniffing. Assuming 95.4% accuracy means what I think it does, that's pretty impressive.
Traffic analysis is a harsh villain.
The concept is the clear and obvious way forward in web security; what's the point of encrypting DNS when the SNI is still readable? I'm also glad that eSNI has been dropped because of the obvious flaws found in it, rather than continuing attempts to secure an inherently flawed protocol. However, this does mean that deployment will take longer than previously estimated, which is kind of a bummer.
Each webpage comes from an IP, but includes subresources from a whole set of domains. I would guess that for the vast majority of web sites, that set of domains is unique.
"What can you learn from an IP?" https://irtf.org/anrw/2019/slides-anrw19-final44.pdf
So, yes, IP addresses are plenty sufficient to figure out what someone is browsing.
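To make the idea concrete, here is a toy sketch of how a passive observer could match the set of destination IPs seen on the wire against per-site profiles. The profiles and IPs are entirely made up for illustration; real attacks would use measured fingerprints and more robust classifiers.

```python
# Toy sketch: guess a visited site from the set of destination IPs observed
# on the wire, by picking the known profile with the highest Jaccard
# similarity. All addresses below are documentation-range examples, not data.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

SITE_PROFILES = {
    "news.example":  {"192.0.2.10", "198.51.100.7", "203.0.113.40"},
    "video.example": {"192.0.2.10", "198.51.100.99"},
}

def guess_site(observed_ips: set) -> str:
    # nearest profile by set overlap
    return max(SITE_PROFILES, key=lambda s: jaccard(observed_ips, SITE_PROFILES[s]))

print(guess_site({"192.0.2.10", "198.51.100.7"}))  # news.example
```

Since each page pulls subresources from a near-unique set of domains (and hence IPs), even this crude overlap measure can separate sites.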
Anything which is non-uniform is an attack surface. If an efficiency improvement makes things more predictable, then it adds the possibility of a new snooping attack. All your interactions have to be as inefficient as your most inefficient interaction, or an adversary can tell them apart.
Basically my browser needs to "keep watching some YouTube stream", so an attacker can't figure out that I switched to reading emails.
Most of the time that's not the case, and the mere fact that you are checking email does not give your adversaries enough information to care about. Being uninteresting and unimportant is one of the best possible defenses. The higher the stakes go, the more minor details become important to conceal. Constant-rate white-noise channels like this are used by militaries and intelligence agencies.
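A constant-rate channel like the one described can be sketched very simply: every outgoing cell has the same size, and dummy cells fill the gaps when there is no real data, so an observer sees a uniform stream regardless of activity. The cell format and size below are made up for illustration.

```python
# Toy constant-rate cell format: 1-byte flag (1 = real, 0 = dummy),
# 2-byte payload length, then zero padding up to a fixed cell size.
# A real system would also send cells on a fixed timer when idle.

CELL_SIZE = 512  # bytes per cell, arbitrary choice

def make_cell(payload=b""):
    assert len(payload) <= CELL_SIZE - 3
    flag = b"\x01" if payload else b"\x00"
    header = flag + len(payload).to_bytes(2, "big")
    return header + payload + b"\x00" * (CELL_SIZE - len(header) - len(payload))

def read_cell(cell):
    if cell[0] == 0:
        return None  # dummy traffic, discard silently
    n = int.from_bytes(cell[1:3], "big")
    return cell[3:3 + n]

print(len(make_cell(b"hello")), len(make_cell()))  # 512 512
```

Real and dummy cells are indistinguishable on the wire by size; only the recipient, after decryption in a real system, learns which is which.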
And a huge knowledge hole on both the p2p/dweb and the IETF people.
May be misguided, but I feel a little uneasy about bundling TCP functionality, TLS and HTTP into a single protocol over UDP.
Measuring the workload that I care about (the Netflix CDN serving static videos), losing TSO, LRO, and software kTLS, like you would with QUIC, takes a server from serving 90Gb/s @ 62% CPU to serving 35Gb/s and CPU limited. So we'd need essentially 3x as many servers to serve via QUIC as we do to serve via TCP. That has real environmental costs.
And these are old numbers, from servers that don't support hardware kTLS, so the impact of losing hardware kTLS would make things even worse.
For YouTube, the CPU cost of QUIC is comparable to TCP, though we did spend years optimizing it.  has a nice deep dive.
Other CDN vendors like Fastly seem to have had a similar experience .
I believe sendmmsg combined with UDP GSO (as discussed, for instance, in ) has solved most of the problems caused by the absence of features like TSO. From what I understand, most of the benefit of TSO comes not from the hardware acceleration, but from the fact that multiple packets are processed as one for most of the transmission code path, meaning that all per-packet operations are invoked only once per chunk (as opposed to once per individual IP packet sent).
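The amortization argument can be illustrated with back-of-envelope arithmetic. The per-call overhead figure below is hypothetical; the point is only that batching divides any fixed per-call cost by the batch factor.

```python
# Why batching (sendmmsg / UDP GSO) helps: a fixed per-send overhead paid
# once per ~64 KB burst instead of once per ~1500-byte datagram.
# PER_CALL_OVERHEAD_US is a made-up number for illustration.

MTU_PAYLOAD = 1448        # approx. bytes of QUIC payload per packet
GSO_BURST = 64 * 1024     # bytes handed to the kernel per send with GSO
PER_CALL_OVERHEAD_US = 2  # hypothetical fixed syscall + stack cost (us)

packets_per_burst = GSO_BURST // MTU_PAYLOAD
no_gso = packets_per_burst * PER_CALL_OVERHEAD_US  # one call per packet
with_gso = PER_CALL_OVERHEAD_US                    # one call per burst
print(packets_per_burst, no_gso, with_gso)  # 45 90 2
```

With GSO the segmentation into MTU-sized packets happens in the kernel (or NIC), so the expensive per-call path runs roughly 45x less often in this toy example.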
Also, sendmmsg still touches data, and this has a huge cost. With inline kTLS and sendfile, the CPU never touches data we serve. If nvme drives with big enough controller memory buffers existed, we would not even have to DMA NVME data to host RAM, it could all just be served directly from NVME -> NIC with peer2peer DMA.
Granted, we serve almost entirely static media. I imagine a lot of what YouTube serves is long-tail, transcoded on demand, and thus hot in cache. So touching data is not as painful for YouTube as it is for us, since our hot path is already more highly optimized. (i.e., our job is easier)
I tried to look at the Networking@Scale link, but I just get a blank page. I wonder if Firefox is blocking something to do with facebook..
For the CDN/edge this is irrelevant and should not even be part of the discussion. It is obvious that it will not change (or will even be better) for them.
The comment you are replying to talks exclusively about the "middle boxes". They will have a hard time with QUIC, no matter what. (IMHO, a small price to pay; too bad it is something flawed like QUIC instead of a truly distributed solution.)
> Measuring the workload that I care about (Netflix CDN serving static videos)
Since QUIC is based on UDP, it doesn't face the same limitations that TCP does. TCP also suffers from head-of-line blocking, which QUIC eliminates. QUIC is also more secure, since its headers are encrypted, unlike TCP's. Plus you get multiplexing built in. QUIC is far better than TCP, and for serving websites & APIs it is a much better alternative.
It seems like if the tradeoffs are this bad at scale Google wouldn't be pushing QUIC.
Anyone who's looking to replace HTTP/1.1 and HTTP/2 with HTTP/3? AFAIK QUIC is completely integral to that; to avoid this issue you'd need a mixed deployment of HTTP/3 and an older version of the standard.
Right now, HTTP/1.x is 23% of traffic, HTTP/2 is 75%, and HTTP/3 is just 2%. Since this is about overall traffic (visible to Cloudflare), big sites like Google and Facebook dominate. All that HTTP/3 traffic is probably Chrome users watching YouTube.
Despite Google meddling with the standards, "HTTP/3" isn't HTTP.
Please let that misconception die. I’m not at all fond of Google, but HTTP/3 is not a Google thing: Google is just one (admittedly significant) party involved in making a thing that lots of parties want and have worked together to make. https://news.ycombinator.com/item?id=25286488 is a relevant comment I wrote a couple of months ago.
a) what parties?
b) what exactly did they contribute?
Look, yeah, it's a good thing that standards still exist and Google has to play social engineering games instead of suddenly dropping this stuff on us as a fait accompli.
But let's not pretend like Google doesn't own the Internet; it'll save us grief and tears in the future if we see things clearly as they are.
Look, I wanted something like QUIC before they produced it, because HTTP/2 was better than HTTP/1.1 in most ways, but had the critical TCP HOLB problem that makes it markedly worse than HTTP/1.1 in some situations—and something like QUIC is the only way of fixing that, since the SCTP approach is no longer socially viable. (And before that, I had long wanted something like HTTP/2, because the limitations of HTTP/1.1 were sometimes rather frustrating. —Though I must confess that I wasn’t immediately taken with SPDY and the particular way Google pushed it initially.)
Google doesn’t own the internet. Let it go. Yes, they have an unhealthily large influence on consumer matters, but nowhere near as large an influence on infrastructure. Google were the ones to advance the initial SPDY and QUIC proposals because their position as a massive server operator and maker of a popular browser meant they were simply the company best suited to doing that initial development (and sure, it served their interests well too).
We advertise the following alt-svc for most devices:
> h3-29=":443"; ma=2592000,h3-T051=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"
HTTP/3 (over IETF QUIC) is still not as widely deployed due to more limited client support, but that's growing as Chrome, Firefox and Apple (in iOS Safari, when enabled) adopt it and the latest drafts.
Is there any media transfer protocol based on QUIC?
This is not dissimilar from SRT.
So do ads, and no government seems to care much.
Pixar movies cost resources, prolonging life with medical treatment costs resources, chocolate candy bars cost resources, operating the Olympic Games costs resources.
In each case, various people believe the resource costs are worth it to get the thing that is produced. Same for bitcoin, same for ads.
You can’t just say they cost resources. If you’re advocating these things shouldn’t exist, you either need to get into the specifics of why they offer value but just not enough to offset their resource costs, or why they don’t offer value, and persuade others to agree with you.
Judging by the success of ads and bitcoin, I think, “these things consume resources” is a pretty weak and useless argument. It would not matter if the resource costs were astronomical or tiny, it’s a specious chain of reasoning in principle.
That's a feature.
Otherwise the industry will die off slowly. Case in point: OpenStack first, and later Kubernetes, were exactly this kind of innovation. Each offers a very tiny benefit and a whole slew of plugin points for industry players to hook into, but is beneficial enough for almost everyone to comfortably endorse a piece of technology that was already starting to decay at the time... Of course, in this case OpenStack has been the much worse technology, because it was developed by people who weren't in the mainstream...
MPTCP is available in iOS and recent Linux kernels, so you don't need QUIC for that.
> decoupling protocol upgrades from OS upgrades
This is as much a downside as it is an upside. Now you have to potentially juggle hundreds of applications each shipping their own transport protocol implementation. And maybe their own DNS client too.
The idea of every application coming up with its own DNS implementation for instance annoys me greatly. I can already anticipate the amount of time I'm going to lose hunting down the way application X handles DNS because it won't apply my system-wide settings.
I also firmly believe that SSL/TLS should've been handled at the OS level instead of everybody shipping their own custom OpenSSL implementations (complete with various vulnerabilities). I wish I could just use some setsockopt() syscalls to tell the OS I want a secure socket and let it figure out the rest.
Moving stuff to higher layers is, IMO, mainly because Google wants to become the new Microsoft/Apple and moving that stuff up is a way to take the control away from Windows/Mac OS. The browser is the OS. You use web apps instead of apps.
You turn every device out there into a glorified ChromeBook.
IBM z/OS actually does this with a feature called AT-TLS (Application Transparent TLS). You don't even need to call setsockopt (actually z/OS uses ioctl instead of setsockopt for this, but it's the same idea). You can turn a non-TLS service into a TLS service simply by modifying the OS configuration.
- Linux kernel TLS doesn't provide a full implementation of TLS in the Linux kernel, only the symmetric encryption part. The handshake still has to be performed by the application in user space, and the application then needs to call setsockopt() in order to inject the keys into the kernel. By contrast, z/OS AT-TLS implements all of TLS (including the handshake), so the application doesn't have to do anything. As far as the application is concerned, a TLS socket looks exactly the same as a plain one (unless the application starts calling the AT-TLS ioctl, which will tell it whether a socket is plain or TLS)
- The Windows "kernel mode SSL" is only for the built-in HTTP server. It doesn't apply to clients, or to other protocols than HTTP. By contrast, z/OS AT-TLS works for any protocol, not just HTTP, and supports both clients and servers
So nobody has built the same functionality as z/OS AT-TLS in Linux or Windows. People have built some small subsets of z/OS AT-TLS's functionality, but not the generality that AT-TLS provides.
Notice that in Linux because sockets "are" file descriptors you actually can put all connection and handshake work into a separate processes altogether, and just pass the socket (now TLS encrypted) to a process that pumps data back and forth, without it needing to be aware of TLS at all.
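The fd-passing mechanism referred to above (SCM_RIGHTS over a Unix socket) can be shown in a few lines. This sketch just passes a pipe fd between the two ends of a socketpair to demonstrate the mechanism; a real design would pass the TLS-wrapped connection socket from a handshake process to a data-pump process. Python 3.9+ exposes the calls as socket.send_fds/recv_fds.

```python
# Minimal Linux/Unix fd-passing sketch: one process can do the handshake
# work, then hand the descriptor to another process via SCM_RIGHTS.
import os
import socket

parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
r, w = os.pipe()

# "Handshake" side sends the read end of the pipe plus a 1-byte message.
socket.send_fds(parent, [b"x"], [r])

# Worker side receives a *new* fd number referring to the same open file.
msg, fds, flags, addr = socket.recv_fds(child, 1, 1)

os.write(w, b"hello")
received = os.read(fds[0], 5)
print(received)  # b'hello'
```

The received descriptor is a fresh fd in the receiving process that shares the underlying open file description, which is exactly why the data-pump process needs no awareness of how the connection was set up.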
For z/OS AT-TLS, the primary use case is that I can take a legacy app which knows nothing about TLS, and make it support TLS with zero code changes. You create policy rules in the OS configuration, they resemble firewall rules (they can match on source and destination address, port and process name), and if an application's socket call matches the rule, the plain socket gets transparently swapped with a TLS one.
Linux kernel TLS is trying to address a completely different use case: I have an app which knows all about TLS and already has the code (most likely via some shared library such as OpenSSL) to do it all by itself, but I'm going to modify it to move some of the processing to kernel space for improved performance.
What @simias was asking for, "I also firmly believe that SSL/TLS should've been handled at the OS level instead of everybody shipping their own custom OpenSSL implementations (complete with various vulnerabilities). I wish I could just use some setsockopt() syscalls to tell the OS I want a secure socket and let it figure out the rest", isn't exactly either use case, but it is closer to the z/OS AT-TLS use case than the Linux kTLS use case. Indeed, AT-TLS supports a mode of operation in which, rather than TLS being automatically applied by configured rules, it is manually requested by the app calling an ioctl on the socket; but all the TLS configuration, like which certificates to trust, is pulled from the OS configuration. That seems like almost exactly what @simias was requesting.
> The handshake is at once the most complicated part (which means you don't really want it in Ring Zero)
I don't have a lot of insight into how AT-TLS is implemented internally, but I know some parts of it run in user space (inside a daemon called "pagent"). Having TLS implemented in the operating system doesn't necessarily mean it has to be implemented in the kernel. Some operating systems (e.g. Windows NT) say that the system call interface is private and the public interface is a library call. If an OS goes with that model, then the OS-supplied TLS implementation could be in a shared library not inside the kernel. Of course, that isn't the model Linux went with, it made system calls a public interface and so even if you put an AT-TLS-style transparent TLS implementation in the C library, some apps would bypass it by using system calls directly.
That isn't necessarily problematic. Actually, z/OS AT-TLS gets bypassed by some apps too. z/OS, being an operating system with decades of legacy, actually has multiple sockets APIs, and AT-TLS doesn't hook them all (whether due to technical limitations or simply IBM didn't decide it was worthwhile spending resources on.) In particular, the so-called "Pascal sockets API" (I believe it is called that because originally it was written in Pascal, and designed for use by Pascal code, but programs written in other languages such as C or COBOL or assembler can call it) doesn't go through AT-TLS. Likewise, apps that use the UNIX SystemV STREAMS API don't work with AT-TLS either. However, only a minority of z/OS apps use either of those legacy sockets APIs, and using those APIs is discouraged for new code. So, somewhat like z/OS does, if someone wanted to add this feature to Linux, they could always put it in the C library, and if an app bypassed that, well "sorry you don't get the transparent TLS feature".
Another option would be to do something like FUSE – have a daemon in user-space which does the TLS handshake. Inside the connect() or accept() system calls, the kernel could send the socket to that daemon, get it to do the TLS handshake, and then return the syscall when the daemon says it is done. All doable, the only real questions are (1) is anyone sufficiently motivated to add this feature to Linux (quite possibly not); (2) if they implement it, will the Linux kernel devs accept it (don't know about that either)
QUIC explicitly addresses "network middlebox ossification" of TCP, by encrypting almost everything about the protocol. This type of ossification is where the network blocks or corrupts any deviations that it doesn't recognise but thinks it does.
TCP "OS ossification" is a different kind. By itself, this could have been overcome by streaming TCP-in-UDP and amending it from there. Even that is only because of OS APIs. If OSes allowed applications to bind to a TCP port and process the raw packets, applications that want to could implement TCP themselves.
If that kind of dual-binding seems strange, I think it's actually more or less what we'll end up with for QUIC in the end: the OS providing QUIC sockets (socket(AF_INET, SOCK_QUIC, ...)) to applications that don't care to implement the QUIC stack themselves, while allowing other applications to bind to UDP in the usual way and implement their own application-level QUIC.
I think current QUIC solves application vs. OS ossification issues in the short term, allowing things to evolve beyond TCP, but it's a temporary benefit.
In time I think we will see QUIC ossify when it's implemented in many applications that don't get updated.
Most browsers are updated often. But I'm seeing more obscure applications starting to include QUIC and depend on it (without any TCP fallback even) - leading eventually to hundreds of "hand-written quality" QUIC implementations, all destined to get out of date. Various languages are implementing their own QUIC libraries, and these aren't all going to be maintained the same way. Shipped applications ossify too.
That's why I think it'll end up in the OS kernels eventually.
(E.g., the rationale for TOU, Facebook's now-dead UDP-based transport, explicitly stated userspace-only deployability as a requirement. They needed a solution for all their other requirements that could be deployed in 2015, not a consensus design approved in 2020 and in wide deployment in 2025.)
The middle-box driven ossification happened by making things harder. The OS upgrade driven ossification happened by reducing/delaying the payoff.
The benefits are: future protocol evolution (TCP is extremely difficult to change because of middleboxes; QUIC encrypts almost all of the header to stop middleboxes from messing things up), faster connection establishment (fewer RTTs thanks to bundling the TCP and TLS handshakes), and no head-of-line blocking when sending multiple streams over a single connection.
That's the whole point of the troublesome middleboxes. They're not passive listeners. They're active participants, with fragile implementations.
I suspect I'm being misunderstood. I don't mean to suggest these pesky middleboxes spring up randomly on the Internet and anyone is at risk. These are deliberately installed on the edge of corporate networks, and their sole purpose is to intercept, decrypt and filter traffic.
There's no technological solution for a middlebox that emulates a client to servers, emulates a server to clients, and is operating exactly as designed.
This shouldn't be new or surprising or groundbreaking to anyone. It's just a fancy name for a poorly implemented proxy.
Application layer developers are fighting corporate TLS interception with things like certificate pinning, not using the OS certificate store, making it harder to modify the app's certificate store. The goal is to heavily discourage corporations from breaking end-to-end security to spy on their employees.
In those places where they are adding a "trusted" local middlebox CA to the client for interception, modifying the browser's pin store (or its logic) is not such a big stretch on top of that.
At least other devices, such as users' own devices on the corporate network, are immune to this.
If only this were true. Almost all of TLS 1.3 would be more straightforward if you were right. As a trivial example, clients could begin by saying they want to speak TLS 1.3, rather than pretending they want TLS 1.2 even though they actually don't.
Implementing a security device as a back-to-back proxy (two connections: one from the client to the device, another from the device to the server) would be entirely compliant with the standard, would trivially enable forward compatibility, and would make it easier to reason about the provided security features (if any).
But it would make them (much) more expensive.
The true purpose of such middleboxes is not security, it is to transfer money from gullible customers to the vendor. "Security" is just an excuse, if they thought you'd buy it to prevent COVID-19 that's what they would claim it does.
So, a common trick went like this: When a client makes an HTTPS connection, you let it ride, but you passively watch the first few packets they receive. You're looking for the X.509 certificate from the server, which you will then inspect to figure out if it's somebody you trust. If not, you reset the connection and next time they try you intercept, perhaps to display a "Warning: Unauthorized Terror Porn Gay Propaganda Web Site. Your Boss Has Been Notified" message.
Notice this doesn't actually provide any security at all: X.509 certificates are public documents, and my servers can give you the certificate for Google, a local hospital, or PornHub, even though I am not in fact Google, a local hospital or PornHub. The middlebox won't know this was bogus, because the protocol only proves that the server knows the keys for this certificate to the client, not to a third-party eavesdropper.
But this "trick" does break TLS 1.3 as originally designed: since a plaintext X.509 certificate is only useful for snooping and provides no security benefit, TLS 1.3 encrypts it, and these middleboxes would freak out.
So what was done about that? Well, they don't actually provide any security, so we just sidestepped it: TLS 1.3 claims to be a TLS 1.2 resumption. "Oh, you were already talking to this server about something else. I'm sure I must have checked that time and it was fine. On you go," says the middlebox.
This worked very well for their actual purpose, as you can see from the revenues of middlebox suppliers.
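The passive certificate-sniffing trick described above can be sketched as a tiny record scanner. This is a deliberately simplified parser over synthetic bytes; real TLS parsing involves fragmentation, extensions, and much more care.

```python
# Toy passive middlebox: scan raw TLS records for a plaintext Certificate
# handshake message (record type 22, handshake message type 11). In TLS 1.2
# this is visible on the wire; in TLS 1.3 it is encrypted, so nothing matches.

def find_certificate(raw):
    i = 0
    while i + 5 <= len(raw):
        rec_type = raw[i]
        rec_len = int.from_bytes(raw[i + 3:i + 5], "big")
        body = raw[i + 5:i + 5 + rec_len]
        if rec_type == 22 and body and body[0] == 11:  # Handshake / Certificate
            # skip 4-byte handshake header + 3-byte certificate-list length
            return body[7:]  # the chain bytes (simplified)
        i += 5 + rec_len
    return None  # e.g. TLS 1.3: the certificate never appears in the clear

# Synthetic TLS 1.2-style record carrying a fake "chain" for demonstration.
chain = b"FAKECERT"
hs = (bytes([11]) + (3 + len(chain)).to_bytes(3, "big")
      + len(chain).to_bytes(3, "big") + chain)
record = bytes([22, 3, 3]) + len(hs).to_bytes(2, "big") + hs
print(find_certificate(record))  # b'FAKECERT'
```

Note that, as the comment says, finding the certificate proves nothing: anyone can replay a public certificate, and only the endpoint completing the handshake learns whether the server holds the matching key.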
Still, some vendors are adapting to QUIC by blocking QUIC:
Resolving the QUIC issue
The good news is that if QUIC communication does not work between a client and a server, the traffic will fall back to traditional HTTP/HTTPS over TCP, where it can be inspected, controlled, logged and reported on as usual.
At the time of writing, the advice from most firewall vendors is to block QUIC until support is officially added to their products. This recommended method will vary from firewall to firewall. Some firewalls allow QUIC by default while others block it by default, but all firewalls are able to allow or block it.
HTTP/2 was introduced to solve head-of-line blocking at the application level (one HTTP/1.1 connection could only handle one request at a time, barring poorly supported pipelining).
TCP itself also only allows one stream of data to flow at a time (in each direction). With poor network quality you are going to lose some TCP segments, and if you fill the receiver's window, the transfer might halt until retransmission succeeds (and this stalls ALL streams within HTTP/2). Using UDP allows you to circumvent that.
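A toy simulation makes the difference concrete: when one segment is lost, a TCP-style in-order receiver stalls every multiplexed stream behind the hole, while a QUIC-style receiver only stalls the stream that the lost segment belonged to. The segment layout below is invented for illustration.

```python
# Segments tagged with a stream id; segment #2 (stream "A") is lost and
# retransmitted last. Compare what each receiver can deliver before the
# retransmission arrives.

segments = [(1, "A"), (2, "A"), (3, "B"), (4, "B"), (5, "C")]
arrived = [s for s in segments if s[0] != 2]  # everything except the lost one

def tcp_deliverable(arrived):
    # single ordered byte stream: stop at the first sequence-number gap
    have = {seq for seq, _ in arrived}
    out, seq = [], 1
    while seq in have:
        out.append(seq)
        seq += 1
    return out

def quic_deliverable(arrived):
    # per-stream ordering: only stream "A" waits for its missing segment
    return sorted(seq for seq, stream in arrived if stream != "A" or seq < 2)

print(tcp_deliverable(arrived))   # [1] -- every stream blocked by the hole
print(quic_deliverable(arrived))  # [1, 3, 4, 5] -- only stream A stalls
```

This is the TCP head-of-line blocking that made HTTP/2-over-TCP worse than HTTP/1.1 in lossy conditions, and which QUIC's independent streams avoid.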
I'm sure there's hard research on it. Anyone know some good papers?
There's also a bunch from Google at various stages of QUIC development.
But since the current NAT-infested Internet requires smuggling SCTP over UDP anyway, you may as well go all out and use QUIC, which collapses the various protocol layers into a single, more efficient protocol.
Oh, and connection migration (wifi to cell), but that's a bit more tricky.
That way I have a place for high-security low-bandwidth text to go if I ever need it.
Our paper, in case you want to read more on this area: https://petsymposium.org/2021/files/papers/issue2/popets-202...
(Recently had to deal with a client who said nothing about MITMing our HTTPS, and a coworker who then installed a root cert for them anyway... really pissed me off.)
How exactly does widespread adoption of QUIC destroy decentralization?
Is it any different from widespread adoption of HTTPS, e.g. with browsers becoming increasingly discouraging towards unencrypted HTTP?
I'm not the one you're asking, but I think it's not, and I have the same concerns with HTTPS too.
It could be replaced, but it would take years. In many ways, cert authorities are more decentralized than ICANN.
This isn't to say we shouldn't worry about these single points of failure, rather, I think it's just an exaggeration of QUIC's impacts on decentralization.
Before widespread TLS adoption, your ISP could (and did!) inject ads into arbitrary websites. Now they can't. That's a good thing. It is actually amazing how little power your ISP has to tamper with your network traffic -- all they can do is shut it off completely.
My point is, switching from HTTP/1.1 to QUIC doesn't really make a difference here. The battle against centralized control of parts of the Internet has been lost and no protocol spec is going to change that outcome. Instead, there needs to be major innovation from the camp that wants to fundamentally change the world.
It's not like switching from WhatsApp to Telegram, where you have to go from one hermetic silo to another.
Can you elaborate on to why QUIC different from HTTPS-only in that regard?
So the reason you want certificates from the Web PKI for public-facing servers is only that those are widely trusted for identity, whereas that "XKCD comic" idea is clearly untrustworthy and doesn't identify much of anything.