... which in practice means that you would actually be able to use QUIC with the most widely deployed client implementation (which might in fact be the only client implementation that's not a development build right now).
IETF QUIC doesn't really exist yet, so offering it instead of Google's QUIC, which already ships in Chrome, would provide next to no benefit to anybody.
However, I do hope, or rather expect, Google to move both Chrome and the load balancers to IETF QUIC very quickly once it is finalized.
Until then, I keep my pitchfork safely stored in its closet, because what is being offered here is certainly the practical option, while the "correct" option would be completely impractical.
At least with SPDY it was clearer that SPDY was something different from HTTP/2.
In many enterprise networks, TLS MitM boxes rule QUIC out completely. I'm not aware of any middleboxes that support it. Never mind that even on many guest networks, I see udp/53 hijacked and all other UDP dropped.
From a technical perspective, HTTP/2 makes little sense when you're commingling requests from different user agents. Header compression is ineffective. Persistent connections provide little benefit, as keep-alive works just as well in a data center environment. Between servers there is no connection limit (as there is with browsers), so there aren't really head-of-line blocking issues that would go away by using HTTP/2.
In general, you're unlikely to benefit materially from using HTTP/2 (versus vanilla HTTPS) between your edge servers and your application servers.
Servers don't have unlimited connections, so HTTP/2 provides more throughput and concurrency over the same TCP connection pool, while also preventing a slow request from affecting all the other requests queued on the same connection. HPACK compression operates on individual headers, and indexing the header keys alone can save significant overhead for smaller payloads, on top of the gains from the binary framing.
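To make that concrete, here's a minimal sketch in Go (the backend host is made up, and it assumes golang.org/x/net/http2 is available and the backend speaks HTTP/2): many concurrent requests to the app server are multiplexed as streams over a single TCP connection, instead of each one holding a connection from the pool.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "sync"

        "golang.org/x/net/http2"
    )

    func main() {
        // One HTTP/2 transport: all requests to the same backend are
        // multiplexed as streams over a single TLS+TCP connection.
        client := &http.Client{
            Transport: &http2.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }

        var wg sync.WaitGroup
        for i := 0; i < 20; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                resp, err := client.Get("https://app.internal:8443/item")
                if err != nil {
                    fmt.Println("request failed:", err)
                    return
                }
                resp.Body.Close()
                fmt.Println("stream", i, "status", resp.StatusCode)
            }(i)
        }
        wg.Wait()
    }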
It's no different than building an HTTP/2 (or HTTP/QUIC) to HTTP/1 gateway. Both sides are HTTP APIs.
The statefulness is at the lower level, where streams/requests are multiplexed. However, that doesn't need to be conveyed or transferred during proxying.
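A minimal sketch of such a gateway in Go (the backend address and cert paths are placeholders): the public side speaks HTTP/2, the upstream side plain HTTP/1.1, and each HTTP/2 stream simply becomes an independent HTTP/1 request, so no stream state has to cross the boundary.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Upstream is a plain HTTP/1.1 app server.
        backend, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // ListenAndServeTLS negotiates HTTP/2 with clients by default, so each
        // incoming h2 stream is forwarded as an ordinary HTTP/1 request.
        log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
    }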
But see https://datatracker.ietf.org/wg/httpbis/charter/
"It is expected that HTTP/2.0 will: ... Address the "head of line blocking" problem in HTTP."
Disclaimer: I am a longtime (satisfied) HTTP pipelining user for retrieving multiple resources from a single domain over a single connection, and a CurveCP user since 2011 on my own network. I've never had any problems with HOL blocking that I am aware of, but I am only a dumb end user and wish to learn what I might be missing.
In HTTP/2, it's all multiplexed - so the server can interleave responses.
The authors instead talk of the analogous problem at the TCP level, which I don't remember enough of TCP to comment on.
An HTTP/2 connection allows multiple assets to be delivered simultaneously and out of order on the same TCP connection, but the underlying packets in the TCP connection can still be held up by in-order delivery: one lost packet stalls every stream behind it.
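To illustrate the application-level half of that, a minimal sketch in Go (the host and the /slow and /fast paths are made up): two requests share one HTTP/2 connection and the fast response isn't held behind the slow one at the HTTP layer, while a lost packet on the shared TCP connection would still stall both.

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func fetch(client *http.Client, path string, done chan<- string) {
        start := time.Now()
        resp, err := client.Get("https://example.test" + path)
        if err != nil {
            done <- path + ": " + err.Error()
            return
        }
        resp.Body.Close()
        done <- fmt.Sprintf("%s finished after %v", path, time.Since(start))
    }

    func main() {
        client := &http.Client{} // Go negotiates HTTP/2 over TLS by default
        done := make(chan string, 2)
        go fetch(client, "/slow", done) // imagine the server sleeps before answering
        go fetch(client, "/fast", done)
        fmt.Println(<-done) // typically /fast, despite sharing one connection
        fmt.Println(<-done)
    }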
EDIT: oh come on down voters. This comment is stating a fact. What's your problem with it?
It would be interesting to see how it plays out.
What I'm worried about is that, in the QUIC case, the ISPs do have a valid reason to traffic-shape UDP, at least in the name of keeping the network usable under load. There is too much uncontrollable UDP traffic that ISPs can do little about other than just dropping it altogether.
Overbooking your network to the point that it is regularly overloaded is not a valid reason to drop packets of paying customers. You have to either kick out customers or expand your network.
> There is too much uncontrollable UDP traffic that ISPs can do little about other than just dropping it altogether.
Well, as for DDoS and the like: tough luck? They could have abstained from abusing their power to inspect traffic, and probably no one would be working on making inspection impossible. They thought it was a good idea to abuse that power; now the power is being taken from them, and that's their problem to deal with.
For example, my mobile plan and fiber broadband are from the same ISP, yet they appear to run on two different backbones. The mobile network rarely experiences traffic shaping, even at peak hours. My guess is that because mobile generates higher revenue, it gets better infrastructure. The fiber broadband is likely run at a loss, so it gets de-prioritized.
If that's the case, it would be difficult for QUIC to get traction on desktops.
The bigger problem I see is whether websites will deploy it ...
QUIC also adapts to packet loss, but I guess it's that other fire-and-forget-style UDP traffic that gets dropped, taking QUIC down as a casualty.
You are sneaking in an opinion: "unfortunate".
A protocol that implements congestion control + multiplexing + encryption over UDP is excellent progress over TCP. It should have happened decades ago.
QUIC is a characteristic Google standard -- dictated, poorly thought through, and destructive for the rest of the web. See e.g. https://blog.apnic.net/2018/01/29/measuring-quic-vs-tcp-mobi...
"our experiments showed that QUIC always consumes more than half of the bottleneck bandwidth, even as the number of competing TCP flows increases."
QUIC improves speed for high-latency connections (at the cost of cohoused TCP flows, as above), but is worse than no change for the reality of poor connectivity on the ground, which has highly variable latency, packet loss and/or duplication, and (sometimes profoundly) asymmetric routing.
I don't know enough about QUIC to say whether it's good or bad, but the standardization process by which SPDY evolved into HTTP/2 seemed good to me: there was a lot of external feedback that was incorporated and addressed, and the protocol evolution and standardization happened over a reasonable timeline. Do you see a problem with how QUIC is being developed by the IETF?
TCP also uses CUBIC. So you could just as well already have been a dick by configuring TCP's CUBIC to be more aggressive.
Lots of research has gone into an optimal configuration of CUBIC for TCP, under the presumption that everyone will use this implementation, and therefore with the goal of optimal results for everyone.
QUIC now falls out of line because early adopters get better results, as they can unfairly compete against the fair TCP implementation. In the long term, however, if QUIC is adopted by everyone, this will lead to a worse result for everyone, because the optimal configuration that was researched for TCP has been thrown out for no good reason.
At least I'm not aware of a good reason. Maybe QUIC is actually somewhat less affected by dropped packets or something like that, and does work better with everyone just trying to squeeze everything through aggressively, but even if that's the case, I can't imagine it being true to the degree that this CUBIC configuration has been made more aggressive.
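For what it's worth, that knob already exists for TCP today. A minimal sketch in Go on Linux (using golang.org/x/sys/unix; the target host and the choice of algorithm are just for illustration): the congestion control algorithm can be picked per socket via TCP_CONGESTION, and CUBIC's own knobs are exposed as kernel module parameters (e.g. under /sys/module/tcp_cubic/parameters), so aggressive behavior was never something only a user-space stack like QUIC could enable.

    package main

    import (
        "fmt"
        "net"

        "golang.org/x/sys/unix"
    )

    func main() {
        conn, err := net.Dial("tcp", "example.com:80")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        raw, err := conn.(*net.TCPConn).SyscallConn()
        if err != nil {
            panic(err)
        }
        raw.Control(func(fd uintptr) {
            // Pick the congestion control algorithm for this socket; "cubic"
            // is usually the default, and anything the kernel has loaded
            // (e.g. "bbr") can be selected here.
            if err := unix.SetsockoptString(int(fd), unix.IPPROTO_TCP,
                unix.TCP_CONGESTION, "bbr"); err != nil {
                fmt.Println("setsockopt failed:", err)
            }
        })
    }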
It's an interesting point, though, that moving to UDP and doing congestion control at higher levels, in user space, makes it easier for applications to be overly aggressive, and there's some incentive for them to do so. A clear downside of how frozen the lower levels of the stack have become, in some ways...