Introducing QUIC support for HTTPS load balancing (googleblog.com)
170 points by Sami_Lehtinen 11 days ago | 59 comments





Quicly is another interesting repo, developed by the same author who built the H2O HTTP server (https://github.com/h2o/h2o/) and is now at Fastly.

https://github.com/h2o/quicly


Because QUIC + Fastly = Quicly. Genius.

Note that this seems to be gQUIC (the Google-internal variant of QUIC that Google has been using for a few years), and is thus not interoperable with most non-Google QUIC implementations, which follow the IETF drafts.

> most non-Google QUIC implementations, which follow the IETF drafts

... which in practice means that you would be able to actually use QUIC with the most widely deployed client implementation (which might in fact be the only client implementation that's not a development build right now).

IETF QUIC doesn't really exist yet, so offering it instead of Google's QUIC, which ships in Chrome, would provide next to no benefit to anybody.

However, I do hope, or rather expect, Google to very quickly move both Chrome and the load balancers to IETF QUIC once that is finalized.

Until then, I'll keep my pitchfork safely stored in its closet, because what is being offered here is the practical option, and the "correct" option would be completely impractical.


Similarly, SPDY was not compatible with HTTP/2 before the standard was finalized, but then Google quickly switched to the standard once it was available. I don't think there is too much to worry about there.

I don't necessarily disagree with all that; however, I do think it would have been good for Google to clarify in their announcement and documentation that they don't mean standards-track QUIC here, or more generally, which version of QUIC they mean.

At least with SPDY it was clearer that SPDY was something different from HTTP/2.


I'm afraid to enable this. When I had AT&T Business Internet (maybe irrelevant), we frequently got QUIC protocol errors for many Google sites like YouTube (see others crying for help in [1]). My fix was to disable QUIC in Chrome. I don't want our users to hit that as it's not something a normal user could easily diagnose and remedy. No idea how common it is.

[1] https://support.google.com/chrome/forum/AAAAP1KN0B0xKFfhDvhj...


From a client perspective, QUIC seems like it would provide the most value on mobile devices. Any word on how well supported this is on Android clients? I'm assuming iOS adoption will take some time.

In many enterprise networks, TLS MITM boxes rule QUIC out completely. I'm not aware of any middleboxes that support it. Never mind that even on many guest networks, I see udp/53 hijacked and all other UDP dropped.


HTTP/1.1 to the backend? How about HTTP/2 so people can run gRPC services straight through without translations?


To be fair, the majority of benefits for HTTP/2 and QUIC are for the browser and are experienced at the edge of your service. That's not to say having the option to pipe HTTP/2 to your gRPC servers wouldn't be nice, but the majority I've seen are using HTTP/1.1 at their application layer, and what happens from there is a matter between consenting servers.

If QUIC allows arbitrary-length datagrams and retransmits them out of order, then it's still going to be useful for video games and other systems with long-lived connections. This is how I imagine a replacement for TCP and UDP should work. But it doesn't look like there are standalone implementations of QUIC, so I have no idea what the API actually looks like. There is only Google's implementation, which is part of Chromium.

There are a few. This page lists the IETF QUIC implementations, and the gQUIC implementations (which is what's supported here) at the bottom. https://github.com/quicwg/base-drafts/wiki/Implementations

The counterpoint to that would be that, inside Google, it's all gRPC or Stubby.

I counter your counter with this being neither an internal tool nor gRPC/Stubby having even a plurality of usage outside of Google.

HTTP/2 would still be great for upstreams to ensure HTTPS and make things much more efficient via compression and multiplexing. It makes a difference in higher-scale situations.

HTTP/2 as a spec doesn't demand HTTPS. Only browsers enforce this requirement.

From a technical perspective, HTTP/2 makes little sense when you're commingling requests from different user agents. Header compression is ineffective. Persistent connections provide little benefit, as keep-alive works just as well in a data-center environment. Between servers, there is no connection limit (as there is with browsers), so there aren't really head-of-line blocking issues that would go away by using HTTP/2.

In general, you're unlikely to benefit materially from using HTTP/2 (versus vanilla HTTPS) between your edge servers and your application servers.


Yes, with low traffic it won't matter, but "it makes a difference in higher-scale situations." We benefit materially when doing billions of daily requests.

Servers don't have unlimited connections, so HTTP/2 provides more throughput and concurrency over the same TCP connection pool, while also preventing any slow requests from blocking all other requests queued on the same connection. HPACK compression operates on individual headers, and saving the repeated keys alone can cut significant overhead for smaller payloads, as can the binary framing.
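
To put rough numbers on the header-key point, here's a small sketch using the standalone Python hpack package (the header names, values, and sizes are made up for illustration; this isn't from any particular server's stack):

    # Once the HPACK dynamic table is warm, repeated headers shrink to
    # short table references. Sizes printed here are illustrative only.
    from hpack import Encoder

    headers = [
        (":method", "GET"),
        (":path", "/api/v1/items"),
        (":scheme", "https"),
        ("user-agent", "example-client/1.0"),
        ("accept", "application/json"),
        ("x-request-id", "abc123"),
    ]

    encoder = Encoder()
    first = encoder.encode(headers)    # first request fills the dynamic table
    second = encoder.encode(headers)   # repeat request is mostly table references
    plain = sum(len(k) + len(v) + 4 for k, v in headers)  # rough HTTP/1.1-style size

    print(f"plain ~{plain} bytes, first HPACK {len(first)}, repeat {len(second)}")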


A QUIC to h2 adapter is probably very complex to implement in a fully compliant way since both protocols are highly stateful. gRPC has a Cronet transport which again has experimental support for QUIC. I don't think it's reached production level yet...

It shouldn't be that complex. On top of both protocols are HTTP semantics. You get a request, which has some headers in each direction and then contains a bidirectional stream. You forward both in each direction.

It's no different than building an HTTP/2 (or HTTP/QUIC) to HTTP/1 gateway. On both sides are HTTP APIs.

The statefulness is on the lower level, where streams/requests are multiplexed. However, that doesn't need to be conveyed or transferred during proxying.
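
As a rough sketch of that idea with plain HTTP/1.1 on both legs (the backend address is hypothetical, and a real gateway would terminate QUIC or h2 in front instead of the stdlib HTTP/1.1 server used here; chunked responses and hop-by-hop headers are glossed over):

    # Only headers and a body cross the boundary; none of the per-connection
    # stream state of the front-end protocol has to be forwarded.
    import http.client
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BACKEND = ("127.0.0.1", 8081)  # hypothetical upstream address

    class Gateway(BaseHTTPRequestHandler):
        def _forward(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length) if length else None

            upstream = http.client.HTTPConnection(*BACKEND)
            upstream.request(self.command, self.path, body, dict(self.headers))
            resp = upstream.getresponse()

            self.send_response(resp.status)
            for name, value in resp.getheaders():
                if name.lower() not in ("transfer-encoding", "connection"):
                    self.send_header(name, value)
            self.end_headers()
            self.wfile.write(resp.read())
            upstream.close()

        do_GET = do_POST = do_PUT = do_DELETE = _forward

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Gateway).serve_forever()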


You're basically describing attaching an h1 to h2 adapter behind the existing QUIC to h2 adapter with HTTP/1 as the intermediary protocol, which is a simplification for both sides. Of course that will not be that complex. But the downside is this approach limits both protocols from reaching their full potential by having to operate at a higher level.

gRPC over QUIC has been in use by some Google apps in production for over a year :)

"But HTTP/2 uses TCP as its transport, so all of its streams can be blocked when a single TCP packet is lost-a problem called head-of-line blocking."

But see https://datatracker.ietf.org/wg/httpbis/charter/

"It is expected that HTTP/2.0 will: ... Address the "head of line blocking" problem in HTTP."

Disclaimer: I am a longtime (satisfied) HTTP pipelining user for retrieving multiple resources from a single domain over a single connection, and a CurveCP user since 2011 on my own network. I've never had any problems with HOL blocking that I am aware of, but I am only a dumb end user and wish to learn what I might be missing.


Different kind of head-of-line blocking. In HTTP it shows up with pipelining, a technique where a client sends multiple requests without waiting for their respective responses. However, the server has to send the responses in order, so if the client makes request A and then request B, the response for B can't be delivered until A's response is delivered. So if A's response is large or slow, B will be delayed.

In HTTP/2, it's all multiplexed - so the server can interleave responses.
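
For illustration, pipelining over a raw socket looks roughly like this (the paths are made up; the point is just that both requests go out at once but the responses must come back in request order):

    # Two pipelined HTTP/1.1 requests on one connection. A slow /big response
    # would delay /small behind it: the HTTP-level head-of-line blocking above.
    import socket

    req = (
        b"GET /big HTTP/1.1\r\nHost: example.com\r\n\r\n"
        b"GET /small HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
    )

    with socket.create_connection(("example.com", 80)) as sock:
        sock.sendall(req)  # both requests go out immediately
        data = b""
        while chunk := sock.recv(4096):
            data += chunk

    # data holds the /big response first, then the /small response,
    # regardless of which one the server finished generating first.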


My understanding is that this was head-of-line blocking "at the HTTP level": when multiple HTTP/1.1 requests shared a TCP connection, the responses had to come in order, so if the first response was slow, all the others were blocked behind it.

The authors instead talk of the analogous problem at the TCP level, which I don't remember enough of TCP to comment on.


HTTP/2 is a higher level protocol that runs on top of the TCP protocol.

An HTTP/2 connection allows multiple assets to be delivered simultaneously and out of order on the same TCP connection, but the underlying packets in the TCP connection can still be held up by ordering.


It is about TCP head-of-line blocking. For more, check https://hpbn.co/building-blocks-of-tcp/#head-of-line-blockin...

Head-of-line blocking in HTTP and TCP are different issues. The latter impacts HTTP/2 as well, and it's an important selling point for QUIC.

How does QUIC work together with µTP/LEDBAT? Also is there any plan to implement this type of congestion control in QUIC? Would it make sense? It could certainly be useful for background synchronization in PWAs.

The unfortunate fact of QUIC is that it's based on encrypted UDP packets, which many ISP routers will happily drop during peak traffic/high load periods.

EDIT: oh come on down voters. This comment is stating a fact. What's your problem with it?


The fortunate fact of QUIC is that it's based on encrypted UDP packets, so ISPs will have to adapt to the fact that not randomly dropping encrypted UDP packets is essential to their users' experience.

It's a chicken-and-egg problem. For many bad ISPs to care about this problem, QUIC adoption has to be high enough to cause mass user complaints. Yet QUIC's mass adoption requires it to provide a better user experience before ISPs change their stupid policies.

It would be interesting to see how it plays out.


Clearly Google is willing to move forward with it regardless of the potential for a diminished experience for some customers. And Google has more weight than most ISPs involved here. Furthermore they have shown before that they are not afraid to direct the blame right at ISPs for lagging behind in situations like this.

I'm very grateful that Google puts its weight behind many of the recent improvements to the Internet, even if some would argue that it's abusing its dominant market position.

What I'm worried about is that, in the QUIC case, the ISPs do have a valid reason to traffic shape UDP, at least in the name of making the network usable under load. There is too much uncontrollable UDP traffic that ISPs can do little about other than just dropping it altogether.


> What I'm worried about is that, in the QUIC case, the ISPs do have a valid reason to traffic shape UDP, at least in the name of making the network usable under load.

Overbooking your network to the point that it is regularly overloaded is not a valid reason to drop packets of paying customers. You have to either kick out customers or expand your network.

> There is too much uncontrollable UDP traffic that ISPs can do little about other than just dropping it altogether.

Well, as for DDoS and the like: tough luck? They could have abstained from abusing their power to inspect traffic, and probably no one would be working on making inspection impossible. They thought it was a good idea to abuse the power, now the power is being taken from them, and that's their problem to deal with.


Well, yeah, to a degree. But then, it probably provides a better user experience most of the time, and especially on mobile networks, so that might be enough to gain adoption.

True, I do hope that mobile roaming (and switching between WiFi and mobile) would be the killer app for QUIC. But AFAIK many ISPs treat mobile network differently from home broadband.

For example, my mobile plan and fiber broadband are from the same ISP, yet they appear to run on two different backbones. The mobile network rarely experiences traffic shaping, even at peak hours. My guess is that because mobile generates higher revenue, it gets better infrastructure. The fiber broadband is likely run at a loss, so it gets de-prioritized.

If that's the case, it would be difficult for QUIC to get traction on desktops.


Well, QUIC already has traction on desktops, as Chrome supports it?

The bigger problem I see is whether websites will deploy it ...


Pure speculation, but maybe most TCP implementations are ECN-aware while custom UDP transports aren't, thus making the former more responsive to congestion without forcing the routers to drop packets?

Even without ECN, routers can indirectly slow down sending sides by dropping TCP packets because most TCP implementations behave politely.

QUIC also adapts to packet loss, but I guess it's the other fire-and-forget-style UDP traffic that gets dropped, taking QUIC down as a casualty.


> This comment is stating a fact

You are sneaking in an opinion: "unfortunate".

A protocol that implements congestion control + multiplexing + encryption over UDP is excellent progress over TCP. It should have happened decades ago.


But should disagreement be expressed as downvotes?

No, according to policy, but people don't seem to care.

Actually yes, this has been determined to be OK here.

HN doesn't really have a policy about what downvotes are for, and the community is clearly split about it.

It's "unfortunate" because ISPs drop UDP packets. How else would you phrase it?

This sounds like an urban legend. What fraction of Internet traffic is non-congestion-controlled UDP? VoIP and gaming are pretty low bandwidth.

Given that it retransmits and TCP will also be suffering (slightly less?) during such periods, it doesn't seem like a show stopper.

Depends on the ISP's traffic policy. I've witnessed many times where UDP packet loss was over 50% while TCP loss was below 10%.

If you are an ISP and you're preventing people using Google Chrome from accessing Google's properties (like YouTube and Gmail) during peak hours because you're dropping UDP packets, then you'll hear from your customers.

If UDP drops too much, Chrome will almost certainly switch back to TCP anyway.

UDP is the stepchild of protocols. Many devices drop it first during congestion. It is not the peer of TCP in that regard.

You're getting downvotes because you aren't proposing anything. Your statement may very well be true, but what else is there to do? Stick with TCP forever and not try to innovate?

I'm not proposing anything because I don't have a solution. If pointing out an obvious and real problem deserves a downvote, I'm not sure what the point of this place is after all.

He is stating a valid concern that wasn't being voiced. So while we don't have a solution to the problem at hand, it helps us think twice before using it.

QUIC makes the web faster, particularly for slow connections

QUIC is a characteristic Google standard -- dictated, poorly thought through, and destructive for the rest of the web. See e.g. https://blog.apnic.net/2018/01/29/measuring-quic-vs-tcp-mobi... "our experiments showed that QUIC always consumes more than half of the bottleneck bandwidth, even as the number of competing TCP flows increases."

QUIC improves speed for high-latency connections (at the cost of cohoused TCP flows, as above), but is worse than no change for the reality of poor connectivity on the ground, which has highly variable latency, packet loss and/or duplication, and (sometimes profoundly) asymmetric routing.


Criticizing QUIC because of its interaction with existing TCP flows is tricky, because the most efficient protocol isn't necessarily the one that interoperates best with existing protocols. There have been a lot of iterations on TCP congestion control algorithms that improve bandwidth/latency of connections under the new algorithm to the detriment of legacy TCP connections.

I don't know enough about QUIC to say whether it's good or bad, but the standardization process by which SPDY evolved into HTTP/2 seemed good to me: there was a lot of external feedback that was incorporated and addressed, and the protocol evolution and standardization happened over a reasonable timeline. Do you see a problem with how QUIC is being developed by the IETF?


As the article that the guy linked to points out, this unfairness is due to an aggressive configuration of the CUBIC congestion control algorithm in QUIC.

TCP also uses CUBIC. So, you would have just as well already been able to be a dick by configuring your CUBIC in TCP to be more aggressive.

Lots of research has gone into an optimal configuration of CUBIC for TCP, under the presumption that everyone will use this implementation, and therefore with the goal of optimal results for everyone.

QUIC now falls out of line because early adopters get better results, as they can unfairly compete against the fair TCP implementations. In the long term, however, if QUIC is adopted by everyone, this will lead to a worse result for everyone, because the optimal configuration that was researched under TCP has been thrown out for no good reason.

At least, I'm not aware of a good reason. Maybe QUIC is actually somewhat less impacted by dropped packets or something like that, and does work better with everyone just trying to squeeze everything through aggressively, but even if that is the case, I cannot imagine it being true to the degree that this CUBIC configuration has been made more aggressive.
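
For reference, the CUBIC curve itself (RFC 8312) is W_cubic(t) = C*(t-K)^3 + W_max with K = cbrt(W_max*(1-beta)/C), and the standard constants are C = 0.4 and beta = 0.7. Here's a toy sketch of how bumping C (a hypothetical value, not necessarily what Chrome actually uses) makes a sender ramp back up faster after a loss:

    # Toy illustration of the CUBIC window growth curve from RFC 8312.
    # The "aggressive" run uses a larger, made-up C purely for comparison.
    def cubic_window(t, w_max, c=0.4, beta=0.7):
        k = (w_max * (1 - beta) / c) ** (1 / 3)
        return c * (t - k) ** 3 + w_max

    w_max = 100.0  # congestion window (in segments) before the last loss
    for t in (0, 1, 2, 4, 8):
        standard = cubic_window(t, w_max)
        aggressive = cubic_window(t, w_max, c=1.0)  # hypothetical constant
        print(f"t={t}s  standard={standard:7.1f}  aggressive={aggressive:7.1f}")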


To be fair, from my understanding that's "aggressive configuration of CUBIC in Chrome", not something dictated by the protocol.

It's an interesting point, though, that the move to UDP and doing congestion control at higher levels, in user space, makes it easier for applications to be overly aggressive, and there's some incentive for them to do so. A clear downside of how frozen the lower levels of the stack have become, in some ways...



