
Introducing QUIC support for HTTPS load balancing - Sami_Lehtinen
https://cloudplatform.googleblog.com/2018/06/Introducing-QUIC-support-for-HTTPS-load-balancing.html
======
proyb2
Quicly is another interesting repo, developed by the same author who
built the H2O HTTP server
([https://github.com/h2o/h2o/](https://github.com/h2o/h2o/)) and is now at Fastly.

[https://github.com/h2o/quicly](https://github.com/h2o/quicly)

~~~
wmf
Because QUIC + Fastly = Quicly. Genius.

------
dochtman
Note that this seems to be gQUIC (the Google-internal variant of QUIC that
Google has been using for a few years), and is thus not interoperable with
most non-Google QUIC implementations, which follow the IETF drafts.

~~~
pilif
> most non-Google QUIC implementations, which follow the IETF drafts

... which in practice means that you would actually be able to use QUIC with
the most widely deployed client implementation (which might in fact be the
_only_ client implementation that's not a development build right now).

IETF QUIC doesn't really exist yet, so offering it instead of Google's QUIC,
which does exist in Chrome, would provide next to no benefit to anybody.

However, I do hope, or rather expect, Google to very quickly move both Chrome
and the load balancers to IETF QUIC once that is finalized.

Until then, I'll keep my pitchfork safely stored in its closet, because what
is being offered here is the practical option, and the "correct" offering
would be completely impractical.

~~~
delroth
Similarly, SPDY was not compatible with HTTP/2 before the standard was
finalized, but Google quickly switched to the standard once it became
available. I don't think there is too much to worry about there.

------
zbjornson
I'm afraid to enable this. When I had AT&T Business Internet (maybe
irrelevant), we frequently got QUIC protocol errors for many Google sites like
YouTube (see others crying for help in [1]). My fix was to disable QUIC in
Chrome. I don't want our users to hit that as it's not something a normal user
could easily diagnose and remedy. No idea how common it is.

[1]
[https://support.google.com/chrome/forum/AAAAP1KN0B0xKFfhDvhj...](https://support.google.com/chrome/forum/AAAAP1KN0B0xKFfhDvhjU8/?hl=en)

------
souterrain
From a client perspective, QUIC seems like it would provide the most value on
mobile devices. Any word on how well supported this is on Android clients? I'm
assuming iOS adoption will take some time.

In many enterprise networks, TLS MitM boxes rule QUIC out completely; I'm not
aware of any middleboxes that support it. Never mind that even on many guest
networks I see udp/53 hijacked and all other UDP dropped.

------
puzzle
HTTP/1.1 to the backend? How about HTTP/2 so people can run gRPC services
straight through without translations?

~~~
seangrogg
To be fair, the majority of the benefits of HTTP/2 and QUIC are for the
browser and are experienced at the edge of your service. That's not to say
having the option to pipe HTTP/2 to your gRPC servers wouldn't be nice, but
the majority of services I've seen use HTTP/1.1 at the application layer, and
what happens from there is a matter between consenting servers.

~~~
manigandham
HTTP/2 would still be great for upstreams to ensure HTTPS and make things much
more efficient via compression and multiplexing. It makes a difference in
higher-scale situations.

~~~
bastawhiz
HTTP/2 as a spec doesn't demand HTTPS. Only browsers enforce this requirement.

From a technical perspective, HTTP/2 makes little sense when you're
commingling requests from different user agents. Header compression is
ineffective, and persistent connections provide little benefit, since
keep-alive works just as well in a data center environment. Between servers
there is no connection limit (as there is in browsers), so there aren't
really head-of-line blocking issues that HTTP/2 would make go away.

In general, you're unlikely to benefit materially from using HTTP/2 (versus
vanilla HTTPS) between your edge servers and your application servers.

~~~
manigandham
Yes, with low traffic it won't matter, but " _it makes a difference in higher-
scale situations_." We benefit materially when doing billions of daily
requests.

Servers don't have unlimited connections, so HTTP/2 provides more throughput
and concurrency over the same TCP connection pool, while also preventing slow
requests from holding up all the other requests queued on the same
connection. HPACK compression operates on individual headers, and eliminating
repeated keys alone cuts significant overhead for smaller payloads, on top of
the binary framing.
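
The header-overhead point can be sketched with a toy model (not a real HPACK codec; the byte costs and header values below are illustrative assumptions): once a header is in the connection's table, HPACK can send a roughly one-byte index instead of the full key and value on every request.

```python
# Toy model (NOT a real HPACK codec) of why header compression matters
# for small payloads. Illustrative cost assumptions: a header already in
# the table costs 1 byte (an index reference); a new one costs a 1-byte
# prefix plus its literal key and value.

def http1_header_bytes(headers):
    # HTTP/1.1 resends every header as "Key: Value\r\n" on every request.
    return sum(len(k) + 2 + len(v) + 2 for k, v in headers.items())

def hpack_like_bytes(headers, table):
    cost = 0
    for entry in headers.items():
        if entry in table:
            cost += 1                      # indexed: tiny reference
        else:
            k, v = entry
            cost += 1 + len(k) + len(v)    # literal, then remember it
            table.add(entry)
    return cost

headers = {                                # hypothetical header values
    "user-agent": "loadbalancer/1.0",
    "accept-encoding": "gzip",
    "x-forwarded-proto": "https",
    "host": "backend.internal",
}

table = set()
first = hpack_like_bytes(headers, table)   # first request warms the table
repeat = hpack_like_bytes(headers, table)  # later requests: all indexed
print(http1_header_bytes(headers), first, repeat)
```

Multiplied by billions of requests, the gap between resending full headers and sending small index references is the kind of overhead being described.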

------
textmode
"But HTTP/2 uses TCP as its transport, so all of its streams can be blocked
when a single TCP packet is lost, a problem called head-of-line blocking."

But see
[https://datatracker.ietf.org/wg/httpbis/charter/](https://datatracker.ietf.org/wg/httpbis/charter/)

"It is expected that HTTP/2.0 will: ... Address the "head of line blocking"
problem in HTTP."

Disclaimer: I am a longtime (satisfied) HTTP pipelining user for retrieving
multiple resources from a single domain over a single connection, and a
CurveCP user since 2011 on my own network. I've never had any problems with
HOL blocking that I'm aware of, but I am only a dumb end user and wish to
learn what I might be missing.

~~~
charleslmunger
Different kind of head-of-line blocking. In HTTP/1.1 it refers to pipelining,
a technique where a client sends multiple requests without waiting for their
respective responses. The server has to send the responses in order, so if
the client makes request A and then request B, the response to B can't be
delivered until A's response is complete. If A's response is large, B gets
delayed.

In HTTP/2, it's all multiplexed, so the server can interleave responses.
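
The difference can be sketched with a toy timing model (illustrative only, not real HTTP): pipelined responses complete strictly in request order, while interleaved frames let a small response finish well ahead of a large one.

```python
# Toy timing model (not real HTTP) of head-of-line blocking. Each
# response takes `cost` whole time units to transmit.

def pipelined_completion(durations):
    # HTTP/1.1 pipelining: responses go out strictly in request order.
    finished, clock = [], 0
    for name, cost in durations:
        clock += cost
        finished.append((name, clock))
    return finished

def multiplexed_completion(durations):
    # Crude model of HTTP/2 framing: round-robin one unit per active
    # stream, so small responses aren't stuck behind large ones.
    remaining = dict(durations)
    finished, clock = [], 0
    while remaining:
        for name in list(remaining):
            clock += 1
            remaining[name] -= 1
            if remaining[name] == 0:
                del remaining[name]
                finished.append((name, clock))
    return finished

reqs = [("A", 10), ("B", 1)]             # A is large/slow, B is tiny
print(pipelined_completion(reqs))        # B waits for all of A
print(multiplexed_completion(reqs))      # B finishes almost immediately
```

In the pipelined case B completes only after A's full transfer; with interleaving B completes after just a couple of units, which is the whole point of multiplexing.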

------
deno
How does QUIC work together with µTP/LEDBAT? Also is there any plan to
implement this type of congestion control in QUIC? Would it make sense? It
could certainly be useful for background synchronization in PWAs.

------
riobard
The unfortunate fact of QUIC is that it's based on encrypted UDP packets,
which many ISP routers will happily drop during peak traffic/high load
periods.

EDIT: oh come on, downvoters. This comment is stating a fact. What's your
problem with it?

~~~
eeZah7Ux
> This comment is stating a fact

You are sneaking in an opinion: "unfortunate".

A protocol that implements congestion control + multiplexing + encryption
over UDP is excellent progress over TCP. It should have happened decades ago.

~~~
madmulita
But should disagreement be expressed as downvotes?

~~~
eeZah7Ux
No, according to policy, but people don't seem to care.

~~~
manigandham
Actually yes, this has been determined to be OK here.

------
dogecoinbase
_QUIC makes the web faster, particularly for slow connections_

QUIC is a characteristic Google standard -- dictated, poorly thought through,
and destructive for the rest of the web. See e.g.
[https://blog.apnic.net/2018/01/29/measuring-quic-vs-tcp-mobi...](https://blog.apnic.net/2018/01/29/measuring-quic-vs-tcp-mobile-desktop/):
"our experiments showed that QUIC always consumes more than half of the
bottleneck bandwidth, even as the number of competing TCP flows increases."

QUIC improves speed for high-latency connections (at the cost of cohoused TCP
flows, as above), but is worse than no change for the reality of poor
connectivity on the ground, which has highly variable latency, packet loss
and/or duplication, and (sometimes profoundly) asymmetric routing.

~~~
eklitzke
Criticizing QUIC because of its interaction with existing TCP flows is
tricky, because the most efficient protocol isn't necessarily the one that
interoperates best with existing protocols. There have been many iterations
on TCP congestion control algorithms that improve bandwidth/latency for
connections using the new algorithm, to the detriment of legacy TCP
connections.

I don't know enough about QUIC to say whether it's good or bad, but the
standardization process by which SPDY evolved into HTTP/2 seemed good to me:
there was a lot of external feedback that was incorporated and addressed, and
the protocol evolution and standardization happened over a reasonable
timeline. Do you see a problem with how QUIC is being developed by the IETF?

~~~
Sylos
As the article linked in the parent comment points out, this unfairness is
due to an aggressive configuration of the CUBIC congestion control algorithm
in QUIC.

TCP also uses CUBIC, so you could just as well have been a dick all along by
configuring your CUBIC in TCP to be more aggressive.

Lots of research has gone into an optimal configuration of CUBIC for TCP,
under the presumption that everyone will use this implementation, and
therefore with the goal of optimal results for everyone.

QUIC now falls out of line because early adopters get better results, as they
can unfairly compete against fair TCP implementations. In the long term,
however, if QUIC is adopted by everyone, this will lead to a worse result for
everyone, because the optimal configuration that was researched under TCP has
been thrown out for no good reason.

At least, I'm not aware of a good reason. Maybe QUIC is actually somewhat
less impacted by dropped packets or something like that, and does work better
with everyone just trying to squeeze everything through aggressively, but
even if that's the case, I can't imagine it holds to the degree that this
CUBIC configuration has been made more aggressive.
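
For reference, CUBIC's window-growth function from RFC 8312 shows where the "aggressiveness" knob lives: the constant C controls how quickly a flow climbs back after a loss. A quick sketch (the packet counts and the C=4.0 value below are invented for illustration, not anything Chrome is documented to ship):

```python
# CUBIC window growth as specified in RFC 8312:
#   W(t) = C * (t - K)^3 + W_max,   K = cbrt(W_max * (1 - beta) / C)
# After a loss, the window drops to beta * W_max and climbs back along a
# cubic curve, reaching W_max again at t = K. A larger C shrinks K, i.e.
# the flow reclaims bandwidth faster (more aggressively).

def cubic_window(t, w_max, c=0.4, beta=0.7):
    # c=0.4 and beta=0.7 are the RFC 8312 defaults.
    k = (w_max * (1 - beta) / c) ** (1 / 3)
    return c * (t - k) ** 3 + w_max

# Hypothetical comparison for w_max = 100 packets: c=4.0 is an invented
# "aggressive" setting to show the effect of the knob.
for c in (0.4, 4.0):
    k = (100 * (1 - 0.7) / c) ** (1 / 3)
    print(f"C={c}: window back to w_max after ~{k:.2f} time units")
```

The fairness question in the thread is essentially about where on that curve (and how steeply) an implementation chooses to sit relative to the stock TCP tuning.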

~~~
detaro
To be fair, from my understanding that's "aggressive configuration of CUBIC
in Chrome", not something dictated by the protocol.

It's an interesting point, though, that the move to UDP, with congestion
control done at higher levels in user space, makes it easier for applications
to be overly aggressive, and gives them some incentive to do so. A clear
downside of how frozen the lower levels of the stack have become in some
ways...

