
A Technology Preview of Nginx Support for QUIC and HTTP/3 - signa11
https://www.nginx.com/blog/introducing-technology-preview-nginx-support-for-quic-http-3/
======
baybal2
Question to those following quic/http3 development: what does it bring over
http2?

From my point of view, http3 is more of a sidegrade and has no net benefit
over http2.

We can make TCP fast, very fast, and the use of UDP was not a prerequisite for
any of its functionality.

The "multistream" functionality of QUIC is mainly to benefit companies with
big, highly loaded CDNs, and kind of obviates the economic point of squeezing
multiple virtual streams into a single TCP connection as used in http2.

By throwing away TCP, they are throwing away decades of optimisations, and
hardware offloading that network hardware makers made to handle TCP well.

If the goal was really about extracting single-digit improvements, I think it
would've made more sense to finally put SCTP and DCCP to good use.

~~~
drewg123
_By throwing away TCP, they are throwing away decades of optimisations, and
hardware offloading that network hardware makers made to handle TCP well_

Indeed. I work at Netflix on optimizing cpu efficiency on our Open Connect CDN
nodes, largely to reduce power use and capital expenses. We use FreeBSD, nginx
& TCP, and make heavy use of offloads like async sendfile(), TSO, LRO, kTLS
and more recently hardware kTLS offload.
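For readers unfamiliar with those offloads: sendfile() hands the file-to-socket copy to the kernel, so the payload never crosses into userspace, and kTLS extends that path by letting the kernel (or the NIC) encrypt in place. A minimal sketch of the zero-copy idea using Python's os.sendfile over a loopback connection (illustration only; the actual stack described above is FreeBSD/C):

```python
import os
import socket
import tempfile

# A stand-in for a media segment on disk.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 16384)
    path = f.name

# A loopback TCP connection standing in for a client.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.socket()
client.connect(listener.getsockname())
conn, _ = listener.accept()

with open(path, "rb") as src:
    # The kernel copies file pages straight into the socket buffer; the
    # payload never touches userspace. With kTLS, the kernel (or a NIC
    # with hardware kTLS offload) would also encrypt on this path.
    sent = os.sendfile(conn.fileno(), src.fileno(), 0, 16384)

buf = b""
while len(buf) < sent:
    buf += client.recv(65536)
os.unlink(path)
```

QUIC implementations currently do this work in userspace over UDP, which is where the efficiency gap below comes from.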

Right now, I have a single socket 32c/64t AMD Rome server delivering over
350Gb/s of real Netflix customer traffic. This traffic is all TLS encrypted,
and is served across hundreds of thousands of TCP connections.

From measurements we've done, current QUIC would cost about 3x as much as TCP
when using software crypto. So my back-of-the-envelope guess is that this box
would do about 77Gb/s with QUIC (230Gb/s is the limit when disabling hardware
TLS offload and using software crypto).
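The arithmetic behind that estimate, using the figures quoted above:

```python
# 230 Gb/s is this box's ceiling with TCP and software crypto, and the
# measurements put QUIC at roughly 3x the CPU cost of TCP when both use
# software crypto.
tcp_software_crypto_gbps = 230
quic_cost_vs_tcp = 3

quic_estimate_gbps = tcp_software_crypto_gbps / quic_cost_vs_tcp  # ~76.7 Gb/s
```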

Are the benefits of QUIC really worth a 4x increase in the amount of energy
required per stream?

Once QUIC has optimizations similar to TCP's in place, the story will obviously be
different. But we're not there yet.

~~~
Jonnax
Would you say that QUIC might not be worth it for video content, since that's
the transfer of large files over the network, whilst QUIC shines when you have
a lot of small assets that you want to fetch as quickly as possible?

Like could we have a system where we choose http 2 or 3 depending on the type
of data?

~~~
microcolonel
> _Would you say that QUIC might not be worth it for video content as it's
> the transferal of large files over the network. Whilst QUIC shines when you
> have a lot of small assets that you want to fetch as quickly as possible?_

The way video is generally served now is actually as a large number of
dynamically-selected chunks of the video and associated audio. QUIC makes
perfect sense for YouTube/Netflix/Vimeo type VOD, and especially the MPEG-DASH
style of streaming.
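To illustrate the chunked delivery: in MPEG-DASH the client requests short, numbered segments as independent HTTP fetches, choosing a bitrate per segment. A sketch of the SegmentTemplate-style URL expansion (the template and bitrate names here are made up for illustration, not any real service's layout):

```python
# DASH SegmentTemplate-style URL expansion (hypothetical template).
TEMPLATE = "video/$Bitrate$/seg_$Number$.m4s"

def segment_url(bitrate, number):
    """Expand the template for one short, independently fetched chunk."""
    return TEMPLATE.replace("$Bitrate$", bitrate).replace("$Number$", str(number))

# A client fetching consecutive segments, switching bitrate mid-stream
# based on measured throughput:
urls = [segment_url("1080p", 1), segment_url("1080p", 2), segment_url("720p", 3)]
```

Each of those small fetches is exactly the "lots of small assets" workload QUIC targets.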

------
chucke
Hmm, will nginx enable header compression this time? Their HTTP/2 module
disabled the HPACK dynamic table, as I recall. Will they serve us a poor man's
QUIC and tell us "it's all fixed in the paid version" again?

~~~
taf2
IIRC wasn't this due to a security exploit (an information leak)? With
compression enabled, you can figure out what is in the headers by observing
changes in byte sizes.

~~~
patrickmcmanus
You're thinking of an exploit in SPDY (the h2 predecessor) in which the
headers were just run through the same gzip context. The HPACK format in h2
and h3 is designed to remove those oracles (though it is less effective
bytewise than gzip).
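That SPDY-era leak (the CRIME attack) can be demonstrated in a few lines: when attacker-controlled data is compressed in the same context as a secret, the output length reveals whether a guess matches. The cookie value below is made up for illustration:

```python
import zlib

# A secret header compressed in the same context as attacker-supplied
# data, as SPDY's shared gzip header compression did.
SECRET = b"cookie: session=s3cretvalue\r\n"

def oracle(guess):
    # The attacker controls part of the request and observes only the
    # compressed size on the wire.
    return len(zlib.compress(SECRET + b"cookie: session=" + guess))

# A correct prefix back-references the secret, so the output is smaller;
# extending the guess byte by byte recovers the cookie.
leak = oracle(b"s3cretv") < oracle(b"qxjwzpm")
```

HPACK avoids this by replacing the shared LZ77 window with indexed header tables and Huffman coding of individual strings.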

------
l4hel
Google ruined the simplicity and orthogonality of the internet while we all
stood by watching. There is nothing of the design grandeur of the first batch
of internet protocols here. It's just engineering work that sacrifices every
bit of elegance and modularity to chase a percentage (not order of magnitude)
performance gain.

~~~
benkuhn
Simplicity, orthogonality, elegance, modularity, etc. are useful when you want
to build lots of different things easily.

When you're building one single thing that's used by 4.6 billion people, it
turns out that percentage optimizations matter!

(I work for a company that built a shitty half-baked homegrown QUIC equivalent
because in rural Ethiopia, HTTPS handshakes were so slow that they literally
just didn't work. Glad that Google is optimizing our percent-of-a-percent use
case!)

~~~
baybal2
The thing is, what Google does often doesn't work.

Unsound hacks that work "acceptably" in A/B-test telemetry and slowly break
in real life from inherent design deficiencies are almost always worse than
something that says from the start "will not work on buggy OS/hardware
versions, but works really well on standards-compliant ones".

The TLS 1.3 hack a Google engineer has forced through IETF is now backfiring,
for example. They did it to work around a certain brand of middleboxes, but
the hack instead broke a few other ones, and embedded HTTP servers. They may
well errata it and go back to normal versioning in 1.4, despite putting it on
paper in 1.3 that the hack is there permanently.
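For context, the versioning hack in question: TLS 1.3 (RFC 8446) freezes the wire-visible legacy version fields at 0x0303 (TLS 1.2) and moves the real negotiation into the supported_versions extension, so ossified middleboxes keep seeing what looks like a TLS 1.2 handshake. A rough sketch of the server-side selection logic (simplified; not a real TLS stack):

```python
# TLS 1.3 (RFC 8446) pins the legacy version field at 0x0303 (TLS 1.2)
# and negotiates the real version through the supported_versions
# extension instead.
TLS12, TLS13 = 0x0303, 0x0304
LEGACY_VERSION = TLS12  # frozen permanently per the RFC

def select_version(client_supported, server_supported=(TLS13, TLS12)):
    # The server picks the highest mutually supported version from the
    # extension and ignores the legacy field entirely.
    common = [v for v in client_supported if v in server_supported]
    return max(common) if common else None
```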

~~~
tialaramex
> The TLS 1.3 hack a Google engineer has forced through IETF is now backfiring
> for example.

How is it "backfiring"? It seems to be working for billions of people. If
you've got a non-compliant TLS implementation that broke you get to keep both
halves, good luck with that.

------
Neil44
OpenLiteSpeed has QUIC support too, and it can read Apache configs.

~~~
mobilio
I've used it and it works great. They also have plugins for popular CMS
platforms (WordPress, Joomla, Drupal, Magento, OpenCart, PrestaShop,
MediaWiki, etc.) that REALLY help.

------
xfalcox
I have already tested the quiche patch from Cloudflare, and even reported a
bug on it that was fixed.

Can someone comment on differences between the two patches?

~~~
mobilio
The CF patch is "unofficial"; the nginx one is official.

There isn't a huge difference between them from a technical point of view. But
it's your decision which of them you'll use.

~~~
secondcoming
It's really interesting that they chose to use Rust.

~~~
mobilio
Actually it seems that someone at CF really loves Rust:
[https://blog.cloudflare.com/boringtun-userspace-wireguard-rust/](https://blog.cloudflare.com/boringtun-userspace-wireguard-rust/)

They made a WireGuard implementation in Rust. Here is the original wireguard-rs:
[https://git.zx2c4.com/wireguard-rs/about/](https://git.zx2c4.com/wireguard-rs/about/)

But CF is using their own implementation for WARP:
[https://blog.cloudflare.com/1111-warp-better-vpn/](https://blog.cloudflare.com/1111-warp-better-vpn/) "We built WARP around
WireGuard, a modern, efficient VPN protocol that is much more efficient than
legacy VPN protocols."

Here is their repo list of projects written in Rust:
[https://github.com/cloudflare?q=&type=&language=rust](https://github.com/cloudflare?q=&type=&language=rust)

------
zelly
Looking forward to not every browser implementing it and having to implement
WebSockets in addition anyway

------
fulafel
Is it memory safe?

------
garganzol
I'm very skeptical about QUIC/"HTTP3". TCP works extremely well already. Yes,
one can do somewhat better for some particular workloads, but I have yet to
see a successful implementation that pulls ahead of TCP.

Take the Remote Desktop Protocol used in Microsoft Windows. It can work over
TCP, but recent revisions tend to automatically switch to UDP. And you know
what? They are not reliable to the point that customers have to turn the UDP
layer off. TCP gives slightly worse latency, but it is much more reliable and
thus usable. Thankfully there is a Group Policy for that.

I'm not even talking about Google as a company that constantly tries to attack
the network infrastructure with its variant of EEE (Embrace, Extend and
Extinguish). What's the end game? Crippled protocols worldwide, imposed by the
ad-casino company? No, thank you. The internet must remain free of all of
that.

~~~
floatboth
> They are not reliable to the point that customers have to turn the UDP layer
> off

I guess they didn't implement reliability then. This has nothing to do with
QUIC.

> Embrace, Extend and Extinguish

Yeah, working on an IETF standard is EEE now? LOL

~~~
garganzol
EEE 2.0. The social aspect is just an antitrust indulgence.

Seems to be working fine for hoi polloi. For now, at least.

