
HTTP-over-QUIC will officially become HTTP/3 - payne
https://daniel.haxx.se/blog/2018/11/11/http-3/
======
tomohawk
We were doing testing several years ago with a custom UDP based protocol to
overcome some of the TCP issues, and ran into a perplexing problem.

To meet the data rate, we were sending datagrams that were 64KB in size (there
was no system call at this time to send multiple datagrams in one go).

The datagrams were being fragmented into MTU sized fragments by the OS and
reassembled on the remote end. However, with the data rate and the fact that
there is only a 16-bit IP ID field in the IP header, the fragments would often
get misassembled on the receiving end and passed up to the UDP layer. Usually,
the UDP checksum would detect the problem and throw the datagram out. However,
since the checksum is weak and we were sending a lot of data, it would
sometimes pass the datagram up to the application layer.

So, we would get corrupted data on the receiving end once in a while. It took
quite a bit of time to figure this out, of course.

The solution was to implement dynamic MTU discovery along with the protocol so
we could avoid the fragmentation.

The custom solution is still being used 20 years later!
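The numbers behind that misassembly are worth sketching. A back-of-envelope calculation (the 1 Gbit/s rate and 1500-byte MTU here are illustrative assumptions, not figures from the parent): every fragment of a datagram shares one 16-bit IP ID, and at a high enough send rate the ID space wraps while old fragments can still sit in the receiver's reassembly buffer:

```python
# Back-of-envelope numbers for the misassembly story above. The send rate
# and MTU are assumptions; the 16-bit ID space is what makes ID reuse
# within the reassembly window plausible.
MTU = 1500                      # bytes on the wire per fragment (assumed)
IP_HEADER = 20
DATAGRAM = 64 * 1024            # the 64KB UDP datagrams from the story
RATE_BPS = 1_000_000_000        # assumed 1 Gbit/s aggregate send rate

fragments_per_datagram = -(-DATAGRAM // (MTU - IP_HEADER))  # ceiling division
datagrams_per_sec = RATE_BPS / 8 / DATAGRAM
wrap_seconds = 65536 / datagrams_per_sec    # one IP ID consumed per datagram

print(f"{fragments_per_datagram} fragments share each 16-bit IP ID")
print(f"ID space wraps every {wrap_seconds:.1f} s at this rate")
```

With typical reassembly timeouts in the tens of seconds, a wrap time of roughly half a minute means stale fragments can plausibly pair up with a reused ID, exactly as described.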

~~~
r1ch
Fragmentation by the IP layer is very unreliable in today's internet. I've
seen plenty of consumer routers that throw away the whole datagram if a
fragment arrives out-of-order, and who knows what kind of hell corporate
middleboxes and other similar systems do to them. Pretty much every protocol
these days goes to great lengths to avoid upper level fragmentation.

~~~
jchb
IPv6 helps in that it mandates an MTU of at least 1280 bytes. In addition, in
IPv6 only source nodes can break a packet into fragments - intermediary nodes
such as firewalls and routers cannot. Sending back an ICMPv6 Packet Too Big for
packets larger than the MTU is a must, which helps with path MTU discovery.

------
ivoras
Ok, so there have been literally decades spent managing and optimising TCP to
behave in various network conditions, from modem lines to multi-gigabit fiber,
in all kinds of congestion conditions, lossiness, and other hairy edge cases
of real physical networks.

So, what's the point of ditching all that and reinventing HTTP over a thin
wrapper (UDP) over (lossy) IP? Has there been a huge theoretical breakthrough
which simply can't be applied to TCP?

~~~
rstuart4133
QUIC does things TCP will never do. Firstly, QUIC is a combination of SSL +
TCP. This allows it to put the SSL handshake into the transport handshake -
thus reducing the overhead of starting an SSL connection. It also leverages
SSL session reuse to avoid the TCP three-way handshake completely. As others
have mentioned, QUIC is optimised for something that HTTP/2 does often but
that TCP does not support very well - multiple child streams over the one
connection.
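Those savings can be tallied as a rough round-trip count; a sketch under textbook assumptions (no TCP Fast Open, no packet loss), not exact figures for any particular stack:

```python
# Round trips before the first byte of application data can be sent.
tcp_handshake = 1        # SYN / SYN-ACK; data can ride the final ACK
tls12_full = 2           # classic TLS 1.2 negotiation
tls13_full = 1           # TLS 1.3 full handshake

https_over_tls12 = tcp_handshake + tls12_full   # 3 RTTs before data
https_over_tls13 = tcp_handshake + tls13_full   # 2 RTTs before data
quic_first_contact = 1   # transport and crypto handshakes share packets
quic_resumed = 0         # 0-RTT resumption: data in the very first flight

print(https_over_tls12, https_over_tls13, quic_first_contact, quic_resumed)
```

On a 100 ms path, that difference is a few hundred milliseconds before the first byte of every fresh connection, which is the overhead QUIC's combined handshake targets.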

In general, it is true that TCP is highly optimised, but it is only optimised
for one use case: a long-lived single flow over a low-error-rate connection to
a single destination. That was the predominant type of connection back in the
FTP and early web days. But it hasn't been that way for a while now. HTTP is
short-lived, video goes to multiple destinations, and HTTP/2 and multimedia in
general have multiple flows. Wireless only appears to be low error rate
because the physical layer has to cover up its shortcomings; otherwise TCP's
performance becomes abysmal.

TCP hasn't remained king of the mountain because it's a fantastic protocol
that can't be improved. On the contrary - it looks to be the simplest thing
its inventors could come up with. It even lacked round-trip time measurement -
something I suspect would have killed it if an extension hadn't saved it. TCP
has remained king of the mountain purely because it appears to be near
impossible to replace. Now that Google has done it, I expect we will see a lot
more experimentation in this area.

~~~
ktpsns
Nice review. Just a theoretical question: Wasn't IPsec supposed to do the
encryption right in the IP layer? I wonder how this compares with the
handshake roundtrip time needed (for IPsec, for TCP).

~~~
pas
Well, the problem is complex. For security you need integrity, authenticity
and anonymity/secrecy. Crypto of course is the way we implement all that, via
keys and other kinds of information-theoretic tokens that somehow build upon a
pre-shared secret. You need an anchor, a mutual trust foundation to build your
secure channel upon.

If you want to put security on/into IP, you suddenly need the whole key
management thing, and if you want it to be automatic, you need to somehow
cobble together the assignment, provisioning, allocation, publication,
revocation of keys with IP addresses (or hostnames, domain names, and you just
put DNS into IP).

Eventually it means either you need manual key management (thus IPsec becomes
a hard-to-manage, rusty, poorly supported concept from an old era) or you
descend into the recursive pit of all hells of circular dependencies. Sure, it
can be done, we can put keys into DNS, and we can use
that, but then we're back to QUIC or DTLS basically.

------
peterwwillis
At first I wrote a lengthy rant about how we should support QUIC as an OSI
layer 4 protocol on its own merits and develop HTTP to sit on top of either
TCP or QUIC. But I'm _hoping_ that the people pushing this along are only
creating a homunculus guinea-pig, and that once this gets adopted widely, they
will push to spin QUIC out into a real independent low-level protocol like it
should have been.

Bundling HTTP and QUIC into one protocol is a bad idea because (1) it
increases complexity, (2) it creates application-specific QUIC
implementations, which will become a problem later if QUIC is adopted by
anything else, not to mention the security holes/feature gaps in different
implementations, etc, (3) it reduces the likelihood of the operating system
supporting QUIC as a first class L4(OSI) protocol, which will further doom any
network application that uses any method of IPC other than an HTTP request.

Honestly, the way modern network applications operate is embarrassing. People
in the future are going to look back and wonder what was wrong with us.

~~~
xg15
But why would you ever _want_ to use anything else than HTTP??

/s

------
Mojah
For anyone interested in understanding QUIC, here's a post I wrote a few years
ago explaining the tech, the challenges involved and the intended fixes:
[https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-
udp...](https://ma.ttias.be/googles-quic-protocol-moving-web-tcp-udp/)

------
tathougies
SCTP should be the preferred protocol here. SCTP is

1\. An internet standard

2\. Already has kernel support

3\. Was not developed by one company

And, most importantly,

~~~
hexchain
SCTP is a good protocol itself, but there are lots of NAT
devices/firewalls/other strange middleboxes that do not play nice with SCTP.
These devices only recognize TCP and UDP, and they tend to drop everything
they do not know.

This is not easy to fix - not as easy as fixing Cisco devices that hijack
1.1.1.1, or upgrading enterprise-grade MITM firewalls that don't know about
TLS 1.3 and so block it.

~~~
tathougies
Luckily, SCTP, like QUIC, has a standardized UDP encapsulation.
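For reference, that encapsulation (RFC 6951) simply places a whole SCTP packet inside a UDP payload, conventionally on port 9899. A hypothetical sketch of pulling apart the 12-byte SCTP common header from such a payload:

```python
import struct

SCTP_UDP_PORT = 9899  # registered port for SCTP-over-UDP (RFC 6951)

def parse_sctp_common_header(udp_payload: bytes) -> dict:
    """The UDP payload is simply an entire SCTP packet; unpack its
    12-byte common header (all fields big-endian)."""
    if len(udp_payload) < 12:
        raise ValueError("too short for an SCTP common header")
    src, dst, vtag, checksum = struct.unpack("!HHII", udp_payload[:12])
    return {"src_port": src, "dst_port": dst,
            "verification_tag": vtag, "checksum": checksum}

# Hypothetical packet: ports 5000 -> 80, verification tag 0xDEADBEEF,
# checksum left zero for illustration, followed by chunk bytes.
pkt = struct.pack("!HHII", 5000, 80, 0xDEADBEEF, 0) + b"chunk-data"
hdr = parse_sctp_common_header(pkt)
```

Because the framing is this mechanical, a userspace stack (or a middlebox) can treat the SCTP packet exactly as it would on the wire, just one UDP header deeper.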

~~~
banthar
If you do SCTP in user space with UDP encapsulation, most of its benefits
disappear (and you have to do that - also because of Windows).

It's standard only on paper. There is only one significant user space
implementation (usrsctp). It's used both by Chrome and Firefox. I don't think
it has much use outside of that. And, I don't think other browsers implement
data channels (which require SCTP).

Browser implementations will probably never have to interact with kernel
implementations (which are not really used outside of telecoms). There is
really no reason to make them talk the same protocol. It's likely better to
use two different protocols tuned to those specific uses:

[https://tools.ietf.org/html/draft-joseph-quic-comparison-
qui...](https://tools.ietf.org/html/draft-joseph-quic-comparison-quic-sctp-00)

They will probably replace SCTP with QUIC also in WebRTC:

[https://w3c.github.io/webrtc-quic/](https://w3c.github.io/webrtc-quic/)

~~~
tathougies
You only need to do UDP encapsulation on the client side (and even then, you
can easily fall back to the kernel acceleration on supported platforms). On the
server side, SCTP has a BSD sockets API already, and with some creative
ethernet bridging, you can convince the kernel to unencapsulate your SCTP
packets (doing this right now in a project for WebRTC, so I know it's
possible). This is a one-time investment that can be made open-source and can
be done in userspace. Of course, a kernel patch to encapsulate SCTP in UDP
shouldn't be that difficult either.

There are two sides to networking - client and server. Your arguments may make
sense for the client, but they do not make sense for the server, IMO.

> There is only one significant user space implementation (usrsctp)

That's right, but there are other independent implementations (one was posted
a while back), and the kernel implementations (BSD, Linux, likely others) are
indeed different.

------
amckinlay
What's wrong with SCTP?

~~~
altmind
Most system administrators are only aware of TCP and UDP (which QUIC uses). It
really hurts protocol adoption if a protocol does not work because third
parties block or mishandle it on their network gear.

~~~
mikedilger
SCTP packets would need to be allowed through the networks. And networks don't
bother if nobody is using it. So it's a chicken-and-egg problem.

But Google is big enough to push through that problem, IMHO, as long as
browsers fall back to HTTP/2 or HTTP/1.1. The widespread acceptance of SCTP
would be a boon to e.g. VoIP and game developers. But alas, it's too late.
QUIC has been in development for six years now.

~~~
zAy0LfpBZLC8mAC
> But google is big enough to push through that problem

Not really. There are hundreds of millions of home routers deployed that all
do NAT, and none of them support NAT for SCTP. Many do NAT with hardware
support, so it's probably not even fixable with a firmware upgrade - which
isn't even an option for a ton of unsupported devices anyway.

So, I think encapsulating in UDP is the only realistic option if you want to
gain any adoption any time soon.

Also, SCTP has the same problem that TCP has in that the network can look
inside the protocol, and thus you would get protocol ossification. While
Google does this for selfish reasons, I think it is a really good idea to
establish a protocol that is completely opaque to telcos, and it should
ultimately benefit the public.

Telcos really don't want to be dumb pipes, and they tend to abuse any power
they get, as they have demonstrated time and time again, and the only way to
force this issue is by simply making it impossible for them to see or
manipulate anything at all.

So, while we may have to live with the UDP encapsulation forever, and as
stupid as that is, this at least ensures that anyone in the future can
trivially invent and deploy new protocols, as it is trivial to masquerade
anything at all as QUIC. The adoption of QUIC for the web has the potential to
get all ISPs to fix things so that QUIC actually works reasonably reliably
over their networks. And the fact that, as far as the network is concerned,
it's just UDP packets filled with random data ensures that as long as your new
protocol is UDP packets filled with random data, it will work as well, even if
you use completely different mechanisms for framing or flow control or
multiplexing or whatever.

~~~
gmueckl
It has taken 20 years to get IPv6 adoption to where it is now. This takes
amazing dedication and is a much more fundamental change. Why can't SCTP
adoption be a similar long term project? A home router probably has a life
span of less than a decade. So it would be realistic to get a majority
adoption of SCTP within approximately 15 years if there were a bit of a push
in that direction. QUIC has been in the making for 6 years now? SCTP was
standardized in 2000. So we could be 6 years into this 15-year project by now
instead. And that is not considering the time it will take to finish QUIC,
build compatible implementations and deploy them.

~~~
zAy0LfpBZLC8mAC
> It has taken 20 years to get IPv6 adoption to where it is now. This takes
> amazing dedication and is a much more fundamental change.

It doesn't take any dedication at all, it only takes address exhaustion. Which
is precisely why it took so long.

> Why can't SCTP adoption be a similar long term project?

Because there is zero incentive for Telcos.

> QUIC has been in the making for 6 years now?

And QUIC (the Google "prototype") has probably been successfully deployed to
more devices than IPv6 by now?

~~~
jamespo
hmm Facebook run their entire internal network on IPv6

~~~
zAy0LfpBZLC8mAC
Why are you mentioning this?

------
quotemstr
QUIC is technically excellent. I have a game-theoretic concern with putting it
in userspace.

I worry that everyone will have an incentive to "cheat" at congestion control,
leading to a tragedy-of-the-commons situation of ever-more-aggressive flows,
followed by eventual congestion collapse.

This effect didn't happen in the TCP world, since most applications don't run
with enough privileges to speak TCP without the kernel's involvement. But
now that we're moving to a model in which applications do their own congestion
control, an arms race seems inevitable.

~~~
MaxBarraclough
Apps have been free to abuse UDP all this time, but it's not been a real
issue. Even data-hungry applications like video-streaming have gone with TCP
almost every time.

Also, couldn't the OS still throttle back greedy applications? No reason it
couldn't detect the heavy stream of UDP traffic.
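Such throttling could, in principle, look like an ordinary token bucket applied per socket; this is a toy sketch of the idea, not how any real kernel implements it:

```python
class TokenBucket:
    """Toy token-bucket limiter of the sort an OS could apply per socket
    to rein in a greedy UDP flow. Purely illustrative."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes   # start with a full burst allowance
        self.last = 0.0

    def allow(self, nbytes: int, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False   # over budget: drop or delay the datagram

bucket = TokenBucket(rate_bytes_per_sec=1000, burst_bytes=1500)
# Five 500-byte datagrams, 100 ms apart: the burst passes, then we're capped.
sent = [bucket.allow(500, t) for t in (0.0, 0.1, 0.2, 0.3, 0.4)]
```

An application doing its own congestion control can always be policed this way from below, regardless of what its userspace stack claims to do.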

~~~
M2Ys4U
> Apps have been free to abuse UDP all this time, but it's not been a real
> issue. Even data-hungry applications like video-streaming have gone with TCP
> almost every time.

I suspect that's more down to NAT than anything else

------
thosakwe
So, what does this mean for HTTP and HTTP2? Will they always be around, and
servers will just "gracefully degrade" and opt for the fastest/newest
protocol?

Or will HTTP/1 and 2 slowly decline over time?

Genuine question, not snark.

~~~
tialaramex
We cannot know the distant future. In the medium term lots of corporates block
QUIC already, and as far as I know nobody at all has even deprecated HTTP/1,
let alone ceased to support it.

------
k_sze
An obvious, and probably stupid question to ask: Does UDP NAT traversal work
well enough nowadays with minimal user intervention?

Remember that UDP itself is stateless. So you either need some explicit port
mapping via UPnP (which seems like a nightmare because HTTP connections are
_so_ common), or some really smart router that understands QUIC.

------
toomuchtodo
Is there a recommended guide for those without background to get up to speed
on QUIC and its pros and cons?

~~~
manigandham
HTTP/3 is HTTP/2 with minor updates and running on QUIC instead of TCP.

QUIC is basically rebuilding the stateful connection abilities of TCP on top
of UDP with optimizations.

Multiplexed streams with packets that can arrive out of order remove the
head-of-line blocking issue with TCP. There is also forward error correction
to reconstruct lost packets instead of retransmitting, and better packet
sizing and congestion algorithms to reduce round trips. There are also updates
to include all the TLS 1.3 stuff for 0-RTT setup.
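The forward error correction mentioned here refers to the simple XOR scheme early Google QUIC experimented with (it was later dropped from the IETF drafts). A minimal sketch of the idea, where one parity packet lets the receiver rebuild a single lost packet in a group:

```python
def xor_parity(packets: list[bytes]) -> bytes:
    """Byte-wise XOR of a group of equal-length packets: the parity
    packet that protects the group."""
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

def recover(received: list[bytes], parity: bytes) -> bytes:
    """XOR the parity with every packet that did arrive; what remains
    is the single missing packet."""
    return xor_parity(received + [parity])

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)
# Suppose the middle packet is lost in transit:
got_back = recover([group[0], group[2]], parity)
```

One parity packet can only repair a single loss per group, which is part of the bandwidth-versus-latency trade-off such schemes make against plain retransmission.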

All of this reduces latency and makes connections more reliable, especially
with changing networks on mobile devices. Not many downsides other than that
TCP is far more open and optimized than UDP so it may take a while to
effectively get this protocol out there.

~~~
pgeorgi
> especially with changing networks like mobile

In particular, QUIC contains everything necessary to continue connections
after the client's IP address changes (either side's, really, but the other
side has to find it), which is helpful when devices move between networks,
without needing a Mobile IP-style bouncer.
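A toy sketch of why that works: a QUIC-style server demultiplexes by connection ID rather than by the (address, port) 4-tuple that TCP uses, so a packet arriving from a brand-new source address still matches the old connection (real QUIC additionally validates the new path with PATH_CHALLENGE, omitted here):

```python
# Toy demultiplexer keyed by connection ID. With TCP, a changed client
# address means a dead connection; here it just updates the return path.
connections: dict[int, dict] = {}

def quic_receive(conn_id: int, src_addr: tuple, payload: bytes) -> dict:
    conn = connections.setdefault(conn_id, {"addr": src_addr, "data": []})
    conn["addr"] = src_addr        # path migrated? remember the new address
    conn["data"].append(payload)
    return conn

quic_receive(0xABCD, ("192.0.2.1", 5000), b"hello")            # on Wi-Fi
conn = quic_receive(0xABCD, ("198.51.100.7", 6000), b"world")  # now on LTE
```

The same lookup that makes migration work is also what NATs and load balancers cannot do for QUIC without cooperation, since the connection ID is the only stable handle.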

~~~
eridius
Doesn't Multipath TCP already exist as a solution for this?

~~~
ianlevesque
Indeed. I’m not sure why adoption is so poor on that.

~~~
toast0
It's only available on Apple iOS and maybe macOS out of the box. Android is
probably not going to support it; if Android does support it, it will be a
long time before there's critical mass. Early versions may claim to support it
but not work well, it may be hard to selectively enable it, and we know many
devices won't get the updates to fix it.

It's expensive to support on servers because (to my knowledge) there's not a
good way to ensure related subflows arrive at the same NIC rx queue on the
same server in real-world load-balancing situations, which is critical for
performance.

Nobody else is doing it (other than Apple for Siri), so it's not clear what
the benefits are.

~~~
ianlevesque
It's weird that Android wouldn't support it; it was developed for Linux first.

------
ainar-g
An interesting and somewhat related article called “How Unreliable is UDP?”:
[https://www.openmymind.net/How-Unreliable-Is-
UDP/](https://www.openmymind.net/How-Unreliable-Is-UDP/).

Discussion from four years ago:
[https://news.ycombinator.com/item?id=8465956](https://news.ycombinator.com/item?id=8465956).

------
bvinc
What happened to QUIC as a non-HTTP-specific layer that any protocol could
use? Even the QUIC Wikipedia page has been changed to an HTTP/3 page.

------
romeisendcoming
The vendor history of network and transport protocols can be seen as market
warfare invisible to the end user. Those who are able to push their
network-based ideas to better address the way they want to do things (and
shape the market) win.

Way back this was IPX/SPX vs IP/(TCP|UDP). We know who won that and what
happened to the loser.

There is another aspect to this: agnostic-network and 'good enough'
conceptions have historically been adopted by engineering organizations to
remove vendor disposition from the end-user landscape and protect the end
user. On this forum a lot of these historical protections are seen as passé or
unimportant, for reasons best not scrutinized too closely.

IMO, the idea embodied in encapsulating application-protocol traffic in a
stateless transport, to avoid a vendor's technical issues with existing
stream-oriented transports in the non-vendor real world, is one of the ugliest
ideas I've yet encountered.

There are a lot of good ideas floating around in the comments here, already
existing, that avoid vendor lock-in for applications' stream-based transport.

------
KaiserPro
Garh.

So we are to believe that the people who thought that multiplexing over a
single TCP connection on mobile was a good idea are right that moving lock,
stock and barrel to UDP will solve our problems?

Look, it's very simple: HTTP is now a file-server protocol with a chatty
control channel smashed in. However, instead of optimising for that, we have
this.

It basically seems like a massive ego trip: "we can't admit we were wrong
about the multiplexing thing, let's just bash out a reliable UDP protocol".

Look, the vast majority of file transfer protocols are TCP for a reason.
Losing chunks of files is annoying, and making a custom protocol over UDP that
is both fast-starting, reliable _and_ fast is actually quite difficult.

Yes, UDP has the advantage that you don't have a connection, and you can just
fire stuff at the destination port and it'll magically arrive. However,
_anyone_ can do that. How does HTTP/3 handle noise? How does it handle
spoofing?

Basically, H2 was a regression because it was designed by people who didn't
appear to understand real-world networks; this seems like a doubling down.

~~~
vlovich123
It seems like you're just saying things without having actually researched how
QUIC works. QUIC is the next iteration of TCP. That it lives on top of UDP (&
might ship that way) is an implementation detail due to the realities of how
the Internet is built, but it's a reliable, in-order transport (plus
guaranteed encryption, and loss of packets on one stream doesn't block other
unrelated streams, etc.).

~~~
KaiserPro
> without having actually researched how QUIC works

I have evaluated QUIC, as reliable stream protocols on UDP are something I
have a professional interest in.

> QUIC is the next iteration of TCP

It's not designed for that.

> That it lives on top of UDP (& might ship that way) is an implementation
> detail due to the realities of how the Internet is built

No, it's fundamental. You can't just swap out TCP for UDP and be done with it.
If we ignore the datagram-vs-stream aspect, we are left with implementing our
own flow control, packet loss handling and authentication.

Which is where my comment comes in: with a TCP socket, if someone tries to
inject spoofed packets (assuming we're in a non-LAN environment), those
packets will be rejected long before they reach us. With UDP no such
guarantees exist.

That's not considering flow control, which is a whole 'nother issue. It's
always a trade-off between efficiency, speed and reliability.

~~~
tialaramex
QUIC is an encrypted protocol. If an adversary injects forged packets the AEAD
tag won't match in decryption and those packets will be discarded.
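The accept/reject decision can be sketched with a keyed MAC; note this uses HMAC purely as a stand-in, since real QUIC protects packets with AEAD ciphers (AES-GCM or ChaCha20-Poly1305) keyed from the TLS handshake:

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)  # stands in for the per-connection handshake secret

def protect(payload: bytes) -> bytes:
    # Stand-in for AEAD sealing: append a 32-byte authentication tag.
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def accept(packet: bytes):
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        return payload
    return None   # forged or corrupted: silently discarded

genuine = protect(b"stream data")
forged = b"spoofed junk" + os.urandom(32)  # attacker doesn't know KEY
```

An injected packet without the key fails the tag check and never reaches application logic, which is the property being described (denial-of-service via sheer packet volume is a separate question).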

I would suggest that an evaluation in which you completely failed to notice
the central feature of the protocol doesn't reflect well on your abilities,
regardless of whether you claim it's a "professional interest".

~~~
KaiserPro
How exactly does encrypting the whole protocol intrinsically protect against
denial of service, buffer overflows and other common mistakes?

My point is this: the client is protected from a _lot_ of noise. QUIC changes
this.

~~~
tialaramex
Er, no. What happened is you have no idea what you're talking about.

~~~
KaiserPro
So you're saying that in TCP, a connection-orientated protocol, at each hop
the router/firewall/kernel is _not_ doing basic validation to see if that
connection is valid and not coming from a spurious source?

The client does all this when you open socket.connect()?

No, no, you're not, and no, it doesn't, which is part of my point. Lots more
noise will be forwarded to the client directly; after all, stateful connection
management is impossible. With more noise comes a bigger attack surface.

------
Ericson2314
Talking with a friend, we believe QUIC itself could be reconceptualized into
multiple layers without changing the wire format:

1\. Connection management

1.1\. The initial handshake lets the layer above put its own initial handshake
in the same packet as the body.

1.2\. Subsequent packets carry a connection identifier.

2\. Encryption. The SSL bits.

3\. Multiplexing and congestion control.

So in essence the trick of QUIC is not collapsing layers, but splitting TCP
into connection management and congestion control, and moving SSL into the
middle.

So why wasn't this proposed? I hope not because people are too "practical" to
care about layering and abstraction anymore!

------
chasd00
I've always felt the lack of congestion control made UDP like the Wild West. I
think the endpoints (server and client) have a lot of opportunity to really
screw things up for everything in between.

------
pdimitar
Say I decide to implement HTTP/3 in a language that doesn't have a library for
it yet.

Is there any testing toolkit that will tell me "bzzt! you messed up header X"
or any other error I might make?

------
zoom6628
TBH I don't know much in this area, but nobody has mentioned CoAP, which is
basically simple HTTP over UDP for the IoT world. I have coded a couple of use
cases in C# for both server and client. As it is both blindingly fast and has
an option for reliable messaging, I wonder if CoAP was influenced by QUIC or
the other way round. Just curious. It seems modern apps that are
microservice-driven are eminently suited to using CoAP for its lower overhead
and speed... and as such HTTP/3 makes a whole load of sense. #justsaying
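For the curious, CoAP (RFC 7252) really is that compact: every message starts with a fixed 4-byte header, and the confirmable (CON) message type is what provides the reliable, acknowledged delivery mentioned above. A sketch of parsing that header (the example message bytes are hypothetical):

```python
import struct

def parse_coap_header(datagram: bytes) -> dict:
    """Unpack the fixed 4-byte CoAP header (RFC 7252): version, type,
    token length, code class.detail, and message ID."""
    first, code, mid = struct.unpack("!BBH", datagram[:4])
    return {
        "version": first >> 6,
        "type": (first >> 4) & 0x3,   # 0=CON (reliable), 1=NON, 2=ACK, 3=RST
        "token_length": first & 0xF,
        "code": f"{code >> 5}.{code & 0x1F:02d}",  # e.g. 0.01 is GET
        "message_id": mid,
    }

# Hypothetical confirmable GET with message ID 0x1234 and no token:
msg = bytes([0x40, 0x01]) + struct.pack("!H", 0x1234)
hdr = parse_coap_header(msg)
```

Four bytes of fixed overhead versus HTTP's text headers is the size argument for CoAP on constrained devices; QUIC solves a different problem (fast, multiplexed, encrypted transport) at a different scale.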

------
cdoty
Wouldn't UDP be easier to spoof?

~~~
M2Ys4U
QUIC is encrypted, so no not really.

------
snvzz
Amazing: I couldn't find an "is QUIC working?" kind of test website. Tried
several wordings to no avail.

------
geofft
URL should be
[https://daniel.haxx.se/blog/2018/11/11/http-3/](https://daniel.haxx.se/blog/2018/11/11/http-3/)
, which is the permanent link to this post (not to the "http3" tag on the
blog).

~~~
dang
OK, changed from
[https://daniel.haxx.se/blog/tag/http3/](https://daniel.haxx.se/blog/tag/http3/).
Thanks!

------
OpenBSD-reich
What will this do to Tor, which can't route UDP?

~~~
conradev
A better way to think about it is that Tor routes streams. Tor has the concept
of "pluggable transports" which don't care what form the data takes over the
network, as long as a stream can be reconstructed.

For example, Snowflake[1] which is WebRTC based, and meek[2] which chunks the
stream into a series of HTTP requests/responses for domain fronting. TCP is
still used between nodes.

[1]
[https://trac.torproject.org/projects/tor/wiki/doc/Snowflake](https://trac.torproject.org/projects/tor/wiki/doc/Snowflake)

[2]
[https://trac.torproject.org/projects/tor/wiki/doc/meek](https://trac.torproject.org/projects/tor/wiki/doc/meek)

------
anticensor
I have a problem with QUIC. As everyone implementing this knows, TCP and UDP
headers have very similar structures, and the difference mostly lies in the
different treatment their packets receive. QUIC will increase the transport
overhead per packet, and this affects capped customers. Why was QUIC chosen
despite this flaw? Not everyone uses gigabit fiber.

~~~
nabla9
I think the correct comparison is QUIC vs TCP+TLS.

~~~
ldng
And vs SCTP.

A deep-dive, impartial comparison across different use cases/workloads would
be great.

