
Some notes about HTTP/3 - BerislavLopac
https://blog.erratasec.com/2018/11/some-notes-about-http3.html
======
tialaramex
The discussion of standards seems to unhelpfully conflate the reality of
standardisation by bodies like the IETF, which have no discernible authority
and don't want it even if it were possible, with "de facto" standards, which
it says are just whatever people do in practice. Not so.

The IETF is not a conventional SDO, nor indeed a conventional organisation of
any sort, since it has no members, and it operates on "rough consensus" rather
than having some specific formal process that invariably (see Microsoft's
interaction with ECMA and ISO) would be gamed by scumbags.

But nevertheless those are de jure standards that come out the far end, the
result of "getting all major stakeholders in a room" albeit that room is most
often a mailing list since only a hardcore few can attend every IETF physical
meeting. The IETF even explicitly marks standards track RFCs distinctly from
non-standards track ones. If you contribute documentation for a complete
working system, rather than (as Google did with QUIC) a proposal based on such
a system that needs further refinement, it'll just get published as an
Informational RFC. Such RFCs are how a bunch of Microsoft network protocols
are documented, by Microsoft. Whereas months of arguing and back-and-forth
technical discussion have shaped the IETF's QUIC and will continue to do so,
the documentation for MSCHAPv2 (commonly used in corporate WiFi deployments)
is an Informational RFC, so a Microsoft employee just dumped it as written,
with no chance for anyone to say "Er, this protocol is stupid, change it not
to shove zero bytes into this key labelled C or else anybody can crack user
passwords after spoofing the AP". So they didn't, and you can.

[Edited: wording tweaks near start, sorry]

~~~
nneonneo
> is an informational RFC so a Microsoft employee just dumped it as written,
> no chance for anyone to say...

Which is ironic, considering “RFC” stands for “Request for Comments”.

~~~
kerneis
To be fair, the parent comment is slightly misleading. I don't know the exact
story of MSCHAPv2 but note that it is an informational RFC published by the
pppext WG:
[https://tools.ietf.org/html/rfc2759](https://tools.ietf.org/html/rfc2759)

For an RFC to be published by a WG, it must first be "adopted" by the group,
which means a first draft is submitted by the author and then debated
(sometimes lightly) until the group agrees that it fits the topic and is
suitable for adoption. Similarly, once the draft is adopted, it goes through a
series of calls by the WG chair where people have opportunities to comment,
until it is finally published. Informational RFCs have lighter requirements
than standards-track ones, so they are easier to get published, but they
still get some amount of review and comments before publication.

It took 14 months and 4 drafts for MSCHAPv2 to get published:
[https://datatracker.ietf.org/doc/rfc2759/](https://datatracker.ietf.org/doc/rfc2759/)

In fact, even "independent submissions" with "experimental" status (that do
not go through a WG at all,
[https://tools.ietf.org/html/rfc2026#section-4.2.1](https://tools.ietf.org/html/rfc2026#section-4.2.1))
get reviewed before publication. The reviews in that case are private, but a
RFC editor is responsible for sanity-checking the document and sometimes
requests additional input from reviewers specialized in the domain area
covered by the draft.

[Edit: the actual WG for MSCHAPv2 was
[https://tools.ietf.org/wg/pppext/](https://tools.ietf.org/wg/pppext/), not
"Networking" which is just the generic name on top of the RFC]

~~~
tialaramex
Although you're correct that there was a drafting process for MSCHAPv2, the
actual protocol it describes had already shipped in Windows. As a result,
"But this is a bad idea" would not have been a useful contribution to the
drafting process: the zero draft describes the exact same protocol, just in
different words.

Edited to add:

The drafting process wasn't worthless, it fixed typographical errors, unclear
descriptions, and so on. For example the zero draft insists Windows usernames
are "Unicode" (UCS-2) but actually they're just ASCII, the examples show ASCII
encoded hexadecimal but the text in the zero draft specifically calls it
Unicode. And originally the document repeatedly says something is a 16-bit
value in the text while showing a 24-bit value in structures, the final RFC
has corrected this to split out an 8-bit "reserved" all zero field in the
structure when this happens. In at least one place the RFC seems to "extend"
the protocol compared to the zero draft, but again this isn't a response to
Working Group feedback, it's documenting a patch Microsoft shipped in later
Windows versions after the zero draft.

I don't know how much a WG chair could have usefully interfered here. As I say
it's documenting something that already existed, so "fixing" it to document a
more secure protocol nobody was using wouldn't help. The IETF's role here was
to help people interoperate with Microsoft's solution: your non-Windows OS
can sign in to a corporate WiFi system with Windows domain servers thanks to
this documentation.

------
popee
> Their second upgrade they called QUIC (pronounced "quick"), which is being
> standardized as HTTP/3.

Isn't QUIC new transport layer protocol based on UDP and, if I remember
correctly, HTTP/3 will be HTTP bindings for QUIC?

You might think this is nitpicking, but HTTP is application layer protocol, so
it's little bit confusing to me.

~~~
dharmab
>However, in those discussions, a related concern was identified; confusion
between QUIC-the-transport-protocol, and QUIC-the-HTTP-binding. I and others
have seen a number of folks not closely involved in this work conflating the
two, even though they're now separate things.

>To address this, I'd like to suggest that -- after coordination with the HTTP
WG -- we rename our HTTP document to "HTTP/3", and use the final ALPN token
"h3". Doing so clearly identifies it as another binding of HTTP semantics to
the wire protocol -- just as HTTP/2 did -- so people understand its
separation from QUIC.

[https://mailarchive.ietf.org/arch/msg/quic/RLRs4nB1lwFCZ_7k0...](https://mailarchive.ietf.org/arch/msg/quic/RLRs4nB1lwFCZ_7k0iuz0ZBa35s)

TL;DR the rename is to resolve the confusion.

~~~
yy77
Why not just http/quic? Using 3 strongly suggests that it is the next
generation of HTTP. They knew that but pretend it is not relevant.

~~~
dharmab
Because there's a decent chance that it will be the next generation of HTTP.

If it doesn't pan out they'll just move on. Remember IPv5?

------
jasonhansel
Wish we could just use SCTP instead:
[https://en.m.wikipedia.org/wiki/Stream_Control_Transmission_...](https://en.m.wikipedia.org/wiki/Stream_Control_Transmission_Protocol)

~~~
rrdharan
I'm curious, how does SCTP tunneled over UDP compare to QUIC?

[https://tools.ietf.org/html/rfc6951](https://tools.ietf.org/html/rfc6951)

~~~
dcbadacd
It compares in that there isn't a single SCTP web server implementation,
compared to several for QUIC.

People here say that we should use HTTP/2 over SCTP, but no protocol will be
adopted if there are no good implementations of it.

~~~
derefr
A WebRTC "RTCDataChannel" is an SCTP-over-DTLS-over-UDP stack, and webservers
exist to stream to these. They're just mostly proprietary, existing as part of
vertically-integrated stacks like that of Google Hangouts (i.e. its "app
sharing" feature.)

------
nly
Is it just me, or does this part make no sense?

> But moving from TCP to UDP can get you much the same performance without
> usermode drivers. Instead of calling the well-known recv() function to
> receive a single packet at a time, you can call recvmmsg() to receive a
> bunch of UDP packets at once.

TCP is a streaming protocol; there are no datagrams to read one at a time.
Nothing stops you from reading the entire kernel buffer (containing multiple
HTTP messages) into userspace in one syscall.

~~~
detaro
HTTP requests are small, and a read() call only gets you the data from a
single connection, so you get one to few packet(s) worth of data per syscall.
In contrast, recvmmsg can get you a large bunch of packets across all
"connections" in a single syscall.

~~~
nly
Good point. That only makes sense for servers though, since for clients (web
browsers etc.) sockets will be 'connected', and they will still have lots of
sockets anyway. They'll still be epoll()ing or similar.

Notwithstanding the other benefits of QUIC, the UDP vs TCP difference with
respect to crossing the kernel-userspace boundary doesn't seem _that_
significant.

~~~
detaro
For clients these numbers don't matter as much, many thousands of packets per
second doesn't really happen to them. For a server with a fast pipe it does
happen. I hope one of the big providers will share numbers. I don't think many
of them actually use user-space stacks with TCP, would be interesting if QUIC
improves performance measurably.

------
meddlepal
I feel like the adoption of HTTP/3 is going to be much, much slower than
HTTP/2's... Besides Google Cloud, do any of the major cloud providers have
UDP load balancers?

~~~
jgh
I believe Akamai has implemented QUIC but I haven't heard of any other CDNs
that have.

~~~
cortesoft
EdgeCast supports QUIC as well

[https://www.verizondigitalmedia.com/blog/2018/05/quic-announcement/](https://www.verizondigitalmedia.com/blog/2018/05/quic-announcement/)

------
nly
I hope we get in-kernel implementations of QUIC at some point because having
to find a portable third-party library for userspace sounds about as appealing
as installing Winsock on Windows 95.

~~~
ris
But the whole _point_ of QUIC is that it is a userspace implementation. From
the QUIC viewpoint (and I take no sides in this) kernel implementation is
death for a protocol because it freezes its specification and behaviour in
slow-to-update systems. This is why they found they couldn't "just improve
TCP".

~~~
the8472
There have been plenty of improvements to TCP over time. New congestion
controllers, new extension headers, fast open, ECN, etc.

~~~
ris
I'm not saying that they don't happen, but they are extremely slow to gain
traction because of various things including OS support.

------
romed
This post groks QUIC. The most important thing about QUIC is it frees
applications from the tyranny of the kernel TCP state machine. Today all TCP
sockets (at least, on Linux) are subject to the same system-wide parameters of
the TCP state machine, none of which are appropriate for any particular
application. With QUIC we will finally have each application in control of its
own retry timers and other parameters. That is going to be quite beneficial,
especially on mobile where packet loss is so common.

~~~
blitmap
setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, ...)

setsockopt(fd, SOL_SOCKET, SO_SNDTIMEO, ...)

?

I'm not dismissing QUIC, but it is in your control to redefine those defaults.
Maybe in 2020 we'll be grappling back toward [sane] defaults.

~~~
romed
Those do not affect TCP state machine parameters like RTO(min), ATO, and TLP
timeout. These are internal to the kernel and are either static, or can only
be set systemwide. For example the minimum delayed ack timeout in Linux is
just 40ms and can't be changed except by recompiling the kernel. 40ms is a
totally inappropriate number for ATO in a datacenter or other low-latency
setting. Other numbers like RTO(min) are specified in RFCs as 200ms, again
completely inappropriate in a low-latency setting.

QUIC also frees us from other outdated misfeatures of TCP such as timestamps
in milliseconds when they should be in microseconds.

~~~
dcbadacd
How does QUIC compare to HTTP/2 over SCTP?

~~~
moderation
IETF - A Comparison between SCTP and QUIC [0]

0\. [https://tools.ietf.org/html/draft-joseph-quic-comparison-quic-sctp-00](https://tools.ietf.org/html/draft-joseph-quic-comparison-quic-sctp-00)

------
john37386
Most of the deadliest DDoS attacks happen over UDP: spoofing, reflection and
amplification, just to name a few. Many businesses simply deny UDP to protect
themselves against ongoing DDoS threats.

I feel this move won't make the internet a better and safer place, but let's
see.

~~~
baby
Or let's actually read about QUIC before quickly commenting on it.

QUIC uses two mechanisms to make sure you cannot mount such attacks:

* it requires a proof of IP ownership (exactly like TCP sequence numbers) to set up a connection ID; in practice, you have to be able to receive the server's response to finalize the connection [1]

* it requires the client's first message (ClientHello) to be padded to at least the size of the server's response, which implies that an attacker would need as much bandwidth as the server spends performing the attack, making the attack no more practical than it is without QUIC [2]

[1]: [https://tools.ietf.org/html/draft-ietf-quic-transport-16#section-8](https://tools.ietf.org/html/draft-ietf-quic-transport-16#section-8)

[2]: [https://tools.ietf.org/html/draft-ietf-quic-tls-16#section-9.1](https://tools.ietf.org/html/draft-ietf-quic-tls-16#section-9.1)

~~~
john37386
QUIC is secure and doesn't allow spoofing.

But since it runs on UDP, a hacker could attack a few DNS servers and amplify
a UDP attack toward a QUIC server. This is true for all reflection and
amplification attacks.

Hence, QUIC is exposed to receiving huge amplification attacks of 100Gbps and
soon 1Tbps. It will not make the internet a safer and better place.

Even video game companies used to use UDP, and they moved away because UDP is
too dangerous. They now use TCP with a kind of WebSocket technology so as not
to allow UDP.

Many big enterprises don't allow UDP toward their critical infrastructure.

~~~
Brybry
I'm curious, what games have moved exclusively to TCP?

Most (all?) multiplayer games I play still seem to use UDP, though there is
definitely more mixed TCP use than there used to be.

~~~
penagwin
Slither.io and Agar.io use exclusively TCP, but that's only because browsers
don't allow you to send or receive UDP packets. If you've ever played those
games on a network or device with shaky internet, you'll know they have lag
issues, and the only way to optimize them further would be to switch to UDP
(which they can't).

~~~
gsich
I am not sure that is what the parent meant by "games". If you want to talk
about browser games, you usually add the term "browser".

------
aaaaaaaaaab
Interesting comment under the post:

>The problem is fairness in the presence of network congestion. To a large
extent it depends on most TCP implementations using the same congestion
control algorithm, or at least algorithms that have the same general behavior.
Google's developed a new algorithm called BBR that is robust, but also unfair.
When a TCP connection implementing the NewReno algorithm shares a congested
link with another one implementing BBR, the BBR grabs the lion's share of the
bandwidth:

>[https://ripe76.ripe.net/presentations/10-2018-05-15-bbr.pdf](https://ripe76.ripe.net/presentations/10-2018-05-15-bbr.pdf)

>QUIC specifies NewReno as default and mentions CUBIC, but the choice of
algorithm is left to the implementation. I can easily envision Google using
BBR for connections between Chrome and Google properties, which means Google
traffic would be prioritized over competitors'. Over time, more players would
implement BBR in a race to the bottom (or a tragedy of the commons) and
Internet brown-outs as in the 1980s and 1990s would come back.

[https://blog.erratasec.com/2018/11/some-notes-about-http3.html?showComment=1542664152300&m=1#c2719702969623179467](https://blog.erratasec.com/2018/11/some-notes-about-http3.html?showComment=1542664152300&m=1#c2719702969623179467)

~~~
piannucci
Race to the bottom is one thing. Yes, so-called “TCP unfriendly” protocols
will muscle out any protocol that reacts to probability-p random packet drops
with a one-over-root-p reduction in throughput. But that does NOT mean that
BBR will fail to avoid congestive collapse. The one-over-root-p behavior is
outdated and actually harmful on wireless links anyway; it was designed under
the assumption that all losses are congestive, but today many losses are
purely random and not an indicator of congestion. BBR and other modern
TCP-unfriendly congestion control protocols are a necessary step.
TCP-friendliness must die.

~~~
richardwhiuk
WiFi generally doesn't drop packets - its usual failure mode is for latency
to spike horribly for a clump of packets.

~~~
piannucci
Under the hood, what's happening is that the physical layer is reporting that
a packet failed to decode, and the link layer is attempting retransmission at
a series of lower and lower fallback rates. It's designed this way because if
it fails to deliver a packet, TCP will freak out. There's an RFC about the
general case of designing link layers to hide random losses:
[https://tools.ietf.org/html/rfc3366](https://tools.ietf.org/html/rfc3366)

~~~
gsich
Packet loss is an essential part of how TCP determines maximum throughput.

~~~
jabl
Not with BBR.

------
exabrial
Is it possible to turn encryption _off_? If I'm running a cluster of sensors
on a remote airgapped network, the ease of using tools like tcpdump and nc far
outweighs the need for encryption, especially if one is power constrained.

~~~
askvictor
Presumably you're not being forced to use http/3 and can continue to use
whatever you're using now? Or is there a particular reason that you want to
move to http/3 for that network?

------
white-flame
I searched the page for proxying and tunneling, neither of which is discussed.
I'm sure switching to UDP will have dramatic consequences for these features.

------
saagarjha
I’m no web standards expert, but I’m surprised that this standard will be
implemented in userspace rather than in the kernel. Seems like a rather odd
choice: will there be a way to string together the requisite kernel calls to
achieve the same functionality, or will I be forced to link against a
“privileged” library that makes these calls for me?

~~~
zamadatix
To the kernel QUIC is just a UDP socket managed by your application, it's
nothing special in that regard.

------
anderspitman
Is there much hope of this leading to us being able to set up QUIC transport
connections (ie without HTTP) in the browser? This could be huge for browser
games and other low latency apps.

WebRTC is powerful but seems to be limited by not having a good portable
server implementation, due to complexity.

------
geo_mer
Is QUIC only built for HTTP or can it be also used as a secure infrastructure
for any L7 protocol?

~~~
M2Ys4U
The latter.

In fact, as I understand it, the point of the rename to HTTP/3 is to
distinguish the QUIC-HTTP bindings from the QUIC-by-itself protocol.

------
jokoon
Not sure I understand it very well, but does that mean http3 is going to use
UDP, and does that mean there is going to be a new TCP named TCP/2?

~~~
jabl
> but does that mean http3 is going to use UDP

Yes.

> there is going to be a new TCP named TCP/2

No.

There is a new protocol called QUIC, which runs on top of UDP. That is, to
routers/firewalls/middleboxes etc. that are not specifically aware of QUIC, it
will just look like UDP traffic.

QUIC provides TCP-like features (reliable streams, with retransmissions of
dropped packets, etc.) plus more. As to why QUIC instead of improving TCP:
experience has suggested that TCP has essentially become ossified, meaning
that middleboxes will drop TCP packets using new features (see e.g. ECN). Thus
in QUIC there's a focus on ossification resistance (mandatory encryption, with
as little information as possible exposed outside the encryption, etc.).

HTTP/3 runs on top of QUIC (which runs on top of UDP). Hopefully that makes it
clearer.

~~~
jokoon
Thanks for the explanation, that's clearer now.

Just another question, are the "middleboxes" well prepared for a massive
switch towards UDP traffic? I don't know the network hardware world very well.

Also I thought TCP traffic was benefitting from hardware implementation and
optimization, I guess that's also wrong.

~~~
jabl
> Just another question, are the "middleboxes" well prepared for a massive
> switch towards UDP traffic? I don't know the network hardware world very
> well.

I'm not an expert either, but AFAIK QUIC development has been heavily
influenced by the requirement to work on the "real" internet with various
middleboxes of varying quality.

And it's not like it's going to be an instant change. QUIC is AFAIK already
used between Chrome and google/youtube, and once QUIC & HTTP3 are official,
it'll be a very long tail.

> Also I thought TCP traffic was benefitting from hardware implementation and
> optimization, I guess that's also wrong.

Well, going all the way (that is, TOE), hasn't been _that_ popular, and e.g.
the Linux kernel has refused to support such cards in the mainline kernel.
What is common, and very useful, is checksum and segmentation offloads.
Checksums are present in both TCP and UDP, and AFAIK NICs capable of TCP
checksum offload can also do UDP checksum offloading. Similarly for
segmentation offloading, except in the UDP world it's called fragmentation
offload. Though I guess for QUIC, receive fragmentation offloading won't work
as long as the kernel and HW don't understand QUIC, as they won't understand
that two incoming UDP packets can be merged.

------
tor_user
have any of you geniuses figured out this will break websites for tor users
yet?

~~~
zmanian
Tor doesn't support UDP, so no HTTP/3, but one would expect most services to
remain available over HTTP/2 for the foreseeable future as well.

~~~
tor_user
like websites without javascript?

~~~
Dylan16807
Working without javascript requires development time. Running old versions of
HTTP doesn't. Version 1.1 is going to be fully supported for a long time.

~~~
tor_user
Working without javascript requires _money_. Running old versions of HTTP
requires _????_.

~~~
penagwin
Running old versions of HTTP requires nothing because people will need to
support legacy devices for a long time, and it's obviously already implemented
in all major components (Servers, CDNs, Clients, etc.)

~~~
tor_user
???? != nothing.

Why run HTTP/3 if HTTP/1 costs nothing?

Where I work legacy gets dropped sooner rather than later.

I hope this bombs, er tanks.

~~~
int_19h
You (will) run HTTP/3 because it is more efficient, meaning that your servers
will be able to service more requests.

You run HTTP/2 and HTTP/1, because a lot of people are still using that, and
you don't want to lose them. This especially applies to mobile devices, many
of which are stuck with software that cannot be updated for various reasons.

There's no threat of the majority of websites going HTTP/3 anytime soon. By
the time that might be a possibility, Tor will catch up.

~~~
tor_user
"Will" \- are you from the future?

------
freediver
“The top 5 corporations in the world are, in order, Apple-Google-Microsoft-
Amazon-Facebook”

I literally cannot find a criterion that would support this claim.

Market cap - no. Revenue - no. Customer satisfaction? Employee satisfaction?
Contribution to society? No.

~~~
owaislone
They are seen as the top five tech giants. That's probably what the author
meant.

~~~
freediver
Define the criteria please.

------
surak
Can we assume the main motivation for Google is to be able to track mobile
users better? Particularly as more users are using various blocking methods,
and the legal environment regarding 3rd-party trackers is questionable.

"With QUIC, however, the identifier for a connection is not the traditional
concept of a "socket" (the source/destination port/address protocol
combination), but a 64-bit identifier assigned to the connection."

~~~
skybrian
I don't see any reason to assume that. As the article explains, portable
network connections have been a goal in network research for many years.

Also, Google benefits from a faster and more secure Internet and its employees
have the freedom to pursue that in a wide variety of ways. They aren't all
mustache-twirling villains.

But I do think the security and privacy implications should be explored. What
could an attacker do with a persistent connection?

~~~
unilynx
Nothing that a session cookie can't, probably - web servers aren't getting
that much of a new tracking mechanism with a QUIC session id.

But a session cookie doesn't allow transferring a stream to a new IP address
during a long request/response.

------
pfschell
>With QUIC, however, the identifier for a connection is not the traditional
concept of a "socket" (the source/destination port/address protocol
combination), but a 64-bit identifier assigned to the connection. This means
that as you move around, you can continue with a constant stream uninterrupted
from YouTube even as your IP address changes, or continue with a video phone
call without it being dropped.

This is the ultimate dream of every surveillance company & gov't. Of course
Google is solving this "problem."

~~~
jetzzz
In a typical scenario, when you connect to a website with one IP address, then
change your network and connect again with different IP but using the same
device/browser, that website knows that you are the same user. I don't see how
having an identifier inside encrypted connection makes anything worse.

------
adeptima
> I mention this because one of the things that's missing from your education
> about the OSI Model is that it originally envisioned everyone writing to
> application layer (7) APIs instead of transport layer (4) APIs. There were
> supposed to be things like application service elements that would handle
> things like file transfer and messaging in a standard way for different
> applications. I think people are increasingly moving to that model,
> especially driven by Google with go, QUIC, protobufs, and so on.

HTTP/3 aka QUIC driven by Go & protobufs ? ;))))

