
The Status of HTTP/3 - feross
https://www.infoq.com/news/2020/01/http-3-status/
======
mholt
This article omits that Caddy has had long-running experimental support for
HTTP/3 for a couple of years now:
[https://caddyserver.com/docs/json/apps/http/servers/experime...](https://caddyserver.com/docs/json/apps/http/servers/experimental_http3/)

~~~
teruakohatu
Caddy's strange official-binary licensing did them no favours. The licensing
makes more sense now, but they have fallen off people's radar.

~~~
PhilippGille
This might change with Caddy 2.0.

~~~
mholt
It changed months ago. It's all Apache licensed.

~~~
DoctorOW
I think they meant that it'll be back on people's radars once there is a big
release.

------
xfalcox
I created a container image of Discourse using nginx with the Cloudflare patch
to enable HTTP/3 [1], and for some reason the same config that works fine over
HTTP/2 loses the Content-Type header in Google Chrome. It works just fine in
Firefox...
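
One way to check whether the header is really missing on the wire (rather than
being mishandled by Chrome) is to compare the raw response headers with curl. A
minimal sketch, assuming a curl build with experimental HTTP/3 support and using
a placeholder URL:

    
        # Dump response headers (GET, body discarded) over HTTP/2 vs HTTP/3 and
        # compare the Content-Type lines. --http3 requires a curl built against
        # an HTTP/3-capable QUIC/TLS stack; the URL below is a placeholder.
        curl -s -D - -o /dev/null --http2 https://discourse.example.com/ | grep -i '^content-type'
        curl -s -D - -o /dev/null --http3 https://discourse.example.com/ | grep -i '^content-type'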

1. [https://github.com/cloudflare/quiche/tree/master/extras/ngin...](https://github.com/cloudflare/quiche/tree/master/extras/nginx)

------
tyingq
Because QUIC is UDP based, Chrome first runs a race with TCP just in case
you're sitting behind some device that blocks UDP.

I wonder how much bandwidth this will waste globally.

~~~
sebazzz
Isn't that like any device behind NAT?

~~~
stephenjudkins
No. Most NATs handle UDP sessions trivially as long as one end of the
connection is not behind a NAT itself. Tricks like UDP hole punching are
necessary only between two endpoints that are both behind NATs.

~~~
iso947
And assuming the protocol can cope with changing source ports. I've seen some
UDP protocols that only work if the source port is not translated.

------
udippant
Facebook has also open-sourced an implementation of HTTP/3
([https://github.com/facebook/proxygen#quic-and-http3](https://github.com/facebook/proxygen#quic-and-http3))
and an IETF QUIC implementation
([https://github.com/facebookincubator/mvfst](https://github.com/facebookincubator/mvfst)).

------
tambre
> with almost 300,000 services using it across the world

The Shodan search given matches on headers across all requests, so the vast
majority of results are just sites using Google Fonts, Maps, etc.

~~~
MarionG
According to W3Techs, 2.4% of the top 10m sites and 7.1% of the top 1k sites
support HTTP/3:
[https://w3techs.com/technologies/breakdown/ce-http3/ranking](https://w3techs.com/technologies/breakdown/ce-http3/ranking)

------
axaxs
I hate to be a curmudgeon, but I can't help but think that designing a new
service over UDP isn't the best idea. DNS has been fighting off wave after
wave of attack vectors, some that realistically cannot even be fixed. Making
it immune to these vectors is going to look a lot like a slapped together TCP
over UDP...

~~~
manigandham
Too bad no major internet company wants to implement a new version of TCP. We
can’t even get IPv6 done after 15 years.

~~~
jiggawatts
It's a common misconception that routers handle TCP. They strictly handle
_only_ the IP headers (and lower-level headers).

The TCP protocol is implemented only by _endpoints_, at least in principle.

It's the "security appliances", also known as "middleboxes" that are the
problem. Think web proxies, antimalware scanners, firewalls, and inline IDS
systems.

These things are the bane of the Internet, because they ossify protocols,
blocking any further development.

~~~
detaro
Although what a consumer considers a "router" is actually a middlebox doing a
bunch of things, and it does care. (CG-NAT in provider networks is probably
another example of a common problematic middlebox.)

------
3xblah
"HTTP/1.1 keep-alive connections, though, do not support sending multiple
requests out at the same time, which again resulted in a bottleneck due to the
growing complexity of Web pages."

Is this true?

Below we use HTTP/1.1 pipelining to send multiple requests (30) over a single
TCP connection in order to print short descriptions of the last 30 infoq.com
articles posted to HN.

    
    
       # Read URLs on stdin and emit one pipelined HTTP/1.1 GET per URL, all on a
       # single keep-alive connection; the trailing sed/printf turn the final
       # request into "Connection: close" so the server ends the connection.
       http11 ()
       { 
       while read x;do case $x in https://*)x1=${x#https://*/};;http://*)x1=${x#http://*/};;*)x1=${x#*/};esac;[ "$x1" != "$x" ]||x1="";x2=${x#*//};x3=${x2%%/*};printf "GET /$x1 HTTP/1.1\r\nHost: $x3\r\nConnection: keep-alive\r\n\r\n";done|sed '/^$/d;N;$!P;$!D;$d';printf "Connection: close\r\n\r\n";
       }
       # Pull the last 30 infoq.com submissions from HN, pipeline all the GETs over
       # one TLS connection, and print each article's id and description.
       curl https://news.ycombinator.com/from?site=infoq.com|grep -o "https://www.infoq[^\"?]*"|   http11   |openssl s_client -connect www.infoq.com:443 -ign_eof -quiet 2>/dev/null|sed -n '/@id/p;/^  \"description\": \"/p'

~~~
gsnedders
With HTTP/1.1 pipelining, you can't reliably start sending the second request
until the first response is complete. As such, you can't have multiple
requests out at the same time. It's also very much linear.

~~~
3xblah
"With HTTP/1.1 pipelining, you can't reliably start sending the second request
until the first response is complete."

In the example, all 30 requests were sent at the same time. openssl did not
wait for any responses.

This example can be repeated again and again, and every time all the responses
are received, in order. It is reliable.

Not sure who "you" refers to in the above statement; however, if it applies to
me then that statement is incorrect. I have been using HTTP/1.1 pipelining
outside the browser for decades.

HTTP/1.1 was written for HTTP clients. Browsers are just one type of client,
not the only type. More than half the traffic on the internet is initiated by
non-interactive clients, which, headless browsers aside, excludes browsers.

From RFC 2616:

user agent

The client which initiates a request. These are often browsers, editors,
spiders (web-traversing robots), or _other end user tools_.

~~~
gsnedders
Ah, I seem to have misremembered; from RFC 7230:

> A client that supports persistent connections MAY "pipeline" its requests
> (i.e., send multiple requests without waiting for each response).

Maybe this was just yet another case where plenty of intermediaries are broken
and mass-deployment has always been difficult?

~~~
Dylan16807
Yes, that. I always set Firefox to attempt pipelining until they removed it in
favor of HTTP/2. It worked well.

------
throw0101a
From the article:

> _HTTP/2, derived from the now deprecated SPDY protocol, introduced the
> concept of first-class streams embedded in the same connection._

Was this not done by SCTP?

* [https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...](https://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol#Message-based_multi-streaming)

It's just that (a) network boxes often block 'unknown' protocols, and (b) web
servers/browsers did not bother implementing the protocol.

~~~
WorldMaker
Yes, my impression of both the HTTP/2 and HTTP/3 efforts is that they learned a
lot from how many middleboxes clobber SCTP. I've heard the UDP-based QUIC that
HTTP/3 uses described before as "middlebox-safe SCTP"; though it differs in the
details, it attempts to accomplish much the same things while piggybacking over
UDP.

------
jrumbut
For the builder of small to medium (say, 10k to 1 million monthly users, some
media but not the primary focus) websites, apps, APIs, etc., is it time to
begin deploying HTTP/2 or even HTTP/3?

How would one make the decision, what factors would influence it? What are
some of the best books/essays arguing either direction?

I believe in maintaining best practices even when you could get away with
sloppiness on a specific project, so as to be good and fast at doing things the
right way, but I honestly can't tell where the new protocols fall.

~~~
GABeech
HTTP/2, absolutely: there are a ton of wins with some of the work they did,
especially around content loading and SSL.
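
As a rough illustration of how cheap the HTTP/2 side usually is on an nginx
setup (a sketch, not a full guide; it assumes nginx 1.9.5+ built with the
http_v2 module, and the paths are placeholders):

    
        # Enabling HTTP/2 in nginx is a one-word edit to an existing TLS listener:
        #   listen 443 ssl;      ->      listen 443 ssl http2;
        grep -rn "listen 443 ssl" /etc/nginx/          # find the listeners to edit
        sudo nginx -t && sudo systemctl reload nginx   # validate and apply after editing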

HTTP/3? Meh. We've started into the realm of solving google scale problems in
HTTP standards that have marginal if any benefit to the 99%.

~~~
1_player
> google scale problems [...] that have marginal if any benefit to the 99%

Such as connection migration over different networks? Who ever needs that?

"Sorry, call dropped. I was leaving the house and I lost connection to the
Wifi..."

------
K0SM0S
Slight tangent that actually shocked me when I learned about it: the OSI model
that everyone keeps talking about is actually _not_ the "official" or "real"
model in use. The actual protocol suite (and conceptual model) you really want
to learn and work with is simply called the "Internet Protocol Suite",
commonly known as "TCP/IP".

Consider this comparison between OSI and TCP/IP models from Wikipedia[1]:

> _The OSI protocol suite that was specified as part of the OSI project was
> considered by many as too complicated and inefficient, and to a large extent
> unimplementable. Taking the "forklift upgrade" approach to networking, it
> specified eliminating all existing networking protocols and replacing them
> at all layers of the stack. This made implementation difficult, and was
> resisted by many vendors and users with significant investments in other
> network technologies. In addition, the protocols included so many optional
> features that many vendors' implementations were not interoperable._

> _Although the OSI model is often still referenced, the Internet protocol
> suite has become the standard for networking. TCP/IP's pragmatic approach
> to computer networking and to independent implementations of simplified
> protocols made it a practical methodology. Some protocols and specifications
> in the OSI stack remain in use, one example being IS-IS, which was specified
> for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as
> RFC 1142._

There's a similar discussion on the TCP/IP article as well[2], highlighting
the impracticality of OSI "layers" in terms of implementation.

> _The IETF protocol development effort is not concerned with strict layering.
> Some of its protocols may not fit cleanly into the OSI model, although RFCs
> sometimes refer to it and often use the old OSI layer numbers. The IETF has
> repeatedly stated[citation needed] that Internet protocol and architecture
> development is not intended to be OSI-compliant. RFC 3439, addressing
> Internet architecture, contains a section entitled: "Layering Considered
> Harmful"._

In practice, a short discussion about SSL/TLS is enough to point out the
inadequacies of OSI, and to orient one towards aligning their conceptual model
with TCP/IP and the relevant RFCs.

The advice I follow personally, given the "popularity" of OSI (people refer to
"layers 1-7" as if they were an actual thing, and 90% of blogs and literature
use the OSI model), is to simply _translate into TCP/IP lingo/concepts_ in
your mind whenever you read or speak about it. Not everyone around you will
get it, but at least you'll have a more solid mental model should you be
working with the low levels of the network stack.

For reference, RFC 1122[3] defines the Internet Protocol Suite with 4 layers:
_application, transport, internet_ (or _internetwork_, the idea of routing)
and _link_. It does not assume or require a physical layer in the spec itself —
hence it works on 'anything' from USB to fiber, by way of Ethernet, Bluetooth
or PCIe/Thunderbolt, if you just follow the protocol.

[1]:
[https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...](https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/IP_model)

[2]:
[https://en.wikipedia.org/wiki/Internet_protocol_suite#Compar...](https://en.wikipedia.org/wiki/Internet_protocol_suite#Comparison_of_TCP/IP_and_OSI_layering)

[3]:
[https://tools.ietf.org/html/rfc1122#section-1.1.3](https://tools.ietf.org/html/rfc1122#section-1.1.3)

~~~
rstuart4133
The OSI model was always more of a GoF software "Design Pattern" for network
stacks. It's certainly one way you can do it, and breaking down the conceptual
functions in the way they do makes it easier to understand the whole stack by
looking at each function in isolation.

But it's not "the" only way to do it, and following it doesn't guarantee a
better outcome than not following it. Still, I thought it was a good way of
presenting the material, but then I came across student after student
insisting it was the only possible way to structure network stacks. It's not.
Despite being an ISO standard, no real stack follows it exactly. Its prime use
is as an educational tool.

------
fredrik-j
Can anyone now give a reasonable ETA for the HTTP/3 RFCs?

I see that the WG charter has one milestone in May 2020, and Daniel Stenberg
of curl has mentioned early 2020 before. In addition, AFAIU, both Chrome and
Firefox have implementations ready, though still behind flags.

Is it likely that we'll see actual deployments and a wider rollout already in
2020?

~~~
jeltz
I saw a talk from Daniel about a month ago and then he said he had no idea.

------
WhatIsDukkha
From a brief look at the Rust side:

Actix uses Tokio.

Quiche doesn't.

Quinn does, but doesn't look quite ready, and I don't see any integration
attempts with Actix yet.

------
Seenso
Why are they calling it HTTP/3 and not just keeping the QUIC name?

~~~
treve
Two reasons I can think of:

1. HTTP/2 was a 'new serialization' of the core HTTP data model. HTTP/3 is
too, so since it was ratified it made sense to use HTTP/3 to keep things
consistent.

2. QUIC still exists, but it's now the underlying framing/messaging protocol
on top of UDP. I can imagine future internet protocols being developed on top
of QUIC that don't need HTTP/3.

~~~
yeaaaaah
Yup. For example, there's a draft proposal for DNS-over-QUIC:
[https://tools.ietf.org/id/draft-huitema-quic-dnsoquic-06.html](https://tools.ietf.org/id/draft-huitema-quic-dnsoquic-06.html)

------
skrowl
Seems more verbose than
[https://caniuse.com/#feat=http3](https://caniuse.com/#feat=http3)

TL;DR: No browsers support it (without flag / config changes) yet.
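
For anyone who wants to poke at it anyway, this is roughly what the flag/config
changes look like (a sketch; the flag names are from late-2019 builds, the
draft version token keeps changing between releases, and the browser binary
name varies by platform):

    
        # Chrome Canary: enable QUIC and pin it to a specific HTTP/3 draft version,
        # then visit an HTTP/3-capable test site (cloudflare-quic.com, for example).
        google-chrome-canary --enable-quic --quic-version=h3-23 https://cloudflare-quic.com/

        # Firefox Nightly: no CLI flag; flip the pref in about:config instead:
        #   network.http.http3.enabled = true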

~~~
SquareWheel
In the age of evergreen browsers, it's possible to go from 0% to 85% in just a
few weeks' time. Upgrading web servers will be the real bottleneck.

~~~
Avamander
nginx will get H3 this year, so I think we'll see a massive uptick in
adoption.

~~~
thenewnewguy
Do you know where I can watch the status on that?

~~~
Avamander
[https://trac.nginx.org/nginx/ticket/1057](https://trac.nginx.org/nginx/ticket/1057)
and
[https://trac.nginx.org/nginx/roadmap](https://trac.nginx.org/nginx/roadmap)

------
bruntonjeeves
I'll start worrying about HTTP/3 at the same time I start worrying about IPv6:
never.

------
Animats
All this to support more ads per page.

All this is mostly a win only if you have a huge number of little assets coming
from different sources: ads, trackers, icon buttons, malware, etc. If it's all
coming from one source, HTTP/2 is good enough. If it's mostly one big file,
HTTP/1 is good enough.

~~~
NilsIRL
> All this to support more ads per page.

I'm also very afraid of the future of WebAssembly.

Both of these technologies, I'm afraid, aren't going to make things better,
faster and more lightweight. They are just going to allow the bloat of the web
to become worse without noticeable symptoms.

~~~
baby
This made me think of the problem with highways: the more lanes you add, the
more cars come to use your road, and congestion is not reduced.

This is a problem with computers in general. For example, IDEs and simple
Electron apps taking huge amounts of memory.

~~~
generj
The problem with highways is called induced or latent demand. It’s a known
problem in a lot of fields.

[https://en.m.wikipedia.org/wiki/Induced_demand](https://en.m.wikipedia.org/wiki/Induced_demand)

------
whatsmyusername
Yeah I'm not deploying this, pretty much ever.

I'll gladly eat the overhead of TCP to be able to avoid the reflection and
spoofing issues of UDP.

~~~
Rusky
The only reason QUIC is built on top of UDP is that ossification prevents it
from being built directly on top of IP. It's essentially at the same level as
TCP, and provides similar mechanisms to avoid UDP's issues.

~~~
whatsmyusername
That's my point: I don't want something at the same level as TCP, because then
you have to solve the same problems that TCP already solves.

Whitelisting patterns are incredibly common. Breaking that breaks a lot of
stuff.

