
HTTP/3: the past, the present, and the future - jgrahamc
https://blog.cloudflare.com/http3-the-past-present-and-future/
======
chmaynard
A beautifully written and illustrated blog post! Cloudflare is raising the bar
for documentation on the web.

------
nmussy
Hey @jgrahamc, just a heads-up, the output of the curl request in the "Using
curl" section is not up to date:

      alt-svc: h3-22=":443"; ma=86400

I was surprised you didn't deploy draft-23 despite quiche supporting it, so I
checked:

      $ curl -vs https://blog.cloudflare.com/ 2>&1 | grep alt-svc
      < alt-svc: h3-23=":443"; ma=86400

Could you pass it along?

~~~
matsur
Nice catch! As you've noticed, we plan to keep tracking the new drafts as they
come out. 22 was the current version at the time we drafted this post, and 23
is now live.

------
rumanator
Are there any benchmarks comparing HTTP/1, HTTP/2 and HTTP/3?

~~~
luizfelberti
Not aware of benchmarks, but specification-wise I consider HTTP2 to be a
regression.

Sure, there are good things about it, and many great enhancements, but the
core mechanics of the protocol (the most important part, really) have been
significantly worsened. I'd rate them as follows:

HTTP3 > HTTP1.1 > HTTP2

QUIC is an amazing protocol, I have no complaints about it, and I'm very happy
they decided to go with it for HTTP3. However, the decision to make all HTTP2
traffic go through a single TCP socket is horrible, and makes the protocol
very brittle under even the slightest network degradation or packet loss. Any
head-of-line blocking severely degrades the entire connection, even for
unrelated requests.

Sure, it CAN work better than HTTP1.1 under ideal network conditions, but any
network degradation is severely amplified, to the point where even traffic
within a datacenter can amplify a disruption and cause an outage.

HTTP3 however is a refinement on those ideas, and gets pretty much everything
right afaik.
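
The single-socket head-of-line blocking described above can be sketched with a
toy model (my own illustration, not real protocol code): the same four packets
with one loss, delivered through one shared ordered sequence space (HTTP/2
over TCP) versus independent per-stream sequence spaces (HTTP/3 over QUIC):

```python
# Toy model: release packets to the application strictly in sequence
# order, tracking how many have been delivered after each arrival.
def in_order_release(arrivals):
    buffered, expected, progress = set(), 0, []
    for seq in arrivals:
        buffered.add(seq)
        while expected in buffered:
            buffered.remove(expected)
            expected += 1
        progress.append(expected)
    return progress

# Four packets sent as seq 0,1,2,3; seq 1 is lost and arrives last.
arrival_order = [0, 2, 3, 1]

# HTTP/2: every stream shares one TCP sequence space, so the gap at
# seq 1 stalls everything -- nothing more is delivered until the end.
print(in_order_release(arrival_order))  # [1, 1, 1, 4]

# HTTP/3: streams "a" (packets 0, 2) and "b" (packets 1, 3) each have
# their own sequence space; the loss only stalls stream "b".
stream_a = in_order_release([0, 1])  # a's packets arrived in order
stream_b = in_order_release([1, 0])  # b's first packet arrived last
print(stream_a, stream_b)  # [1, 2] [0, 2]
```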

~~~
takeda
I wish they would target the real issues of HTTP. For example, adding proper
session support and getting rid of cookies.

~~~
romaniv
Do you seriously expect Google and Cloudflare to make basic web development
easier when they are making tons of money by "managing" its complexity for
others? It's in their best interest to make hosting your own content or using
smaller hosting providers absolute hell. Just like it's in Google's best
interest to make browsers so complex that other browser devs just give up and
Chrome derivatives become the only choice.

I'm predicting that eventually Google will start deranking HTTP 1.1 websites.

~~~
IfOnlyYouKnew
I have no idea how cookies are supposed to have an impact on hosting in
Google's cloud vs your own hardware, or why Google would care about people for
whom something like that is in any way significant.

~~~
gambler
_> I have no idea how cookies are supposed to have an impact on hosting in
Google's cloud vs your own hardware_

You do not see how the lack of built-in authentication increases the
complexity of implementing your own website, as opposed to "outsourcing"
stuff to Gmail,
Google Docs and so on? You don't see the zillion "sign in with Google" buttons
all over the web? You don't see how cookies are abused for tracking, which
benefits Google orders of magnitude more than it would a smaller company?

~~~
y4mi
Do you even know what "sign in with Google" means?

It's in no way easier than adding Keycloak authentication, for example, which
is another (self-hosted) external authentication solution.

Social auth isn't there to make authentication easier for the website owner.
He still has to do everything he'd have to do if he didn't use it.

It's there for the users, so they don't need to remember a bazillion
passwords.

------
perspective1
It's kind of amazing seeing positive things from monopolies and evergreen
updates. These institutions can roll things out fast. It's possible in
hardware too -- remember Bell Labs in its heyday?

------
tosh
from the article:
[https://github.com/cloudflare/quiche](https://github.com/cloudflare/quiche)
(QUIC & HTTP3 lib in Rust)

~~~
steveklabnik
It's been really great to see so much QUIC and HTTP/3 stuff in Rust. We have
quiche, there's also
[https://crates.io/crates/quinn](https://crates.io/crates/quinn), and
Mozilla's [https://github.com/mozilla/neqo](https://github.com/mozilla/neqo)

With my "Rust core team member" hat on, normally I'd want to see one good go-
to library for an ecosystem. With my "I love web protocols" hat on, I want to
see dozens of independent implementations. I wear hat #2 more than hat #1 when
it comes to HTTP/3.

------
tobib
From an adoption perspective, how different is adopting HTTP/2 or /3 from
adopting a new IP "version", i.e. IPv6?

~~~
matsur
It's much easier to get the adoption flywheel moving for new application and
transport protocols than things farther down in the OSI stack.

To get critical mass for HTTP/3 we'll need large server implementations (eg
Cloudflare) and large client implementations (eg Chrome and Firefox) to build
support and drive adoption. Which is what is happening!

Contrast this to v6 adoption which requires software support _and_ support
from hardware vendors, network operators, etc.

~~~
tobib
I hope I didn't miss this from the article but how do client and server
"negotiate" which protocol to use?

~~~
tialaramex
Already today, HTTP servers can send an "Alt-Svc" header which proposes a
different way that clients might reach the same resources. So one flow that
can happen goes, in full, like this:

1. User types [http://example.com](http://example.com) into the browser

1a. Browser does DNS lookup example.com AAAA or A -> 10.20.30.40

2. Browser connects to 10.20.30.40 TCP port 80 and speaks HTTP/1.1 over that
port, announcing Host: example.com, where it gets a 301 redirect to
[https://example.com](https://example.com) (HSTS pre-load would skip this
step)

3. Browser connects to 10.20.30.40 TCP port 443, using SNI for example.com,
where it is offered an ALPN option h2 (meaning HTTP/2.0); it takes that
option and speaks HTTP/2.0

4. Browser receives Alt-Svc: h3=":12345", which is an announcement that this
same HTTPS service is available as HTTP/3 using UDP port 12345 on the same
IP

5. Now, for any future resources from
[https://example.com/](https://example.com/), the browser knows it could get
them using HTTP/3
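
The Alt-Svc value in step 4 is a simple comma- and semicolon-delimited header
(RFC 7838). As a sketch of what a client does with it (`parse_alt_svc` is a
made-up helper; real browsers also honor the `ma` max-age parameter and verify
the alternative service before switching):

```python
def parse_alt_svc(value):
    """Split an Alt-Svc header value like 'h3=":12345"; ma=86400'
    into (protocol-id, alternative authority, parameters) tuples."""
    services = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        proto, _, authority = parts[0].partition("=")
        params = dict(p.partition("=")[::2] for p in parts[1:])
        services.append((proto, authority.strip('"'), params))
    return services

print(parse_alt_svc('h3=":12345"; ma=86400'))
# [('h3', ':12345', {'ma': '86400'})]
```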

In the future, maybe, a new DNS record (only really practical via DPRIVE such
as DoH, since useless middleboxes would probably make it undeployable
otherwise) will do what the SRV record was trying to do, but this time focused
on HTTP servers in particular. So with that record (again, not even a draft
exists for this yet, AFAIK):

1. User types [http://example.com/](http://example.com/) into the browser

1a. Browser uses DoH to do an HTTP-SERVICE DNS lookup for example.com -> a
big pile of stuff about how to reach this service. If that lookup fails, it
asks for A or AAAA instead.

2. Browser intuits from the big pile of stuff to do QUIC to 10.20.30.40 UDP
port 12345, and then speaks HTTP/3.

~~~
lucaspardue
Nice overview!

There is a draft [1] that would support Alt-Svc in DNS. Lots to consider there
but it is being presented to IETF.

[1] - [https://tools.ietf.org/html/draft-nygren-dnsop-svcb-httpssvc-00](https://tools.ietf.org/html/draft-nygren-dnsop-svcb-httpssvc-00)

~~~
teddyh
That looks _very interesting_, but, just as SRV records have been studiously
ignored in the past, I doubt that Google, Cloudflare, et al. will allow this
to become the norm, since it would eliminate much of the value proposition
which Cloudflare has, and also partially drain the moat which Google has
constructed around itself.

~~~
tialaramex
Maybe take another glance at the "very interesting" draft? Those named authors
are from Google and one of Cloudflare's competitors, Akamai.

Of course you can imagine these are rogue actors off developing technology
that's hostile to their employer's needs. But, like, Occam's razor. It seems
simpler to assume that these outfits see improving the place where they make
money (the web) as just good business.

~~~
teddyh
I’ll believe it when I see it. SRV has been around for ages, and the
arguments given for not using SRV in HTTP/2, QUIC, etc. have been weak and
unconvincing, despite its obvious benefits.

------
achillean
Looks like a decent number of servers (~80,000) already advertise support for
it:

[https://www.shodan.io/search?query=http+Alt-svc+h3](https://www.shodan.io/search?query=http+Alt-svc+h3)

------
innagadadavida
Are there any efforts to solve caching of encrypted content? Currently, all
HTTPS traffic has to be handled by the originating server directly. If there
were a standard for verifying document signatures, clients could just cache
the public key of the originating website and use that to verify the
signature and decrypt the content. This seems to be totally ignored, and
HTTP2/3 seem to be solving the wrong problem.

~~~
tialaramex
Such efforts are contra-indicated by privacy concerns. You'd need everybody
to agree on what we're happy to just let everyone see; then that goes in the
cache, and anybody who wishes we didn't know that they've just looked at,
say, the Wikipedia page for HIV gets to regret everybody else's decision at
their leisure.

Anything you decide needs to be private doesn't benefit from this cache. So,
maybe we can cache the little Y-combinator logo, but not anything anybody
wrote? The Youtube help pages, but not any videos? It just doesn't seem like
there's any plausible way this isn't either horribly invasive or largely
useless or both.

~~~
innagadadavida
You are right, but there are other cases, like, say, distribution of common
JS libraries. The way it works currently is via CDNs, and even then the
client could fetch the same object from different sources. Instead, if you
could describe the object with some sort of signed E-Tag, then the client
could just reuse a previously downloaded/authenticated/decrypted copy. Of
course, your actual bank statement should not be distributed this way.
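
One way to make the "signed E-Tag" idea concrete is content addressing,
similar in spirit to Subresource Integrity: identify the cached object by a
digest of its bytes, so a copy fetched from any mirror can be verified before
reuse. A sketch (illustrative only; the proposal above would want a public-key
signature from the origin, not just a bare hash):

```python
import base64
import hashlib

def sri_tag(body: bytes) -> str:
    """Compute an SRI-style integrity tag for a resource body."""
    digest = hashlib.sha256(body).digest()
    return "sha256-" + base64.b64encode(digest).decode()

library = b"console.log('hello');"
tag = sri_tag(library)

# A copy fetched from any mirror is safe to reuse iff its tag matches.
mirror_copy = b"console.log('hello');"
print(sri_tag(mirror_copy) == tag)  # True
```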

~~~
jlokier
The privacy concerns aren't that easy to solve.

For example, let's say site G, well known for fingerprinting and tracking,
wants to estimate how recently you last visited site W. W uses modern
software engineering recommended practices: CI/CD with several deploys a day,
heavily minimised ECMAScript, various icons too.

When you visit any page on G, G serves you a page that uses 37 scripts
(called "signals") of particular exact, minified versions, which G happens to
know W has served on particular days. The scripts are loaded in such a way
that they don't really do anything.

From that, G can estimate whether, and on what day, you visited W.

This works so well that G starts adding invisible iframes or AJAX calls to
signed copies of entire popular pages from W, and through that, works out
exactly what you've been reading. It takes a while because people read
different things, but guess by guess, they build up a profile of your W
reading habits over time, and the more accurate the profile becomes, the
better they are able to estimate what pages to check next.

They do this even though you don't visit G all that often, because G's page
does its measurements continuously while the page is loaded. You try turning
off JavaScript, but G's A/B test-driven AI helpfully evolves a clever
workaround with nested iframes and meta-refresh, so it doesn't make much
difference.

Eventually this is done in G's ServiceWorker so it trickles along in the
background, and G keep the bandwidth usage low enough that you don't notice.
Battery consumption is hardly affected because they run their probes at the
same time as the ubiquitous mobile-server-to-client-notification service wakes
up the radio. Which 99% of users have running all the time.

A kind of "web crawling", if you will, but crawling the client's history,
crawling shared, signed ETags just by trying them.

------
jazzyjackson
I noticed it says enabling QUIC on your network provides improvements to
encryption compared to TCP/TLS. Why is that?

~~~
lilyball
I assume it's referring to this:

> _QUIC also combines the typical 3-way TCP handshake with TLS 1.3's
> handshake. Combining these steps means that encryption and authentication
> are provided by default, and also enables faster connection establishment.
> In other words, even when a new QUIC connection is required for the initial
> request in an HTTP session, the latency incurred before data starts flowing
> is lower than that of TCP with TLS._
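
A back-of-the-envelope reading of that paragraph, counting round trips before
the first HTTP request can be sent (the 50 ms RTT is an arbitrary assumption,
and real-world behavior varies with TCP Fast Open, session resumption, etc.):

```python
rtt_ms = 50  # assumed round-trip time to the server

# TCP + TLS 1.3: one RTT for the 3-way handshake, then one for the
# TLS handshake, before the HTTP request can go out.
tcp_tls13 = 2 * rtt_ms

# QUIC: transport and TLS 1.3 handshakes are combined into one exchange.
quic_fresh = 1 * rtt_ms

# QUIC 0-RTT: a resumed connection can carry request data immediately.
quic_resumed = 0 * rtt_ms

print(tcp_tls13, quic_fresh, quic_resumed)  # 100 50 0
```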

------
anonymoushn
Is this HTTP/3 to origin too?

