
Comparing HTTP/3 vs. HTTP/2 Performance - migueldemoura
https://blog.cloudflare.com/http-3-vs-http-2/
======
jrochkind1
So, as far as the results: In their synthetic benchmarks, they find negligible
to no improvement:

> For a small test page of 15KB, HTTP/3 takes an average of 443ms to load
> compared to 458ms for HTTP/2. However, once we increase the page size to
> 1MB that advantage disappears: HTTP/3 is just slightly slower than HTTP/2 on
> our network today, taking 2.33s to load versus 2.30s

And in their closer-to-real-world benchmarks, they find no improvement,
instead a slight degradation.

> As you can see, HTTP/3 performance still trails HTTP/2 performance, by about
> 1-4% on average in North America and similar results are seen in Europe,
> Asia and South America. We suspect this could be due to the difference in
> congestion algorithms: HTTP/2 on BBR v1 vs. HTTP/3 on CUBIC. In the future,
> we’ll work to support the same congestion algorithm on both to get a more
> accurate apples-to-apples comparison.

As a developer of web apps, I will personally continue not to think much
about HTTP/3. Perhaps in the future network/systems engineers will have
figured out how to make it bear fruit? I don't know, but counting on that
seems unwise to me.

~~~
ATsch
There's something else, performance aside, that's really exciting about
HTTP/3: Fixing a decades old layering violation that has made truly mobile
internet impossible.

In TCP, a connection is uniquely identified by the following tuple:

    (src ip, src port, dst ip, dst port)

The issue is that we depend not only on layer 4 details (port numbers) but
also on layer 3 information (IP addresses). This means we can never keep a
connection alive when moving from one network, and hence IP address, to
another.

We can do some trickery to let people keep their addresses while inside of a
network, but switch from mobile data to wifi and every TCP connection drops.

This is easy enough to solve, in theory: give every connection a unique ID,
and then remember the last address you received a packet from for that
connection, ideally in the kernel. This makes IP addresses completely transparent to
applications, just like MAC addresses are. However, the tuple is assumed
almost everywhere and NAT makes new layer 4 protocols impossible. Unless you
layer them over UDP. And this is exactly what Wireguard, QUIC, mosh and others
do. Once it's ubiquitous, you'll be able to start an upload or download at
home, hop on your bike, ride to the office, and finish it without the
connection dropping once.
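The scheme described above can be sketched in a few lines. This is a toy
illustration only; the names and shapes here are invented and are not QUIC's
actual wire format:

```javascript
// Toy sketch of connection-ID demultiplexing: connections are looked up
// by a stable ID, and the peer's address is simply whatever it last sent
// from. All names here are illustrative.
const connections = new Map(); // connId -> { lastAddr, packets }

function onDatagram(connId, srcAddr, payload) {
  let conn = connections.get(connId);
  if (!conn) {
    conn = { lastAddr: srcAddr, packets: [] };
    connections.set(connId, conn);
  }
  conn.lastAddr = srcAddr; // the peer may have migrated networks
  conn.packets.push(payload);
  return conn;
}
```

A client hopping from mobile data to wifi keeps its connection ID, so the
lookup still succeeds and replies simply go to the new address.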

~~~
rixed
Part of me considers HTTP/3 an application protocol and disagrees with this.
This is not a problem for HTTP to solve but a problem for the routing protocol
to solve. Is it desirable to reinvent a routing protocol at the application
layer, so that we can use it as yet another transport protocol, and so on?
Shouldn't we be using a single, unique 128-bit address for every device
regardless of which physical network it is attached to, by now? This is not a
technological limitation: if the same operator administered both local wifi
and long-distance GSM, then of course you would not lose your IP, just as you
do not lose it when you hop from one GSM antenna to the next.

...and part of me thinks HTTP/3 could be that universal transport protocol
that could eventually solve this problem, and then I agree.

~~~
ATsch
Yes, this shouldn't be done at the application layer. This is why it's done in
QUIC, which is layer 4.5-ish. HTTP/3 runs on top of that.

The "flat address space" idea, however, is completely ridiculous. That would
mean every node on the internet keeping track of the path to every single
other node. This is what Ethernet does, and it barely scales to ten thousand
or so nodes. Routers are already struggling hard with the 700k table entries
we have for the IPv4 internet, to the point where providers actually wrap IP
packets inside simpler protocols once they enter the network.

We need some kind of hierarchical addressing that expresses the location of a
node in the network. We know this works. We just need the layers above not to
rely on those addresses staying constant.

~~~
DarkWiiPlayer
> The "flat address space" idea, however, is completely ridiculous. That
> would mean every node on the internet keeping track of the path to every
> single other node. This is what Ethernet does, and it barely scales to ten
> thousand or so nodes. Routers are already struggling hard with the 700k
> table entries we have for the IPv4 internet, to the point where providers
> actually wrap IP packets inside simpler protocols once they enter the
> network.

If anything, we need a _more_ hierarchical structure, with stricter
separations by, e.g., continent/country/province, possibly down to the street
or even house level, so routers can more easily just throw the data in the
right general direction. Note the obvious privacy problem there.

------
londons_explore
A major benefit of HTTP/3 is the ability to transparently switch from one
network connection to another without restarting requests.

You could be midway through a gaming session over websocket, and walk away
from your wifi, and you shouldn't notice a glitch.

Nearly nothing else offers that ability, and it's very annoying, especially in
offices with hundreds of wifi access points - I should be able to walk down
the corridor on a video call without glitchiness!

MPTCP (developed mostly by Apple) offers the same, but Google and Microsoft
are holding it back, for some unknown reason.

~~~
ComputerGuru
This is called WiFi handoff and any enterprise AP deployment worth a damn
should have this sorted out, albeit in a proprietary manner. The WiFi standard
already has a client establishing a connection to a new AP before giving up
the old one at the actual “physical” transport layer, these proprietary
extensions exchange existing connection state information over the wired
backbone between APs when a client is attempting to move from one to the other
so that it can theoretically be a “seamless” experience. In theory, anyway.

~~~
Androider
Presumably HTTP/3 could do WiFi-> 4G -> WiFi -> 5G -> WiFi... hand-offs as
you're moving around.

~~~
ldng
Genuinely asking, is it actually working or is that like the promises of
multiplexing in HTTP/2 that don't really work IRL ?

~~~
Matthias247
Both share the same attribute: the spec allows them to work, but they require
a lot of effort on the implementation side to get right. HTTP/2 requires a
library that does sane write scheduling and prioritization to make it work.

QUIC handoffs are a lot more complicated. They will require a library which
supports all the necessary features, and they will require infrastructure
which supports them. Without infrastructure support, packets from the client
might get routed to the wrong host after an IP tuple change, and from then on
cannot be associated with the QUIC connection.

My guess is some QUIC deployments will figure out how to make it work; others
likely won't, since a lot of effort is involved.

------
the_duke
This does not mention if the tests also simulated and measured packet loss.

With a good network connection and little packet loss, I wouldn't expect much
benefit from /3, especially since all the server and client implementations
are immature and run in user space without kernel support.

The benefits should show up with (poor) mobile connections.

~~~
matdehaast
Thanks for pointing this out! I really wish the blog would explain that
better. /3 will really shine where the connection is degraded.

For me the most exciting part is the seamless network switching potential of
/3 on mobile devices

------
tomxor
> With HTTP/2, any interruption (packet loss) in the TCP connection blocks all
> streams (Head of line blocking).

This issue is really noticeable on my crappy home mobile internet when loading
web pages, in combination with the timeout being absurdly long for reasons I
don't understand.
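A toy model of why that quote hurts in practice: with HTTP/2, all streams
share one TCP byte stream, so data after a single lost segment can't be
delivered to any stream until the retransmit arrives. Entirely illustrative,
not a real TCP implementation:

```javascript
// Illustrative only: which streams' data the receiver can hand to the
// application when one TCP segment is lost. Everything after the gap
// waits, regardless of which HTTP/2 stream it belongs to.
function deliverableStreams(segments, lostSeq) {
  const delivered = [];
  for (const seg of segments) {
    if (seg.seq >= lostSeq) break; // hole in the byte stream: stop here
    delivered.push(seg.stream);
  }
  return delivered;
}
```

With segments `[{seq:1, stream:'css'}, {seq:2, stream:'js'}, {seq:3,
stream:'img'}]` and segment 2 lost, only the css data is delivered even
though the img segment arrived fine. QUIC retransmits per stream, so only the
stream owning the lost packet stalls.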

~~~
bsdubernerd
This is a major setback introduced with HTTP/2, and I'm not sure why it's not
mentioned more often.

Under Firefox you can set "network.http.sdpy.enable" to false to switch back
to HTTP/1.

The improvement I see with HTTP/2 is hardly noticeable, but HOL blocking is
very tangible as soon as you have occasional random packet loss.

~~~
tomxor
> Under firefox you can set "network.http.sdpy.enable" to false to switch back
> to HTTP/1.

Thanks! I might have to try that out. I'm currently relegated to tunneling
everything over sshuttle, which of course makes things slower but reliable,
since it terminates and reassembles the TCP connections locally, making them
appear to work flawlessly to HTTP... HTTP/1 may be a faster solution.

[edit]

Trying this now. Correction on parent comment for Firefox:

    goto:    about:config
    search:  network.http.spdy.enabled.http2

------
sholladay
In Node.js (curious to hear about other ecosystems), HTTP/2 hasn't even caught
on yet. Sure, it's technically supported by Node core and various frameworks,
but hardly anyone is really using it. Most of the benefits that HTTP/2 brings
to the table require a new model that doesn't map cleanly to the traditional
request/response lifecycle. It seems harder to program applications using
HTTP/2 because of that. Perhaps some of it is what we are used to and the
burden of learning something new, but I don't think that's the whole story. I
wonder if future HTTP versions will address this in some way or if it is going
to continue to be the new normal. It will be interesting to see what the
adoption curve looks like for HTTP/3 and onward. I'm still building everything
on HTTP/1.1 (RFC 7230) and have no plans to change that any time soon, even
though I can appreciate the features that are available in the newer versions.

~~~
Androider
Turns out it's not really an issue in practice, since you rarely serve naked
Node.js to the Internet. If you put something like a load balancer (ELB) or
reverse proxy (Nginx) in front of your service which speaks HTTP/2, you
already get 95% of the benefits. I expect HTTP/3 to likewise just be a toggle
offered by AWS/GCP/Azure/Nginx etc. in the future, and your users will see an
immediate benefit.

~~~
steveklabnik
Cloudflare includes such a toggle for HTTP/3, though to be honest I forget if
it's still in a closed beta or more generally available.

------
pgjones
It is possible to compare HTTP/3 to HTTP/2 & HTTP/1 using Python, as Hypercorn
(via aioquic for HTTP/3) supports all three.

When I compared late last year I found HTTP/3 to be noticeably slower
([https://pgjones.dev/blog/early-look-at-http3-2019/](https://pgjones.dev/blog/early-look-at-http3-2019/)),
however my test was much less comprehensive than the one here.

------
WhatIsDukkha
So I can't find the reference, but I believe there was a paper a few months
back claiming that there were big issues with fairness (as I understand the
word) toward other protocols.

The gist of it was that QUIC tends to just flat out choke out TCP running on
the same network paths?

Anyone know about this?

There is some mention of BBRv2 improving fairness, but not the outside
academic paper I was looking for:

[https://datatracker.ietf.org/meeting/106/materials/slides-106-iccrg-update-on-bbrv2](https://datatracker.ietf.org/meeting/106/materials/slides-106-iccrg-update-on-bbrv2)

------
flyinprogrammer
When you're ready for an actual improvement check out
[https://rsocket.io/](https://rsocket.io/)

------
cletus
So in a former life I worked on Google Fiber and, among other things, wrote a
pure-JS speed test (before Ookla had one, although theirs might've been in
beta by then). It's still there
([http://speed.googlefiber.net](http://speed.googlefiber.net)). This was
necessary because Google Fiber installers use Chromebooks to verify
installations, and Chromebooks don't support Flash.

This is a surprisingly difficult problem, especially given the constraints of
using pure JS. Some issues that spring to mind included:

- The User-Agent is meaningless on iPhones, basically because Steve Jobs got
sick of leaking new models in Apache logs. There are other ways of figuring
this out, but it's a huge pain;

- Send too much traffic and you can crash the browser, particularly on mobile
devices;

- To maximize throughput it became necessary to use a range of ports and
communicate on all of them simultaneously. This in turn could be an issue with
firewalls;

- Run the test too long and performance in many cases would start to degrade;

- Send too much traffic and you could understate the connection speed;

- Sending larger blobs tended to be better for measuring throughput, but too
large could degrade performance or crash the browser. Of course, what "too
large" was varied by device;

- HTTPS was abysmal for raw throughput on all but the beefiest of computers;

- To get the best results you needed to turn off a bunch of stuff like
Nagle's algorithm and any implicit gzip compression;

- You'd have to send random data to avoid caching, even with careful HTTP
headers that should've disabled caching.

And so on.
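A couple of the tricks above (random data to defeat caching, timed bytes
converted into a rate) can be sketched like this. The endpoint shape and
query parameters are invented for illustration, not the actual speed test's
API:

```javascript
// Pure helper: bytes moved in a time window -> megabits per second.
function mbps(bytes, seconds) {
  return (bytes * 8) / seconds / 1e6;
}

// Hypothetical measurement loop: fetch a payload with a random cache-buster
// and time it. The URL and parameter names are made up for this sketch.
async function measureDownload(url, bytes) {
  const bust = Math.random().toString(36).slice(2); // defeat caches anyway
  const t0 = Date.now();
  const res = await fetch(`${url}?bytes=${bytes}&nocache=${bust}`, { cache: 'no-store' });
  await res.arrayBuffer();
  return mbps(bytes, (Date.now() - t0) / 1000);
}
```

A real test would repeat this over multiple connections and blob sizes, for
the reasons the list above spells out.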

Perhaps the most vexing issue, which I was never able to pin down, was with
Chrome on Linux. In certain circumstances (and I never figured out exactly
what they were, other than high throughput), Chrome on Linux would write the
blobs it downloaded to /tmp (default behaviour) and never release them until
you refreshed the webpage. And no, there were no dangling references. The only
clue this was happening was that Chrome would start spitting weird error
messages to the console, and those errors couldn't be trapped.

So pure JS could actually do a lot and I actually spent a fair amount of
effort to get this to accurately show speeds up to 10G (I got up to 8.5G down
and ~7G up on Chrome on a MBP).

But getting back to the article at hand, what you tend to find is how terribly
TCP does with latency. A small increase in latency would have a devastating
effect on reported speeds.

Anyone from Australia should be intimately familiar with this as it's clear
(at least to me) that many if not most services are never tested on or
designed for high-latency networks. 300ms RTT vs <80ms can be the difference
between a relatively snappy SPA and something that is utterly unusable due to
serial loads and excessive round trips.

So looking at this article, the first thing I searched for was the word
"latency", and I didn't find it. Now sure, the idea of a CDN like Cloudflare
is to have a POP close to most customers, but that just isn't always possible.
Plus you hit things not in the CDN. Even DNS latency matters here, where
people have shown meaningful improvements in web performance just by having a
hot cache of likely DNS lookups.

The degradation in throughput that latency causes in TCP is well known
academically. It just doesn't seem to be known about, given attention, or
otherwise catered for in user-facing services. Will HTTP/3 help with this? I
have no idea. But I'd like to know before someone dismisses it as offering
minimal improvements or, worse, as degrading performance.
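The academic result alluded to is usually quoted via the Mathis et al.
approximation, throughput ≈ (MSS/RTT)·(C/√p): with loss held constant, the
throughput ceiling falls linearly as RTT grows. A quick sketch:

```javascript
// Mathis et al. approximation for steady-state TCP throughput (bits/s):
// throughput ~= (MSS / RTT) * (C / sqrt(p)), with C ~ sqrt(3/2) for Reno.
function tcpThroughputBps(mssBytes, rttSeconds, lossRate) {
  const C = Math.sqrt(1.5);
  return ((mssBytes * 8) / rttSeconds) * (C / Math.sqrt(lossRate));
}
```

At 0.01% loss, going from an 80ms RTT to Australia's 300ms cuts the ceiling
by 300/80 = 3.75x, before the application's serial round trips make things
worse.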

~~~
scarlac
They did mention multiple geographic locations as well as RTT (Round Trip
Time) which is somewhat equivalent to latency, no?

~~~
acdha
The challenge to control for is that they used WebPageTest, which tends to
have test locations in data centers near where Cloudflare's POPs are. Using
the traffic-shaping options can add latency, but what you really want is
random latency and packet loss to simulate real-world usage.

------
elsif1
I'm curious as to how good the bandwidth estimation is. That's something that
can certainly be improved relative to TCP, but it's also something that has a
lot of corner cases and is not usually done super well in UDP protocols (e.g.
WebRTC).

------
underdeserver
I wonder how many different artifacts Cloudflare is serving on this test
page. Maybe a better test would group the difference by the number of files
served on a single page load.

------
ryanthedev
So http3 will be using UDP? Makes sense.

Will we see more performance tuning when it comes to MTU sizes?

------
KenanSulayman
The USP of h3 isn't peak performance, it's 95th percentile latencies.

------
mpweiher
TLDR: still slightly slower, but "very excited"

