
HTTP/1 should die; HTTP/2 can do everything HTTP/1 could, only faster - pgjones
https://medium.com/@pgjones/http-1-should-die-81b7588d617e
======
skywhopper
Actually, no. HTTP/2 cannot be easily read or written by a human with a telnet
or s_client connection without some additional tools. It also can't be
supported by a lot of old software without some additional layer of
indirection. This may or may not be important to you, but it is a thing that
HTTP/2 cannot do.

~~~
acdha
> HTTP/2 cannot be easily read or written by a human with a telnet or s_client
> connection without some additional tools

By that logic we also wouldn't use chunked encoding and compression with
HTTP/1 rather than writing better tools. You even have the counterexample in
that very sentence: the answer to TLS was not to say “it doesn't work with
telnet” but to reach for s_client, ncat, socat, etc.
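
For what it's worth, the tooling bar is low these days. Here's a minimal
sketch with the httpx Python client (one option among many; it needs pip
install httpx[http2]):

    import httpx  # pip install httpx[http2]

    # Ask for HTTP/2; httpx negotiates h2 via ALPN and falls back to
    # HTTP/1.1 if the server doesn't offer it.
    with httpx.Client(http2=True) as client:
        response = client.get("https://example.com/")
        print(response.http_version)  # "HTTP/2" on an h2-capable server
        print(response.status_code, response.headers.get("content-type"))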

> It also can't be supported by a lot of old software without some additional
> layer of indirection

This part is true, but it's also the classic legacy-computing problem: old
software which cannot be upgraded will increasingly have security and
compatibility issues which favor putting it behind a proxy anyway. That should
factor into the cost of choosing not to maintain those systems rather than
holding back the future.

~~~
skywhopper
How is HTTP/1.1 holding anything back?

~~~
acdha
HTTP/1 has a fixed feature set and a lot of warts. Nobody is working on an
HTTP/1.2, so the only way anything is going to get better is through the
HTTP/2 line of development. At some point we need to ask whether skimping on
maintenance is actually better than getting those improvements.

------
mankyd
One important difference: HTTP/2 requires* encryption. This makes getting up
and running for local development and small deployments more difficult.

* [https://http2.github.io/faq/#does-http2-require-encryption](https://http2.github.io/faq/#does-http2-require-encryption) "[...] currently no browser supports HTTP/2 unencrypted."
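
For local development that typically means generating a throwaway certificate
first. A rough sketch using Hypercorn's asyncio API (one way among several; it
assumes cert.pem/key.pem were created beforehand, e.g. with openssl or mkcert,
and that myapp.app is your ASGI app):

    import asyncio
    from hypercorn.config import Config
    from hypercorn.asyncio import serve

    from myapp import app  # hypothetical ASGI app; replace with your own

    config = Config()
    config.bind = ["localhost:8443"]
    # Browsers will only negotiate h2 over TLS, so even local
    # development needs a certificate.
    config.certfile = "cert.pem"
    config.keyfile = "key.pem"

    asyncio.run(serve(app, config))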

~~~
crankylinuxuser
I'm guessing they're not carving out a niche for self-signed certs? I think
that would have been a fair balance: show a "this is a self-signed cert"
notice instead of the current "YoUr WeBsIte IS UNSECURED" (which isn't
actually true).

Accepting and indicating self-signed certs would also be handy for Tor's
onion sites, since the only companies that will issue certs for them require
EV2, all your info, and a pile of money. And onion certs are pretty much only
Facebook's area :/

~~~
pilif
> (which isn't actually true)

how would I discern your self-signed certificate from the one served by the
person arp-spoofing the gateway in a Starbucks?

~~~
crankylinuxuser
No cert chain, for starters. The better answer here is some sort of "self-
signed standard data" for non-CA certs. But right now, testing behind NAT
with HTTP/2 is an ugly nonstarter.

~~~
pilif
> No cert chain for starters.

why would the person who arp-spoofs the gateway in the Starbucks not be able
to fetch your self-signed cert chain and mint their own matching chain with
the exact same metadata (but of course differing keys)?

~~~
crankylinuxuser
Then what's your proposal for NATted, self-contained (no gateway), and Tor
Onion networks?

Sure, if I have a public IP and DNS records pointing towards it, I'm served by
LE or a multitude of other vendors. But that's a small number of machines on
any network.

~~~
nybble41
If you have a public DNS entry then you can get a certificate from Let's
Encrypt using the DNS verification method, even if the name points to a
private IP address. DANE would also be an option if it (and DNSSEC) were more
widely implemented.

Local name resolution (mDNS) could take a page from what Tor already does and
encode the public key fingerprint into the domain name. If the key uniquely
matches the domain (perhaps just the first part, e.g. b32-key-hash.friendly-
name.local) then you automatically get the equivalent of domain validation.
While, by itself, this doesn't prove that you're connected to the _right_
domain, by bookmarking the page and visiting it only through that bookmark you
would get the equivalent of trust-on-first-use. Browsers would just need to
recognize this form of domain name and validate the key against the embedded
hash instead of an external CA.
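
As a rough sketch of what that labeling could look like (entirely
hypothetical; the hash, truncation length, and encoding here are my own
illustrative choices, not any spec):

    import base64
    import hashlib

    def key_hash_label(public_key_der: bytes) -> str:
        """Derive a base32 label that commits to a public key."""
        digest = hashlib.sha256(public_key_der).digest()
        # Truncated for readability; a real scheme would balance
        # usability against collision resistance.
        return base64.b32encode(digest[:10]).decode("ascii").lower()

    # Yields the "b32-key-hash" part of b32-key-hash.friendly-name.local
    print(key_hash_label(b"...DER-encoded public key...") + ".friendly-name.local")

The browser would then validate the served certificate's key against the
embedded hash instead of asking an external CA.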

------
jabart
Yes and no, because it is complicated. This is a fairly naive example of
HTTP/2 built to illustrate a point, while most websites I am called in on to
optimize are not built like this. This example only shaved off around 60ms
for 20 requests; if your website takes 20 requests to load, you have ads or
you have a resource problem.

The HTTP/2 spec says a connection should not be shared across host/port
combos, so any content you load from a CDN, or from your own cdn.mydomain.com,
will be a separate connection. The reason CDNs are faster is that they are
closer (lower latency) or the resource is common and already cached in your
browser.

HTTP/2 still suffers from latency and TCP window sizes, so no, your 8MB
website will still be slow after you enable HTTP/2; you still have to push 8MB
out to the client. If you have a site loading over 80 resources, concat and
minify first, before asking your server admin to turn on HTTP/2.

HTTP/1 clients get around some network latency issues by opening more than
one TCP socket, just like SFTP clients using more than one thread, because it
is hard to saturate a single socket when the latency on your ACK packets is
200ms+. If this weren't true, Google would not be spending the time on a
UDP-based version of HTTP.

Overall: lower your content size and lower the number of requests it takes to
load your initial page, THEN turn on HTTP/2.

~~~
acdha
There's some IE6-era HTTP/1 tuning advice in there which is no longer
necessary.

> The reason CDNs are faster is because they are closer(lower latency) or it
> is common and already cached in your browser.

Shared CDN hit rates tend to be rather low, and shared CDNs are usually slower
than self-hosting unless you're requesting a LOT of resources, because the
DNS + TLS setup overhead is greater than the eventual savings. Opening
multiple connections was useful back when browsers had very low limits on the
number of simultaneous requests, but the browsers all raised those limits, and
HTTP/2 allowing interleaved requests really tilted that into net-loss
territory.

The best way to do this is to host your site behind a CDN so that the initial
setup cost is amortized across almost every request, and you can use things
like server push for the most critical page resources.

> If you have a site loading over 80 resources, concat and minify that first
> before asking your server admin to turn on HTTP/2.

Similarly, this is frequently a performance de-optimization because it means
none of your resources will be processed until the entire bundle finishes
streaming, and the client needs to refetch the entire thing any time a single
byte changes. The right way to do this is to bundle only things which change
together, prioritize the ones which block rendering, and break as much as
possible out into asynchronous loads so you don't need to transfer most of it
before the site is functional.

~~~
jabart
Most of my tests on CDNs show they are faster, unless the server is hosted in
Chicago, near me. I have put anycast SSL termination in front of sites before,
which also decreased the SSL handshake time and barely beats CloudFront or
CloudFlare on a decent day. If I had the money I could use Akamai, which I
know my ISP hosts before traffic even hits the internet; no way I could make
my web server faster than that.

It is not a performance de-optimization. If you have 8MB of JS, sure, you
should break that up, but then 8MB of JS is your first true problem. I aim for
about 400KB or less of JS. On a marketing website it may change once a
quarter, so I really don't care, and those are all first-time users anyway.
For a web application I also don't care, because mobile users need it in one
request, gzipped down to around 40KB, so the hours it would take to optimize
for a single changed byte still only buy back a performance budget of about
80ms.

~~~
acdha
> Most of my tests on CDNs show they are faster unless the server is hosted in
> Chicago for me.

For you, with warmed caches. The problem is that most users aren't you and so
when they follow a link to example.com their client makes that first DNS
request for example.com, starts the connection and TLS handshake, etc. and
then sees a request for e.g. cdnjs.com and does that same work again for a
different hostname. If (in that example) you were hosting your site on
CloudFlare you'd have the same work for the first connection but not the
second because it'd already have an open connection by the time your HTML told
the client it needed your CSS, JS, etc.

Here's an example: check out
[https://www.webpagetest.org/result/190125_F0_a1a180631ecebd8...](https://www.webpagetest.org/result/190125_F0_a1a180631ecebd8a67d3b1b6473e66d3/1/details/#step1_request1)
and notice how often you see “DNS Lookup” and “Initial Connection” times which
are a significant chunk of the total request time — and ask yourself whether
it would be better for the render-blocking resources to have those times be
zero. Especially measure how that works on a cellular or flaky WiFi
connection, which is closer to what most people experience in the real world.

As to the gigantic bundle of JavaScript, do some real browser testing and ask
whether it's better to wait for the entire resource to be loaded before any
code runs — and to repeat that entire transfer every time one byte changes —
or only block on the portions needed to render the page. Yes, having 8MB of
JavaScript is too much in aggregate but the solution is to use less and use it
more intelligently, not slap it all into a big ball of mud.

~~~
jabart
Those DNS lookup times for Azure are in fact terrible; no DNS query should
take 400ms. You can pre-resolve DNS with HTML header tags (e.g. link
rel="dns-prefetch"), but nothing fixes a query that slow; that is a failure of
the Azure edge DNS servers.

I do real browser testing, and have Real User Monitoring set up to prove the
results: page load times from 600ms to 1.2s reported from the clients'
browsers. I measure and test our app on a 3G connection, including the
high-packet-loss 4G signal I get at my house. The point of my comment was that
HTTP/2 is not a fix for every performance issue on a site. The WebPageTest
result would not have loaded in 2 seconds if they switched to HTTP/2; its 400
resources are what is slowing it down, not HTTP/1.

~~~
acdha
Look at the times for Google where they loaded jQuery: because it's using an
external CDN, the initial connection time (80ms) is greater than the total
transfer time (69ms) on a relatively fast connection. It would be faster to
serve it from your own server and then you also don't have to deal with things
like SRI to maintain the same level of security.

------
rqs
I rather believe HTTP/2 will die when HTTP/3 is available.

After all, HTTP/1 is very simple to implement and is already widely used and
optimized. It is usable for most cases. Plus, maybe in the future, CDNs can
serve HTTP/2 to clients while using HTTP/1 to fetch from the origin.

And currently web browsers still need to send an Upgrade request in HTTP/1 to
know whether or not an unknown HTTP server supports HTTP/2. I guess this will
still be true after HTTP/3 comes out (alt-svc).

~~~
pmalynin
Actually, HTTP/2 negotiation isn't done with an “Upgrade:” request, but rather
with TLS protocol negotiation (ALPN).

~~~
rqs
Oh, silly me. I've implemented my own HTTP/2 server according to RFC 7540[0],
and I forgot that in the real world web browsers just send "h2" via TLS ALPN.

[0]
[https://tools.ietf.org/html/rfc7540#section-3.2](https://tools.ietf.org/html/rfc7540#section-3.2)
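
For anyone curious what that negotiation looks like from the client side, a
minimal sketch with Python's standard ssl module (example.com is just a
placeholder host):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    # Offer h2 first, with HTTP/1.1 as the fallback.
    ctx.set_alpn_protocols(["h2", "http/1.1"])

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # "h2" if the server agreed to HTTP/2 during the handshake.
            print(tls.selected_alpn_protocol())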

------
bpicolo
Aha, this article is by the creator of Quart[0]! I'm a big fan - one of my
favorite new python packages. It's essentially a super zippy, flask-compatible
asyncio python server. I switched a flask app over recently and saw an
immediate 10-20x throughput gain (the app is entirely io-bound). pgjones was a
pleasure to work with when I had a few issues and had to contribute a few
compat fixes as well. Thanks for the awesome package!

[0]: [https://gitlab.com/pgjones/quart/](https://gitlab.com/pgjones/quart/)
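
For anyone who hasn't tried it, the hello-world really is just Flask with
async handlers (a minimal sketch; check the Quart docs for specifics):

    from quart import Quart

    app = Quart(__name__)

    @app.route("/")
    async def index():
        # Handlers are coroutines, so awaiting IO here doesn't block
        # other requests.
        return "hello"

    if __name__ == "__main__":
        app.run()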

~~~
pgjones
Wow, any chance you could write up the throughput gain? It would be great to
see some real production numbers.

Thanks for the comments.

~~~
hultner
I've looked at Quart and it looks interesting, but is there much point in
running an async web server for APIs mainly reliant on DB access if we use
SQLAlchemy for database connections?

I'm under the impression that since the DB access is still sync/blocking, we
won't win much by running an ASGI server instead of WSGI.

~~~
bpicolo
Give peewee-async a shot if you'd like async DB access as well.

[https://peewee-async.readthedocs.io/en/latest/](https://peewee-async.readthedocs.io/en/latest/)

Looks like Gino is a new project trying to bake an asyncio ORM on top of
SQLAlchemy Core:
[https://github.com/fantix/gino](https://github.com/fantix/gino)

~~~
hultner
While interesting for greenfield development and experiments, I'd say it's
most often too big a leap to change both the ORM (with all the DB-access code)
and the microframework at the same time for a well-established project at
20KLOC+.

~~~
bpicolo
For sure, definitely not a good path for an existing app :) You can use
SQLAlchemy Core asynchronously via a few projects
([https://github.com/RazerM/sqlalchemy_aio](https://github.com/RazerM/sqlalchemy_aio))
but for the full ORM there's no option, as far as I know.
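
That said, if you're stuck with the blocking ORM, you can at least push the
calls onto a thread pool so they don't stall the event loop (a generic asyncio
sketch; run_query is a stand-in for your real SQLAlchemy code):

    import asyncio

    def run_query():
        # Stand-in for a synchronous SQLAlchemy session call.
        return ["row1", "row2"]

    async def handler():
        loop = asyncio.get_running_loop()
        # None selects the default ThreadPoolExecutor; the event loop
        # keeps serving other requests while the query runs.
        rows = await loop.run_in_executor(None, run_query)
        return rows

    print(asyncio.run(handler()))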

------
rando231
This can't be true, can it?

I was under the impression that HTTP/2 used a persistent connection with
multiplexing. This seems like it would be very nice in a web-browser-to-front-
end situation, but what about internal service calls? It seems like persistent
connections between services would mess with common load-balancing schemes.

~~~
bastawhiz
There's nothing stopping you from having one connection per request with
HTTP/2. You could build your software to simply have the same behavior as
HTTP/1.1 with keep-alive.
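
A quick sketch of that idea, using httpx for illustration (needs pip install
httpx[http2]; a fresh Client per request means a fresh connection each time):

    import httpx  # pip install httpx[http2]

    for path in ("/a", "/b"):
        # A fresh Client per request means a fresh TCP+TLS connection,
        # mimicking HTTP/1.1 without keep-alive.
        with httpx.Client(http2=True) as client:
            r = client.get(f"https://example.com{path}")
            print(r.http_version, r.status_code)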

------
altmind
Time will show what unknown challenges and problems HTTP/2 carries. So far the
protocol has been studied mostly by Google (and, to a lesser degree,
Cloudflare); there is limited research by independent parties.

From the article, I see that the author relies heavily on Chrome devtools to
demonstrate the performance benefit, leaning on Chrome's connection statuses.

My SPDY and HTTP/2 tests in 2016 did not show much improvement in perceived
load speed for our e-commerce site. Optimizing delivery (for us: caching the
pre-rendered JavaScript components and pre-loading some AJAX) yielded better
results. YMMV.

------
atemerev
So, WebSockets over HTTP/2 are only available in the latest Firefox (or
experimentally in Chrome, if you manually turn on a flag for it). Almost no
servers support HTTP/2 WebSockets either.

Sorry, it is a little too early to switch.

------
peterwwillis
If you want HTTP/2 to succeed, you're going to have to start making little
wins. Find a tiny, easy market and get them to use it. Then find a giant
customer (Google never counts) and get them to use it. If it seems more
complicated, nobody's going to pick it up unless they have to.

The alternative is to create big sexy splash pages, create a lot of hype, and
lie to people about how _easy_ it is to implement. When they're finally caught
up in the complexity of implementation, it'll be too late to back out.

------
commandlinefan
Same can be said about IPv6 - and could have been said about IPv6 20 years
ago. Still waiting...

~~~
acdha
That's a tricky comparison because using IPv6 required updates to the client,
server, and every box in between whereas HTTP/2 only requires the endpoints to
be updated and has a graceful degradation path in most cases. Unsurprisingly,
it's already far more common than IPv6 because you don't have to go to every
enterprise on the planet and tell them to fix things even their network team
is afraid to touch.

In contrast, HTTP/2 rapidly hit much higher numbers thanks to Firefox and
Chrome shipping support via automatic updates. When CloudFlare deprecated SPDY
about a year ago they were already seeing adoption numbers just under 70%:

[https://blog.cloudflare.com/deprecating-spdy/](https://blog.cloudflare.com/deprecating-spdy/)

------
antoinevg
Speed. It is not everything.

------
xena
Except websockets.

~~~
pgjones
This is now possible; the article talks about what impact HTTP/2 WebSockets
can make (see also RFC 8441).

------
est
but can h2 do websocket?

~~~
nickexyz
First sentence: "Recently WebSocket support has been added to HTTP/2 (in RFC
8441)"

