This is, of course, why we also don't use chunked encoding and compression with HTTP/1 rather than writing better tools. I mean, you even have the example in that sentence: the solution for using TLS was not to say "it doesn't work with telnet" but to use s_client, ncat, socat, etc.
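For what it's worth, the "better tools" workflow is only a few lines even without s_client. A minimal sketch in Python (the hostname is a placeholder), doing exactly what telnet can't: speaking HTTP/1.1 by hand over TLS:

    # Hand-typed HTTP over TLS, the s_client equivalent in Python.
    import socket
    import ssl

    host = "example.com"  # placeholder: any HTTPS host to inspect
    context = ssl.create_default_context()

    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls:
            tls.sendall(
                f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
                .encode("ascii")
            )
            response = b""
            while chunk := tls.recv(4096):  # read until the server closes
                response += chunk

    print(response.decode("utf-8", errors="replace")[:500])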
> It also can't be supported by a lot of old software without some additional layer of indirection
This part is true, but it's also the classic legacy computing problem: old software which cannot be upgraded will increasingly have security and compatibility issues which favor putting it behind a proxy anyway. This should factor into the cost of choosing not to maintain those systems, rather than holding back the future.
Given that TLS is mentioned in-line as also breaking this point, it seems possible that some readers are of the opinion that the decision-making process you call for has already been conducted. Since TLS already breaks the "it is human-eyeball-friendly with telnet/netcat" standard, HTTP/2 is little different in the minds of many.
Your caution is right and apt. The history of computing is littered with technologies that were unready, unsuitable, or otherwise not fit for purpose when adopted.
And for what? Slightly faster speeds? This is a non-problem. When I see a slow site and view the network performance to discover why (using tools that would most likely not work under HTTP/2), here is what I inevitably see:
* Massive HTTP headers with unnecessary cookies
* Unnecessary, bloated CSS files
* Gratuitous unoptimized images
* No attempt at optimizing HTTP caching settings
* No attempt at HTTP compression (gzip)
These will still be the main bottlenecks with HTTP/2.
HTTP/2 solves no problem and makes network application development more difficult by making the network calls opaque and gratuitously complex.
"[...] currently no browser supports HTTP/2 unencrypted."
Accepting and indicating self-signed certs would also be handy for Tor's onion sites, since the only companies that will issue certs for .onion require EV validation, all your info, and a pile of money. And onion certs are pretty much only Facebook's area :/
How would I discern your self-signed certificate from the one served by the person ARP-spoofing the gateway in a Starbucks?
Why would the person who ARP-spoofs the gateway in the Starbucks not be able to fetch your self-signed cert chain and mint their own matching chain with the exact same metadata (but, of course, differing keys)?
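To make that concrete, here is a sketch of how cheap the forgery is, using the Python cryptography package (victim.pem is a placeholder for the cert you fetched):

    # Mint a self-signed cert with identical subject/issuer metadata
    # but the attacker's own key. Requires the `cryptography` package.
    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    original = x509.load_pem_x509_certificate(
        open("victim.pem", "rb").read()  # placeholder: the real site's cert
    )
    attacker_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    forged = (
        x509.CertificateBuilder()
        .subject_name(original.subject)          # copied verbatim
        .issuer_name(original.issuer)            # copied verbatim
        .public_key(attacker_key.public_key())   # the only real difference
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .sign(attacker_key, hashes.SHA256())
    )

    print(original.subject == forged.subject)            # True
    print(original.fingerprint(hashes.SHA256()).hex())   # differs...
    print(forged.fingerprint(hashes.SHA256()).hex())     # ...from this

Only the fingerprint differs, and no casual browser UI surfaces it.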
Sure, if I have a public IP and DNS records pointing towards it, I can get a cert from LE or a multitude of other vendors. But that's a small number of machines on any network.
Local name resolution (mDNS) could take a page from what Tor already does and encode the public key fingerprint into the domain name. If the key uniquely matches the domain (perhaps just the first part, e.g. b32-key-hash.friendly-name.local) then you automatically get the equivalent of domain validation. While, by itself, this doesn't prove that you're connected to the right domain, by bookmarking the page and visiting it only through that bookmark you would get the equivalent of trust-on-first-use. Browsers would just need to recognize this form of domain name and validate the key against the embedded hash instead of an external CA.
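A rough sketch of what that browser-side check could look like; the hash length, base32 encoding, and label convention here are all assumptions, not a spec:

    # Validate that the first DNS label commits to the server's public key.
    import base64
    import hashlib

    def expected_label(der_public_key: bytes, truncate_to: int = 16) -> str:
        # Unpadded, lowercase base32 of a truncated SHA-256 digest.
        digest = hashlib.sha256(der_public_key).digest()[:truncate_to]
        return base64.b32encode(digest).decode("ascii").rstrip("=").lower()

    def key_matches_domain(domain: str, der_public_key: bytes) -> bool:
        first_label = domain.split(".", 1)[0]
        return first_label == expected_label(der_public_key)

    # A browser would extract the DER-encoded public key from the presented
    # certificate and check it against the hostname instead of asking a CA.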
If you special-case self-signed certificates so that the warning looks any less threatening, you will eventually have that page presented to unsuspecting victims.
The HTTP/2 spec says a connection should not be shared across host/port combos, so any content you have loaded on a CDN, or on your own cdn.mydomain.com, will be a separate connection. The reason CDNs are faster is that they are closer (lower latency) or the content is common and already cached in your browser.
HTTP/2 still suffers from latency and TCP window sizes, so no, your 8 MB website will still be slow after you enable HTTP/2; you still have to push 8 MB out to the client. If you have a site loading over 80 resources, concat and minify that first before asking your server admin to turn on HTTP/2.
HTTP/1 clients get around some network latency issues by opening more than one TCP socket, just like SFTP clients using more than one thread, because it is hard to saturate a single socket when the latency on your ACK packets is 200ms+. If this weren't true, Google would not be spending the time on a UDP-based version of HTTP (QUIC).
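A sketch of that HTTP/1 workaround: fan the fetches out over several connections instead of serializing them on one. The URLs are placeholders, and it assumes the requests package:

    # Each requests.get() call below opens its own connection, so the
    # fetches ride parallel sockets, much like a browser's ~6-per-host limit.
    import concurrent.futures

    import requests

    urls = [f"https://example.com/asset-{i}.js" for i in range(12)]  # placeholders

    def fetch(url: str) -> int:
        return requests.get(url, timeout=10).status_code

    with concurrent.futures.ThreadPoolExecutor(max_workers=6) as pool:
        print(list(pool.map(fetch, urls)))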
Overall: lower your content size, lower the number of requests it takes to load your initial page, THEN turn on HTTP/2.
> The reason CDNs are faster is because they are closer(lower latency) or it is common and already cached in your browser.
Shared CDN hit rates tend to be rather low, and shared CDNs are usually slower than self-hosting unless you're requesting a LOT of resources, because the DNS + TLS setup overhead is greater than the eventual savings. Opening multiple connections was useful back when browsers had very low limits on the number of simultaneous requests, but they all raised those limits, and HTTP/2 allowing interleaved requests really tilted that into net-loss territory.
The best way to do this is to host your site behind a CDN, so that the initial setup cost is amortized across almost every request and you can use things like server push for the most critical page resources.
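For example, CDNs which implement HTTP/2 server push commonly trigger it from a preload hint in the origin's response headers; the path here is a placeholder:

    Link: </css/critical.css>; rel=preload; as=style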
> If you have a site loading over 80 resources, concat and minify that first before asking your server admin to turn on HTTP/2.
Similarly, this is frequently a performance de-optimization because it means that none of your resources will be processed until the entire bundle finishes streaming and the client needs to refetch the entire thing any time a single byte changes. The right way to do this is to bundle only things which change together, prioritize the ones which block rendering, and break as much as possible out into asynchronous loads so you don't need to transfer most of it before the site is functional.
It is not a performance de-optimization. If you have 8 MB of JS, sure, you should break that up, but then 8 MB of JS is your first true problem. I aim for about 400 KB or less of JS. On a marketing website it may change once a quarter, so I really don't care, and those are all first-time users anyway. For a web application I don't care either, because mobile users need that in one request, gzipped to something like 40 KB, so the hours it would take to optimize for the single-byte-changed case would only buy back about 80ms of performance budget.
For you, with warmed caches. The problem is that most users aren't you and so when they follow a link to example.com their client makes that first DNS request for example.com, starts the connection and TLS handshake, etc. and then sees a request for e.g. cdnjs.com and does that same work again for a different hostname. If (in that example) you were hosting your site on CloudFlare you'd have the same work for the first connection but not the second because it'd already have an open connection by the time your HTML told the client it needed your CSS, JS, etc.
Here's an example: check out https://www.webpagetest.org/result/190125_F0_a1a180631ecebd8... and notice how often you see “DNS Lookup” and “Initial Connection” times which are a significant chunk of the total request time, and ask yourself whether it would be better for the render-blocking resources to have those times be zero. Especially measure how that works on a cellular or flaky WiFi connection, which is closer to what most people experience in the real world.
I do real browser testing, and have Real User Monitoring set up to prove the results: page load times from 600ms to 1.2s reported from the client's browser. I measure and test our app on a 3G connection, including the high-packet-loss 4G signal I get at my house. The point of my comment was that HTTP/2 is not a fix for all performance issues on a site. The WebPageTest result would not have loaded in 2 seconds if they switched to HTTP/2; its 400 resources are what is slowing it down, not HTTP/1.
I haven't seen more recent studies on this, but I do remember a lot of use cases and benchmarks where, if your user base was on high-latency or unreliable internet connections (packet loss, even minor), HTTP/2 would be slower than HTTP/1. Here is a good summary of the issue:
Like you mentioned, HTTP is slowly moving towards UDP instead of TCP (QUIC protocol) to combat this.
Cloudflare covers it a bit here too: https://blog.cloudflare.com/the-road-to-quic/
I'm a fan of HTTP/2, and I think most people will benefit, but I really hate these kinds of posts that only highlight the best-case scenario to prove a point that has wide-ranging consequences. I can go on Google and type "HTTP/2 is fast" and be reaffirmed in everything I thought about HTTP/2 -- I just did -- and almost every single blog mentioned zero downsides to using HTTP/2.
After all, HTTP/1 is very simple to implement and is already widely used and optimized. It is usable for most cases. Plus, maybe in the future, a CDN can serve HTTP/2 to the client while using HTTP/1 to read from the origin.
And currently web browsers still need to send an Upgrade request in HTTP/1 to know whether or not an unknown HTTP server supports HTTP/2. I guess this will still be true after HTTP/3 comes out (Alt-Svc).
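For reference, the cleartext upgrade dance defined in RFC 7540 looks like this (example.com is a placeholder, and the settings payload is elided):

    GET / HTTP/1.1
    Host: example.com
    Connection: Upgrade, HTTP2-Settings
    Upgrade: h2c
    HTTP2-Settings: <base64url-encoded SETTINGS frame payload>

and a server can advertise HTTP/3 support in an ordinary response via Alt-Svc:

    HTTP/1.1 200 OK
    Alt-Svc: h3=":443"; ma=86400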
Thanks for the comments.
I'm under the impression that, since the DB stuff is still sync/blocking, we won't win much by running an ASGI server instead of a WSGI one.
Looks like Gino is a new project trying to bake an asyncio ORM on top of SQLAlchemy Core: https://github.com/fantix/gino
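A sketch of what that looks like, adapted from Gino's README; the connection string is a placeholder, and it assumes PostgreSQL with asyncpg:

    import asyncio

    from gino import Gino

    db = Gino()

    class User(db.Model):
        __tablename__ = "users"

        id = db.Column(db.Integer(), primary_key=True)
        nickname = db.Column(db.Unicode(), default="noname")

    async def main():
        # Every query below awaits instead of blocking the event loop,
        # which is what would make an ASGI server actually pay off.
        await db.set_bind("postgresql://localhost/gino")  # placeholder DSN
        await db.gino.create_all()

        await User.create(nickname="fantix")
        users = await db.all(User.query)
        print([u.nickname for u in users])

        await db.pop_bind().close()

    asyncio.get_event_loop().run_until_complete(main())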
I was under the impression that HTTP/2 used a persistent connection w/ multiplexing. This seems like it would be very nice in a web-browser-to-front-end situation, but what about for internal service calls? Seems like persistent connections between services would mess w/ common load-balancing schemes.
From the article, I see that the author relies heavily on the Chrome DevTools connection statuses to demonstrate the performance benefit.
Sorry, it is a little too early to switch.
The alternative is to create big sexy splash pages, create lots of hype, and lie to people about how easy it is to implement. When they're finally caught up in the complexity of implementation, it'll be too late to back out.
In contrast, HTTP/2 rapidly hit much higher numbers thanks to Firefox and Chrome shipping support via automatic updates. When CloudFlare deprecated SPDY about a year ago they were already seeing adoption numbers just under 70%:
I loathe the tendency to pack emotion and opinion into everything, and exaggerate everything, just because people think they'll get readers that way. Please don't.
Feedback on the article - haven't you chosen a pathological case of a single click triggering 20 API calls that don't depend on each other? Wouldn't it be simpler to batch these calls on the server?
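If it helps, here is a minimal sketch of that batching idea; the framework (Flask), endpoint name, and handlers are all hypothetical:

    # One round trip instead of 20: the client posts a list of operations
    # and the server fans them out internally.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical handlers for what would otherwise be separate API calls.
    HANDLERS = {
        "profile": lambda params: {"name": "..."},
        "notifications": lambda params: {"unread": 0},
    }

    @app.route("/api/batch", methods=["POST"])
    def batch():
        # Request body: {"requests": [{"op": "profile", "params": {...}}, ...]}
        results = []
        for item in request.get_json()["requests"]:
            handler = HANDLERS.get(item["op"])
            results.append(
                handler(item.get("params", {})) if handler
                else {"error": "unknown op"}
            )
        return jsonify({"results": results})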