Similar to many of the other demos of HTTP/2 (Gopher Tile, Akamai), it's written in a way that presents HTTP/1.x in the worst light, and manages to screw things up even more.
HTTP/1.1 is really latency sensitive, so when you have a demo that uses lots of small requests that don't fill up the congestion window you run into several problems:
1. The browser can only use a limited number of connections to a host, so once these are in use the other requests queue up, waiting for a connection to become free.
2. Even when one becomes free, we've still got the request / response latency before the browser sees any of an image's bytes.
3. If the response doesn't fill the congestion window, i.e. it's a small file, then there's spare capacity that's not being used, i.e. packets we could have sent in that round trip but didn't.
4. In this demo the server sends Connection: close, which forces the browser to open a new TCP connection and negotiate TLS for each of the tiles, so the congestion window won't grow either (there's a rough timing sketch of this just after the list).
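To make that last point concrete, here's a rough sketch (standard-library Python, not anything from the demo; the tile URL is the one from the wget test further down the thread) comparing a fresh TLS connection per request against one reused connection:

    import http.client
    import time

    HOST = "1153288396.rsc.cdn77.org"                # the HTTP/1.1 demo host from the wget comment below
    PATHS = ["/http2/tiles_final/tile_18.png"] * 10  # same tile ten times is enough to show the handshake cost

    def fresh_connection_per_request():
        start = time.time()
        for path in PATHS:
            conn = http.client.HTTPSConnection(HOST)   # new TCP + TLS handshake every time
            conn.request("GET", path)
            conn.getresponse().read()
            conn.close()
        return time.time() - start

    def single_reused_connection():
        start = time.time()
        conn = http.client.HTTPSConnection(HOST)       # one handshake, reused if the server allows keep-alive
        for path in PATHS:
            conn.request("GET", path)
            conn.getresponse().read()
        conn.close()
        return time.time() - start

    print("fresh connection per tile:", fresh_connection_per_request())
    print("single reused connection: ", single_reused_connection())

On a high-latency link the first version pays the TCP and TLS handshakes for every single tile, which is exactly what Connection: close forces on the browser; the second only pays them once (and if the server still sends Connection: close, http.client quietly reconnects and the two numbers converge).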
Yes, HTTP/2 is faster, because it can send multiple requests at the same time to overcome latency, the server can fill the congestion window, and the window will grow.
But are our web pages built of tiny image tiles, or of a greater variety of image and resource sizes?
EDIT: They've now enabled keep-alive, which makes the HTTP/1.1 test much faster than it was.
You'd be surprised how many servers don't have keep-alive enabled; it's much more common than I'd like.
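If you want to spot-check a server yourself, something like this is usually enough (a crude sketch: it leans on http.client dropping its socket when the server signals it will close the connection; the Akamai demo host from elsewhere in the thread is just an example):

    import http.client

    def check_keep_alive(host, path="/"):
        conn = http.client.HTTPSConnection(host)
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()
        connection_header = resp.getheader("Connection", "")
        # http.client sets conn.sock to None as soon as it knows the server
        # will close the connection, so a live socket here means keep-alive.
        still_open = conn.sock is not None
        conn.close()
        return connection_header, still_open

    print(check_keep_alive("http2.akamai.com", "/demo"))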
I'm a great fan of HTTP/2 but I'd like to see realistic tests!
But that doesn't fit the narrative. Did Google ever test against pipelining? Did IETF? No, they didn't. Did Google ever show the effect of head-of-line blocking on page load speed, especially when spread over several independent TCP connections? No, they didn't. Why didn't they do this basic research before pushing their new protocol?
They just said "it's 40% faster! rubber stamp this because it's in Chrome!" and some of their fans even created stacked demos where pipelining wouldn't be used (perhaps unintentionally, but still invalid comparisons).
I think if you read back through the blogs of people like Will Chan and others, it becomes clear that pipelining doesn't work reliably enough outside a test-lab environment, plus HoL blocking is still an issue for it.
Will Chan is the guy that wrote "it’s unclear what these intermediaries are". Oh well, there's some bad software out there, let's just make a whole new protocol /s. Fix the bad software, or at least find out what it is. If it's malware causing the problems, you don't need to make a whole new protocol; you can just get rid of the malware.
It's a fair bet that the reason these companies didn't find out what was causing the very few failures was because it was due to SuperFish or other illicit software. Some people's computers having malware isn't a good reason to disable pipelining.
Firefox found far from universal success even after years of testing and blacklisting known-noncompliant servers:
(the actual blacklist at the time)
Opera's implementation apparently relied on some non-trivial heuristics, but those weren't well documented and were discontinued with the transition to Blink.
Similarly, it's easy to find cases where people discovered real problems in the wild with the iOS implementation:
(This also affected Mozilla: https://bugzilla.mozilla.org/show_bug.cgi?id=716840)
It's easy to understand why people decided that it wasn't worth investing so much time in this when HTTP/2 would deliver significant additional benefits and, by using TLS as a starting point, could provably avoid the worst tampering proxies entirely.
Also, the OP's test gave results of a similar order of magnitude.
It looks like HTTP2 will be awesome for mobile!
If you have a low-latency connection close to the server you can easily find cases where HTTP/1 is still competitive – e.g. in this case using Akamai's demo with a 2ms ping time to their closest CDN node:
(That's presumably due to the HTTPS setup – even with caching disabled, that page takes ~0.8 seconds on a reload)
The catch, of course, is recognizing that this is not the case for most people, so even if we're personally not seeing a huge benefit on a fast computer in an office, it's still of huge benefit to anyone on a cellular network, flaky wifi, or a saturated ISP connection on another continent.
Yes, we could somehow package this up on the server and unpack it on the client, but I'd prefer HTTP/2 do that for us.
The loading bar is really small and doesn't provide enough feedback.
Worth mentioning that the h2 test actually respects the congestion window. This helps both at the sending-too-slow stage and at the sending-too-fast stage, which can easily be shown with h1 using parallel connections for large objects. (The parallelism essentially bypasses congestion control at startup.)
In http/2 the "Connection: close" header is meaningless and all the tiles come from the same connection.
That being said, the comparison would definitely have been fairer with keep-alive, and having to resort to a trick like this makes me wonder how much faith they have in their own product.
You're right. Watched it closely a second time and there's definitely concurrency.
My wget implementation does not support http/2
$ time wget https://1153288396.rsc.cdn77.org/http2/tiles_final/tile_18.png
real 1.038 user 0.038 sys 0.007 pcpu 5.37
$ time wget https://1906714720.rsc.cdn77.org/http2/tiles_final/tile_18.png
real 0.539 user 0.045 sys 0.009 pcpu 10.01
We should also consider the fact that this is cherry-picking the worst trait of HTTP/1.1: that it's latency sensitive.
A demo with a real webpage of large assets would be a better example.
The real way to demonstrate this would be to release a VM or container for people to test on their own server. And add a demo page of large assets, like you suggest.
- HTTP/1.1 17.83s
- HTTP/2 57.73s
I guess I am living in a really remote area of the Interweb.
But then real http2 against the http2 server is still 2.98 sec vs 8.65 sec.
Problems with HOL blocking can be reduced significantly with good caching though. 50 blocking requests aren't much of an issue if they're all going to return small "304 Not Modified" responses straight out of the web server's file cache.
And don't forget you can still get HOL blocking over HTTP/2... at the end of the day the browser has to start parsing HTML before it knows what else it needs to request. The only alternative is teaching your web server HTML, or a set of heuristics, and doing PUSH. And PUSH actually counteracts the good in caching, because when I load your index.html the web server has no idea whether I have jQuery or your blog's stylesheet cached or not.
What I really want when I visit a URL is for my browser to tell the web server when I last visited, and then for the web server to give me a complete list of all dependent resources and sub-resources that have changed since that visit... basically a set of HEAD responses that constitute a diff. My browser can then just say "hmm, ok, I needed these last time, and they've changed, so while I'm downloading index.html I'll just go ahead and request this and this and this even though I have no idea how I'm going to load them yet".
Basically, imho, all webpages should be cached as git repos ;)
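For what it's worth, the revalidation flow being described is easy to sketch with the standard library (the tile URL is borrowed from the wget comment elsewhere in the thread; whether you actually get a 304 depends on the server sending validators like ETag or Last-Modified):

    import http.client

    HOST = "1153288396.rsc.cdn77.org"
    PATH = "/http2/tiles_final/tile_18.png"

    conn = http.client.HTTPSConnection(HOST)

    # First visit: full 200 response, remember the validators.
    conn.request("GET", PATH)
    resp = conn.getresponse()
    body = resp.read()
    etag = resp.getheader("ETag")
    last_modified = resp.getheader("Last-Modified")
    print(resp.status, len(body), "bytes")

    # Revisit: send the validators back; a well-behaved server answers
    # 304 Not Modified with an empty body if nothing has changed.
    # (If the server sent no validators, this is just another plain GET.)
    headers = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    conn.request("GET", PATH, headers=headers)
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")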
Obviously that requires a decent server implementation that uses smallish sending chunks and doesn't over-buffer.
So with h2 the browser should not reorder requests and hold some back (they all play those games in h1); it should just set the priority/dependency flags well to give the server (which is going to do the sending) maximum information.
The real HOL problem with h2 involves TCP loss recovery. It's a fairly minor issue in practice, though.
And that is that I still need to wait 100-200ms for PHP to spit out HTML from index.php, 100ms for the client to receive it, and another 100ms for a list of prioritised requests for dependent resources to come back. How is that not a "real HOL problem"?
If you treat a request as referring to a bundle of resources rather than a single response, then you can solve all that by allowing the client to send a lot more metadata about its current view of the whole package. This frees up the web server to respond with logo.png, funkystyle.css and bloated.js, or a freshness response, while we wait for PHP.
The only way a user-agent can solve this under HTTP/2 is to request a bunch of related resources it knows it needed the last time, at a low priority, and hope they are either needed, or unneeded but unchanged (so as not to waste bandwidth). And you know, a pipelined HTTP/1.1 client can do this just fine by prefacing its primary request with a bunch of If-Modified-Since requests.
And sure, you can build a webserver that can parse HTML out of PHP and do HTTP/2 PUSH, but again that's a waste of bandwidth if I have these resources cached.
That's why I think it's time to move away from a simple request, response and cache model and start thinking in terms of bundle synchronisation and dependencies.
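As a very rough sketch of that "preface the primary request with conditional requests" idea over pipelined HTTP/1.1 (raw sockets, no real response parsing, a made-up If-Modified-Since date, and only the one tile URL we know from the thread, pipelined three times):

    import socket
    import ssl

    HOST = "1153288396.rsc.cdn77.org"
    PATHS = ["/http2/tiles_final/tile_18.png"] * 3   # same known tile, three conditional requests

    requests = b""
    for i, path in enumerate(PATHS):
        last = (i == len(PATHS) - 1)
        requests += (
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "If-Modified-Since: Mon, 17 Aug 2015 00:00:00 GMT\r\n"
            "Connection: %s\r\n\r\n" % (path, HOST, "close" if last else "keep-alive")
        ).encode("ascii")

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            tls.sendall(requests)            # all three requests go out before any response is read
            raw = b""
            while True:
                chunk = tls.recv(65536)
                if not chunk:
                    break
                raw += chunk

    # Crude status-line count; fine for a sketch, since 304 responses carry no body.
    print([line for line in raw.split(b"\r\n") if line.startswith(b"HTTP/1.1 ")])

The responses still come back strictly in order, which is the HoL blocking everyone complains about, and a broken intermediary that mishandles pipelining garbles exactly this kind of exchange.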
Btw, note that if you're looking into supporting HTTP/2.0 yourself, then with nginx there's still some waiting left: https://www.nginx.com/blog/early-alpha-patch-http2/
And there's no plan to support server push with the first production release. So NGINX users will have to keep using SPDY.
AFAIK the latest plan with SPDY is to remove it from the Chrome browser in early 2016, so nginx has to make sure to deliver before then...
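In the meantime, if you want to check what a given server actually negotiates, a quick ALPN probe from Python is enough (an illustration only; the Akamai demo host mentioned elsewhere in the thread is used as the example):

    import socket
    import ssl

    def negotiated_protocol(host, port=443):
        ctx = ssl.create_default_context()
        # Offer h2 first, then SPDY, then plain HTTP/1.1, and see what the server picks.
        ctx.set_alpn_protocols(["h2", "spdy/3.1", "http/1.1"])
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.selected_alpn_protocol()

    print(negotiated_protocol("http2.akamai.com"))   # expect "h2" here; your own server may differ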
I assume I was supposed to see the opposite result? :P
I guess this test just isn't representative of anything at all.
HTTP/1.1 routinely outperforms HTTP/2 by several seconds for me: HTTP/1.1 is somewhat stable at 7-8 seconds, while HTTP/2 varies from 4 to 11 seconds (though generally closer to 11 seconds than to 4).
The Akamai demo works fine: https://http2.akamai.com/demo (though HTTP/2 is only ahead by 20% or so)
Can someone explain what exactly HTTP2 is doing differently to achieve such an improvement?
Any clue if Amazon's CDN service does or will offer HTTP/2 support too?
Example: Buzzfeed loads in 1 second for me on my work's broadband, but it's still full of junk making it hard to use. Bandwidth-wise, it could handle more crap on the page, but in terms of usability it's at the limit.
I think you actually will get a faster and generally improved web when sites are finally able to end HTTP/1 support, because it will free up web developers from old performance hacks like asset concatenation, image sprites etc, which add a lot of friction to making websites.
All that said, this demo is bullshit for the reasons given in other comments.
on mobile the difference is much bigger, like, 100s vs 5s
Yeah, I "can see the difference clearly", but I don't think it is the kind of difference they expected or intended.
Edit: Firefox 40 on Windows 7 at work. Will try at home as well.
Oddly enough, the Akamai demo someone else posted gives me 18.47s for HTTP/1.1 and 2.24s for HTTP/2.
For me: HTTP/1.1 at 13.19s, HTTP/2 at 1.48s
Edit: this was Chrome 44.0.2403.155 m on Windows 8 64-bit
One issue is our data is on S3 and I believe that any outgoing S3 traffic to this CDN would be slow and cost money, but S3 to CloudFront is likely prioritized and free.
Chrome 44, Win 7
With this, JS bundling is a thing of the past, I think.
Suppose page.html depends synchronously on A.js, which depends synchronously on B.js, which depends synchronously on C.js. Somehow page.html needs to statically represent that it depends on each of A, B, and C to avoid requiring a roundtrip after each new dependency is loaded. Yes, the roundtrip cost is lower, but minimizing the number of roundtrips is desirable even over http2. The utility of a module bundler like webpack, which walks the dependency graph of A.js and constructs a static representation of all the dependencies required by the html page, seems like something that's still going to be desirable even with http2...
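A toy sketch of that dependency walk (made-up module map, just to show why a static representation lets the browser ask for A, B and C in one go instead of discovering them one roundtrip at a time):

    # Hypothetical module -> imports map; a bundler builds this by parsing the sources.
    deps = {
        "page.html": ["A.js"],
        "A.js": ["B.js"],
        "B.js": ["C.js"],
        "C.js": [],
    }

    def transitive_deps(entry, graph):
        seen, stack = [], [entry]
        while stack:
            node = stack.pop()
            for dep in graph.get(node, []):
                if dep not in seen:
                    seen.append(dep)
                    stack.append(dep)
        return seen

    print(transitive_deps("page.html", deps))   # ['A.js', 'B.js', 'C.js'] -- request all three up front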
HTTP1 - 3.13s
HTTP2 - 0.54s
HTTP2 - 1.65s
Chrome on Windows 8.1.