Similar to many of the other demos of HTTP/2 (Gopher Tile, Akamai), it's written in a way that presents HTTP/1.x in the worst light and manages to screw things up even more.
HTTP/1.1 is really latency-prone, so when you have a demo that uses lots of small requests that don't fill up the congestion window you run into a few problems:
1. The browser can only use a limited number of connections to a host, so once these are in use the other requests queue up waiting for a connection to become free.
2. Even when one becomes free, we've still got the request/response latency before the browser sees an image's bytes.
3. If the response doesn't fill the congestion window, i.e. it's a small file, then there's spare capacity that's not being used, i.e. packets we could have sent in that round trip but didn't.
4. In this demo the server sends Connection: close, which forces the browser to open a new TCP connection and negotiate TLS for each of the tiles, so the congestion window won't grow either (rough sketch of the cost below).
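To make point 4 concrete, here's a minimal Go sketch (not the demo's code; the URL is a placeholder) that fetches the same small object repeatedly, first with keep-alive and then with keep-alives disabled, which approximates what Connection: close forces on the browser:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // fetchAll downloads url n times sequentially and returns the elapsed time.
    func fetchAll(client *http.Client, url string, n int) time.Duration {
        start := time.Now()
        for i := 0; i < n; i++ {
            resp, err := client.Get(url)
            if err != nil {
                panic(err)
            }
            io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
            resp.Body.Close()
        }
        return time.Since(start)
    }

    func main() {
        const url = "https://example.com/tile.png" // placeholder, not the demo's endpoint

        reuse := &http.Client{Transport: &http.Transport{}}
        fresh := &http.Client{Transport: &http.Transport{DisableKeepAlives: true}}

        fmt.Println("keep-alive:       ", fetchAll(reuse, url, 50))
        // Every fetch below pays a fresh TCP handshake plus a TLS handshake,
        // and each new connection starts with a cold congestion window.
        fmt.Println("connection close: ", fetchAll(fresh, url, 50))
    }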
Yes, HTTP/2 is faster, because it can send multiple requests at the same time to overcome latency, the server can fill the congestion window, and the window will grow.
But are our web pages built of tiny image tiles, or of a greater variety of image and resource sizes?
EDIT: They've now enabled keep-alive which makes the HTTP/1.1 test much faster than it was
Regarding #4, isn't this a bit of a cheat? Who doesn't use keep-alive? Also, what about request pipelining? Doesn't that basically do the same thing that HTTP/2 is doing?
From memory, request pipelining is disabled in most browsers because many intermediaries (proxies etc.) screw it up, and it's still vulnerable to head-of-line blocking even when it's enabled.
You'd be surprised how many servers don't have keep-alive enabled; it's much more common than I'd like.
I'm a great fan of HTTP/2 but I'd like to see realistic tests!
There has been one realistic test that I know of, done by Microsoft Research, and they found pipelining to be basically equivalent to HTTP/2.
But that doesn't fit the narrative. Did Google ever test against pipelining? Did IETF? No, they didn't. Did Google ever show the effect of head-of-line blocking on page load speed, especially when spread over several independent TCP connections? No, they didn't. Why didn't they do this basic research before pushing their new protocol?
They just said "it's 40% faster! rubber stamp this because it's in Chrome!" and some of their fans even created stacked demos where pipelining wouldn't be used (perhaps unintentionally, but still invalid comparisons).
I think if you read back through the blogs of people like Will Chan and others, it becomes clear that pipelining doesn't work reliably enough outside a test-lab environment, plus HoL blocking is still an issue for it.
"Doesn't work reliably enough" doesn't answer the question of why Google didn't test HTTP/2 against pipelining. None of Google's performance improvement claims compared to pipelining, and they've never demonstrated or quantified an actual real-world head of line blocking problem (ironically, other than Google Maps loading very slowly in HTTP/2 because of a priority inversion).
Will Chan is the guy that wrote "it’s unclear what these intermediaries are". Oh well, there's some bad software out there, let's just make a whole new protocol /s. Fix the bad software, or at least find out what it is. If it's malware causing the problems, you don't need to make a whole new protocol you can just get rid of the malware.
Yup, and clients also mess it up, especially when it comes to pipelining. Should also not forget about TLS session resumption. I've seen it boost server performance over 10x, TLS session negotiation is CPU bound.
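If you want to see the resumption effect from the client side, here's a rough Go sketch (example.com is a placeholder host): with a shared session cache and keep-alives disabled, the second handshake can usually resume the first TLS session instead of doing the full key exchange, which is where the CPU saving comes from.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        tr := &http.Transport{
            // Cache TLS sessions so later handshakes can be resumed.
            TLSClientConfig:   &tls.Config{ClientSessionCache: tls.NewLRUClientSessionCache(64)},
            DisableKeepAlives: true, // force a fresh TLS handshake per request
        }
        client := &http.Client{Transport: tr}

        for i := 0; i < 2; i++ {
            resp, err := client.Get("https://example.com/") // placeholder host
            if err != nil {
                panic(err)
            }
            io.Copy(io.Discard, resp.Body)
            resp.Body.Close()
            // Expect "resumed: false" the first time and usually "resumed: true" after,
            // assuming the server hands out session tickets.
            fmt.Println("resumed:", resp.TLS.DidResume)
        }
    }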
Yes, #4 is totally cheating. But pipelining is very different from HTTP/2 multiplexing: with pipelining the responses have to come back in the order they were requested, so it can still suffer from head-of-line blocking. HTTP/2 is async in that regard.
Others have pointed out why HTTP/2 is still better but if you're curious about HTTP pipelining here are the reasons why Firefox and Chrome disabled it after years of testing:
Does nobody find it strange that neither Google nor Mozilla were able to determine what proxies and software broke with pipelining? In their controlled experiments pipelining worked fine. Pipelining worked fine in Opera, and in Mobile Safari, and Android Browser, and pretty much universally in Firefox as reported by users that enabled it.
It's a fair bet that the reason these companies didn't find out what was causing the very few failures was that it was due to SuperFish or other illicit software. Some people's computers having malware isn't a good reason to disable pipelining.
Your theory that it was just malware is easily disproven by reading the bug reports and blog posts from the years people spent trying to make this work.
Firefox found far from universal success even after years of testing and blacklisting known-noncompliant servers:
Opera's implementation apparently relied on some non-trivial heuristics, but they weren't well documented and were discontinued with the transition to Blink.
Similarly, it's easy to find cases where people discovered real problems in the wild with the iOS implementation:
It's easy to understand why people decided that it wasn't worth investing so much time in this when HTTP/2 would deliver significant additional benefits and, by using TLS as a starting point, could provably avoid the worst tampering proxies entirely.
Yes, there was one report of one problem with iOS pipelining, and circa-2000 IIS had problems. But these aren't what caused them to disable pipelining; allegedly it was unknown software causing problems. For instance, Google reported that some small percentage of requests failed, but they never determined the cause. We know that SuperFish and other malware were out there intercepting HTTP, and unknown software was causing pipelining problems, and malware isn't known for its attention to detail.
Yes, and if you do the Akamai test[0], which actually uses Keep-Alive, and enable pipelining in FF, it's possible to get HTTP/1.1 to within ~20-50% [1].
Yes – HTTP/2 will show the greatest impact on any connection with high latency because you avoid the classic HTTP 1 behaviour where each request must finish before the next one can be issued.
If you have a low-latency connection close to the server you can easily find cases where HTTP/1 is still competitive – e.g. in this case using Akamai's demo with a 2ms ping time to their closest CDN node:
(That's presumably due to the HTTPS setup – even with caching disabled, that page takes ~0.8 seconds on a reload.)
The catch, of course, is recognizing that this is not the case for most people, so even if we're personally not seeing a huge benefit on a fast computer in an office, it's still of huge benefit to anyone on a cellular network, flaky wifi, or a saturated ISP connection on another continent.
It may not be representative of the average page, but there are certainly pages it does represent. When you load into our HTML5 game, you download over 1,000 PNG images on your first load. HTTP/2 is pretty exciting for us in that regard, since we should see a significant improvement in load times.
Worth mentioning that the h2 test actually respects the congestion window. This helps both at the sending-too-slow stage and at the sending-too-fast stage, which can easily be shown with h1 using parallel connections for large objects (the parallelism essentially bypasses congestion control at startup).
The test is a lie. While I don't doubt the improvements in HTTP/2, this test uses "Connection: close" on the HTTP/1.1 side, which means each tile needs its own TCP connect and TLS handshake. That is not representative of the real world.
In http/2 the "Connection: close" header is meaningless and all the tiles come from the same connection.
Playing devil's advocate: "Connection: close" merely exaggerates the underlying issue.
That being said, the comparison would definitely have been fairer with keep-alive, and having to resort to a trick like this makes me wonder how much faith they have in their own product.
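For illustration, this is roughly all it takes to reproduce the trick on the server side in Go (a hypothetical handler, not the demo's actual code); drop the header and the default keep-alive behaviour comes back:

    package main

    import "net/http"

    // tilePNG stands in for the bytes of one image tile.
    var tilePNG = []byte{ /* ... */ }

    func tile(w http.ResponseWriter, r *http.Request) {
        // This single header is the "trick": the server tears the connection
        // down after every tile, so the client pays TCP + TLS setup each time.
        w.Header().Set("Connection", "close")
        w.Header().Set("Content-Type", "image/png")
        w.Write(tilePNG)
    }

    func main() {
        http.HandleFunc("/tile", tile)
        // Without the header above, net/http keeps connections open by default
        // and the browser happily reuses them.
        http.ListenAndServe(":8080", nil)
    }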
They're closer to the same speed now. Some of the difference could be the I/O rate the h2 server has to contend with, right?
The real way to demonstrate this would be to release a VM or container for people to test on their own server. And add a demo page of large assets, like you suggest.
Something is very fishy.
I tested this behind the evil-proxy-of-doom on the internal network and http2 was twice as fast despite the proxy barely supporting http1...
But then real http2 against the http2 server is still 2.98 sec vs 8.65 sec.
But pipelining has some really poorly performing cases (head-of-line blocking, cancel-and-retry semantics, etc.) that don't apply to h2; those gotchas aren't represented in this test.
Problems with HOL blocking can be reduced significantly with good caching, though. 50 blocking requests aren't much of an issue if they're all going to return small "304 Not Modified" responses straight out of the web server's file cache.
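As a sketch of that, in Go the standard library already does the conditional-request dance for you if you hand it a modification time (the asset path here is hypothetical):

    package main

    import (
        "net/http"
        "os"
    )

    func asset(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("static/app.css") // hypothetical asset
        if err != nil {
            http.NotFound(w, r)
            return
        }
        defer f.Close()

        info, err := f.Stat()
        if err != nil {
            http.Error(w, "stat failed", http.StatusInternalServerError)
            return
        }
        // ServeContent checks If-Modified-Since (and handles Range requests)
        // and answers with a tiny 304 Not Modified when nothing has changed.
        http.ServeContent(w, r, "app.css", info.ModTime(), f)
    }

    func main() {
        http.HandleFunc("/app.css", asset)
        http.ListenAndServe(":8080", nil)
    }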
And don't forget you can still get HOL blocking over HTTP/2... at the end of the day the browser has to start parsing HTML before it knows what else it needs to request. The only alternative is teaching your web server HTML, or a set of heuristics, and doing PUSH. And PUSH actually counteracts the good in caching, because when I load your index.html the web server has no idea whether I have jQuery or your blog's stylesheet cached or not.
What I really want when I visit a URL is for my browser to tell the web server when I last visited, and then for the web server to give me a complete list of all dependent resources and sub-resources that have changed since that visit.... basically a set of HEAD responses that constitute a diff. My browser can then just say "hmm, ok, I needed these last time, and they've changed, so while I'm downloading index.html I'll just go ahead and request this and this and this even though I have no idea how I'm going to load them yet".
Basically, imho, all webpages should be cached as git repos ;)
h2 is better than that: each request carries a priority, so the server can stop sending one resource and start sending another, higher-priority one if it becomes available. It can even do this interleaving based on the actual data available to send, not just the request queue, so things like CPU and IO time on one high-priority resource don't become blockers to using the bandwidth for a lower-priority resource that is ready to go.
Obviously that requires a decent server implementation that uses smallish sending chunks and doesn't over buffer.
So with h2 the browser should not reorder requests and hold some back (they all play those games in h1); it should just set the priority/dependency flags well to give the server (which is going to do the sending) maximum information.
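On the server side, the priority scheduling itself happens inside the HTTP/2 library; what the handler controls is the "smallish sending chunks, don't over buffer" part. A hand-wavy Go sketch of that discipline (the file path is hypothetical, and this illustrates the chunking only, not the h2 scheduler):

    package main

    import (
        "io"
        "net/http"
        "os"
    )

    func bigAsset(w http.ResponseWriter, r *http.Request) {
        f, err := os.Open("static/big.js") // hypothetical resource
        if err != nil {
            http.NotFound(w, r)
            return
        }
        defer f.Close()

        flusher, _ := w.(http.Flusher)
        buf := make([]byte, 16*1024) // smallish chunks instead of buffering the whole file
        for {
            n, err := f.Read(buf)
            if n > 0 {
                w.Write(buf[:n])
                if flusher != nil {
                    flusher.Flush() // hand the chunk to the connection now, don't sit on it
                }
            }
            if err != nil { // io.EOF or a real error: either way we're done
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/big.js", bigAsset)
        http.ListenAndServe(":8080", nil)
    }

(The unused import of io is avoided by checking err directly; with io.Copy the server would still chunk, but you'd lose control of the flush points.)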
The real HOL problem with h2 involves TCP loss recovery. It's a fairly minor issue in practice, though.
I serve WordPress over SPDY 3.1 all the time. Not sure if that's close enough to HTTP/2 to be comparable, but I don't see how HTTP/2's capabilities will solve the following problem:
I still need to wait 100-200ms for PHP to spit out HTML from index.php, 100ms for the client to receive it, and another 100ms for a list of prioritised requests for dependent resources to come back. How is that not a "real HOL problem"?
If you treat a request as referring to a bundle of resources rather than a single response, then you can solve all that by allowing the client to send a lot more metadata about its current view of the whole package. This frees up the web server to respond with logo.png, funkystyle.css and bloated.js, or a freshness response, while we wait for PHP.
The only way a user agent can solve this under HTTP/2 is to request a bunch of related resources it knows it needed last time, at low priority, and hope they are either needed, or unneeded but unchanged (so as not to waste bandwidth). And you know, a pipelined HTTP/1.1 client can do this just fine by prefacing its primary request with a bunch of If-Modified-Since requests.
And sure, you can build a webserver that can parse HTML out of PHP and do HTTP/2 PUSH, but again that's a waste of bandwidth if I have these resources cached.
That's why I think it's time to move away from a simple request, response and cache model and start thinking in terms of bundle synchronisation and dependencies.
This demo could possibly be even faster if it used HTTP/2.0 Server Push.
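As a sketch of what push looks like in a server API (newer Go exposes it in net/http as http.Pusher; the tile paths and cert files here are hypothetical, and you'd only push things the client probably doesn't already have cached):

    package main

    import "net/http"

    func index(w http.ResponseWriter, r *http.Request) {
        // Push only works over HTTP/2; over HTTP/1.1 the type assertion fails
        // and we just serve the page normally.
        if pusher, ok := w.(http.Pusher); ok {
            // Start sending the tiles before the client has even parsed the HTML.
            pusher.Push("/tiles/0.png", nil) // hypothetical paths
            pusher.Push("/tiles/1.png", nil)
        }
        http.ServeFile(w, r, "index.html")
    }

    func main() {
        http.HandleFunc("/", index)
        // HTTP/2 (and therefore push) requires TLS with net/http;
        // cert.pem and key.pem are placeholder paths.
        http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil)
    }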
Btw, note that if you're looking into supporting HTTP/2.0 yourself, there's still some waiting left with nginx: https://www.nginx.com/blog/early-alpha-patch-http2/
And there's no plan to support server push with the first production release. So NGINX users will have to keep using SPDY.
AFAIK the latest plan is to remove SPDY from the Chrome browser in early 2016, so nginx has to make sure to deliver before that...
HTTP/1.1 routinely outperforms HTTP/2 by several seconds for me: HTTP/1.1 is fairly stable at 7-8 seconds, while HTTP/2 varies from 4 to 11 seconds (though generally closer to 11 seconds than to 4).
Very simple: instead of opening a connection per request for the 200 tiles, HTTP/2 multiplexes all the requests over a single connection and streams the chunks in continuously. You can also see it in the pictures not appearing randomly but in the same order they are sent out.
Well, they also appear to be stacking the deck a bit, by configuring HTTP/1.1 in ways that no sane person would (i.e., force-closing the connection after each resource -- HTTP/1.1 already supports re-using the existing connection for further requests, but they're explicitly disallowing that to make HTTP/2 seem better).
I don't think the limiting factor in "how much cruft should we add" is bandwidth. It's the point at which it becomes so hard to use that users leave.
Example: Buzzfeed loads in 1 second for me on my work's broadband, but it's still full of junk making it hard to use. Bandwidth-wise, it could handle more crap on the page, but in terms of usability it's at the limit.
I think you actually will get a faster and generally improved web when sites are finally able to end HTTP/1 support, because it will free up web developers from old performance hacks like asset concatenation, image sprites etc, which add a lot of friction to making websites.
All that said, this demo is bullshit for the reasons given in other comments.
Recently .NET 4.6 has allowed Windows Server to serve some HTTP/2 to the Edge browser; it greatly improved the load speed and WebSocket calls of our app.
Same for me. I tried numerous times and couldn't get HTTP/2 to be faster than HTTP/1.1. I had one time where it was close, but the vast majority of the time it's between 2x and 4x slower than HTTP/1.1.
Ignoring HTTP/2, I'm finding it very interesting that on my 11" MacBookAir6,1 running OS X 10.9.5, Safari 7.0.6 is much faster than Chrome 44.0.2403.155 at the HTTP/1.1 test. Safari performs the test in almost exactly 3.00 seconds, while Chrome never comes in under 3.15 and often takes as high as 3.45.
The test server does not actually make sure the h2 test is using h2. If you are using a client that does not have h2 support then you are just exercising the fallback code on the server and testing h1 against h1. An iPhone is a good example :) (but it may be using SPDY instead... lots of variables).
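If you want to check what your client actually negotiated, something like this Go snippet works (the URL is a placeholder; this assumes a reasonably recent Go, whose default client negotiates h2 over TLS via ALPN automatically):

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        resp, err := http.Get("https://example.com/") // placeholder URL
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        io.Copy(io.Discard, resp.Body)

        // "HTTP/2.0" means h2 was actually negotiated;
        // "HTTP/1.1" means you silently fell back, just like the demo's clients can.
        fmt.Println("protocol:", resp.Proto)
        if resp.TLS != nil {
            fmt.Println("ALPN:", resp.TLS.NegotiatedProtocol)
        }
    }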
The speed-test links at the bottom for single files don't make any sense. A single-file download wouldn't benefit from the upgraded protocol, and from very rough testing on my 100Mb line it seems like the HTTP/1 links are artificially slowed down.
Cheaper and faster than AWS Cloudfront with free custom SSL. So what is the catch?
One issue is our data is on S3, and I believe that any outgoing S3 traffic to this CDN would be slow and cost money, while S3 to CloudFront is likely prioritized and free.
It is interesting that Safari & Firefox beat Chrome in the HTTP/1.1 test for me. However, the HTTP2 test is then twice as fast as Safari. Maybe we can stop smashing all those javascript files together.
Ignoring the technical issues with the demo that have been pointed out -- how does this actually prevent the need for a js bundle building process of some kind?
Suppose page.html depends synchronously on A.js, which depends synchronously on B.js, which depends synchronously on C.js. Somehow page.html needs to statically represent that it depends on each of A, B, and C to avoid requiring a roundtrip after each new dependency is loaded. Yes, the roundtrip cost is lower, but minimizing the number of roundtrips is desirable even over HTTP/2. So the utility of a module bundler like webpack, which walks the dependency graph of A.js and constructs a static representation of all the dependencies required by the HTML page, seems like something that's still going to be desirable even with HTTP/2...