HTTP/2 technology demo (http2demo.io)
165 points by LeZuse on Aug 20, 2015 | 96 comments

Is it a real-world demo, though?

Similar to many of the other demos of HTTP/2 (Gopher Tiles, Akamai), it's written in a way that presents HTTP/1.x in the worst light, and it manages to screw things up even more.

HTTP/1.1 is really latency-prone, so when you have a demo that uses lots of small requests that don't fill up the congestion window you run into a couple of problems.

1. The browser can only use a limited number of connections to a host, so once these are in use the other requests queue up, waiting for a connection to become free.

2. Even when one becomes free, we've got the request/response latency before the browser sees an image's bytes.

3. If the response doesn't fill the congestion window, i.e. it's a small file, then there's spare capacity that's not being used: packets we could have sent in that round trip but didn't.

4. In this demo the server sends Connection: close, which forces the browser to open a new TCP connection and negotiate TLS for each of the tiles, so the congestion window won't grow either.

Yes, HTTP/2 is faster, because it can send multiple requests at the same time to overcome latency, the server can fill the congestion window, and the window will grow.
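As a back-of-envelope illustration of those effects, here's a toy latency model. The setup cost (TCP + TLS assumed to be ~3 RTTs), the six-connection limit, and the 50 ms RTT are all made-up assumptions for illustration, not measurements of this demo:

```python
# Toy model of the four effects above: fetch n small tiles at a given
# round-trip time. Setup costs and connection limits are assumptions.

def ceil_div(a, b):
    return -(-a // b)

def h1_no_keepalive(n, rtt, max_conns=6):
    # Every request pays connection setup (~3 RTTs) plus one RTT for
    # the request/response itself, across max_conns parallel sockets.
    return ceil_div(n, max_conns) * 4 * rtt

def h1_keepalive(n, rtt, max_conns=6):
    # Setup is paid once per connection; each queued request still
    # costs one RTT.
    return 3 * rtt + ceil_div(n, max_conns) * rtt

def h2_multiplexed(n, rtt):
    # One connection, every request in flight at once: setup + ~1 RTT
    # (this ignores congestion-window growth, which favours h2 further).
    return 3 * rtt + rtt

rtt = 0.05  # 50 ms round trip
print(h1_no_keepalive(180, rtt))  # ~6.0 s
print(h1_keepalive(180, rtt))     # ~1.65 s
print(h2_multiplexed(180, rtt))   # ~0.2 s
```

Crude as it is, it shows why the demo's no-keep-alive setup exaggerates the gap: most of the HTTP/1.1 penalty here is repeated connection setup, not the protocol's request/response model.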

But are our web pages built of tiny image tiles, or of a greater variety of image and resource sizes?

EDIT: They've now enabled keep-alive, which makes the HTTP/1.1 test much faster than it was.

Regarding #4: isn't this a bit of cheating? Who doesn't use keep-alive? Also, what about request pipelining? Doesn't that basically do the same thing as http/2?

From memory, request pipelining is disabled in most browsers because many intermediaries (proxies etc.) screw it up, and it's still vulnerable to head-of-line blocking even when it's enabled.

You'd be surprised how many servers don't have keep-alive enabled, it's much more common than I'd like.

I'm a great fan of HTTP/2 but I'd like to see realistic tests!

There has been one realistic test that I know of, done by Microsoft Research, and they found pipelining to be basically equivalent to HTTP/2.

But that doesn't fit the narrative. Did Google ever test against pipelining? Did IETF? No, they didn't. Did Google ever show the effect of head-of-line blocking on page load speed, especially when spread over several independent TCP connections? No, they didn't. Why didn't they do this basic research before pushing their new protocol?

They just said "it's 40% faster! rubber stamp this because it's in Chrome!" and some of their fans even created stacked demos where pipelining wouldn't be used (perhaps unintentionally, but still invalid comparisons).

Is it this paper - http://research.microsoft.com/pubs/170059/A%20comparison%20o...

I think if you read back through the blogs of people like Will Chan and others, it becomes clear that pipelining doesn't work reliably enough outside a test-lab environment, plus HoL blocking is still an issue for it.

"Doesn't work reliably enough" doesn't answer the question of why Google didn't test HTTP/2 against pipelining. None of Google's performance improvement claims compared to pipelining, and they've never demonstrated or quantified an actual real-world head of line blocking problem (ironically, other than Google Maps loading very slowly in HTTP/2 because of a priority inversion).

Will Chan is the guy that wrote "it’s unclear what these intermediaries are". Oh well, there's some bad software out there, let's just make a whole new protocol /s. Fix the bad software, or at least find out what it is. If it's malware causing the problems, you don't need to make a whole new protocol you can just get rid of the malware.

Yup, and clients also mess it up, especially when it comes to pipelining. We should also not forget about TLS session resumption: I've seen it boost server performance over 10x, since TLS session negotiation is CPU-bound.

I didn't even check whether they were using session resumption, OCSP stapling, etc.; I thought I'd found enough shortcomings for now.

mostly, keep-alive could be worse when only a few requests run

Yes, #4 is totally cheating. But pipelining is very different from http/2 multiplexing: with pipelining, the responses have to come back in the order they were requested, so it can suffer from head-of-line blocking. http/2 is async in that regard.
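That ordering difference can be sketched in a few lines. The per-response server times below are hypothetical; the point is only that pipelining delivers in request order while multiplexing delivers whenever a response is ready:

```python
# Hypothetical times (seconds) at which each response becomes ready on
# the server; response 0 is slow, the rest are fast.
times = [1.0, 0.1, 0.1, 0.1]

def pipelined_completion(times):
    # Pipelined responses must be delivered in request order, so each
    # delivery waits for both its own readiness and its predecessor's.
    done, t = [], 0.0
    for ready in times:
        t = max(t, ready)
        done.append(t)
    return done

def multiplexed_completion(times):
    # h2 frames are interleaved, so each response lands when it's ready.
    return list(times)

print(pipelined_completion(times))    # [1.0, 1.0, 1.0, 1.0] - all blocked
print(multiplexed_completion(times))  # [1.0, 0.1, 0.1, 0.1]
```

One slow response at the head of a pipeline delays everything behind it; over a multiplexed connection the fast responses come through immediately.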

Others have pointed out why HTTP/2 is still better but if you're curious about HTTP pipelining here are the reasons why Firefox and Chrome disabled it after years of testing:

https://bugzilla.mozilla.org/show_bug.cgi?id=264354 https://www.chromium.org/developers/design-documents/network...

Does nobody find it strange that neither Google nor Mozilla were able to determine what proxies and software broke with pipelining? In their controlled experiments pipelining worked fine. Pipelining worked fine in Opera, and in Mobile Safari, and Android Browser, and pretty much universally in Firefox as reported by users that enabled it.

It's a fair bet that the reason these companies didn't find out what was causing the very few failures is that it was due to SuperFish or other illicit software. Some people's computers having malware isn't a good reason to disable pipelining.

Your theory that it was just malware is easily disproven by reading the bug reports and blog posts from the years people spent trying to make this work.

Firefox found far from universal success even after years of testing and blacklisting known-noncompliant servers:


(the actual blacklist at the time http://hg.mozilla.org/mozilla-central/file/1d122eaa9070/netw...)

Opera's implementation apparently relied on some non-trivial heuristics, but they weren't well documented and were discontinued with the transition to Blink.

Similarly, it's easy to find cases where people discovered real problems in the wild with the iOS implementation:


(This also affected Mozilla: https://bugzilla.mozilla.org/show_bug.cgi?id=716840)


It's easy to understand why people decided that it wasn't worth investing so much time in this when HTTP/2 would deliver significant additional benefits and by using TLS as a starting point could provably avoid the worst tampering proxies entirely.

Yes, there was one report of one problem with iOS pipelining, and circa-2000 IIS had problems. But these aren't what caused them to disable pipelining, allegedly it was the unknown software causing problems. For instance Google reported that some small percent of requests failed, but they never determined the cause for it. We know that SuperFish and other malware was out there intercepting HTTP, and unknown software was causing pipelining problems, and malware isn't known for its attention to detail.

Yes, and if you do the Akamai test[0], which actually uses Keep-Alive, and enable pipelining in FF, it's possible to get HTTP/1.1 to within ~20-50% [1].

[0] https://http2.akamai.com

[1] https://i.imgur.com/VJBKG36.png
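For reference, enabling it in the Firefox of that era meant flipping roughly these about:config prefs (names from memory; they were removed when pipelining support was dropped):

```
network.http.pipelining = true
network.http.pipelining.ssl = true
network.http.pipelining.maxrequests = 8
```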

I tried this test on mobile (iPhone 6 with iOS 9) over a flaky hotel wifi, and I got impressive results, 25s vs 3s: http://m.imgur.com/ND1wzZs

Also the OP test gave similar order of magnitude.

It looks like HTTP2 will be awesome for mobile!

Yes – HTTP/2 will show the greatest impact on any connection with high latency because you avoid the classic HTTP 1 behaviour where each request must finish before the next one can be issued.

If you have a low-latency connection close to the server you can easily find cases where HTTP/1 is still competitive – e.g. in this case using Akamai's demo with a 2ms ping time to their closest CDN node:


(That's presumably due to the HTTPS setup – even with caching disabled, that page takes ~.8 seconds on a reload)

The catch, of course, is recognizing that this is not the case for most people and so even if we're personally not seeing a huge benefit on a fast computer in an office, it's still of huge benefit to anyone on a cellular network, flaky wifi, saturated ISP connection on another continent.

Are there any browsers that have pipelining turned on by default?

That's a good point. Not long ago I would have said Opera uses it, but they discontinued it, and the new Opera basically uses the Blink engine from Chrome.

This demo is representative of loading Clara.io scenes, lots of individual images and meshes:


Yes, we could somehow package this up on the server and unpack it on the client, but I'd prefer HTTP/2 do that for us.

Took 40 seconds for it to load and the navigation to show up. Why is there no helpful message?

The loading bar is really small and doesn't provide enough feedback.

It may not be representative of the average page, but there are certainly pages that it does represent. When you load into our HTML5 game, you download over 1,000 PNG images on your first load. HTTP/2 is pretty exciting for us in that regard, since we should see a significant improvement in load times.

Why not fit your game into a single HTML file, without any external dependencies?

good analysis.

worth mentioning that the h2 test actually respects the congestion window. This helps both at the sending-too-slow stage and at the sending-too-fast stage, which can easily be shown with h1 using parallel connections for large objects (the parallelism essentially bypasses the congestion control at startup).

The test is a lie. While I don't doubt the improvements in http/2, this test uses "Connection: close" on the http/1.1 test, which means each tile needs its own TCP connect and TLS handshake. This is not representative of the real world.

In http/2 the "Connection: close" header is meaningless and all the tiles come from the same connection.

Playing the devil's advocate; "Connection: close" merely exaggerates the underlying issue.

That being said, the comparison would have definitely been fairer with keep-alive and having to resort to a trick like this makes me wonder how much faith they have in their own product.

concurrency is not impacted by this, but it affects window scaling, and it adds 2-4 extra round trips per request for the connection setup.

> concurrency is not impacted by this

You're right. Watched it closely a second time and there's definitely concurrency.

the other server is 2x faster even without http/2

My wget implementation does not support http/2.

HTTP Server:

  $ time wget https://1153288396.rsc.cdn77.org/http2/tiles_final/tile_18.png
  real 1.038      user 0.038      sys 0.007       pcpu 5.37
HTTP2 Server:

  $ time wget https://1906714720.rsc.cdn77.org/http2/tiles_final/tile_18.png
  real 0.539      user 0.045      sys 0.009       pcpu 10.01
of course, that's just latency... but this is hardly a scientific demonstration.

We should also consider the fact that this is cherry-picking the worst trait of HTTP/1.1: that it's latency-sensitive.

A demo with a real webpage of large assets would be a better example.

They're closer to the same speed now. Some of the difference could be the I/O rate the h2 server has to contend with, right?

The real way to demonstrate this would be to release a VM or container for people to test on their own server. And add a demo page of large assets, like you suggest.

Funny, for me

- HTTP/1.1: 17.83s
- HTTP/2: 57.73s

I guess I am living in a really remote area of the Interweb.

Something is very fishy. I tested this behind the evil-proxy-of-doom on the internal network and http2 was twice as fast despite the proxy barely supporting http1...

But then real http2 against the http2 server is still 2.98 sec vs 8.65 sec.

This is a copy of Akamai's http2 demo: http://http2.akamai.com

Akamai's http/1 does keep alive. This one does not.

And Akamai's is inspired by http://http2.golang.org/gophertiles (which they acknowledge).

My results on the Akamai page show less of a difference between the two protocols than http://www.http2demo.io does.

Akamai's test does not cheat by sending Connection: close

...and if you enable pipelining in Firefox, it's only 20% slower than HTTP/2 (~1.6s vs ~1.3s).

but pipelines have some really poorly performing cases (head-of-line blocking, cancel-and-retry semantics, etc.) that don't apply to h2; those gotchas aren't represented in this test.

Yup, it's true there are pathological cases.

Problems with HoL blocking can be reduced significantly with good caching, though. 50 blocking requests aren't much of an issue if they're all going to return small "304 Not Modified" responses straight out of the web server's file cache.

And don't forget you can still get HoL blocking over HTTP/2... at the end of the day the browser has to start parsing HTML before it knows what else it needs to request. The only alternative is teaching your web server HTML, or a set of heuristics, and doing PUSH. And PUSH actually counteracts the good in caching, because when I load your index.html the web server has no idea whether I have jquery or your blog's stylesheet cached or not.

What I really want when I visit a URL is for my browser to tell the web server when I last visited, and then for the web server to give me a complete list of all dependent resources and sub-resources that have changed since that visit.... basically a set of HEAD responses that constitute a diff. My browser can then just say "hmm, ok, I needed these last time, and they've changed, so while I'm downloading index.html I'll just go ahead and request this and this and this even though I have no idea how I'm going to load them yet".

Basically, imho, all webpages should be cached as git repos ;)

h2 is better than that: each request carries a priority, so the server can stop sending one resource and start sending another, higher-priority one if it becomes available. It can even do this interleaving based on the actual data available to send, not just the request queue, so things like CPU and IO time on one high-priority resource don't become blockers to using the bandwidth for a lower-priority resource that is ready to go.

Obviously that requires a decent server implementation that uses smallish sending chunks and doesn't over buffer.

So with h2 the browser should not reorder requests and hold some back (they all play those games in h1); it should just set the priority/dependency flags well to give the server (which is going to do the sending) maximum information.
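The scheduling idea described above can be sketched as a toy sender loop (hypothetical stream names and ready-times, not any real server's implementation): at each tick, transmit one chunk from the highest-priority stream that actually has data ready, so a stalled high-priority stream never idles the link.

```python
def schedule(streams, ticks):
    # streams: name -> (priority, [(ready_tick, chunk), ...]); a lower
    # priority number wins. Each tick, send one chunk from the highest-
    # priority stream that has a chunk ready.
    sent = []
    for t in range(ticks):
        ready = [(prio, name) for name, (prio, chunks) in streams.items()
                 if chunks and chunks[0][0] <= t]
        if not ready:
            continue
        _, name = min(ready)
        sent.append(streams[name][1].pop(0)[1])
    return sent

# A high-priority stylesheet whose bytes arrive late (CPU/IO-bound on
# the server) and a low-priority image that is ready immediately.
streams = {
    "critical.css": (1, [(2, "css-1"), (3, "css-2")]),
    "hero.png":     (2, [(0, "png-1"), (0, "png-2")]),
}
print(schedule(streams, 5))
# ['png-1', 'png-2', 'css-1', 'css-2'] - the image fills the idle ticks
```

The image chunks go out while the stylesheet is still being generated, then the stylesheet preempts as soon as its data is available, which is the interleaving behaviour the comment describes.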

The real HoL problem with h2 involves TCP loss recovery. It's a fairly minor issue in practice, though.

I serve Wordpress over SPDY 3.1 all the time. Not sure if that's close enough to HTTP/2 to be comparable, but I don't see how HTTP/2s capabilities will solve the following problem:

And that is I still need to wait 100-200ms for PHP to spit out HTML from index.php, 100ms for the client to receive it, and another 100ms for a list of prioritised requests for dependent resources to come back. How is that not a "real HOL problem"?

If you treat a request as referring to a bundle of resources rather than a single response, then you can solve all that by allowing the client to send a lot more metadata about its current view of the whole package. This frees up the web server to respond with logo.png, funkystyle.css and bloated.js, or a freshness response, while we wait for PHP.

The only way a user-agent can solve this under HTTP/2 is to request a bunch of related resources it knows it needed the last time, at a low priority, and hope they are either needed, or unneeded but unchanged (as to not waste bandwidth). And you know, a pipelined HTTP/1.1 client can do this just fine by prefacing its primary request with a bunch of If-Modified-Since requests.
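That preface of conditional requests can be sketched with the stdlib. The paths and timestamps here are hypothetical; a real client would pull them from its cache metadata:

```python
from email.utils import formatdate

# Hypothetical cache metadata from the previous visit: path -> the
# Unix mtime we last saw for that resource.
cache = {
    "/js/jquery.js": 1439860000,
    "/css/blog.css": 1439860100,
}

def conditional_requests(cache):
    # The If-Modified-Since preface described above: one speculative
    # conditional GET per previously-seen resource.
    reqs = []
    for path, ts in sorted(cache.items()):
        reqs.append(
            f"GET {path} HTTP/1.1\r\n"
            f"If-Modified-Since: {formatdate(ts, usegmt=True)}\r\n\r\n"
        )
    return reqs

for req in conditional_requests(cache):
    print(req)
```

Each unchanged resource then costs only a tiny 304, which is the cheap-revalidation behaviour the comment is after.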

And sure, you can build a webserver that can parse HTML out of PHP and do HTTP/2 PUSH, but again that's a waste of bandwidth if I have these resources cached.

That's why I think it's time to move away from a simple request, response and cache model and start thinking in terms of bundle synchronisation and dependencies.

This test only demonstrates that CDN77 can't serve HTTP/1.1 properly.

This demo could possibly be even faster if it used HTTP/2.0 Server Push.

Btw, note that if you're looking into supporting HTTP/2.0 on your own then with nginx there's still some waiting left: https://www.nginx.com/blog/early-alpha-patch-http2/ And there's no plan to support server push with the first production release. So NGINX users will have to keep using SPDY.

AFAIK the latest plan with SPDY is to remove it from Chrome browser in early 2016 so nginx has to make sure to deliver before that...

Ran this a couple of times in Firefox 40/Linux x86_64, HTTP/1.1 was always faster by 10-20% (~1s vs. ~1.15s).

In my case (and I'm not sure why), HTTP/1.1 consistently got ~5s and HTTP/2 consistently got ~10s.

I assume I was supposed to see the opposite result? :P

Reading the comments here, the results seem to fluctuate heavily in dimensions, winners and margins.

I guess this test is just not an epitome of anything at all.

Chrome 43 on Linux from Germany here.

HTTP/2 routinely outperforms HTTP/1.1 by several seconds for me, HTTP/1.1 being somewhat stable at 7-8 seconds and HTTP/2 varying from 4 to 11 seconds (though generally closer to 11 seconds than to 4).

The Akamai demo works fine: https://http2.akamai.com/demo (though HTTP/2 is only ahead by 20% or so)

It appears they've turned on keep-alive in the http/1.1 test now. The http/1.1 timings improved by a lot... still obviously slower than http/2.

6.41s HTTP/1.1 vs 2.51s HTTP/2 on FF42. Very nice! (Although when HTTP2 is going the FPS drops quite a bit.)

Can someone explain what exactly HTTP2 is doing differently to achieve such an improvement?

Very simple: instead of making 200 separate fetches, http2 multiplexes all the requests over a single connection and streams the chunks in continuously. This is also noticeable in the pictures not appearing randomly but in the same order they are sent out.

Well, they also appear to be stacking the deck a bit, by configuring HTTP/1.1 in ways that no sane person would (i.e., force-closing the connection after each resource -- HTTP/1.1 already supports re-using the existing connection for further requests, but they're explicitly disallowing that to make HTTP/2 seem better).

Using exactly one connection and multiplexing all the files into it (parallel downloads with only one socket).

12.75s vs 1.40s. This is quite impressive - looking forward to a faster Web, slowly migrating to HTTP/2.

Any clue if the Amazon CDN service offers / will offer HTTP/2 support too?

You won't get a faster web, you'll just get more cruft shoved on each page.

I don't think the limiting factor in "how much cruft should we add" is bandwidth. It's the point at which it becomes so hard to use that users leave.

Example: Buzzfeed loads in 1 second for me on my work's broadband, but it's still full of junk making it hard to use. Bandwidth-wise, it could handle more crap on the page, but in terms of usability it's at the limit.

I think you actually will get a faster and generally improved web when sites are finally able to end HTTP/1 support, because it will free up web developers from old performance hacks like asset concatenation, image sprites etc, which add a lot of friction to making websites.

All that said, this demo is bullshit for the reasons given in other comments.

Recently .NET 4.6 has allowed Windows Server to speak some HTTP/2 to the Edge browser; it greatly improved the load speed and WebSocket calls of our app.

My results show 1.3s for HTTP/1.1 and 3.0 seconds for HTTP/2 using Chrome on OS X. So, this demo wasn't very impressive for me.

Same for me. I tried numerous times and couldn't get HTTP/2 to be faster than HTTP/1.1. I had one time where it was close, but the vast majority of the time it's between 2x and 4x slower than HTTP/1.1.

That means your internet is fast enough to reduce the latency problem that HTTP/2 fixes.

on mobile the difference is much bigger, like, 100s vs 5s

Ignoring HTTP/2, I'm finding it very interesting that on my 11" MacBookAir6,1 running OS X 10.9.5, Safari 7.0.6 is much faster than Chrome 44.0.2403.155 at the HTTP/1.1 test. Safari performs the test in almost exactly 3.00 seconds, while Chrome never comes in under 3.15 and often takes as high as 3.45.

Did anyone else observe the JS in the iframe footer? I'm just curious why it's obfuscated and what its purpose is (see the source of https://1153288396.rsc.cdn77.org/http2/http1.html)

Hm, HTTP/1.1 at 15.5s, HTTP/2 at 23.72s

Yeah, I "can see the difference clearly", but I don't think it is the kind of difference they expected or intended.

Edit: Firefox 40 on Windows 7 at work. Will try at home as well.

Oddly enough, the Akamai demo someone else posted gives me 18.47s for HTTP/1.1 and 2.24s for HTTP/2.

Shit happens :) I have HTTP/1.1 at 14.54s, HTTP/2 at 2.22s... seems more like a configuration thing on your end?

Maybe. But the Akamai demo does show a major improvement for HTTP/2, so I think it is primarily a CDN77 thing. Maybe a bad route.

What browser did you use? Are you sitting behind a proxy?

For me: HTTP/1.1 at 13.19s, HTTP/2 at 1.48s

Mine was http/1.1 at 3.4~s and http/2 at 0.75s

edit: this was Chrome 44.0.2403.155 m on windows 8 64

At home (Firefox 40 on Windows 10) I get 2.34s for HTTP/1.1 and 1.26s for HTTP/2.

Here: HTTP/1.1 at 15.48s and HTTP/2 at 2.06s


The test server does not actually make sure the h2 test is using h2. If you are using a client that does not have h2 support then you are just hitting the fallback code on the server and testing h1 against h1. An iPhone is a good example :) (but it may be using SPDY instead... lots of variables)

The speed test links at the bottom for single files don't make any sense. A single file download wouldn't benefit from the upgraded protocol, and from very rough testing on my 100Mb line it seems like the http/1 links are artificially slowed down.

Cheaper and faster than AWS Cloudfront with free custom SSL. So what is the catch?

One issue is that our data is on S3, and I believe that any outgoing S3 traffic to this CDN would be slow and cost money, whereas S3 to CloudFront is likely prioritized and free.

Outgoing traffic from S3 has no bandwidth cost associated with it.

That isn't true. S3 has no bandwidth cost for incoming traffic, but it certainly charges for traffic outgoing to the internet: https://aws.amazon.com/s3/pricing/.

Oops, my bad. Haven't had my full dose of coffee today.

It is interesting that Safari & Firefox beat Chrome in the HTTP/1.1 test for me. However, the HTTP/2 test is then twice as fast as Safari's. Maybe we can stop smashing all those JavaScript files together.

Here's a fun overview video explanation of HTTP/2 from the other day..


What browser?

For me (on FF) HTTP/1.1 was faster in around 5/7 attempts. I'm on corporate network so not sure that's affecting it.

no Connection: close anymore

yes, you're right, man!

How about a demo that doesn't require javascript to be enabled to work? Or is javascript a hard requirement for HTTP/2?

I ran the demo several times and I got a 1 second difference. I guess there is a place where even 1 second is important.

Awesome! Now web sites can pack 6 times more ads and other cruft onto each page.

12.50 -> 1.41

Chrome 44, Win 7

With this, JS bundling is a thing of the past, I think.

Ignoring the technical issues with the demo that have been pointed out -- how does this actually prevent the need for a js bundle building process of some kind?

Suppose page.html depends synchronously on A.js, which depends synchronously on B.js, which depends synchronously on C.js. Somehow page.html needs to statically represent that it depends on each of A, B and C to avoid requiring a round trip after each new dependency is loaded. Yes, the round-trip cost is lower, but minimizing the number of round trips is desirable even over http2. A module bundler like webpack, which walks the dependency graph of A.js and constructs a static representation of all the dependencies required by the HTML page, seems like something that's still going to be desirable even with http2...
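The dependency walk described above can be sketched as a depth-first flatten over a hypothetical module graph, i.e. the static load order a bundler computes before emitting one file:

```python
# Hypothetical synchronous dependency chain: page.html -> A.js -> B.js -> C.js.
deps = {
    "page.html": ["A.js"],
    "A.js": ["B.js"],
    "B.js": ["C.js"],
    "C.js": [],
}

def flatten(entry, deps, seen=None):
    # Depth-first walk that emits every dependency before its dependent:
    # the static order a bundler would inline, or a server could preload.
    if seen is None:
        seen = []
    for dep in deps[entry]:
        flatten(dep, deps, seen)
        if dep not in seen:
            seen.append(dep)
    return seen

print(flatten("page.html", deps))  # ['C.js', 'B.js', 'A.js']
```

Whether the output feeds a concatenated bundle (HTTP/1.1 style) or a list of parallel requests/PUSHes (HTTP/2 style), computing this graph ahead of time is what saves the per-level round trips.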

Lol ran slower than http1 on my iPhone :/

HTTP/2 is consistently slower for me...

Not sure what is going on, but here were my results.

HTTP1 - 3.13s

HTTP2 - 0.54s

HTTP1 - 10.88s

HTTP2 - 1.65s

Chrome on Windows 8.1.

How can I enable HTTP/2 on Apache?
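For what it's worth, a minimal sketch of one answer, assuming Apache 2.4.17 or later built with mod_http2 (the module path and config layout vary by distribution):

```apache
# Load the HTTP/2 module, then advertise h2 alongside HTTP/1.1.
LoadModule http2_module modules/mod_http2.so
Protocols h2 http/1.1
```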

Now make those images webp.

this HTTP/2.0 demo was only faster for me by .01 seconds.
