
Loading 180 tiled images with HTTP/2 vs. HTTP/1 - mohamedattahri
https://http2.golang.org/gophertiles?latency=0
======
jewel
Note that this means there's no longer a pressing need to consolidate
multiple JavaScript files into one, and likewise with CSS files. Icon sprites
aren't necessary either.

There are still some gains from combining files, since fewer headers go over
the wire, and you'd still want to minify so that fewer total bits are sent,
but the gains aren't going to be as big as they used to be.

~~~
andrewstuart2
I personally think that JS should still always be pre-processed.

When you can spend longer up front optimizing the concatenation, minification,
and compression, you can amortize that additional cost across every request
and still make important incremental gains.

A bit saved is a bit earned.
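
As a rough illustration of that amortization point, here's a sketch (in
Python, with made-up file contents) of why compressing scripts together can
still beat compressing them separately, even under HTTP/2:

```python
import zlib

# Two hypothetical JS files that share a lot of boilerplate,
# as real application scripts often do.
a = b"function debounce(fn, ms) { /* ... */ }\n" * 20
b = (b"function debounce(fn, ms) { /* ... */ }\n" * 20 +
     b"function throttle(fn, ms) { /* ... */ }\n" * 5)

separate = len(zlib.compress(a)) + len(zlib.compress(b))
combined = len(zlib.compress(a + b))

# Compressing the concatenation lets the second file reuse
# back-references into the first, so it comes out smaller overall.
print(separate, combined)
```

The exact numbers depend on the content, but shared boilerplate across files
is exactly what a per-file compression context can't exploit.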

~~~
Kudos
You can and should still compress and do some amount of minification.
Concatenation is debatable on sites with varied page types and JS needs.

> A bit saved is a bit earned.

So don't send bits the user probably won't need? :P

~~~
Twirrim
> So don't send bits the user probably won't need? :P

Surely good for both your bandwidth bills, and the end user experience. The
less work your users have to do, the better, especially when it comes to
mobile devices.

------
codewithcheese
Akamai HTTP/2 demo
[https://http2.akamai.com/demo](https://http2.akamai.com/demo)

~~~
bsdetector
Firefox Nightly:

    HTTPS 1.1: 22 ms latency, 2.83s load time
    HTTPS 2:   17 ms latency, 2.91s load time

The Network panel in the web developer tools says the second was actually
fetched over HTTP/2, so it looks like the demo worked... just not as intended.

So HTTP/2 provides no performance benefit in this case. I do have pipelining
turned on, though, and unlike the gopher tiles demo this server actually
returns the "Connection: Keep-Alive" header necessary for pipelining, so that
might explain it; Microsoft Research determined that SPDY and pipelining have
essentially the same page-load performance.

~~~
iotku
Chromium:

    HTTPS 1.1: 140ms latency, 46.47s load time
    HTTPS 2:   25ms  latency, 4.76s load time

Definitely a difference on a high-latency (satellite) connection, although
I'm not sure what the latency figure is based on.

~~~
bsdetector
And now you know why Google never tested against pipelining... if they had
they'd have to sell it as marginally faster and more sensible, rather than
saying it is up to 10x faster.

Since you are on satellite I suggest trying Firefox and turning on aggressive
pipelining in about:config. As far as I know I've never had a problem with it
in years, and as you can see it brings HTTP/2 speeds to non-SSL and older
sites.

~~~
ZeroGravitas
If aggressive pipelining worked well enough then Mozilla would turn it on by
default. The fact they don't is more than enough reason for Google to not test
against it.

~~~
bsdetector
Mozilla and Google claim there are bad servers out there that pipelining
doesn't work with, yet neither was able to identify those bad servers or
reproduce the problems -- probably because the bad software is Superfish and
other malware. You don't see problems with iOS Safari using pipelining.

Maybe they should have asked Kaspersky. If you have a clean system,
pipelining works great.

~~~
ZeroGravitas
For anyone interested, a summary from the decision maker at Mozilla who made
this call:

[https://bugzilla.mozilla.org/show_bug.cgi?id=264354#c65](https://bugzilla.mozilla.org/show_bug.cgi?id=264354#c65)

 _"It pains me as I believe firefox has the most sophisticated pipelining
algorithm ever built, but the fundamental approach is simply flawed in ways
that multiplexing is not. Let's put energy into making multiplexing a
success."_
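
The "flawed in ways that multiplexing is not" part is usually about
head-of-line blocking. A toy Python model (response times are made up, with
one deliberately slow request) shows why a single slow response stalls a
pipelined connection but not a multiplexed one:

```python
# Service time (seconds) for four responses; the second one is slow.
service_times = [0.1, 2.0, 0.1, 0.1]

# HTTP/1.1 pipelining: responses must come back in request order, so
# each completion time is the running maximum of the service times seen
# so far -- the slow response stalls everything queued behind it.
pipelined, t = [], 0.0
for s in service_times:
    t = max(t, s)
    pipelined.append(t)

# HTTP/2 multiplexing: streams are independent, so each response can
# finish as soon as it is ready.
multiplexed = service_times

print(pipelined)    # [0.1, 2.0, 2.0, 2.0] -- two responses held hostage
print(multiplexed)  # [0.1, 2.0, 0.1, 0.1]
```

This is a simplification (it ignores TCP-level head-of-line blocking, which
affects both), but it captures the ordering constraint that pipelining can't
escape.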

~~~
bsdetector
> _" Let's put energy into making multiplexing a success."_

In other words, the guy just doesn't want to work on pipelining anymore, not
that it doesn't work well. He also said "it's good" and kept it enabled for
mobile because "the higher rtts tip the balance in its favor" -- and you're
telling somebody with _satellite_ internet not to use it to reduce page load
time for the vast majority of pages out there that are not SSL (and so not
HTTP/2) or are on old servers. Really?

------
mahouse
The server seems to be overloaded right now.

Anyway, disable HTTPS Everywhere if you have it; otherwise there will be no
difference. :-)

------
dcsommer
The reverse schadenfreude here is great. All the haters on HN complained about
HTTP/2 endlessly, and here we've given them a faster Internet anyway.

------
ben_pr
I am shocked at the differences. A picture is worth a thousand words.

------
ZeroGravitas
The Google PageSpeed Service has a test page where you provide a URL and it
runs it through the webpagetest.org service twice, to show before-and-after
times (and other details) for all the various optimisation techniques it
applies automatically.

[https://developers.google.com/speed/pagespeed/service/tryit](https://developers.google.com/speed/pagespeed/service/tryit)

One of these optimisations is the use of SPDY/HTTP2, I believe? (Actually, it
seems WebPageTest has some issues with HTTP2 currently, though fixing them is
at the top of their TODO list:
[https://github.com/WPO-Foundation/webpagetest/issues/20](https://github.com/WPO-Foundation/webpagetest/issues/20).)
It would be great if they (or someone else) provided this service but only
changed the use of HTTP2, so that people could run the benchmark on their own
sites and get a comprehensive view of what kinds of sites would benefit from
switching today (without even bothering to remove old optimisations like
image spriting etc.)

Those sites that benefit without any change may be the first to move, as so
many workflows are built around concatenation of files etc., and that
workflow will still be needed for some percentage of users, probably for
years to come, so an initial, easy win would be good to demonstrate. I'm
hopeful that if you use SSL then HTTP/2 will _always_ be faster, but it would
be good to see some data on that.

------
einrealist
Interesting. If the tiles are cached by the browser, HTTP/2 is slower than
version 1. Only on the first request was HTTP/2 tremendously faster than its
predecessor.

------
geoffreyvdb
It seems to be offline now?

~~~
mholt
Seems to be having trouble with all the TLS handshakes from HN. A plaintext
HTTP connection works for me (but it's not HTTP/2).

------
dsiegel2275
Wouldn't a more fair comparison involve domain sharding for the HTTP/1 impl?

~~~
mohamedattahri
Good point, but domain sharding is just a clever optimization (hack) to
emulate some of the parallelism provided by HTTP/2 out of the box.
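
For readers unfamiliar with the hack: sharding just spreads assets across
several hostnames so the browser opens more parallel connections. A minimal
sketch (the shard hostnames here are hypothetical):

```python
import hashlib

# Hypothetical shard hostnames; browsers typically allow ~6 connections
# per host, so 3 shards roughly triples the available parallelism.
SHARDS = ["img1.example.com", "img2.example.com", "img3.example.com"]

def shard_for(path: str) -> str:
    """Map an asset path to a shard deterministically, so the same
    asset always gets the same hostname (and so stays cacheable)."""
    digest = hashlib.md5(path.encode()).digest()
    return SHARDS[digest[0] % len(SHARDS)]

print(shard_for("/tiles/0.png"))  # same path -> same host every time
```

The determinism matters: if an asset bounced between hostnames, the browser
would re-download it and the cache would fragment. HTTP/2 makes all of this
bookkeeping unnecessary on a single connection.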

~~~
lomnakkus
Certainly, but comparing X-as-almost-never-practiced[1] to Y is pretty
disingenuous unless you're going to _very_ clearly state up front that that is
what you're doing.

Of course, the hackiness of X-as-practiced is a valid point of qualitative
comparison and should indeed be brought up.

[1] Ok, I'm exaggerating a little. It's not quite "never", but most large
sites where image loading time is an issue are using some form of workaround,
be it spriting, multiple domains, or something else.

~~~
kbenson
I think the point is that these sites get an efficient solution for free
rather than spending time and resources on designing and implementing
sharding. Just because it's common for sites with loading-time constraints
doesn't mean they wouldn't have been a whole lot happier not to have had to
deal with it.

------
iotku
Ran this a while ago on my satellite connection [1]; it's really exciting for
users with high latency.

Satellite internet service has come a long way from what it used to be and
has much higher raw speed than before, but of course you can only do so much
about latency when the signal is traveling such a distance.

[1]:
[https://www.youtube.com/watch?v=Ut-8ieRg1yE](https://www.youtube.com/watch?v=Ut-8ieRg1yE)

------
hayksaakian
A better idea for a demo:

Take the top 20 websites on the web, and get their static assets from
archive.org.

Show me how much faster they load. I'm imagining sites like CNN or Yahoo,
which serve many images on their home pages, could load faster.

How much faster?

------
eridal
I wonder how well a reverse HTTP/2 proxy in front of an HTTP/1 server would
behave.

~~~
andrewstuart2
It could be rather complex, but it would almost certainly still help. It
could, for example, heuristically process the script tags as the HTML passes
back through it to the client, make optimistic requests to the HTTP/1 server
(presumably at a much lower latency than the client would see), then push
those documents down to the client.

And of course, it could also cache that knowledge and do it all instantly.
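
A sketch of the "scan the HTML for push candidates" idea using Python's
stdlib parser (a real proxy would do this on the streaming response body, not
a buffered string):

```python
from html.parser import HTMLParser

class PushCandidates(HTMLParser):
    """Collect script and stylesheet URLs that a reverse proxy could
    pre-fetch from the HTTP/1 backend and push to the client."""
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.urls.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.urls.append(attrs.get("href"))

p = PushCandidates()
p.feed('<head><script src="/app.js"></script>'
       '<link rel="stylesheet" href="/app.css"></head>')
print(p.urls)  # ['/app.js', '/app.css']
```

Caching the discovered URL list per page, as the parent comment suggests,
would let the proxy start pushing before the backend has even finished
generating the HTML.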

~~~
floatboth
nghttpx, the proxy from the nghttp2 package, turns rel=preload links from the
Link header into server push:
[https://nghttp2.org/documentation/nghttpx.1.html#server-push](https://nghttp2.org/documentation/nghttpx.1.html#server-push)
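
So the HTTP/1 backend only has to emit the standard preload form of the Link
header. A minimal parser (ignoring quoting edge cases) shows the shape such a
proxy keys off:

```python
def parse_link_header(value: str):
    """Extract the URLs marked rel=preload from a Link header value.
    This is a simplified parser for illustration, not full RFC 8288."""
    urls = []
    for part in value.split(","):
        fields = [f.strip() for f in part.split(";")]
        url = fields[0].strip("<>")
        if any(f in ('rel=preload', 'rel="preload"') for f in fields[1:]):
            urls.append(url)
    return urls

header = '</app.css>; rel=preload; as=style, </app.js>; rel=preload; as=script'
print(parse_link_header(header))  # ['/app.css', '/app.js']
```

The nice property of this scheme is that the backend stays plain HTTP/1 and
declares push candidates declaratively, with no HTTP/2 awareness at all.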

------
eridal
From here, with 1 sec delay, 3 attempts, average values, and an empty cache
each time... 38.7s vs 58.3s.

------
motoboi
Could someone please explain what is going on in this demo?

~~~
pfranz
I'm guessing the submitter saw this talk at PyCon that used the page as
illustration: Cory Benfield - Hyperactive: HTTP/2 and Python
[https://youtu.be/ACXVyvm5eTc](https://youtu.be/ACXVyvm5eTc)

It's a new major version of HTTP. At a high level it's backwards compatible
(status codes, URLs, etc. are the same), but the communication of those
things has changed. It's binary instead of plain text (a major point of
contention), it's stateful instead of stateless (it can refer back to
previous requests, which makes debugging harder when you jump into the middle
of a communication), it can multiplex data (send multiple files
concurrently -- this is where the demo shines), it adds a prioritization
layer, and it compresses headers. I may have some of those details wrong --
all of that I learned from the talk.
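
The "binary instead of plain text" point is concrete at the framing layer:
every HTTP/2 frame starts with a fixed 9-byte header (per RFC 7540). A sketch
of packing one with the stdlib:

```python
import struct

def frame_header(length: int, ftype: int, flags: int, stream_id: int) -> bytes:
    """Pack the 9-byte HTTP/2 frame header: a 24-bit payload length,
    8-bit frame type, 8-bit flags, then 1 reserved bit + 31-bit stream id."""
    return (struct.pack(">I", length)[1:] +            # low 3 bytes = 24-bit length
            struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF))

# A HEADERS frame (type 0x1) with the END_HEADERS flag (0x4) on stream 1,
# announcing a 64-byte payload of HPACK-compressed headers.
hdr = frame_header(64, 0x1, 0x4, 1)
print(len(hdr), hdr.hex())  # 9 bytes on the wire
```

The fixed-size header is what makes multiplexing cheap: a peer can route any
frame to its stream after reading just 9 bytes, with no text parsing.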

