
HTTP/2 is here. Goodbye SPDY? Not quite yet - akerl_
https://blog.cloudflare.com/introducing-http2/
======
xpose2000
In terms of optimal performance for end users... I should now be hosting all
files on my own server with CloudFlare, rather than on something like Google's
CDN? For example, jQuery. The reason being that those files will all load in
parallel over my own domain, whereas for another domain like Google's, the
browser has to negotiate a separate SSL connection and wait a bit longer?

Is this correct? Or is there more to it than that?

~~~
buro9
You are correct.

What I'm now doing is reducing the number of third party domains I call.

In essence, where I used to use cdnjs.cloudflare.com or whatever other
externally hosted JS or CSS, I'm now mostly self-hosting, but still behind
CloudFlare.

You can see this in action on [https://www.lfgss.com/](https://www.lfgss.com/)
which is now serving everything it can locally... only fonts and Persona
really remain external.

I have been using preconnect hints to try to reduce the latency created by
contacting those 3rd parties, but TBH the fact that I use SSL as much as
possible means that those connections take time to establish. In that time,
most of the assets can be delivered over my already-open connection.
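For reference, a preconnect hint is a one-liner in the page head (the hostname here is just an example):

```html
<!-- Ask the browser to warm up DNS + TCP + TLS to a third-party
     origin before any resource from it is actually requested. -->
<link rel="preconnect" href="https://fonts.example.com" crossorigin>
```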

There is an argument that cdnjs/Google CDN or whatever is better for the web,
but personally I'm unconvinced. I think you should self-host/control all of
the JavaScript that runs on your own site, and that unless the exact versions
of the exact libs are already cached in end users' browsers, the benefits
aren't even there.

This also looks to be a smarter thing to do anyway; the increasing prevalence
of ad-blocking tech is impacting 3rd party hosted assets, and thus the
experience of your users. You can mitigate that by self-hosting.

I haven't obliterated first-party extra domains; for example, I still use a
different domain for assets uploaded by users. This is a security thing; if I
could do it safely, I'd serve everything from just the one domain.

Basically: self-host. HTTP/2 has brought you the gift of speed that makes that
good again.

~~~
X-Istence
If your first-party extra domains are advertised in your SSL cert, then Chrome
at least will use the same connection for those assets too.

See this: [https://blog.cloudflare.com/using-cloudflare-to-mix-domain-sharding-and-spdy/](https://blog.cloudflare.com/using-cloudflare-to-mix-domain-sharding-and-spdy/)

~~~
buro9
The first party extra domains use a different domain and .tld altogether.

A bit like how google.com is for Maps and anything users upload goes to
googleusercontent.com.

LFGSS is served from www.lfgss.com, user assets go via lfgss.microco.sm, and
proxied user assets (another level of distrust altogether) go via sslcache.se.

I own all of the domains, and they're on the same CloudFlare account, but we
don't yet offer ways to give users control over which domains get SNI'd
together, and this is especially true when the domains are on different
CloudFlare plans.

That said... it's cool. To reduce everything from 8 domains down to 3 or 4 is
a significant enough improvement that I'm happy.

------
Cshelton
I really wish Microsoft gave HTTP/2 support to IE 11 on Windows 8/8.1. Any
insight as to why they decided not to support it in IE 11 on Windows < 10
would be appreciated.

Many of our users are stuck with Windows 8/8.1, or even 7, for many more years,
unfortunately. Some of them won't even have another browser as an
option (enterprise...).

~~~
mattmanser
They're not even updating IIS 8.5 to support HTTP/2, as far as anyone can tell;
you'll have to upgrade to Windows Server 2016 to get it.

~~~
snuxoll
"Updating" IIS to support HTTP/2 means updating http.sys, something they are
not keen on doing without a major OS upgrade.

~~~
prdonahue
Nor do they like updating schannel.dll (the underlying SSL/TLS stack) unless
there's an extremely serious vulnerability in it. And even then, they bungle
it more often than not ([http://www.infoworld.com/article/2848574/operating-systems/microsoft-botches-kb-2992611-schannel-patch-tls-alert-code-40-slow-sql-server-block-iis-sites.html](http://www.infoworld.com/article/2848574/operating-systems/microsoft-botches-kb-2992611-schannel-patch-tls-alert-code-40-slow-sql-server-block-iis-sites.html)).

The reason SChannel matters is that the protocol used to negotiate which "next
generation" protocol is to be used for the HTTP connection (not session, minor
point) is something called Application-Layer Protocol Negotiation. ALPN is a
TLS extension sent as part of the ClientHello but wasn't added to SChannel
until Windows 8.1/2012 R2 Server. (There was a predecessor to ALPN called NPN
that Adam Langley authored/implemented for Chrome but Microsoft never
implemented it.)
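For the curious, most TLS stacks expose ALPN directly. A minimal sketch of the client side using Python's `ssl` module (the function names are made up; the order of the offered list expresses preference):

```python
import socket
import ssl

def make_alpn_context():
    """Build a TLS context that offers h2 first, with HTTP/1.1 as fallback."""
    ctx = ssl.create_default_context()
    # The offered list rides along in the ClientHello, so the protocol is
    # settled during the TLS handshake with no extra round trip.
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    return ctx

def negotiated_protocol(host, port=443):
    """Connect to host and report which protocol the server selected."""
    ctx = make_alpn_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Returns "h2", "http/1.1", or None if the server ignored ALPN.
            return tls.selected_alpn_protocol()
```

On a stack without ALPN in its ClientHello (like SChannel before Windows 8.1), there is simply nothing for the server to select against, so the connection falls back to HTTP/1.1.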

------
NoGravitas
I'm quite surprised that there are a lot of browsers in the wild that support
SPDY, but not HTTP/2, given auto-updating. But that's what their numbers show.
Maybe mobile skews this?

~~~
therealmarv
I think it has more to do with "old" IE 11 versions on Windows < 10: see
[http://caniuse.com/#search=http%2F2](http://caniuse.com/#search=http%2F2) vs.
[http://caniuse.com/#search=spdy](http://caniuse.com/#search=spdy)

An awful lot of companies still use IE and not Windows 10 ;)

~~~
chriselsen
The caniuse.com data on HTTP/2 appears to have some flaws. The biggest buckets
of browsers that support SPDY but not HTTP/2 for our website right now are:
a) Chrome for mobile, b) Safari on older Mac OS X versions, c) older Chrome
for desktop versions, d) Internet Explorer (small impact). Other websites
might see different ratios depending on their audience.

Stay tuned for instructions on how to gain protocol version insight for your
own website on CF.

~~~
therealmarv
Thanks for the additional stats :)

------
therealmarv
Hmm, does anyone know how to support both SPDY and HTTP/2 on nginx >= 1.9.5,
which only has the "ngx_http_v2_module" module built in? What is the nginx
configuration to support SPDY and HTTP/2?
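For reference (this is stock nginx, not CloudFlare's patch, which isn't public): in nginx >= 1.9.5 the `spdy` listen parameter was removed along with the SPDY module, so a single listener can speak HTTP/2 or SPDY but not both. A minimal HTTP/2 server block, with placeholder names and paths:

```nginx
server {
    # 'http2' replaced the old 'spdy' listen parameter in 1.9.5;
    # stock nginx cannot negotiate both protocols on one listener.
    listen 443 ssl http2;
    server_name example.com;                               # placeholder

    ssl_certificate     /etc/nginx/certs/example.com.crt;  # placeholder
    ssl_certificate_key /etc/nginx/certs/example.com.key;  # placeholder
}
```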

~~~
jgrahamc
We developed our own patch to NGINX that allows it to support both SPDY/3.1
and HTTP/2 and to negotiate correctly. Stock NGINX allows you to have one or
the other, but not both.

~~~
therealmarv
Would be nice to see that patch open sourced (at least I can hope) ;) :)

~~~
jgrahamc
I'm sure we will. We open source pretty much everything we can (i.e. we don't
open source stuff that's too complex to extract from our business logic).

------
ropiku
Does anyone know if they support HTTP/2 on the backend side too? They didn't
with SPDY, and I think it would help to multiplex connections all the way.

~~~
jgrahamc
Not right now. We are currently experimenting with Server Push because we
think it will help with the end user experience more than HTTP/2 to the origin
server. You can see that running on the experimental server
[https://http2.cloudflare.com/](https://http2.cloudflare.com/)

The question is... does HTTP/2 to the backend help that much? We aren't
restricted like a browser in terms of bandwidth, latency or the number of
connections we can open. The greatest benefit of HTTP/2 is between the browser
and us, but origin HTTP/2 hasn't been forgotten.

~~~
predakanga
I'd love to see CloudFlare enable admins to utilize Server Push without extra
configuration on their backend.

My ideal situation is one where my webapp can specify its dependencies
through a spec such as Server Hints[1], and have them requested and cached
edge-side, then turned into a Server Push to the end user.

[1]: [https://www.chromium.org/spdy/link-headers-and-server-hint/link-rel-subresource](https://www.chromium.org/spdy/link-headers-and-server-hint/link-rel-subresource)
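Much of this is already expressible at the origin: the app names its dependencies in a `Link` response header, and an edge that understands the header could preload the asset and turn it into a Server Push. A minimal WSGI sketch (the `rel=preload` form shown here is the later convention, not the `rel=subresource` draft linked above; the paths are made up):

```python
def app(environ, start_response):
    """Tiny WSGI app that advertises a CSS dependency via a Link header.

    A proxy/edge that supports Server Push could read this header and
    push /static/app.css alongside the HTML response.
    """
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Link", "</static/app.css>; rel=preload; as=style"),
    ]
    start_response("200 OK", headers)
    return [b"<html><head></head><body>hello</body></html>"]
```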

~~~
jgrahamc
Stay tuned.

------
tgb
Those page load improvement numbers seem ridiculously good (a factor of almost
2 versus HTTP/1.1). Are they really expecting that to hold up in real-world
cases?

~~~
contravariant
In their demo[1] it is 20x faster for me. I had to disable HTTP pipelining for
it to work correctly (not sure why, but HTTP/2 became a lot faster after I had
disabled pipelining).

Minor nitpick: I don't agree with the way they calculate the percentage. If it
takes 5% of the time then it's 20x (i.e. 1900%) faster, not 95%.

[1]: [https://www.cloudflare.com/http2/](https://www.cloudflare.com/http2/)
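To make the nitpick concrete, with hypothetical timings:

```python
# Hypothetical: a page that took 10s over HTTP/1.1 now takes 0.5s over HTTP/2.
old, new = 10.0, 0.5

speedup = old / new                       # 20x as fast
faster_pct = (old - new) / new * 100      # 1900% faster
time_saved_pct = (old - new) / old * 100  # takes 95% less time
```

"95%" and "1900% faster" describe the same measurement; they just use different baselines (the old time vs. the new time).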

~~~
bsdetector
Isn't it interesting that even today, after Microsoft Research showed that
pipelining could be almost as fast as SPDY, and when activating it in Firefox
is an about:config away, people _still_ refuse to include it in any tests?

Google never showed _any_ results versus pipelining. They just said "head-of-
line blocking bad" and "one TCP connection per user good" (for tracking), and
people ate it up without evidence because, I suppose, they viewed HTTP/2 as
conceptually simpler and more elegant. Never mind that HTTP/2 didn't address
any of the criticisms that PHK had... that's OK because Google was just going
to do it anyway.

~~~
e12e
If anyone else wants a bit of history on pipelining wrt Netscape/Firefox etc:

[http://kb.mozillazine.org/Network.http.pipelining](http://kb.mozillazine.org/Network.http.pipelining)

And in particular (from links in the above):

"Bug 264354 - Enable HTTP pipelining by default Status: RESOLVED WONTFIX" \-
in particular one of the last comments in the thread:
[https://bugzilla.mozilla.org/show_bug.cgi?id=264354#c65](https://bugzilla.mozilla.org/show_bug.cgi?id=264354#c65)

And:

"Bug 395838 - Remove HTTP pipelining pref from release builds Status: RESOLVED
WONTFIX":
[https://bugzilla.mozilla.org/show_bug.cgi?id=395838](https://bugzilla.mozilla.org/show_bug.cgi?id=395838)

My general impression is that there were a few issues on Windows, in
particular with "anti-virus software", some problems with broken proxies, as
well as a handful of issues with hopelessly broken servers.

Additionally, it appears SSL/TLS latency was never really considered (not
explicitly stated, but there are implications that on "fast networks" HTTP is
"fast enough" that pipelining makes little difference). In other words, it
does indeed appear that simply enabling pipelining as the web moved from plain
HTTP to TLS would have sidestepped most of the need for HTTP/2...

------
mei0Iesh
I'm using HTTP/2. Here are some quick stats:

    
    
    # tail -n100000 access.log | grep 'jquery.js' | grep 'HTTP/1' | wc -l
    3095
    # tail -n100000 access.log | grep 'jquery.js' | grep 'HTTP/2' | wc -l
    6074
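If you want a share rather than raw counts, a small hypothetical helper (the function name is made up; a common/combined-style access log with the request line in each entry is assumed):

```python
def http2_share(log_lines, needle="jquery.js"):
    """Fraction of log lines matching `needle` that were served over HTTP/2."""
    hits = [line for line in log_lines if needle in line]
    if not hits:
        return 0.0
    # "HTTP/2" matches both "HTTP/2" and "HTTP/2.0" request lines.
    h2 = sum(1 for line in hits if "HTTP/2" in line)
    return h2 / len(hits)
```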

~~~
d0ugie
For me, 505 and 1947 respectively. I guess my audience is hipper than yours.
:)

------
adamowen
For comparison, I enabled HTTP/2 via CloudFlare on a dev site. Results:
[http://blog.adamowen.co.uk/deploying-http2-using-cloudflare-initial-results/](http://blog.adamowen.co.uk/deploying-http2-using-cloudflare-initial-results/)

------
joeblau
Just tested my side project
[https://www.gitignore.io](https://www.gitignore.io) and it now has sub second
loading time. Unfortunately, adding Google analytics doubles the loading time
to about 1.8 seconds.

~~~
treyp
At least Google Analytics is non-blocking.

------
xyproto
Here's a small utility for checking if a web server offers HTTP/2:
[https://github.com/xyproto/http2check](https://github.com/xyproto/http2check)

------
danielsamuels
Is it possible to use HTTP/2 without SSL yet? I tried it a few weeks ago and
my browser just downloaded a 4KB file with some random bytes in it; I assume
this was the server response, but it wasn't clear.

~~~
Skunkleton
Per the spec, SSL is not required; however, all major browser implementations
require TLS to negotiate HTTP/2.

~~~
danielsamuels
So in theory it's not required, but in practice it is? And to think it used to
be IE that didn't follow specs.

~~~
Dylan16807
The browsers follow the spec fine. The requirement for SSL was almost in the
spec, and support for non-SSL is optional for a reason.

Pushing people onto SSL is good.

~~~
paulddraper
I just like pushing people around in general.

------
tracker1
Am I correct in assuming this means that CloudFlare reads the HTML to
determine other files that need to be sent (CSS, JS, images)?

~~~
Viper007Bond
Push isn't supported, so it's all on the browser to request the needed files.

~~~
jgrahamc
Yet

------
raullen
Google's HTTP load balancer and CDN have supported H2 for a long while.

------
ape4
Both? Yuck.

