Is this correct? Or is there more to it than that?
What I'm now doing is reducing the number of third party domains I call.
In essence, where I used to use cdnjs.cloudflare.com or whatever other externally hosted JS or CSS, I'm now mostly self-hosting, but still behind CloudFlare.
You can see this in action on https://www.lfgss.com/ which is now serving everything it can locally... only fonts and Persona really remain external.
I have been using preconnect hints to try to reduce the latency of contacting those third parties, but TBH, because I use SSL as much as possible, those connections take time to establish (DNS, then TCP, then TLS). In that time, most of the assets can be delivered over my already-open connection.
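For reference, a preconnect hint looks like this (the hostname is just an illustrative third-party origin, not one from my setup):

```html
<!-- Warm up DNS + TCP + TLS to a third-party origin before any asset is requested -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- Older, weaker fallback that only does the DNS lookup -->
<link rel="dns-prefetch" href="//fonts.gstatic.com">
```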
This also looks to be a smarter thing to do anyway; the increasing prevalence of ad-blocking tech is impacting 3rd party hosted assets, and thus the experience of your users. You can mitigate that by self-hosting.
I haven't obliterated extra first-party domains; for example, I still use a different domain for assets uploaded by users. That's a security thing; if I could do it safely, I'd serve everything from just the one domain.
Basically: Self-host, http/2 has brought you the gift of speed to make that good again.
See this: https://blog.cloudflare.com/using-cloudflare-to-mix-domain-s...
A bit like how google.com is for Maps and anything users upload goes to googleusercontent.com.
LFGSS is served from www.lfgss.com and the user assets go via lfgss.microco.sm, and proxied user assets (another level of distrust altogether) are going via sslcache.se .
I own all of the domains, and they're on the same CloudFlare account, but we don't yet offer a way to give users control over which domains get SNI'd together, especially when the domains are on different CloudFlare plans.
That said... it's cool. To reduce everything from 8 domains down to 3 or 4 is a significant enough improvement that I'm happy.
Feels crazy, but this makes me think of proxying imgix, which uses Fastly (no SPDY or HTTP/2 support yet), through CloudFlare. I'd just set up CNAMEs on my imgix account that are subdomains of my main domain, then add them to CloudFlare with acceleration on but no caching (since imgix serves images by user agent). This adds an extra datacenter-to-datacenter hop, but hopefully that's fast enough that upgrading the client to SPDY or HTTP/2 outweighs it.
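The CNAME arrangement described would look roughly like this in DNS (both domains here are placeholders, not the actual zones):

```
; First-party subdomain fronting imgix; CloudFlare then proxies/accelerates it
img.example.com.   300   IN   CNAME   example-source.imgix.net.
```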
Anybody else tried something like this?
We've already started to proxy our S3/CloudFront assets through our load balancer so they can be cached and served over the SPDY (now HTTP/2) CloudFlare connection. However, since we're using imgix to serve different images per device, we can't let CloudFlare cache them.
I've set up some tests to proxy Fastly through CloudFlare and my initial tests are inconclusive as to whether the crazy extra hop is worth it. It seems that if we have tons of images, it probably will be faster, but most of our pages only load about 6 images above the fold and lazy load everything else, so that might be why the difference is negligible. I'll have to test on a page where more images download concurrently to see if 1 extra hop to get SPDY and HTTP/2 is worth it.
The reason being that most likely they've already got it in their cache and won't make a new call to Google (or you) at all.
CF is already a CDN, so it's better to pipe all the assets through a single connection than to risk setting up another connection just for jQuery.
Many of our users are stuck with Windows 8/8.1, or even 7, for many more years, unfortunately. Some of them won't even have another browser as an option (enterprise...).
Microsoft first added ALPN to SChannel in 8.1, and it RARELY updates this library outside of OS releases, so that's (at least one reason) why you won't see it on Windows 8 / Server 2012.
The reason SChannel matters is that the protocol used to negotiate which "next generation" protocol is to be used for the HTTP connection (not session, minor point) is something called Application-Layer Protocol Negotiation. ALPN is a TLS extension sent as part of the ClientHello but wasn't added to SChannel until Windows 8.1/2012 R2 Server. (There was a predecessor to ALPN called NPN that Adam Langley authored/implemented for Chrome but Microsoft never implemented it.)
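One way to watch ALPN negotiation happen is with a recent OpenSSL (the -alpn flag shipped in 1.0.2); the host here is just an example:

```shell
# Offer h2 and http/1.1 in the ClientHello's ALPN extension,
# then print which protocol the server selected
echo | openssl s_client -alpn h2,http/1.1 -connect www.cloudflare.com:443 2>/dev/null | grep "ALPN"
```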
An awful lot of companies still use IE and not Windows 10 ;)
Stay tuned for instructions on how to gain protocol version insight for your own website on CF.
"Before installing the nginx‑plus‑http2 package, you must remove the spdy parameter on all listen directives in your configuration (replace it with the http2 and ssl parameters to enable support for HTTP/2). With this package, NGINX Plus fails to start if any listen directives have the spdy parameter.
NGINX Plus R7 supports both SPDY and HTTP/2. In a future release we will deprecate support for SPDY. Google is deprecating SPDY in early 2016, making it unnecessary to support both protocols at that point."
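Concretely, the change those docs describe is just swapping the listen parameter; a minimal sketch (server name and cert paths are placeholders):

```nginx
server {
    # was: listen 443 ssl spdy;
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate     /etc/nginx/certs/example.com.crt;
    ssl_certificate_key /etc/nginx/certs/example.com.key;
}
```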
The question is... does HTTP/2 on the backend help that much? We aren't restricted like a browser in terms of bandwidth, latency, or the number of connections we can open. The greatest benefit of HTTP/2 is between the browser and us, but origin HTTP/2 hasn't been forgotten.
My ideal situation is one where my webapp can specify its dependencies through a spec such as Server Hints, and have them be requested and cached edge-side, then turned into a Server Push to the end user.
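One plausible way an origin could declare those dependencies to an edge is via preload Link response headers, which some CDNs interpret as push candidates; a sketch of what that declaration might look like (paths illustrative):

```
Link: </css/app.css>; rel=preload; as=style
Link: </js/app.js>; rel=preload; as=script
```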
Seems like for high traffic sites and APIs the persistent non-blocking multiplexed connections with binary transfer might make a big difference?
I think Instart Logic uses the same kind of model in between their proxy nodes called IPTP (inter-proxy transport protocol).
(there's an issue to fix that on the caniuse github repo: https://github.com/Fyrd/caniuse/issues/2098)
Minor nitpick: I don't agree with the way they calculate the percentage, if it takes 5% of the time then it's 20x (i.e. 1900%) faster, not 95%.
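The arithmetic, spelled out (pure illustration):

```shell
# New time = 5% of old time => speedup factor = old/new = 1/0.05 = 20x,
# and "percent faster" = (factor - 1) * 100 = 1900%
awk 'BEGIN { f = 1 / 0.05; printf "%.0fx, %.0f%% faster\n", f, (f - 1) * 100 }'
# → 20x, 1900% faster
```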
Google never showed any results vs pipelining. They just said "head-of-line blocking bad" and "one TCP connection per user good" (for tracking), and people ate it up without evidence because, I suppose, they viewed HTTP/2 as conceptually simpler and more elegant. Never mind that HTTP/2 didn't address any criticism that PHK had... that's ok because Google was just going to do it anyway.
And in particular (from links in the above):
"Bug 264354 - Enable HTTP pipelining by default
Status: RESOLVED WONTFIX" - in particular one of the last comments in the thread:
"Bug 395838 - Remove HTTP pipelining pref from release builds
Status: RESOLVED WONTFIX":
My general impression is that there were a few issues on Windows, in particular with "anti-virus software", and some problems with broken proxies -- as well as a handful of issues with hopelessly broken servers.
Additionally, it appears SSL/TLS latency was never really considered (not explicitly stated, but there appear to be implications that on "fast networks" HTTP is "fast enough" that pipelining makes little difference) -- in other words, it does indeed appear that just enabling pipelining as the web moved from plain HTTP to TLS would've sidestepped most of the need for HTTP/2...
# tail -n100000 access.log | grep 'jquery.js' | grep 'HTTP/1' | wc -l
# tail -n100000 access.log | grep 'jquery.js' | grep 'HTTP/2' | wc -l
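The two commands above can be sanity-checked against a small synthetic log (the file path and log entries below are made up for illustration):

```shell
# Build a tiny synthetic access log (hypothetical common-log-format entries)
cat > /tmp/sample_access.log <<'EOF'
203.0.113.1 - - [10/Dec/2015:10:00:00 +0000] "GET /js/jquery.js HTTP/1.1" 200 12345
203.0.113.2 - - [10/Dec/2015:10:00:01 +0000] "GET /js/jquery.js HTTP/2.0" 200 12345
203.0.113.3 - - [10/Dec/2015:10:00:02 +0000] "GET /js/jquery.js HTTP/2.0" 200 12345
EOF

# Same pipeline as above, with `wc -l` to count matching requests per protocol
tail -n100000 /tmp/sample_access.log | grep 'jquery.js' | grep 'HTTP/1' | wc -l   # → 1
tail -n100000 /tmp/sample_access.log | grep 'jquery.js' | grep 'HTTP/2' | wc -l   # → 2
```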
Pushing people onto SSL is good.