Hacker News

In terms of optimal performance for end users... should I now be hosting all files on my own server with Cloudflare rather than something like Google's CDN? For example, jQuery. The reason being that those files will all load in parallel on my own domain, whereas for another domain like Google's, the browser has to negotiate a separate SSL connection and wait a bit longer?

Is this correct? Or is there more to it than that?

You are correct.

What I'm now doing is reducing the number of third party domains I call.

In essence, where I used to use cdnjs.cloudflare.com or whatever other externally hosted JS or CSS, I'm now mostly self-hosting, but still behind CloudFlare.

You can see this in action on https://www.lfgss.com/ which is now serving everything it can locally... only fonts and Persona really remain external.

I have been using preconnect hints to try to reduce the latency of contacting those 3rd parties, but TBH the fact that I use SSL as much as possible means those connections take time to establish. In that time, most of the assets can already be delivered over my existing open connection.
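For reference, a preconnect hint is just one line in the page's `<head>`; the host below is only an example:

```html
<!-- Ask the browser to do the DNS lookup, TCP handshake and TLS
     negotiation for this origin early, before any asset requests it.
     fonts.gstatic.com is just an example third-party host. -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
```

The `crossorigin` attribute matters for fonts, which are fetched in anonymous CORS mode and otherwise wouldn't reuse the warmed-up connection.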

There is an argument that cdnjs/Google CDN or whatever is better for the web, but personally I'm unconvinced. I think you should self-host/control all of the JavaScript that runs on your own site, and unless the exact versions of the exact libs are already cached in end users' browsers, the benefits aren't even there.

This also looks to be a smarter thing to do anyway; the increasing prevalence of ad-blocking tech is impacting 3rd party hosted assets, and thus the experience of your users. You can mitigate that by self-hosting.

I haven't obliterated first-party extra domains; for example, I still use a different domain for assets uploaded by users. This is a security thing; if I could safely do it, I'd serve everything I can from just the one domain.

Basically: Self-host, http/2 has brought you the gift of speed to make that good again.

If your first-party extra domains are advertised in your SSL cert, then Chrome at least will use the same connection for those assets too.
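To be concrete: Chrome will coalesce connections only when the extra hostname appears in the certificate's subjectAltName entries (and resolves to the same IP). In an openssl req-style config that part looks roughly like this; the domains are just examples:

```
# Hypothetical subjectAltName section covering two hostnames, so a
# browser can reuse one HTTP/2 connection for assets on either one.
[ san ]
subjectAltName = DNS:www.example.com, DNS:assets.example.com
```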

See this: https://blog.cloudflare.com/using-cloudflare-to-mix-domain-s...

The first-party extra domains use a different domain and TLD altogether.

A bit like how google.com is for Maps while anything users upload goes to googleusercontent.com.

LFGSS is served from www.lfgss.com and the user assets go via lfgss.microco.sm, and proxied user assets (another level of distrust altogether) are going via sslcache.se .

I own all of the domains, and they're on the same CloudFlare account, but we don't yet offer ways to give users control over which domains get SNI'd together, and this is especially true when the domains are on different CloudFlare plans.

That said... it's cool. To reduce everything from 8 domains down to 3 or 4 is a significant enough improvement that I'm happy.

I think the case for hosting jQuery and the like, on external, presumably cached CDNs is overstated. Library version fragmentation and to a lesser extent, CDN fragmentation, have to be weighed against the cost of the additional connection.

With HTTP/2 it's a lot better to have fewer domains, since assets download in parallel over one connection, as you say. I think the only advantage of hosting jQuery from a shared location would be cross-site caching. CloudFlare already acts as a CDN for your static files.

Wouldn't caching be a big thing for libraries like jQuery? It's highly likely that jQuery was used by one of the most recent sites a user visited... why not take advantage of the fact that jQuery may already be cached locally?

But perhaps not as likely that one of those sites had exactly the same version of jQuery.

Because most browser caches are insanely small and really eager to evict stuff, especially on mobile phones.

Ideally, CloudFlare would allow you to route specific paths to different backends so you could just aggregate multiple services within your single domain for the fewest connections and DNS lookups.
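In the meantime, a reverse proxy in front of everything gives you the same effect; a minimal nginx sketch, with made-up backends:

```
# One domain, one connection: route paths to different backends.
# The upstream addresses below are hypothetical examples, and the
# ssl_certificate directives are omitted for brevity.
server {
    listen 443 ssl http2;
    server_name www.example.com;

    location /img/ {
        proxy_pass https://example.imgix.net/;  # image service
    }

    location / {
        proxy_pass http://127.0.0.1:8080;       # main application
    }
}
```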

Feels crazy, but this makes me want to proxy imgix, which uses Fastly (not supporting SPDY or HTTP/2 yet), through CloudFlare. I'll just set up CNAMEs on my imgix account that are subdomains of my main domain, then add them to CloudFlare with acceleration on - but no caching (since imgix serves images by user agent). This adds an extra datacenter-to-datacenter hop, but hopefully that's really fast, and upgrading the client to SPDY or HTTP/2 would outweigh it.
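The DNS side of that setup would look something like this (names hypothetical); once CloudFlare proxies the subdomain, the record it actually publishes points at CloudFlare's edge rather than at Fastly directly:

```
; img.example.com is a custom imgix source domain, CNAME'd to the
; imgix hostname; the same name is then added to CloudFlare as a
; proxied ("orange cloud") record.
img.example.com.   3600   IN   CNAME   example.imgix.net.
```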

Anybody else tried something like this?

Coming soon.

Awesome! With that + HTTP/2 server push, we'll really be flying.

We already started to proxy our S3 / CloudFront assets through our load balancer so they can be cached and served through the SPDY (now HTTP/2) CloudFlare connection. However, since we're using imgix to serve different images by device, we can't allow CloudFlare to cache.

I've set up some tests to proxy Fastly through CloudFlare and my initial tests are inconclusive as to whether the crazy extra hop is worth it. It seems that if we have tons of images, it probably will be faster, but most of our pages only load about 6 images above the fold and lazy load everything else, so that might be why the difference is negligible. I'll have to test on a page where more images download concurrently to see if 1 extra hop to get SPDY and HTTP/2 is worth it.

One advantage of a service like CDNJS for a resource used by a number of unique sites, like jQuery, is that the resource will often be in the browser's cache. That value diminishes quickly if the particular version of the resource and the location from which it is served is not widely used. So, for widely used resources like jQuery, it can still make sense even in an HTTP/2 world to use a third party service. On the other hand, other HTTP/1.1 performance techniques, like domain sharding, can actually substantially hurt HTTP/2 performance.

You're not quite correct. In most cases, yes, but for jQuery specifically, your users will get better performance by continuing to use the Google CDN.

The reason being that most likely they've already got it in their cache and won't make a new call to Google (or you) at all.

There are way too many versions in use everywhere, not to mention way too many CDNs, for this to have much of an impact.

CF is already a CDN, so it's better to pipe all the assets through a single connection rather than take the more likely chance of having to set up another connection just for jQuery.
