
Why we don’t use a CDN: A story about SPDY and SSL (2014) - flipchart
https://thethemefoundry.com/blog/why-we-dont-use-a-cdn-spdy-ssl/
======
bsder
Is anybody else disgusted that 1.6 _seconds_ is considered an acceptable page
load time? When did this happen? Whatever happened to 200ms being the
benchmark?

Dear Lord, people, please stop loading 14 zillion different domains every time
I hit a web page.

~~~
FooBarWidget
If you can tell me how to make a decent looking site with a few MB of images
load in 200 ms, I'm all ears. The only method I can think of is making the
website ugly (i.e. few images and other design assets).

~~~
bsder
1) Put your assets on the same domain.

Did you see how much time was being used in multiple DNS requests?

To an end user, everything not on your domain is _CRAP_. It's not there for
me, the end user. It's there for _you_ , the website owner so you can
aggregate my eyeballs, pitch me something, use somebody's comment service,
extract advertising revenue, push network bandwidth onto Google, jQuery, etc.

2) Why on Earth do you need megabytes of images?

Really? WTF? Just because you took the picture with a gigapixel camera doesn't
mean you need to serve every pixel.

3) Minimize the Javascript

No, you _don't_ need that Javascript framework. Shoot it.

Especially on tablet/mobile browsers, only a very small number of pages stay
cached before the browser has to reload and re-render (and that's _REALLY_
slow).

The research is against you: every extra 100ms of load time means a measurable
drop in retention. Your website will stick out because everything is so snappy.
Users can _feel_ this--especially on tablets and mobile.
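On point 2, one concrete way to stop serving every pixel is the HTML `srcset`
attribute, which lets the browser pick an appropriately sized file. A sketch
(the file names and widths here are hypothetical):

```html
<!-- The browser chooses the smallest candidate that fits the layout,
     instead of always downloading the full camera-resolution original. -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w,
             photo-800.jpg 800w,
             photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Product photo">
```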

~~~
tombrossman
Agree with the above comments about reducing bloat, and I'm surprised by the
number of smart people who think nothing of setting up a WordPress site with
dozens of CSS and JS files.

I just spent 20 minutes trying to find a link to an Apache feature I found
recently, which automatically joins all your CSS or JS files together
server-side with something like an @include. Can't find it now, and as I have
been doing this manually I hope I find it again. Anyone know what this is? I
seem to recall it was a native Apache feature, not part of mod_pagespeed.

~~~
moehm
Do you mean modconcat? [0] There is also an nginx version by Alibaba available.
[1]

[0]
[https://code.google.com/p/modconcat/](https://code.google.com/p/modconcat/)

[1] [https://github.com/alibaba/nginx-http-concat](https://github.com/alibaba/nginx-http-concat)
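For reference, a minimal sketch of how the nginx module is used (the paths are
hypothetical): the client requests several files in one URL with the `??`
syntax, and nginx joins them into a single response.

```nginx
# Enable concatenation for a directory; the client then requests
#   /static/css/??reset.css,base.css,theme.css
# and nginx returns the three files joined into one response.
location /static/css/ {
    concat on;
    concat_types text/css;
    concat_max_files 10;
}
```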

~~~
tombrossman
Nah, found that while searching for the other. I can probably use that,
though. The one I'm thinking of had you put a short instruction at the top
of the file (like @include) if you wanted it to be part of the output file.
Maybe I'm not remembering this correctly, as I can find nothing on it now.

~~~
e12e
Server Side Includes? (mod_include for Apache, mod_ssi for some others.) Not
sure I'd recommend that as a general solution today -- for static sites I
think you're better off building the static HTML and serving that up. With
Varnish, you might want to look at edge-side includes.
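For completeness, a mod_include directive looks roughly like this (the fragment
path is hypothetical), which matches the "@include at the top of the file"
behaviour described above:

```html
<!-- In a .shtml page; requires "Options +Includes" and
     "AddOutputFilter INCLUDES .shtml" in the Apache config -->
<!--#include virtual="/fragments/header.html" -->
```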

------
jdub
The big mistake here is using a split host name CDN (instead of an edge/origin
CDN), and then putting the primary CSS on the split host name.

That means you have to wait for DNS resolution and the TCP connection
roundtrips (or worse, unoptimised TLS roundtrips) just to get the CSS.

If they were on the same host name, and the CSS is the first thing in the
HTML, it'll be downloaded immediately after the HTML.

(And with SPDY, the browser could start downloading the CSS on the same
connection as soon as it parses the link element.)
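The difference can be sketched in the markup itself (host names hypothetical):

```html
<!-- Split host: the browser must resolve cdn.example.com and open a new
     (possibly TLS) connection before the CSS can even start downloading. -->
<link rel="stylesheet" href="https://cdn.example.com/main.css">

<!-- Same host: the CSS download reuses the connection that just delivered
     the HTML; with SPDY it can share the same multiplexed connection. -->
<link rel="stylesheet" href="/main.css">
```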

~~~
youngtaff
Yeah, it would be interesting to see what performance would be like with the
origin served through the CDN, or even using dns-prefetch or tcp-preconnect.

I think all we can gather from this example is that MaxCDN's TLS
implementation wasn't optimal and that they were working on it.

Would be interesting to see how this behaves for other CDNs, or with link rel
optimisations in place.
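The link rel hints mentioned above look like this (host name hypothetical):
dns-prefetch resolves the name early, while preconnect also opens the TCP (and
TLS) connection ahead of the first request.

```html
<!-- Resolve the CDN host name as soon as the HTML starts parsing -->
<link rel="dns-prefetch" href="//cdn.example.com">
<!-- Go further: open the TCP and TLS connection before it's needed -->
<link rel="preconnect" href="https://cdn.example.com">
```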

------
mdekkers
We optimise and host websites for our clients and use an HTTPS/SPDY CDN
exclusively. Most sites are dogs performance-wise when they come to us, and we
aim for half-second page load times on average; anything over 1 second is
unacceptable. I read this article before, I think it was posted to HN
previously, and wasn't impressed at the time. Still not impressed now....

Long story short: Not using a CDN is stupid. Learn to CDN.

------
jgrahamc
See also "Using CloudFlare to mix domain sharding and SPDY":
[https://blog.cloudflare.com/using-cloudflare-to-mix-domain-s...](https://blog.cloudflare.com/using-cloudflare-to-mix-domain-sharding-and-spdy/)

------
ckuehl
Rather than copy their suggestions for SSL configuration, I think you can
probably find better (and more actively maintained) advice on the Mozilla wiki:

[https://wiki.mozilla.org/Security/Server_Side_TLS](https://wiki.mozilla.org/Security/Server_Side_TLS)
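As an illustration of what that guide covers, here is a hedged sketch of the
kind of nginx settings involved -- not a copy of Mozilla's current
recommendation, which you should generate from the wiki itself:

```nginx
# Illustrative only -- protocol and cipher choices go stale quickly,
# so generate a current config from the Mozilla wiki instead.
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;  # resume sessions, saving a round trip
ssl_session_timeout 10m;
ssl_stapling on;                   # OCSP stapling spares clients an OCSP fetch
ssl_stapling_verify on;
```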

------
dedalus
[http://www.webpagetest.org/result/150402_TV_f0fe942e49dc1191...](http://www.webpagetest.org/result/150402_TV_f0fe942e49dc1191e818c0cecba48d76/)
shows it takes 8 seconds to load from Australia.

[http://www.webpagetest.org/result/150402_22_0fa16aec7ebc410a...](http://www.webpagetest.org/result/150402_22_0fa16aec7ebc410a00737efcd1321d3e/)
also shows 8 seconds from Tokyo

[http://www.webpagetest.org/result/150402_AN_22f88770d4bab89c...](http://www.webpagetest.org/result/150402_AN_22f88770d4bab89c84bd986d75da50ae/)
shows 8 seconds from India

[http://www.webpagetest.org/result/150402_CT_49277f5267e82b6b...](http://www.webpagetest.org/result/150402_CT_49277f5267e82b6b75a5f2bc9a3cf589/)
shows 36 seconds from Amsterdam

[http://www.webpagetest.org/result/150402_9G_0810eaa940dc2941...](http://www.webpagetest.org/result/150402_9G_0810eaa940dc2941d0ff83615913e959/)
shows 12 seconds from Buenos Aires

Not so sure that you don't need a CDN based on this

------
bkchung
My experience:
[https://medium.com/p/196b5024899c](https://medium.com/p/196b5024899c)

Major browsers will not support h2c switching (upgrading from plain HTTP,
rather than negotiating over TLS), and faster (but less secure) ciphers or
null ciphers will not work in those browsers either, so speed in the HTTP/2
world will likely be something on top of SSL's latency.

------
troels
In the Australia comparison, the CDN version spends almost 1.5 seconds on the
DNS lookup for the main domain. In the non-CDN version this figure is much
lower. It seems disingenuous to attribute this overhead to the CDN, no?

And as an aside -- maybe there is some low-hanging fruit in tuning their DNS
setup instead? 1.5s seems extremely high to me.

------
puppetmaster3
Why we don't use a CDN: NIH.

~~~
wampus
In this case, that's an appropriate attitude. If your environment is a mess
and you add complexity with a CDN, you've got a complicated mess. But a local
optimization like this benefits your entire organization and puts off the need
for a CDN (a solution to a problem that no longer exists). They may grow to a
point where a real strain is being put on their systems, they can't make local
improvements, and involving a CDN is the right choice.

