

Make your website fast.  (And why you should). - TomGullen
http://www.scirra.com/blog/74/making-a-fast-website

======
asolove
With the way people use browsers these days, load time has become even more
important. As I tell my clients and bosses:

"It's never a choice between two seconds and five seconds. It's a choice
between two seconds and whenever the hell they come back to this tab, possibly
never."

Many people go about web performance the wrong way, though. It usually isn't
about algorithms, or hard work of any kind. It's mostly about rigorous
preparation (a build process to combine and minify your assets) and outright
lying (async loading, etc.).
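
For example, "async loading" often just means injecting the script tag yourself so the download doesn't block parsing or rendering. A minimal sketch (the URL is just a placeholder):

    // Inject a script tag with the async flag so it downloads in parallel
    // and executes whenever it arrives, without blocking the parser.
    function loadScriptAsync(src: string): void {
      const s = document.createElement("script");
      s.src = src;
      s.async = true;
      document.head.appendChild(s);
    }

    loadScriptAsync("https://example.com/widgets.js"); // placeholder URL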

~~~
TomGullen
You're right, imo: if the site is well designed, most techniques for a good,
fast website really don't require much maintenance at all. If it's badly
designed, however, they can be a headache. Some of them are just bog-standard
good practice anyway (like specifying image dimensions).

------
firefoxman1
This is the same stuff I've been hearing for the past 5 years.

I think a few things are changing though. The new Basecamp's speed is mind-
blowing. One thing that interested me about their approach was that they claim
to use static HTML whenever possible. This allows them to cache a single file
and serve it to everyone.

So I may have to disagree with the age-old wisdom: _"It’s important to keep
your Javascript in external files where possible, since it allows the browser
to cache the scripts and not load them on every page load!"_

Why? With single-page apps I think we need to separate out "Javascript" into
two categories: Actual code and templates. I've been playing around with the
37Signals approach, where I store the site-wide base template inside a script
tag right in the body. Then I send a cache header the first time that HTML is
requested to ensure it's only loaded once. Then page-specific (route-specific)
templates and scripts are asynchronously loaded. So my actual "code" JS is
still loaded with src="..." but templates aren't.
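
Roughly (the names are made up, just to sketch the split): the base template ships inline in the HTML and is read once, while route templates are fetched asynchronously as you navigate:

    // The site-wide base template lives in a script tag in the body, e.g.
    // <script type="text/x-template" id="base-template"> ... </script>
    const baseTemplate =
      (document.getElementById("base-template") as HTMLScriptElement).innerHTML;

    // Route-specific templates are pulled in lazily and kept in memory.
    const routeTemplates = new Map<string, string>();

    async function loadRouteTemplate(route: string): Promise<string> {
      const cached = routeTemplates.get(route);
      if (cached) return cached;
      const res = await fetch(`/templates/${route}.html`); // hypothetical path
      const html = await res.text();
      routeTemplates.set(route, html);
      return html;
    }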

~~~
shingen
I think the improvements 37 Signals has delivered on speed have a lot more to
do with recent database optimization and hardware upgrades than anything to do
with static js file serving.

~~~
dreamdu5t
I agree considering their existing services are much faster now, without any
of these optimizations...

It's also easy to be fast when the functionality is so simple.

------
icebraining
On the "Serve your pages compressed" section, one should point out that static
files (usually CSS, JS, etc) can be pre-gzipped (and pre-packed) to reduce
server load. Serving the right version depending on the Accept-Encoding header
is usually easy in most webservers.
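
Most web servers can do this for you; a rough Node/TypeScript sketch just to show the logic (paths and port are placeholders): compress each asset once at deploy time, then hand out the .gz variant whenever the client advertises gzip support.

    import * as fs from "fs";
    import * as http from "http";

    http.createServer((req, res) => {
      const url = !req.url || req.url === "/" ? "/index.html" : req.url;
      const file = "public" + url;   // pre-built static root (placeholder)
      const gz = file + ".gz";       // produced once at deploy time
      const wantsGzip = /\bgzip\b/.test(String(req.headers["accept-encoding"] || ""));

      if (wantsGzip && fs.existsSync(gz)) {
        // Serve the pre-compressed copy; no CPU spent compressing per request.
        res.writeHead(200, { "Content-Encoding": "gzip", "Vary": "Accept-Encoding" });
        fs.createReadStream(gz).pipe(res);
      } else {
        res.writeHead(200);
        fs.createReadStream(file).pipe(res);
      }
    }).listen(8080);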

~~~
TomGullen
Didn't know that, thanks. As far as I can tell, though, IIS caches the gzipped
versions of static resources, so I don't think it would make much of an impact at all.

~~~
jacques_chester
It makes a useful difference in performance for a website I run.

However, the single biggest speedup I ever got (on Wordpress) was in pushing
MySQL onto a separate machine from the web server. Nothing else has come close
in my experience.

------
CWIZO
I was hoping for something more than what Google's Page Speed or YSlow can
tell you. Is this really "everything" one can do to optimize a site? I'm
really hungry for some more advanced (if you will) techniques, as I don't know
what else to do with our pages (they are fast, mind you; I'd just like to make
them faster still).

And to add something constructive (besides a question) to the whole thing:

"Aggressive caches are best, with an incrementing querystring parameter at the
end of them to force browsers to reload them when you make a change to them."

That is good. But even better is to include the version number in the filename
(or path) itself: image.jpg?v=2 -> image_v2.jpg. This way the URL doesn't throw
public proxies off, and they will cache your content. See this page for
more details: [http://code.google.com/speed/page-
speed/docs/caching.html#Le...](http://code.google.com/speed/page-
speed/docs/caching.html#LeverageProxyCaching)
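
A build-time fingerprinting sketch (hypothetical helper, not from the article): hash the file contents into the name, so the asset can be cached forever and a new version is simply a new URL.

    import * as crypto from "crypto";
    import * as fs from "fs";

    // Copies image.jpg to something like image_ab12cd34.jpg and returns the
    // new name, which you then reference from your HTML/CSS.
    function fingerprint(path: string): string {
      const data = fs.readFileSync(path);
      const hash = crypto.createHash("md5").update(data).digest("hex").slice(0, 8);
      const versioned = path.replace(/(\.[^.]+)$/, `_${hash}$1`);
      fs.copyFileSync(path, versioned);
      return versioned;
    }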

edit: I would also like to see an article explaining how EXACTLY Google is
measuring our page load times. Because everything I've measured myself on our
pages is fucking fast, but Google seems to disagree (the results differ by an
order of magnitude). The only logical explanation I've come up with is that
Google is measuring the speed of our page from the States and our servers are
in Slovenia. That means the page is super fast for our Slovenian users (who are
the only ones that matter, since it is a local site) but slow for Google, which
is way over on the other side of the Earth. This really pisses me off, to be
honest, since there is only so much I can do and we are being punished (as in
lower rankings due to speed) for not following rules that we don't even know.

~~~
TomGullen
In response to your edit, I agree it would be nice if we knew more. Our
website performance chart is under-representing our site, in my opinion, and
there doesn't seem to be much we can do about it.

I also think they are possibly counting Tweet button/g+1/Facebook frames in
the time, which is wrong in my opinion, as these are such small fragments of a
webpage yet are often responsible for a large percentage of the load time (for
us it is up to 50%!)

~~~
CWIZO
+1

Also, I think that if you have video (at least Flash, since that's what we
have) or games, Google times how long it takes for the complete video/game to
load. Which is complete madness, considering that the video/game is playable
long before it's fully loaded. But I can't say it really does that, since I
have no way of knowing :/ I do think you are correct on the widgets part,
though.

~~~
pbhjpbhj
If you've got a few games, or games on different sites, then it seems this would
be an easy thing to do A/B testing on. You could provide (what amounts to a
splash screen) a small Flash file that loads a second Flash file when you
click "play" or "help" or whatever. Do this for one set of sites and compare
how they do in the SERPs, for retention, etc.

Then make an HN post about it.

------
aw3c2
Nothing new to see, nothing newsworthy, just the usual bunch of suggestions on
how to speed up a website. Sorry to be so negative, but this is just random
noise and not something I want to see for the hundredth time.

~~~
chrisacky
It sounds like you already know everything there is. How great it must be to
have enlightenment such as your own.

This was a very well-structured and thought-out article. It is actually
identical to an article I have been wanting to write for a while. One thing I
might have added: while the spec notes 2 open connections per host, most
browsers these days allow considerably more.

> Chrome allows 6 connections per hostname
> FF allows 6 connections per hostname

Even the iPhone allows 4 connections per hostname.

Good job on the article. Exactly as you say, even if one person doesn't find
it useful, that doesn't mean other people share the same sentiment!

~~~
nigra
You should always follow the standards.

~~~
chrisacky
I'm not sure if you are being sarcastic or serious, but it's pretty common
practice to "hack" around particular provisions noted in lots of different
specs in order to provide faster browsing (perceived or actual) for the
user.

E.g., RFC 3390 ( <http://www.rfc-editor.org/rfc/rfc3390.txt> ) provides the
formula for the maximum initial window for TCP traffic:

    min(4*MSS, max(2*MSS, 4380 bytes))
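
(With a typical Ethernet MSS of 1460 bytes, that works out to min(5840, max(2920, 4380)) = 4380 bytes, i.e. three full segments in the first round trip.)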

Just a few lines later the spec _specifically_ says that:

> "This change applies to the initial window of the connection in the > first
> round trip time (RTT) of data transmission following the TCP three-way
> handshake."

Well, guess what: lots of large companies ignore this and start with a much
larger initial window. They do this to shave off as many round trips as
possible and reduce your load time. (Google, Amazon, Apple, etc.)

This spec was written 10 years ago! (Read here for more information
[http://blog.benstrong.com/2010/11/google-and-microsoft-
cheat...](http://blog.benstrong.com/2010/11/google-and-microsoft-cheat-on-
slow.html) )

------
chrisacky
On the topic of using subdomains: I was thinking of using subdomains as a way
of cache busting when I release new versions, rather than the typical build
number in the URL folder name. I didn't build my application to have a
"version" number variable, but I did have a CDN URL (foolish mistake). So I'm
left with the annoyance of having to change a lot of code to include this.

~~~
elithrar
> I didn't build my application to have a "version" number variable.

What about just post-deploy hooks with your preferred version control
software? Identify your CSS/JS, add a current time+date?

~~~
chrisacky
The problem with that is that it could prevent the CSS/JS from ever being
cached, since proxy servers will commonly/often/always refuse to cache
resources with a query string?

I think I am just going to bite the bullet, go through everything I have
ever written, and wrap it in some "CDN"izer method to easily include a version
number.
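
Something along these lines, probably (the host and build number are made up):

    // Prepend the CDN host and bake a build number into the path, so each
    // release busts every cache without relying on query strings.
    const CDN_HOST = "https://static.example.com"; // placeholder
    const BUILD = "r1024";                         // bumped on each deploy

    function cdnize(assetPath: string): string {
      return `${CDN_HOST}/${BUILD}${assetPath}`;
    }

    // cdnize("/css/site.css") -> "https://static.example.com/r1024/css/site.css"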

------
pbhjpbhj
Google Page Speed can be used online now -
<https://developers.google.com/pagespeed/>.

Also, wrt the article, <http://www.scirra.com/blog/74/making-a-fast-website>,
won't naively bottom-loading all your JS lead to the scripts being serialised
and thus cost you in page load time?

Also if your JS modifies the DOM tree shouldn't it arrive earlier so as to
prevent reflows and such?

See eg [http://code.google.com/speed/page-
speed/docs/rtt.html#PutSty...](http://code.google.com/speed/page-
speed/docs/rtt.html#PutStylesBeforeScripts)

~~~
rimantas

      > Also if your JS modifies the DOM tree shouldn't
      > it arrive earlier so as to prevent reflows and such?
    

And what will it modify, if DOM has not arrived yet? Javascript on top can
block other components from loading, hence the recommendation.
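
One common middle ground (just a sketch; the element id is hypothetical) is to let the script load wherever it's fastest but hold the DOM-touching work until the document has been parsed:

    document.addEventListener("DOMContentLoaded", () => {
      const nav = document.querySelector("#nav"); // hypothetical element
      if (nav) nav.classList.add("enhanced");     // safe: the DOM exists now
    });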

------
quattrofan
Not bad, but very much focused on optimising the "front-end" of delivery.
My experience is generally in optimising the back-end. For instance, I notice
you did no real digging to find out why your old server was slow: was it CPU
bound? Memory bound? IO bound? In a lot of cases you can get big performance
increases without expensive new hardware; the golden rule here is CACHE
EVERYTHING. A good caching plan, a product like memcached, and cheap hardware
with lots of memory can often well outperform a much more expensive multi-core
rig.
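
The basic cache-aside pattern is the same whatever the store; a rough sketch (an in-memory Map stands in for memcached here, and the names/TTL are made up):

    // Check the cache first, fall back to the database, store the result.
    const cache = new Map<string, { value: string; expires: number }>();
    const TTL_MS = 60000; // cache entries for one minute (arbitrary)

    async function getPage(
      slug: string,
      loadFromDb: (s: string) => Promise<string>
    ): Promise<string> {
      const hit = cache.get(slug);
      if (hit && hit.expires > Date.now()) return hit.value; // cache hit
      const value = await loadFromDb(slug);                  // expensive query
      cache.set(slug, { value, expires: Date.now() + TTL_MS });
      return value;
    }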

~~~
joshfraser
The reason for that is the Performance Golden Rule, which states that 80-90%
of the end-user response time is spent on the frontend. This rule turns out to
be surprisingly accurate across the internet.

[http://www.stevesouders.com/blog/2012/02/10/the-
performance-...](http://www.stevesouders.com/blog/2012/02/10/the-performance-
golden-rule/)

~~~
quattrofan
Actually, I've not seen this. Excellent, thanks for posting.

------
rb2k_
My current setup makes me pretty happy in terms of having a speedy blog:
Octopress + S3 + Cloudflare.

Octopress will generate static HTML and use Disqus for comments:
<http://octopress.org/>

S3 added the ability to serve an index.html file a while ago:
[http://www.allthingsdistributed.com/2011/02/website_amazon_s...](http://www.allthingsdistributed.com/2011/02/website_amazon_s3.html)

Cloudflare will be your DNS host and CDN in one: <http://www.cloudflare.com/>
(they also have a lot of other fun features)

I tried Amazon's Cloudfront too, but it isn't as great for hosting static
files (especially index.html files in subfolders, which seemed to error out for me).

We also use Octopress to run a German-language podcast
(<http://blog.binaergewitter.de/>). Adding the iTunes feed and Flattr support
wasn't all thaaat hard, although I still think the Liquid template system is
kinda weird. (Our "Octopod" fork is here:
[https://github.com/Binaergewitter/binaergewitter.github.com/...](https://github.com/Binaergewitter/binaergewitter.github.com/tree/source)
, not 100% reusable though)

------
joshfraser
I'm surprised to see there hasn't been any mention of webpagetest.org yet.
It's a great (free) service for measuring how fast your site is loading from
different locations around the world in most of the major browsers.

The first step in losing weight is stepping on a scale.

------
jaequery
This article left out the importance of IO. From our past experience with
several projects, implementing SSD/SAS was the single biggest improvement in
performance (you can notice it right away visually); all the other
optimizations (minify/YSlow/etc.) were just very subtle improvements. A CDN is
very good at relieving load off your server and becomes pretty important at
times of an HN/Digg/Slashdot effect. I've been toying around with Cloudflare
lately and I think it's a good replacement for your DNS hosting; it gives you a
CDN out of the box just by pointing your DNS at them.

------
mmorey
Ironically the site is down. Anyone have a cache link?

Edit: working now

~~~
TomGullen
Hi, it's up for me? Have you tried via a proxy?

~~~
mmorey
Working now.

------
pqdbr
Tom, I was researching on the web about setting up a separate, cookieless
domain.

Should I use CNAME aliases or direct A records for those static0 ... static3
hostnames?

------
franze
i don't know why sprites are still recommended. they save big on HTTP requests,
but they have quite a bit of DOM rendering overhead. the best icons are no icons.
you can cut back on icons big time without losing any usability (as measured
by time on site, average page views, ...). the benefit of a faster site by far
outweighs the loss of some meaningless icons that are mostly only there to
stroke the ego of designers. (but if you still need icons, use unicode)

~~~
rimantas
Care to provide numbers on how "big" that DOM rendering overhead is? Saving HTTP
requests is _very_ important and very noticeable, especially on mobile (high
latency). DOM overhead from sprites is negligible in comparison.

~~~
franze
yeah, i agree totally, but even better are no icons at all.

i don't have exact figures at hand, but on one site (which used sprites very,
very heavily) we managed to save about 250ms in DOM rendering - on some slow,
crappy, ugly platforms (not talking about IE6 here, but IE7 and 8 aren't much
better in that regard).

i recommend using chrome speed tracer extensively for DOM rendering
optimization. what is just a little bit slow in chrome is unbelievably slow in
IE.

------
thegyppo
Been using Edgecast via Speedyrails for a few years now. Top quality CDN.

------
jebblue
Fast? No problem. GWT. Done.

------
shingen
That's a solid article for anybody getting started with web development, but
it really needs more coverage of the importance of CDNs for static content.
Between Amazon, Microsoft, Rackspace, etc., they're cheap and trivially easy to
use. They're now extremely accessible to less experienced developers and
developers on a shoestring budget.

My business has content that users can freely embed on external sites; it's
dynamic locally, but served up static (15-minute refreshes) off of Rackspace
Cloud Files CDN for the embed. The cost to serve 1 billion widgets on AWS came
out to $1,236 due to the transaction costs, but on Rackspace (which has
compression with Akamai and no per-transaction cost) it got down to $180, which
is insanely cheap for serving a billion widgets.

~~~
TomGullen
Thanks for the feedback and recommendation!

I agree with you on CDNs. I was in two minds about which direction to take with
the article, as my original title was going to be along the lines of "... without
spending any more money". I think using your own subdomains instead of a CDN is
often more beneficial to startups, as it saves money, although it will cost a
little time at first.

Also admittedly I've very little experience with commercial CDN solutions (but
this will change soon I think) so didn't feel confident enough to write about
it.

~~~
shingen
I can definitely understand that.

I'd recommend you give Rackspace a try. You can sign up for their CDN for
free; they hit you for the storage and bandwidth used, but not per
transaction. It's worth experimenting with in a very simple way just to get
familiar with it (small text files, CSS files, and so on). Rackspace has a very
convenient web-based control panel for moving a smaller number of files into
their CDN (and of course APIs).

Amazon's Cloudfront is excellent from the standpoint that you don't need to
programmatically upload any files into it; Amazon pulls them into its CDN
and then serves the files up thereafter (you just create a DNS entry for
Cloudfront, like cdn.mysite.com, and use the same URL structure you would on
mysite.com). With Rackspace you have to get the files into their cloud, which
can be inconvenient if you're talking about huge numbers of individual files.

For less than $5 you can absolutely learn everything a developer would need to
know about both of those offerings through experimenting with them.

~~~
dminor
We didn't go with Rackspace/Akamai specifically because they don't have a
custom origin pull - pushing your assets somewhere else is at the very least a
pain, and if you have images or video that is dynamically sized based on the
URL it's pretty much a nonstarter.

We looked at Cloudfront but they don't have an Australian distribution point.
In the end we went with EdgeCast (via GoGrid).

Originally we were with Voxel, but they've had a couple network issues in the
past month (result of DDoS attacks I think).

~~~
TomGullen
Edgecast was one I looked at, but it was more expensive than other options for
a site like ours. Are they _better_ than other CDNs?

~~~
dminor
They are cheaper through resellers like GoGrid. I believe it is competitive
with Cloudfront.

As far as speed goes, they were among the best of the ones we were able to test.
There are lots of variables involved though that make it kind of hard to
compare - how long things stay in the cache at the edge vs. your traffic
patterns, etc.

