
Does CloudFlare really speed up your site? Tests say not really. - chewxy
http://blog.dh42.com/cloudflare-showdown/
======
driverdan
I'm not surprised he got these results. While the author raises some valid
points, there are a lot of flaws with this. He doesn't say what the times are
for. Is it the time to DOM ready? To the page completely loaded and rendered?
Did he load each site at least once before running the tests to make sure CF's
caches were primed?

CloudFlare can make your site faster _or_ slower depending on how you use it
and what you're serving through it. Since it's a proxy, response times will be
higher for dynamic content it needs to fetch (the page itself, unless it's
cacheable). Static content, i.e. everything else, will be served through its
CDN (given proper cache headers), which will almost always be faster, again
assuming CF's cache is primed.

The author's findings may be accurate for a base-level install of these
platforms with no performance improvements. I believe that without proper
caching headers CF is going to have to query your server for everything to
make sure nothing has changed. Sending proper cache headers with your static
files will eliminate this issue and improve your performance. AFAIK none of
these platforms use proper caching headers out of the box.
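For what it's worth, "proper cache headers" here just means something like the
following sketch. The one-week lifetime is an arbitrary illustrative choice,
not anything these platforms actually ship:

```python
# Sketch of the Cache-Control header that lets a caching proxy like
# CloudFlare serve a static file from its edge without re-querying the
# origin. The one-week lifetime is an illustrative assumption.
ONE_WEEK = 7 * 24 * 3600  # seconds

def static_cache_headers(max_age=ONE_WEEK):
    # "public" allows shared caches (CDN edges) to store the response;
    # "max-age" says how long to serve it without revalidating.
    return {"Cache-Control": "public, max-age=%d" % max_age}
```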

One last nitpick. Why the hell is this article spread across 6 pages? That's
incredibly useless, annoying, and takes away from the user experience.

Edit: Originally I called out the author for claiming his site has a PageSpeed
score of 97. I did so because the site issues 57 requests, including 8 CSS and
20 JS files, some of which aren't even minified. I assumed there was no way
his score could be 97 with these glaring issues. Turns out I was wrong. I
tested it[1] and it _does_ have a score of 97, which just goes to show that
PageSpeed and YSlow have plenty of their own issues. IMO his site shouldn't
have over a 90 based on these obvious and easy to fix flaws.

1: <http://www.webpagetest.org/result/130105_G4_DEA/>

~~~
saurik
> Since it's a proxy response times will be higher for dynamic content it
> needs to fetch (the page itself, unless it's cacheable).

This is actually not true; I mean, it might very well be true _for CloudFlare_
(as they may be doing something exceedingly stupid at the edge, might have bad
connectivity for their servers far from the trunk, etc.), but it does not
follow in general: due to how TCP works, there are numerous advantages to both
throughput _and latency_ from adding an intelligent middle-man. For reference:

<http://news.ycombinator.com/item?id=2823268>

<http://news.ycombinator.com/item?id=4203371>

~~~
driverdan
Interesting point. This will be dependent on the site receiving significant
traffic, will it not? If the CDN node you go through doesn't have an open
connection it will be slower. If that CDN node isn't getting new traffic it's
going to close the connection; it will only hold it open for a reasonable
length of time.

------
UnoriginalGuy
This article/site appears to be down. Cannot help but wonder if it wouldn't be
if they had used CloudFlare...

Jokes aside, I never really expected that CloudFlare would increase the speed
of your average site. I mean there is limited caching going on but in general
that isn't the benefit of a CDN.

The benefit of a CDN is: consistent speed across geographical zones (Europe,
Americas, Asia, Russia, etc), better handling of load variations (Slashdotted,
etc), and also some level of DDOS protection (just due to the virtue of more
availability).

Any increase in speed supplied by CloudFlare depends a great deal on how the
underlying site delivers content and how well cache-control is done. For
example, if you move all of your static content onto a static host (e.g.
static.example.com) and then set the cache to a week, CloudFlare is going
to do a lot more for you than if your app supplies most content dynamically
with no-cache set.

~~~
Sami_Lehtinen
A CDN doesn't help at all if it is based on caching only (like CloudFlare). In
some cases HTTP headers prevent all benefits of having a CDN, like pages with
no-store, no-cache, post-check=0, pre-check=0. In these cases a CDN might only
add latency.

I have seen sites hosting images with those headers too (like Google sites).
If you're using a slower network connection it becomes painfully apparent that
all images are always completely redownloaded. With these parameters images
must be downloaded from the source, or otherwise a caching CDN would break
things. Other CDN networks like Coral Cache clearly state that they always
cache content, whatever the headers say... but that isn't acceptable for all
sites.
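The decision a caching CDN faces with those headers can be sketched roughly
like this. The directive names are real Cache-Control values, but the parsing
is deliberately simplified (no quoted values, no Expires fallback):

```python
# Rough sketch of whether an edge cache may store a response, based only
# on its Cache-Control header. Deliberately simplified parsing.

def edge_can_cache(cache_control):
    directives = {part.strip().split("=")[0].lower()
                  for part in cache_control.split(",") if part.strip()}
    # Any of these sends the CDN back to the origin every time, so the
    # proxy hop only adds latency for such responses.
    return not directives & {"no-store", "no-cache", "private"}

edge_can_cache("public, max-age=604800")                     # CDN can help
edge_can_cache("no-store, no-cache, post-check=0, pre-check=0")  # it can't
```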

------
dorianj
The unlabeled referral link here at the end praising his host bothers me a bit
-- shouting praises for a good service is fine, but at least note that you are
getting a kick back.

> Because I know I will get an angry email or two, just let me say this. I
> know that Cloudflare touts other features such as security, load balancing,
> and keeping your site up if your server is offline. I did not test or take
> any of these features into account. Their biggest sector is to people that
> want to speed their website up, so I took them to task on just that claim.

If this is true and people really think that CDN = faster site, then it's a
misunderstanding, perhaps perpetuated by malicious marketing by the CDNs
themselves.

But putting a proxy in front of an unloaded site (hosted on a relatively fast
server) is of course unnecessary.

Re-run the test using `ab2' instead of simple 3-hit tests, and that's when the
CDN becomes more useful. Or perhaps host the site on an oversold bargain
shared server and see how it fares.

------
jws
This is a small test[1] that shows the absence of miracles. My interpretation
is:

• Location, location, location. – Sometimes cloudflare will have better
positioning than your server. In this test the Amsterdam clients benefit
significantly compared to his Atlanta, Georgia, US-based server.

• It is not magic. – For the content he is using (base installs of popular
packages and a highly web-optimized site) cloudflare's compression and
optimizations are not helping much (at all?). My next path for further study
would be to see if these base installs are highly optimized already. It seems
reasonable that they would be, but it would need to be looked at. Then
consider whether you are planning to have unoptimized content (3rd-party
generated), or could benefit from skipping the optimization task and living
with what cloudflare does.

• Do you only have one customer? – One of the attractive features for me is I
can share my "worst case"[2] customer load with someone who won't even notice
the blip. Tests during a deluge would be interesting.

• Does no one hate you? – Haters aren't necessarily sane[3]. Performance tests
during a DDOS would be interesting.

I'm evaluating cloudflare for a site. I don't expect it to out optimize me. I
do expect it to help European and Asian load times, and I have high hopes that
it will kick in during bad times and make them less bad. To that end, does
anyone know of a friendly DDOS service? I'd like to be able to schedule an X
gbps DDOS for Y seconds with traffic of form Z for testing purposes.


[1] 26 numbers, each made from three samples, five minutes apart, averaged
together. No standard deviations.

[2] Which is the best case in the big picture. Just hard for the computers.

[3] I ended up retiring an IP address out of our C block because someone hated
it. It was the "friends and family" email and hosting machine back before
Facebook and free web mail services. Maybe someone got offended by something,
and it got a persistent DDOS attack that would saturate our incoming IP
links. Six months later when I tried to reuse the IP, the attack came back
immediately. I just marked it "unusable" in our DNS files.

~~~
nwh
Could it be that your denial of service attack is a misconfiguration
somewhere? I had a similar issue at one point, a typo in someone's DNS meant
that my private server got absolutely wiped out by junk requests.

I've heard of people using simple load testers like <http://loadimpact.com/>
to test dynamic pages, though I've not had a serious use for one yet. That's
probably as far as you can get without hiring a botnet yourself.

~~~
jws
The DDOS had a variety of strategies. Lots of UDP flooding with presumably
forged source addresses. Lots of tcp port hunting too. That was a company ago
and I no longer have access to my report on that.

------
jakobe
I stopped using Cloudflare because of their "protection service". They would
block certain visitors, seemingly at random, and present them with a captcha.
The page presenting the captcha had big ads on it.

There's no way that it's acceptable to show ads and captchas to potential
customers, before they can even see my website.

~~~
thematt
Were you on the free or paid plan?

~~~
jakobe
I was on the free plan. I didn't see any reason to pay for the service,
because the free plan had everything I wanted. It seemed too good to be true.
Until I saw that captcha page. I was surprised, as I had read all the docs,
and nowhere was it mentioned that ads would be shown by cloudflare to my
customers. Made me feel amateurish. I immediately deactivated cloudflare and
decided to be more careful with 'free' services in the future.

------
carsongross
If you are fronting a heroku-based app on their cedar platform and have rails
(or whatever) serving up static resources, CloudFlare is a no-brainer: you
free up your dynos to handle the dynamic stuff, and it's a few config clicks
to get it going, with no crazy asset deployment steps.

Additionally, for the cost of _just_ Heroku's SSL endpoint per month,
CloudFlare will effectively issue you a wild-card SSL cert (hundreds of
dollars a year) and provide SSL service.

Add on top of that the CDN and the DDOS-mitigation features... Well, suffice
to say, I love CloudFlare.

~~~
blakefrost
My experience with CloudFlare couldn't be more to the contrary.

We rolled it out for a good few months and gave them quite a while to get
their act straight. Ultimately, we had to do an emergency switch to another
CDN because the performance was SOOOOOO bad and we had an important event
occurring the following day (not the ideal time to be playing with DNS on a
production website).

The theory behind CloudFlare makes sense, right? They'll protect you from DDOS
by getting everyone onto their network, so the network gets so big no one can
take it down, and they have specialized equipment and techniques. Well, maybe
that makes sense if you have a problem with DDOS, but if you don't, why join a
network that is obviously being DDOSed every day? That doesn't make much sense
to me. I assume they were being DDOSed because every time they went down,
taking us with them, that's what they would say on twitter.

The worst part was response times. With them, individual assets were taking
around 500ms to 800ms to load. Once we switched to another service provider,
we were seeing around 20ms-30ms. And if it's not already obvious, dynamic
pages served off Heroku are faster if they're not stuck behind CloudFlare. Our
total cold page load time went from 5s-6s down to 2s with this switch.

Also, all the asset rewriting and page optimization magic is so silly IMHO.
Just use a good framework like RoR with the Asset Pipeline and write good code
and you won't have that problem. It shouldn't be much of a problem for a small
site, and a big site should have competent programmers and adequate resources.

Also the SSL cert they give you sucks. It will have a bunch of other
companies' names on it, and besides allowing you to roll out SSL very quickly
and easily, it doesn't do much in the way of validating your identity.

I wish CloudFlare luck, and hopefully they will fix their issues. Until then,
I'm staying away from them.

~~~
carsongross
My concern is more about scaling than raw page speed: of course going through
a reverse proxy is going to be slower end-to-end than going directly to
heroku. But taking the static asset load off of heroku without a Rube Goldberg
CDN deployment is great, IMO: they just leverage standard HTTP caching headers
and then those requests rarely if ever hit your dynos, like the old varnish
functionality you got with the bamboo stack on heroku. It's much more
brain-dead, which works for me.

I don't use or care much for all the dynamic optimization they offer (I turned
it all off) and, with respect to the SSL cert, I don't really care about how
it is issued (to a first approximation, no one looks at certs except the
browser verifying the cert against the URL) but YMMV.

It does appear that wildcard certs have gotten _very_ cheap now:
<http://www.namecheap.com/ssl-certificates/comodo.aspx> so the cost savings
isn't as great as I thought. (Thanks for pointing that out aioprisan.)

I've found that heroku performance thinking is a bit different than raw page
speed thinking: my goal in life is to minimize the load on my dynos. I'd be
willing to put up with a CDN that was _slower_ serving static assets than
heroku is, just to keep the load off the dynos, but I've found CloudFlare to
be plenty fast.

~~~
onetwothreefour
For your point about not looking at certs... that's not true, since all
browsers now show the organization name in the address bar for EV certs. And
people definitely look at those, in my experience.

------
wldlyinaccurate
If dh42.com was behind CloudFlare, it wouldn't be down right now.

------
mazsa
FYI: [http://www.x-pose.org/2012/06/cloudflare-response-times-
are-...](http://www.x-pose.org/2012/06/cloudflare-response-times-are-getting-
worse)

~~~
jws
Would be more interesting if it stayed on the page for more than 200ms.
Displays, then goes white in Safari. Chrome is fine.

~~~
X-Istence
The author's site has this in the <html> tag after javascript has run:

    style="margin-left: -32767px; "


Not sure why that is there, or what use it has. In Chrome for some reason an
empty style tag is applied to the <html> tag, it too flashes to white and then
back, so I assume some JavaScript is adding the margin-left and then removing
it.

\---

Some quick testing ... may not be accurate:

It seems that the *-blink.js stuff that comes down from gstatic.com is the
culprit.

~~~
xpose2000
The blog is using google pagespeed service with prioritize content caching
enabled. blink.js helps render the page faster. I had no idea rendering was
broken in Safari. I just hard coded margin-left: 0 !important as a temporary
fix.

Thanks for pointing this out!

------
andypants
If you are going to run these tests, you need to write how you tested, what
you tested with, how many times you tested and exactly what you are
measuring...

There is basically no information there except some load times, which we are
supposed to just take your word for. Even the load times are unclear about
what is included in that time, and if they are aggregated values or not.

Also, don't paginate it over so many pages. You aren't even running any ads,
so there's no point.

------
joshfraser
There are a ton of variables that affect website performance --- location,
connection speed, browser used, etc. There is simply no possible way to get an
accurate measurement of how a CDN will impact the performance of your site
without using RUM (JavaScript instrumentation of the page to record the actual
experience for every visitor). Every website has a unique audience with unique
characteristics. We offer a free RUM tool at Torbit for anyone interested in
getting an accurate measurement of how Cloudflare (or any CDN) is performing.

Check out how Wayfair used Torbit to measure how much of a difference Akamai
was making to their site: [http://torbit.com/blog/2012/07/23/wayfair-uses-
real-user-mea...](http://torbit.com/blog/2012/07/23/wayfair-uses-real-user-
measurement-to-evaluate-their-cdn/)

If you're interested, I'd love to help you do a similar test for your site.

------
druiid
CDN's are not magic, neither are proxy services. Cloudflare is a proxy service
which is able to do the following things for you:

Distribute load around the world, whereas if you're not a massively large site
like Facebook you will only have one, maybe two, points of presence. Protect
you from DDoS attacks if someone decides they don't like you. As long as you
set proper caching rules for ALL your static content (keyword: ALL), there is
a decent chance that your bandwidth usage will drop by half or more... the
list goes on.

I can't comment on this particular test as the page is down (guess they
probably should be using a proxy service or CDN...), but if it's anything like
the previous report of Cloudflare slowness, it was light on science and heavy
on personal opinion.

Edit: It appears that this page was using a shared hosting provider called
Netfirms? Probably shouldn't do that and then post to HN front-page...

------
Falkvinge
This article misses the point entirely.

CloudFlare doesn't shine on single page loads. CloudFlare shines when your
site hits Reddit's front page, you have 1000 people online at the same time,
and your servers are already pumping 30 megabits per second of data of content
to visitors.

Source: I hit Reddit's front page a couple of times a month. Before I used
CloudFlare, my servers would die. Now, they idle leisurely at loads 1.0 to 2.0
when that happens.

Cheers, Rick

------
gojomo
It's unclear if the tools used (Pingdom/Neustar) put cache-busting headers on
their requests. It's likely that they do: such probes are more
comparable/reproducible between runs, and more likely to give guidance to the
app-designer/web-designer (as opposed to the cache-runner).

If so, of course CloudFlare would be slower in such tests: its largest
benefit, caching unchanged resources for subsequent reloads, has been
disabled.

Also, it's hard to take performance hints from someone who splits such a
short, simple blog post across 6 (!) pages. Six discretionary click-requiring
page-loads is _always_ worse than one, and is the easiest thing to fix if
you're respecting my reading-time.

------
jaysonlane
Wonder if this blog post would've been better off using CloudFlare as it's
struggling through HN traffic...

------
bbuffone
I have to wonder about the testing results that were obtained. Testing the
improvement from optimizations needs to be done carefully; you need to make
sure you capture sufficient data to draw conclusions.

#1 -> You need to capture enough data samples for each location and browser
#2 -> You need to capture data from a set of global locations
#3 -> You need to capture data from the commonly used web browsers

You can see a test run from a single location and browser using one sample:
(2.873) seconds for the time to interact.

<http://www.websitetest.com/ui/tests/50e89376479876092f000012>

But when you run the test over 17 locations with 5 samples for each location:
(6.4) seconds for the time to interact.

<http://www.websitetest.com/ui/tests/50e893d7479876092f000016>

There is a big difference between the one location and the multiple locations
with 5 samples. Looking at just Washington with 5 samples, the time to
interact is (4.1) seconds.

(Disclosure: I work at yottaa.com, the provider of websitetest.com.) For those
people looking to verify that optimizations are working (automated or
hand-tuned), you should use websitetest.com to simplify the testing process.
It makes running tests (multiple locations, multiple browsers, multiple
connectivities) possible with one click, and the test results make it easy to
draw conclusions.

\--- All test data for the information in the comment is available through
these links

Tests by browsers in Washington DC ->
<http://www.websitetest.com/ui/tests/50e89598479876124100000e>

------
mscarborough
I can barely read this article but from what I can see so far, it's missing
the point. This author is not even doing the basics like combining CSS and JS,
much less minifying that, and the site cannot take a traffic spike.

This page requires 42 requests for me, many of which are to dh42.com ... it
may not be a big difference when you throw 25 requests at it, but if you're
serving any real traffic (as this HN traffic spike is clearly demonstrating),
you're just sabotaging yourself. On some sites I've consulted with, just
turning on KeepAlive and combining resources where possible is enough to get
them to the next level, without resorting to a CDN. The difference between 40
requests to your box per page and 5 is pretty significant under any real load.
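A rough sketch of why the request count matters so much. The 50ms RTT and six
parallel connections below are assumptions for illustration, not measurements,
and connection setup is reduced to a single extra round trip:

```python
# Illustrative arithmetic: with KeepAlive off, every request pays a fresh
# TCP handshake on top of its round trip. All numbers are assumptions.
import math

RTT = 0.05  # seconds, assumed round-trip time to the server

def page_network_time(requests, keepalive, parallel=6):
    per_request = RTT            # one round trip for request/response
    if not keepalive:
        per_request += RTT       # plus a round trip for the TCP handshake
    rounds = math.ceil(requests / parallel)  # browsers open ~6 connections
    return rounds * per_request

page_network_time(40, keepalive=False)  # many requests, handshake each time
page_network_time(5, keepalive=True)    # combined assets, reused connections
```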

------
Sami_Lehtinen
It seems that they took the whole blog site down. This was funny and very lame.

Forbidden

You don't have permission to access /cloudflare-showdown/ on this server.

Additionally, a 403 Forbidden error was encountered while trying to use an
ErrorDocument to handle the request.

------
bcl
In addition to this, when I tried out CloudFlare their caching feature never
worked for me. I also strongly disliked having to setup all my DNS in their
system. I've switched to S3+CloudFront and have been very happy.

------
jerf
What's the point of this test? Yes, if your webserver's purpose in life is to
serve the Wordpress default template 3 times total in its lifespan, neither
Cloudflare nor any other such service will be useful. In fact, if anything,
I'm a little unclear on what took .5 seconds at all.

I have neither connection to nor interest in Cloudflare, but I would
intellectually be interested in a useful comparison test. This looks less like
an interesting comparison test and more like somebody constructing a benchmark
to say what they wanted it to say in advance.

------
kristianp
Two annoying usability flaws about this website (for me).

1\. The article is over 6 pages, and each page is too small. 1-3 pages would
be preferable. There are no next/previous page links, which would provide a
larger area to click. There are no print or 'all pages' options. A keyboard
shortcut to go to the next page would be nice.

2\. The Olark chat box is positioned right over the main text, which makes it
very distracting. There is no option to close it. It would be preferable if I
didn't have to close it at all, and it were in an area of the page where I
could just ignore it. This is a problem with many websites these days, where
the user has to find and click a close button in order to use the website.

------
ciderpunx
I've not used CloudFlare myself. I am guessing that the results are much
closer with a dynamically generated php site with response headers set to
"Cache-Control: max-age=1" (as on dh42.com) than they would be with something
static and cacheable like, say, a flat html page. The main benefit will be
that images, js and such get cached, but the poster says that the sites are
vanilla (hence probably light on images and such) or already quite well
optimized (with yslow and such).

I've had good success with apc for locally caching php code; I bet that would
have made a much bigger difference than a CDN like cloudflare for the use
cases described.

------
m0th87
We ran some tests on using CloudFlare for our S3 assets. It didn't speed
things up at all.

On top of that, quite a few people have reported that CloudFlare would serve
CAPTCHAs to users / mark the site as down and serve a static version.
Anecdotally, I have seen this happen before browsing CloudFlare-enabled sites
once or twice.

One other issue we saw: the _incorrect_ image was once served. That was pretty
disconcerting.

Between all of these issues, we gave up and ended up using CloudFront. It's a
shame because CloudFlare is so much more feature-ful and pleasant to use. But
I don't think it's a mature product yet.

------
jakozaur
I would expect that serving huge static files is the best use case for CDNs
and that is where you could score a major performance gain.

Moreover, IMO trading 100 ms of latency for the ability to survive HN/reddit
traffic is a good trade-off.

------
minimaxir
If you use a Wordpress blog and the W3 Total Cache plugin (the one that's most
recommended and versatile), it has native CloudFlare support.

Using that, I've survived a HN+Reddit spike, no sweat.

~~~
wyck
I think that is somewhat the point of the OP. If you're the average joe and
just want to click a few buttons, WTC + cloudflare is popular. If on the other
hand you don't like bloated, problem-inducing options, you do it yourself. For
example:

Throw in Paul Irish's .htaccess boilerplate:
[https://github.com/h5bp/html5-boilerplate/blob/master/.htacc...](https://github.com/h5bp/html5-boilerplate/blob/master/.htaccess)

Enable APC <http://wordpress.org/extend/plugins/apc/>

Minify using a build script

Use one of the many CDN plugins for images.

If you're worried about load balancing and DDOS there are plenty of solutions
out there that are handled at the host level.

------
johnnymonster
Maybe your site could use cloudflare to not go down when it makes front page
of HN... o_0

------
redegg
CloudFlare speeds up your site in its current state.

~~~
oellegaard
Certainly, I cannot access the site at all.

------
milkman
Am I the only one who appreciates the work he did to bring us these findings?
I've tried Cloudflare before and "underwhelmed" would be as good a description
as any. I don't know if it helped at all.

If anything, this post confirms it.

~~~
andypants
The work? He didn't bring us any findings. He brought us ambiguous 'load
times' with barely any information except that the numbers under the
cloudflare section are sometimes slightly larger than the numbers under the
non-cloudflare section.

Actually, I just took another look at the post. The lack of detail is because
his 'tools' are just free web services like pingdom. There are no more details
about his method on the blog because nobody knows how these businesses make
their measurements, as they are closed source.

Exactly what is included in the load times? How many values are aggregated in
the load times? Are the times averaged or aggregated in some other way? Were
the websites consistent in terms of cache in each of the measurements? Etc.

Scientific method, please! Or at least, a bit more effort.

------
sabat
My site is wildly faster with CloudFlare. As with everything, YMMV.

~~~
lucb1e
For slow websites, caching helps a lot. Cloudflare does caching. My page
generation times are around 15ms, so for me there wasn't a measurable
difference when using Cloudflare.
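The trade-off above can be reduced to one comparison. This is a toy model with
made-up numbers, ignoring everything except generation time and the proxy hop:

```python
# Toy model: an edge cache wins only when the work it skips costs more
# than the hop it adds. All numbers are illustrative.

def cached_wins(generation_ms, extra_proxy_hop_ms):
    # Serving from the edge skips page generation but adds the proxy hop.
    return extra_proxy_hop_ms < generation_ms

cached_wins(generation_ms=500, extra_proxy_hop_ms=30)  # slow backend: caching helps
cached_wins(generation_ms=15, extra_proxy_hop_ms=30)   # 15ms pages: it doesn't
```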

