If a page is static, then CloudFlare can cache it. But if you set your cache headers appropriately and use efficient serving code like nginx, serving static content is already pretty darn cheap.
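To make the "set your cache headers appropriately" point concrete, here's a minimal sketch of the kind of header policy an origin might apply. The one-year `max-age` and the split between assets and HTML are conventional choices, not anything CloudFlare-specific, and the file extensions are just examples:

```python
# Hedged sketch: choose Cache-Control headers for static content.
# Far-future expiry on assets is only safe if asset URLs change
# whenever their content changes (e.g. content-hashed filenames).

def cache_headers(path: str) -> dict:
    """Return response headers for a request path."""
    if path.endswith((".js", ".css", ".png", ".jpg")):
        # Assets: cache for a year, never revalidate.
        return {"Cache-Control": "public, max-age=31536000, immutable"}
    # HTML: revalidate on every request so updates show up immediately.
    return {"Cache-Control": "no-cache"}

print(cache_headers("app.min.js"))
print(cache_headers("index.html"))
```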
If a page is dynamic, then how can CloudFlare really speed it up? You don't want them serving stale pages to users. So it has to hit your server every time, in which case the user might as well hit your server. In that case, I don't really see how CloudFlare improves things.
Am I misunderstanding how CloudFlare works? It seems like if you follow typical performance tips, then most of CloudFlare's benefit is eliminated.
I guess those tips do tell you to use a CDN. You can save end-user network latency for cached static pages, since they cache them in multiple geographic locations. But if you have a simple site with 1 .js and 1 .css file per page, and compress and minify everything, I wonder if it's worth it.
2. Static content is served locally from their CDN. Same thing: your JPEG served to a guy in Mombasa is coming from a few miles away, not half a world away.
3. If your clients are using old browsers without keepalive, CloudFlare will still keep connections alive from their local endpoint to your servers - making the new-connection cost only occur on the first couple of hops.
4. For dynamic content you can use a special proxy they created which keeps a synchronized cache with the far end so it can ship diffs. If you generate a page that's mostly similar to another page, it can just send "Same as cache object 124567, except line 147 says 'Welcome chubot' instead of 'Welcome orionhenry'." A significant percentage of dynamic responses can traverse the world as a single TCP packet.
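The diff-shipping idea above (CloudFlare sells it as Railgun) can be sketched with Python's `difflib`. This is purely to illustrate reconstruction from a cached copy plus a delta; Railgun's actual wire format is proprietary and far more compact than `ndiff` output, and the page contents here are made up:

```python
import difflib

# Two users' pages differ from each other by a single line.
cached = ["<h1>Welcome orionhenry</h1>\n",
          "<p>Shared page boilerplate, identical for every user.</p>\n"]
fresh = ["<h1>Welcome chubot</h1>\n",
         "<p>Shared page boilerplate, identical for every user.</p>\n"]

# "Origin" side: compute a delta against the shared cache object.
delta = list(difflib.ndiff(cached, fresh))

# "Edge" side: rebuild the fresh page from the cached copy plus the delta.
# restore(delta, 2) recovers the second sequence (the fresh page).
rebuilt = list(difflib.restore(delta, 2))
assert rebuilt == fresh
```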
5. Their devs are really ruthless about keeping the crypto certs as small as possible, with the goal of all handshakes taking a single packet per step.
With the static content it's not the cost of serving it, it's the fact that Cloudflare is serving it from a large bunch of distributed servers that are likely to offer far lower latency to the end-user than your servers. With modern web pages often containing hundreds of objects, this can make a big difference to page load times.
If all your customers are in one geography this is less of an issue, but if you have a global audience this can make a huge difference.
So I guess the selling point of CloudFlare is that it's like a normal CDN, plus it offers security services like DDOS protection?
With a normal CDN, you don't change your DNS to point at their servers, right? DNS points to your server, but you change your code so that <img src="" > and so forth point at their servers. To me that just seems a lot less invasive, but admittedly then you can't get the security features.
Also, a CDN which routes all of your traffic via DNS can also take advantage of a private network to get the packets to your users faster, i.e. Cloudflare could potentially own a faster link between Virginia and Berlin than the public internet would take, and lower response time that way. I think that's what point two on the benefits list here is about: https://blog.cloudflare.com/cloudflare-is-now-a-google-cloud...
My rough and uninformed impression is that for small-time users, where "small-time" includes the scale of Reddit, CloudFlare can do what Akamai does at a much more reasonable price.
But yes, that's a different sense of "CDN" from e.g. Google-hosted jQuery.
Not really. Akamai has presence inside many "too big to peer" networks, while CloudFlare doesn't (for now).
You can't just compare CloudFlare to Akamai at the moment, they are following very different strategies.
Is there something that Akamai can do that CloudFlare can't?
Both services seem to be highly available and very fast from around the world. What does Akamai do here that's better?
To provide an example: transit to DTAG via GTT (CloudFlare) is always saturated at peak hours. So Akamai has a big advantage (direct connection) if you want to reach Deutsche Telekom customers.
Do you know of any docs describing the algorithm Akamai or CloudFlare use to decide whether to hit your origin server? I'm a stickler for correctness but it seems to be pretty easy to get into the situation where your users aren't getting what you intended. HTTP cache headers are a mess.
To me it seems safer to code your application so the HTML points to JS/CSS/PNG with content hashes in the URL. Then you don't have any cache expiry issues -- nothing ever expires, but you control the assets exactly through your dynamic HTML.
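A minimal sketch of the content-hashed URL scheme described above: the hash of the file's bytes goes into the URL, so the asset can be cached forever, and redeploying with changed content changes the URL the HTML points at. The `/static/` prefix, filenames, and 12-character truncation are made-up conventions for illustration:

```python
import hashlib

def hashed_url(name: str, content: bytes) -> str:
    """Build an immutable asset URL containing a hash of its content."""
    digest = hashlib.sha256(content).hexdigest()[:12]
    stem, _, ext = name.rpartition(".")
    # e.g. app.css -> /static/app.1f2e3d4c5b6a.css
    return f"/static/{stem}.{digest}.{ext}"

print(hashed_url("app.css", b"body { color: black; }"))
```

Because the URL changes whenever the bytes change, these assets can safely carry a far-future `Cache-Control` header, and stale-cache bugs become impossible by construction.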
I think it's important to have fast and reliable rollbacks. You can imagine some situation where private content or offensive text is accidentally included with some static asset... I would like some guarantee about when users stop seeing it (preferably as soon as the application is redeployed).
And most likely they default to not caching if they are unable to determine whether a response is cacheable.
For dynamic pages you can tell them which ones can be safely cached with Page Rules (on paid plans).
Since then I have been hesitant to use it again.
People think that "loading gears" animation on Google's blog is annoying; imagine if it showed up every time, for 10 seconds, on a company's homepage.
It is the only reason I've considered leaving Namecheap so many times (but I haven't, because apart from this they are better at everything else).
If the website is serving content (i.e. articles, images, movies, you know, the normal use-case) then most people visiting a page will be first-time visitors to that page. The cache headers you mention are only good for returning visitors, and even so, the local cache is not reliable on mobile phones, where the cache is purged regularly to make room. Consider that there are mobile web developers who have decided not to use jQuery for this reason, even though jQuery is probably the most cached piece of JS in the world.
Also, serving content from a properly configured nginx doesn't help with network latency. Say your server is in the US and your visitors are in Japan or China; then the added network latency can be measured in seconds. The problem gets even worse for HTTPS connections because of the handshake. And consider that Google found an extra 0.5 seconds of latency in delivering search results cost them a 20% drop in traffic, or that for Amazon, 100ms of added latency cost them 1% in sales.
> If a page is dynamic, then how can CloudFlare really speed it up?
Even if the page contains dynamic content, you always have static content that you want to serve from a CDN.
You also forgot probably the biggest benefit for us - bandwidth ends up being freaking expensive and if you get a lot of traffic, then a CDN can save you a lot of money.
There was a similar exercise done with hosted versions of jquery, but I can't remember who did it or find a link, I'm afraid.
additionally it's geolocated, so we get that for free, which is nice.
The problem I ran into was that setting up a CNAME for an S3 bucket requires the bucket name to have periods in it, but https:// access no longer works for buckets named that way. So I ended up having to use CloudFront instead for my images.
We have a document store backed by S3 running on EC2 instances. The instances are behind one of Amazon's load balancers, and that's what the CloudFlare CNAME points at.
Currently the beta environment runs Flexible SSL and production runs Strict.
Nothing to write home about :)
That being said, I've seen CloudFlare cutting down DNS lookup from 800ms to 60ms for a tiny website.
Another thing is that it depends on whether you're really concerned with visitors far from your server. I had some WordPress websites hosted in LA, and with some really basic optimization, page speed was almost as good as Google's home page.
Don't drink the paint, I guess :) It may not be worth it, it may be great. Test it. Of course, CF has other benefits too, it's not just about the page speed.
Don't get me wrong. I'm not claiming anything here. It's just a quick rant and a screenshot. Don't take it too seriously.
Other than that, it is becoming somewhat concerning just how much traffic goes through CloudFlare. Nothing against you CF guys. Just good ol' paranoia :)
For most places CloudFlare does a great, well, amazing job and keeps page load times under 1s, often under 500ms. But again, it really depends where your visitors are. Check the History tab here http://tools.pingdom.com/fpt/#!/blmbP5/http://cloudflare.com
As for dynamic page caching, CloudFlare offers a service called Railgun that only sends the diffs of a page when it's been changed, rather than the full page, and then re-hydrates it at the edge of their network before handing it off to end-users. Theoretically this would reduce network time by sending less traffic inside the network. I've never personally used it so I can't vouch for it, but it sounds neat.
The real question is: why would you leave aside security?