
Exec summary: this is not caching. It reduces latency for your visitors by removing connection setup over a large geographical distance. It also reduces the latency your web server has to deal with to effectively zero, though you can get that with Nginx. Finally, it reduces your bandwidth consumption, from your provider's perspective, to a fraction of what it was.

I think describing this as caching hides the real benefits. My startup pushes 150 Mbps on average, so I care about this stuff.

Rather than caching, this introduces a proxy server that is geographically close to your visitors and communicates with your origin using an efficient compression protocol. Much of the network path that would carry a full payload is replaced by a compressed, persistent connection that does not have to build up and tear down TCP with a three-way handshake. The most important benefit I see here for site visitors is the latency saved on connection setup: they spend a few ms establishing a connection with a server a few hundred miles away instead of one on the other side of the planet. That server then serves a cached page, or uses an established connection to fetch only the changes, which could mean as little as a single packet sent and a single packet received.
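The thread doesn't describe Railgun's actual wire format, but the "only get the changes" idea can be sketched with stock Python; difflib and zlib here are stand-ins for whatever binary diff CloudFlare really uses:

```python
import difflib
import zlib

def make_delta(old: str, new: str) -> bytes:
    # Unified diff between the cached copy and the fresh page,
    # compressed for the edge<->origin hop.
    diff = "".join(difflib.unified_diff(old.splitlines(keepends=True),
                                        new.splitlines(keepends=True),
                                        n=1))
    return zlib.compress(diff.encode())

# A mostly-static page where only one line changes between requests.
old_page = "".join(f"<li>item {i}</li>\n" for i in range(1000))
new_page = old_page.replace("<li>item 500</li>", "<li>item 500 (updated)</li>")

delta = make_delta(old_page, new_page)
full = zlib.compress(new_page.encode())

print(f"full page (compressed): {len(full)} bytes")
print(f"delta (compressed):     {len(delta)} bytes")
```

Because the edge already holds the previous version, only the delta has to cross the long-haul link; on a page like this it is a small fraction of even the compressed full payload.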

Another benefit of this is that your local web server will be talking to a local CloudFlare client, which means practically zero latency from your perspective for each request. Each of your app server instances spends less time waiting for its client to send or receive data and more time serving app requests. It's why people put Nginx in front of Apache.
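A toy model of that buffering effect (not real Nginx/Apache behavior, just the arithmetic): the scarce resource is app-worker time, and a slow client either does or doesn't count against it:

```python
# Toy model: an app worker's busy time per batch of requests.
# `compute` is real app work; `drain` is how long a slow client takes
# to receive the response over a high-latency link.

def busy_direct(requests, compute, drain):
    # Worker feeds each slow client itself, so the drain time is its problem.
    return requests * (compute + drain)

def busy_buffered(requests, compute):
    # A local buffering proxy absorbs the response at near-zero latency,
    # so the worker moves on after the compute step.
    return requests * compute

direct = busy_direct(10, compute=0.005, drain=0.2)
buffered = busy_buffered(10, compute=0.005)
print(direct, buffered)
```

With these made-up numbers the worker is tied up ~2 seconds serving ten slow clients directly, versus ~0.05 seconds when a local proxy handles the drain.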

I think the most important cost benefit here is reduced bandwidth consumption. We're constantly negotiating our colo deal based on 95th-percentile billing, and getting your throughput down from 1 Gbps to 50 Mbps (which I think this may do) will drastically reduce your real hosting costs. Of course CloudFlare needs to maintain their servers and will still be serving 1 Gbps to your customers, just from locations geographically closer to them. However, because data centers bill based on your throughput at the switch and not on how far your customers are from you, I don't see that there are any cost savings they (CloudFlare) can pass on to you. They're going to be billed what you were being billed for bandwidth, but they'll mark it up. I suppose you could argue there are economies of scale they benefit from, but that doesn't seem like a compelling argument for reduced costs.
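For readers unfamiliar with 95th-percentile billing: the provider typically samples your switch-port throughput every 5 minutes, sorts a month's samples, discards the top 5%, and bills the highest remaining sample. A toy illustration with hypothetical samples:

```python
# 95th-percentile ("burstable") billing: sort the month's 5-minute
# throughput samples, throw away the top 5%, bill the highest survivor.
def billable_rate(samples_mbps):
    ordered = sorted(samples_mbps)
    cutoff = int(len(ordered) * 0.95) - 1  # index of the 95th-percentile sample
    return ordered[cutoff]

# Hypothetical month: mostly ~50 Mbps with occasional 1 Gbps bursts
# that fit inside the discarded top 5% of samples.
samples = [50] * 950 + [1000] * 50
print(billable_rate(samples))  # -> 50
```

This is why shaving peak throughput, rather than total transfer, is what actually moves a colo bill.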




CloudFlare does not bill customers for bandwidth consumed. It's a flat fee: https://www.cloudflare.com/plans


Holy cupcakes. I might use this. So I can use your free package which includes the CDN to serve 100 Mbps of static JS and images?


Yes. And we'll also send you some cupcakes.


I'm curious whether I'm going to run into a catch. It seems unsustainable. Just to be clear, that's 30.8 terabytes of data I'll be transferring per month on your network for $0. Can someone verify this?
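For anyone checking the arithmetic, a sustained 100 Mbps over a 30-day month (decimal terabytes) lands in that ballpark:

```python
# Convert a sustained bit rate to monthly transfer.
rate_bps = 100e6                    # 100 megabits per second
seconds_per_month = 30 * 24 * 3600  # 2,592,000 s in a 30-day month
bytes_per_month = rate_bps / 8 * seconds_per_month
print(bytes_per_month / 1e12)       # ~32.4 TB
```

The exact figure depends on the month length and whether you count decimal or binary terabytes, but either way it's in the low 30s of TB.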


I'm not on the business side, but looking at our global traffic, 30TB per month is a tiny percentage of what we're doing.

I doubt it would go unnoticed, though, so I expect someone will be interested in persuading you to get a Business account with us, which is $200/month, because at that point we give you an SLA.


If they go down, you go down. But in my experience they have been quite reliable; unless you're already running in three sites on independent networks for HA, they're better than what you have.

I'd spring for the $200/mo service if you're pushing real traffic, just to get to try the Railgun. $200 is less than epsilon for a large site.

Also, the cupcakes are probably a lie, or at least are vegan.


What do you have to say to http://news.ycombinator.com/item?id=4188882 ?


I don't know who that is, or what site they had on CloudFlare and so I can't comment on it.


A client I built the infrastructure for pushes about 50Mbps. Not up to your level, but I can at least give some suggestions and input for Cloudflare. We basically switched all of our sites to use Cloudflare after running a few of the largest through them for a year plus.

During that time there was a total of one CloudFlare-related outage, and it was resolved within about 15 minutes by routing the site(s) through a different data center. I can tell you that one of the greatest benefits you will see with CloudFlare as it stands currently is that your bandwidth utilization is going to go down substantially. Before switching to CloudFlare we were pushing a good deal more than 50 Mbps. Essentially, if you were to switch, I have to imagine your side of the bandwidth utilization is going to drop to somewhere around 75-90 Mbps, if not lower.

That said, understand what you're getting into: this is a 'cloud' service, and they require you to point your DNS records at their service. All things considered, running a multi-million-dollar business through them has been much smoother than anticipated. We'll be looking at this new feature very carefully as well, because about half or more of our content cannot be cached.


It's still caching, because your servers are only serving the small portion of the data that changed between requests. But it's very nice that they're now taking on all of the customer requests, reducing your exposure to DDoSes dramatically.

The whole 'freeing you up to serve more requests' thing is not accurate: your app servers run as fast as they can, and your frontend proxies deal with handing the data to the client, so your app servers are (or should be) always doing as many requests as they can. If anything, the reduced latency and caching will let more connections than usual come in, putting more potential load on your app servers. Catch-22 =)

"This means that each of your app server instances spends less time waiting for it's client to send or receive data and more time serving app requests. It's why people put Nginx in front of Apache."

Sounds silly to me. Putting a proxy in front of a proxy doesn't change the TCP/IP stack. If you tune your network stack and Apache properly, they should be able to handle anything you throw at them. I don't remember what the setting was, but modern versions of Apache should be able to send a request to the app server only once the client has finished its request to the frontend.


I'm not sure why you don't think it's caching. They are doing more than just sending the whole page over a preferential connection.


Nothing new; this is all just repackaging. This is basic CDN proxy tech that all providers (including CF) have had for years. Giving it a cool name does not make it new or exciting.

Here are some helpful links - CF customer testimonials:

http://x-pose.org/2012/02/speed-up-your-site-disable-cloudfl...

http://www.husdal.com/2011/07/01/incapsula-versus-cloudflare...



