The primary feature advertised on fastly's website is a feature every real CDN (as in, "not CloudFront") offers: an API to immediately purge your content.
Every CDN offers a mechanism to purge content, but they are not immediate. Edgecast takes up to 15 minutes, I've seen CDNetworks take 20, and CloudFront can take as much as 30. When we say immediate, we mean really immediate: generally speaking, it takes about 150 milliseconds.
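To make the timing concrete: a purge of this sort is just an HTTP request against the cached URL, so it's easy to time yourself. Here's a rough Python sketch; the hostname and path are placeholders, and what it measures is the API round trip, not edge-wide propagation.

    # Time a purge-by-URL round trip. As far as I know, Fastly (being
    # Varnish-based) accepts a PURGE verb at the cached URL itself;
    # www.example.com is a stand-in, not a real service behind their network.
    import http.client
    import time

    start = time.perf_counter()
    conn = http.client.HTTPSConnection("www.example.com")
    conn.request("PURGE", "/assets/logo.png")
    resp = conn.getresponse()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(resp.status, resp.reason, f"{elapsed_ms:.0f} ms")
    conn.close()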
Meanwhile, their bandwidth pricing is insane (albeit similar to CloudFront): their $/GB is a few times what I'm paying for a "real CDN", and is about what you will get if you call Akamai and then don't negotiate.
Obviously, we will negotiate as well when we're talking about significant amounts of traffic. And good luck getting Akamai to call you back if you don't have significant amounts of traffic.
The real question is: how many points of presence do they have? CDNetworks has over a hundred, and Akamai has over a thousand. Are we talking "even smaller than CloudFlare" here? (Apparently, the answer is "yes: even smaller, they only have 7".)
Yep. That's true. We're a rather young company and are actively expanding. However, what is most notable about this is that despite having far fewer POPs, we're still significantly faster than most other CDNs, especially in major population centers. We've put a ton of work into reducing latency inside our servers so as to make better use of the POPs that we currently have.
> And good luck getting Akamai to call you back if you don't have significant amounts of traffic.
Look, I realize this notion sounds right and fits with the common dogma about Akamai, but it is actually a lie: I have personally had long conversations with Akamai negotiating deals where I would have had no minimum commitment (although there were other totally reasonable non-monetary concessions involved), and their prices still beat the ones on your website (albeit only by a sliver).
(By the way, I am going to explicitly point out that if you had stopped after your first paragraph about how your API is different, I would now have just apologized and been interested to learn more about why people needed that, but this obvious and totally incorrect FUD about peoples' abilities to negotiate workable deals with Akamai is really bothering me. I wasn't actually "anti-fastly" before: I just found it expensive and confusing... but now?)
I'm sorry that my statement is bothering you. I'm going based on numerous conversations with people considering using Fastly. It's quite possible that I have a skewed sample; however, I'm not intentionally spreading FUD, for what that's worth.
We like to think that the exact number of requests is less important than exactly how they're handled. While it would be cool to go "we serve a billion requests a second", we're still an early stage startup. We're spending more time making our responses even faster (< 1ms on the 99th percentile) and trying to provide things that no one else does (for instance, instant purging and surrogate key purging).
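Surrogate key purging, for anyone who hasn't run into it: you tag responses at the origin (e.g. with a Surrogate-Key header), and then one API call later every cached object carrying that tag is invalidated. A rough sketch of what the call can look like; the service ID, token, and exact endpoint shape here are illustrative assumptions, not copied from docs.

    # Purge everything tagged "homepage-fragments" via a single API call.
    # SERVICE_ID and API_TOKEN are placeholders; the endpoint shape is my
    # recollection of the key-based purge API, so double-check it before use.
    import urllib.request

    SERVICE_ID = "your-service-id"
    API_TOKEN = "your-api-token"

    req = urllib.request.Request(
        f"https://api.fastly.com/service/{SERVICE_ID}/purge/homepage-fragments",
        method="POST",
        headers={"Fastly-Key": API_TOKEN},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())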
I must say - that kind of perf is superb, though top percentile and averages aren't super representative of the average customer's experience. What are your tp50 and tp90 like?
Your sub-150ms invalidation is equally if not more impressive, especially if you're talking about multi-region invalidation.
TTFB at the 50th hovers around 175 microseconds, 75th is at 250 microseconds, 95th around 450 microseconds.
As for the purging stuff, I do mean cross-region. So, it depends upon which node receives your purge request. 150ms is average, but really it's "network latency plus a millisecond or so".
Wait, so 95% of your customers experience a TTFB of less than 500 microseconds (tp95 of 450 microseconds)? I just want to make sure I'm understanding you correctly. Because that's awesome.
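For anyone fuzzy on the tp-numbers: they're just order statistics over per-request samples, so a tp95 of 450 microseconds means 95% of requests saw a TTFB at or below that. A throwaway sketch with made-up samples (not their data):

    # Compute tp50/tp75/tp90/tp95/tp99 over fake TTFB samples (microseconds).
    # The lognormal parameters are chosen only so the median lands near 175us.
    import random

    samples_us = [random.lognormvariate(5.2, 0.3) for _ in range(100_000)]

    def percentile(data, p):
        data = sorted(data)
        return data[round((p / 100) * (len(data) - 1))]

    for p in (50, 75, 90, 95, 99):
        print(f"tp{p}: {percentile(samples_us, p):.0f} us")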
Actually, tests from us and several of our customers have shown us significantly faster than Edgecast throughout the US and Europe. (I work at Fastly.)
I have to admit, I found the pricing structure on Fastly very confusing.
Are you someone affiliated with Fastly?
I'd be quite interested in it as a service if it could help me handle high volumes of traffic on a server that's running as a shared hosting account, negating the need to move to a dedicated server.
"with no cache" is a bit of a misleading statement, considering the entirety of their data set is stored in RAM. Turns out you don't really need memcached if you don't read anything from disk.
Consider their scenario: 100,000 DB operations and 50,000 updates per second. In this case, a simple cache costs more than going without one; eviction is expensive.
Also, it's no surprise replication doesn't scale, because updates get propagated.
I'm not sure of the details, but they succeeded in relaxing the tight requirements of ACID transactions. So this is a good case where an RDBMS (or traditional database) fails.
I guess their design is more like an MMO's; I hope to hear from the guy.
No. Memcached on a reasonable server will do millions of requests per second. 50,000 updates per second is nothing for any modern cache.
Also, replication has nothing to do with whether or not "without a cache" is a meaningful statement. The point is that by holding their entire data set in RAM, they've nullified the need for a cache. Effectively, their database is their cache.
And considering the data isn't even written to disk for about 15 minutes, it's really more cache than database anyway.
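To make the "database is the cache" shape concrete, here's a minimal sketch under my own assumptions: a dict serving every read and write from RAM, with a background thread snapshotting to disk on an interval. It's not their implementation, just the idea of keeping disk entirely out of the request path.

    # All reads/writes hit RAM; a daemon thread writes a JSON snapshot every
    # `flush_interval` seconds. Crash between snapshots and you lose that
    # window of writes, which is exactly the trade-off mentioned above.
    import json
    import threading
    import time

    class RamStore:
        def __init__(self, path, flush_interval=15 * 60):
            self.path = path
            self.data = {}
            self.lock = threading.Lock()
            threading.Thread(target=self._flusher, args=(flush_interval,), daemon=True).start()

        def get(self, key):
            with self.lock:
                return self.data.get(key)

        def put(self, key, value):
            with self.lock:
                self.data[key] = value

        def _flusher(self, interval):
            while True:
                time.sleep(interval)
                with self.lock:
                    snapshot = dict(self.data)
                with open(self.path, "w") as f:
                    json.dump(snapshot, f)

    store = RamStore("/tmp/snapshot.json", flush_interval=5)
    store.put("player:42", {"gold": 1000})
    print(store.get("player:42"))
    time.sleep(6)  # let one snapshot happen before the script exits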
"with no cache" means no cache in app server layer. I guessed that's how the guy uses the words "no cache".
Getting rid of cache out of app server layer has great benefit on a cloud.
Having cache in app server layer needs synchronizations to keep data consistent. Scale out design was made before cloud era, when we have our own dedicated system on our site. LAN can afford expensive sync communications, but on a cloud?
Still makes sense if a scenario is read intensive. But when update intensive?
That's why I see this interesting. Increasing memory is a cheap option on a cloud. So it's great if it scales by letting database utilize more memory.
You are right: "no cache" means no cache in the app layer, so no eviction logic and no inconsistencies between an app-layer cache and the database.
And yes, the best thing a cloud can offer is lots and lots of memory. That's where it's good. I/O (disk & network) is where it's weak.
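A back-of-envelope version of the update-heavy argument, using the 100,000 reads / 50,000 writes per second figures from upthread (the node count is made up):

    # With a per-app-server cache, every write has to invalidate (or refresh)
    # that entry on every node, so coherence traffic scales with node count.
    # With no app-layer cache, reads and writes simply hit the in-memory DB.
    NODES = 20
    READS_PER_SEC = 100_000
    WRITES_PER_SEC = 50_000

    invalidation_msgs = WRITES_PER_SEC * (NODES - 1)
    db_ops = READS_PER_SEC + WRITES_PER_SEC

    print(f"cache-coherence traffic: {invalidation_msgs:,} msgs/sec")
    print(f"plain in-memory DB traffic: {db_ops:,} ops/sec")

On a dedicated LAN that coherence chatter may be tolerable; on cloud networking it's the part that stops scaling, which is the point being made above.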
I had a little project a while ago that compiled Mustache templates into C. It was never completely done, but it mostly works: https://github.com/tyler/speed_stache
I don't mean to be pedantic, but the word "transparent", in this context, means "easily perceived or detected". I believe you're using it to mean the opposite.
This has actually tripped me up a lot. In many figurative contexts, like "transparent government" for instance, transparent means the mechanisms are apparent or not hidden. However, when talking about computer processes or interfaces, it always means quite the opposite, "invisible to the user". So tiles used it correctly when he commented on how SPDY usage was transparent because he (the user) was unaware of it.
> The transparency with which Chrome did this
> was actually a problem for me
"with which Chrome did this" -- to me -- inferred that he was referring to the development group when he said "Chrome" (and also was referring to the process with which they executed said upgrade). This would imply transparency in the 'apparent or not hidden' sense being applied to the Chrome development group and/or the process that they were using.
Just sayin'. The choice of words was a little ambiguous in that it could be taken either way, and that wildly changes the meaning of the word 'transparent.'
> the word "transparent", in this context, means "easily perceived or detected"
This definition doesn't make much sense to me. Transparent objects are less easily perceived/detected than opaque objects. In technology, transparent proxies/compression/encryption are designed to have little/no impact on their contained data.
I think this usage assumes there is an object between you and them. If the object is transparent, you can see what they are doing. If it is not, and it is a cloak of some sort, you cannot see what they are doing.
My team often speaks of operating transparently, and by this we mean that management can easily see what we're doing and why. So think about the transparency of actions rather than of some object.
It has a threadsafe mode. It is disabled by default and to my knowledge hardly anyone uses it because you would need to audit all your dependencies to make sure they were threadsafe as well.
So, what the "thread-safe mode" does is enable threads in Rails. (i.e. one thread per request.) To my knowledge, it does not switch out thread-unsafe code for thread-safe code.
Moreover, it's irrelevant. As I mentioned, this is unrelated to why Rails apps are typically run as multiple processes.
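A generic illustration, deliberately not Rails-specific, of why flipping on thread-per-request isn't enough by itself: any dependency that stashes per-request state in a global will race once requests share threads, and that's what the dependency audit is meant to catch. Toy Python sketch:

    # A "library" that keeps per-request state in a module-level global. Run
    # requests on threads and responses start leaking between users.
    import threading

    current_user = None  # stands in for hidden global state inside a dependency

    def handle_request(user):
        global current_user
        current_user = user            # set "my" request's state
        threading.Event().wait(0.001)  # simulate work; other threads run here
        return current_user            # may now belong to a different request

    results = []

    def worker(name):
        results.append((name, handle_request(name)))

    threads = [threading.Thread(target=worker, args=(f"user{i}",)) for i in range(50)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("cross-request leaks:", sum(1 for name, got in results if name != got))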
As with Twitter, I don't think Rails, or Ruby, is the issue. Whatever you do with a framework or a language is mostly your fault. Ruby on Rails can only be as good as the people using it, and while I realise that Rails' aggressive marketing gives people the impression that RoR is a bullet-proof solution, it isn't, and it shouldn't be blamed if a developer doesn't understand the tools he's using.