We're looking to move one of our sites off Akamai, mostly for cost reasons. Fastly configuration is written in Varnish VCL, and senior engineers at my company rate Varnish highly as cache software, so Fastly was an easy choice.
I believe Akamai has been our CDN from the start. The amount of reconfiguration we'd need to do to move everything off would create a huge number of tickets in our work queue. The primary advantage of Akamai has been the number of datacenters they have to serve our traffic: a customer can be in remote Australia and have their packets cached in a datacenter at the nearby telephone exchange. That's reach that AWS and Fastly can't compete with.
Their WAF and bot detection products are also very good. They are definitely an enterprise/full-service CDN provider. I wouldn't call them 'legacy' by any means, but the type of service they provide is very different from a new player like Fastly.
Also, if you're a big enough deal (thousands per month and up), you'd be surprised how much of that config rewrite the Fastly sales engineers can help you with.
Small changes only take a few minutes to roll out on Akamai's network. Another commenter talked about Fastly's POP count. Akamai has far more as can be seen here: https://www.akamai.com/uk/en/solutions/intelligent-platform/...
No doubt having so many POPs (some probably legacy and slower to update) extends the update time.
I'm not sure why it wasn't given more attention. Their version of lambda/serverless looks interesting
Still, ~$50m in marketing/advertising spend to earn 147 new paying customers (340k/customer) seems high to my untrained eye. Do those enterprise deals and that "132% Dollar Expansion Rate" justify such high CPA?
Those with experience with this kind of enterprise-focused company - is this normal?
Without knowing the ROI it's really impossible to know if this is high or not.
Spending $1 to make $2 is always the right decision.
Unless you have an opportunity to spend $1 and make $3. Your supply of dollar bills is still limited. It's hard to spend huge amounts of money. You've always got to consider where the greatest leverage exists.
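As a back-of-the-envelope sketch of that CPA question: the ~$50m spend and 147 net-new paying customers come from the thread above, but the starting contract value here is a pure assumption (the thread doesn't give one), so treat the payback period as illustrative only.

```python
# Rough CPA / payback sketch. The ~$50m spend and 147 net-new customers
# come from the discussion above; the $200k starting contract value is a
# made-up assumption for illustration.
spend = 50_000_000
new_customers = 147
cpa = spend / new_customers          # roughly $340k per customer

ndr = 1.32                           # the quoted 132% dollar expansion rate
acv = 200_000                        # hypothetical first-year contract value

# Count years until cumulative revenue per customer covers the CPA,
# assuming the 132% expansion compounds annually.
cumulative, years = 0.0, 0
while cumulative < cpa:
    cumulative += acv * ndr ** years
    years += 1

print(f"CPA: ${cpa:,.0f}, paid back after {years} years")
```

Under those assumptions the acquisition cost is recovered in a couple of years, which is why the expansion rate matters so much more than the headline CPA.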
The market has been commoditized by CloudFlare. CloudFlare is charging $200/month for the business plan, or $2000/month for the enterprise plan with everything.
There may be banks or governments paying 10 times more for custom plans (extensive support and long sales cycles). Those would never adopt Fastly.
Akamai can get away with charging millions of dollars to some historic customers who really don't need the service. If they ever migrate away, that's explicitly to take a zero off the bill.
> We had 1,582 customers and 227 enterprise customers as of December 31, 2018. This is an increase of 143, or 10%, in customers and 57, or 33%, in enterprise customers from December 31, 2017.
The same figure appears again in the table on page 76.
It drops to 132% by 31st Dec 2018. Still quite a respectable figure.
Cloudflare is fine, but they are still an order of magnitude slower, which is why we switched. I just tested: Cloudflare's own DNS server takes 48ms to resolve our Cloudflare-hosted DNS, but Fastly takes only 14ms to establish a connection and send the first byte.
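For anyone who wants to reproduce that kind of comparison, here's a minimal sketch that times DNS resolution and TCP connect separately using only the standard library. It's a simplification (a real TTFB measurement would also include the TLS handshake and the request itself), and "localhost" is just an offline-safe demo target -- you'd substitute your own CDN hostnames.

```python
# Sketch of the measurement described above: time DNS resolution and TCP
# connect separately. "localhost" is an offline-safe demo target; real use
# would point at your own Cloudflare/Fastly hostnames.
import socket
import time

def dns_ms(host, port=443):
    """Milliseconds to resolve host via the system resolver."""
    t0 = time.perf_counter()
    socket.getaddrinfo(host, port)
    return (time.perf_counter() - t0) * 1000

def connect_ms(host, port=443):
    """Milliseconds to open a TCP connection (no TLS, no first byte)."""
    addr = socket.getaddrinfo(host, port)[0][4][:2]
    t0 = time.perf_counter()
    with socket.create_connection(addr, timeout=5):
        pass
    return (time.perf_counter() - t0) * 1000

print(f"DNS localhost: {dns_ms('localhost'):.2f}ms")
# e.g. connect_ms("www.example.com")  # run against your own hosts
```

Run each measurement several times and look at the distribution, not a single sample: resolver caches and connection reuse will otherwise skew the result.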
That speed does make a difference. When we switched from Cloudflare to Fastly we had about a 7% increase in completed sales.
I think their ROI is still too low for most small businesses; the only reason it actually makes sense for us is that it's free for open source/nonprofits. But I can imagine it's a big deal for larger ones.
1 Google Cloud CDN 36.70 ms
2 jsDelivr CDN 36.80 ms
3 Akamai CDN 38.00 ms
4 Verizon (Edgecast) CDN 38.30 ms
5 Azure CDN 38.50 ms
6 Fastly CDN 41.38 ms
7 Cloudflare CDN 41.80 ms
8 AWS CloudFront CDN 43.00 ms
9 CacheFly 43.90 ms
10 BunnyCDN 46.00 ms
11 StackPath CDN 46.53 ms
12 KeyCDN 47.00 ms
13 CDN.NET 48.46 ms
14 G-Core Labs CDN 48.74 ms
15 Quantil CDN 50.20 ms
7% increase in sales? With what sample size? With what confidence? How did you isolate variables?
For reference, I work at a company with $110M in annual sales. We were planning to start using Fastly. For obvious reasons, we wanted to know how much of a net positive that would be for the business. So we wanted to A/B test it. For us, at least, it's not as straightforward as it would seem.
7% increase represents an increase in the number of students/parents purchasing tickets to our events. (It's not like they're spending more individually, but more people have been willing to give us money.)
That's based on several years with no substantial change in the way we market or the design of our site.
We got that part, but correlation doesn't always imply causation. Maybe that change was seasonal or just plain old growth not accounted for in your 7%?
About a 1-2% difference per 100 ms seems well supported (e.g. ), at least if your page load time is already low enough. 7% is very high, but some effect is expected.
So if we do $10m annually, a 1ms decrease in load time should boost sales by $2,000!
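Spelling out that (deliberately naive) linear extrapolation -- take "2% of revenue per 100ms", the high end of the commonly cited range, and scale it straight down to a single millisecond:

```python
# The naive linear extrapolation being mocked above: "2% of revenue per
# 100ms", scaled straight down to one millisecond.
annual_revenue = 10_000_000
lift_per_100ms = 0.02              # 2% per 100ms, high end of the cited range

lift_per_ms = lift_per_100ms / 100
print(annual_revenue * lift_per_ms)   # 2000.0 "per millisecond saved"
```

The arithmetic checks out; it's the assumption of linearity all the way down to 1ms that doesn't.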
We would all probably agree that 1ms will make no statistical difference.
The problem with these studies is that most are dealing with much longer load times. Like 3 seconds vs 19 seconds! Obviously, that will make a HUGE difference. You can't then extrapolate that down to the millisecond.
The other problem is that many of these studies are basing their numbers on average load times. So they are comparing two groups and averaging load time. Group A averages 100ms faster than group B. And group A increased sales by 2%.
But what really happened is group A had 800ms load times across the board, and group B had 800ms load times for 98% of its page loads and 20,000ms on the remaining 2%.
So working with averages can be hugely misleading.
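Running the parent's concrete numbers makes the point vivid: the tail makes group B nearly 400ms slower on average, even though 98% of its users saw load times identical to group A's.

```python
# Why averages mislead: group B matches group A for 98% of loads but has a
# 2% tail of 20-second loads, and the mean hides where the pain actually is.
import statistics

group_a = [800.0] * 100                      # 800ms across the board
group_b = [800.0] * 98 + [20_000.0] * 2      # same, plus a 2% disaster tail

print(statistics.mean(group_a))    # 800.0
print(statistics.mean(group_b))    # 1184.0 -- the "slower" group on average
print(statistics.median(group_b))  # 800.0  -- yet the typical user saw no difference
```

This is why percentile metrics (p95, p99) are usually a better lens than means for latency data.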
I can't see the details of the one study that claims 100ms increments, but I'm very skeptical.
I agree with your larger point that average latency is not a good measure. Even for the perception of a single user consistency is more important than a good average. For large groups the average is even less useful.
But intuitively I see lots of places where 100ms makes a world of difference. Just like there's a big, very perceptible difference between a 200ms animation and a 300ms animation, a time to full render of 200ms can change your experience compared to 300ms. The slower the page load, the more deliberate your movement. The closer you get to 16ms (1 frame) for a page load the smaller the investment for clicking a link, and the higher the willingness to experiment and explore. Some of that inevitably leads to conversions and sales.
That said, it’s not 10x or whatever. Maybe like 5-10% depending on location.
Enterprise customers are generally able to get their SE to set whatever cache key they like, and if that is insufficient they can use the worker cache API: https://developers.cloudflare.com/workers/reference/cache-ap...
As I understand it a charity may get the former thing for free through https://www.cloudflare.com/galileo/ , but the latter probably still has usage-based billing.
Disclosure: I work for Cloudflare.
Deployment times for config changes have been in the sub-5-minute range for at least two years. It used to take hours (never days) to propagate changes across the server estate, but not anymore. And while we were admittedly late to the DevOps train, we have made up lost ground. We have nearly 100 individual APIs to control almost any aspect of our products. CLIs if you don't want to write to the API. A sandbox to test config changes locally. The ability to validate OAuth tokens at the edge, cache GraphQL responses, and throttle and/or quota API traffic on a global basis. Hashtag "legacy CDN."
Fastly is sub 5 seconds.
> And while we were admittedly late to the DevOps train, we have made up lost ground.
Not anywhere close. Your cache invalidation takes forever. The ability to tag objects doesn't exist. Engaging "professional services" to make a config change like it's 2002?
Akamai also has a fast purge these days, sub 5 seconds as well I believe. Works nicely.
Fastpurge is a hack.
You would think purging a video stream would be the same as purging a standard site delivery object; after all, the stream is just an HTTP(S)-accessible .m3u8 and a pile of .ts chunks. But that's not the case -- in some cases it can take up to one hundred and twenty minutes.
There has definitely been a push to catch up, but I wouldn't say you're there yet. The Terraform provider for instance is not up to par with the Fastly one. Having attended an Akamai DevOps workshop not so long ago, there didn't seem to be an easy way to use the CLI tools to configure a property in an idempotent way in a CI/CD pipeline. Maybe things are different now.
Akamai does have some advantages for enterprise customers, though, such as easy assignment of costs using CP codes. That's very handy. The integration with Let's Encrypt is also very nice.
Fastly delivers in its simplicity and documentation. Whatever use case you need, you'll most likely find it in their docs or on their blog. Being able to use VCL to configure your caching is, in my opinion, far easier (and less limited) than the Akamai rule tree, even if you don't have previous Varnish experience. Varnish's finite state machine makes it easy to configure and debug any kind of behavior you want. The Akamai rule tree has caused me quite a few 'WTF' moments: it's difficult to debug when something doesn't behave as expected, and there's a certain amount of 'black magic' in the behaviors that makes it hard to judge what the outcome will be.
The good thing is that the two companies have different feature sets, and each gets pushed to incorporate features its customers found useful at the other provider.
How did you isolate the variables?
But it makes sense, too: if the metric here is average latency, that doesn't mean that some users didn't see a much more dramatic increase. Every tiny bit of frustration removed from the experience adds up.
You've measured and know that a user seeing a 905ms load time converts more than one seeing 915ms?
Also, latency measured at the server is amplified by the time it's received by the browser. And when the user's connection isn't great, all of this is worsened. It quickly adds up. In fact, it doesn't "add" up, it "multiplies" up.
This is easily possible if you have a highly distributed customer base, and/or some small segment of your customers don't have good upstream peering with your provider.
Which is why I asked the question in the way that I did. I buy that a slimmed down webpage loading 10 ms faster on average will increase conversions because that makes the site usable for the visitors on bad connections. Moving to a CDN doesn't have that impact. It shaves off 10-100ms across the board.
I think this is where we disagree. I've seen (firsthand and through analytics) situations where using a CDN can dramatically improve response time in a small subset of customers (while also getting the across the board win for most customers).
I've also seen CDNs (Amazon's in the early days) that were significantly slower than going direct to Linode, even with a warm cache. It's a weird world, and packet routing is hard.
Interesting. Is that for the flow that starts at https://www.srnd.org/sponsor => https://www.srnd.org/sponsor/pay or somewhere else?
(Those pages do load very fast.)
"Moreover, our platform is highly technical and complex and relies on the Varnish Configuration Language (VCL). Potential developers may be unfamiliar or opposed to working with VCL and therefore decide to not adopt our platform, which may harm our business."
"We will continue to work on open source projects, which will empower developers to build applications in multiple languages, and run them faster and more securely at our edge"
VCL can be challenging for complex flow control (IMO), but it is made easier with Fastly enhanced/custom VCL modules.
Their documentation was good, and the Fastly support team was excellent. Their sales engineers gave us a baseline configuration that suited our needs and were quick to answer any followup questions.
Haven't noticed any downtime or response delays to date.
Wow, I did not think an "enterprise-y" company like Fastly could be burning that much cash on growth!
The wildcard seems to be the companies that own cell towers. If they build a credible edge offering, that's a moat that is hard to beat.
Accessing the tower is expensive in time, and equipment that runs at the tower is exposed to a wider variety of temperatures and RF stress than in a nice warehouse somewhere in the metro area.
It's possible the right caching at towers could reduce the backhaul bandwidth requirements, but seems iffy.
Not saying no here, just pointing out a very large concern.
My concern is cert/key management where the edge is somewhere you have very little control over, like a cell tower, a random building network, or a user's house. Even with keyless, once that device is in my home, I'm pretty sure the entire thing can be reverse engineered. Not easy, like probes and oscilloscopes on exposed leads hard, but physical access is pretty much game over, no?
I've worked in this space and the solution is detection and mitigation. Limit the damage to single devices, workflow the user in, look for human attack patterns. Defense is futile.
How much latency do you think is incurred between the ISP and cell tower?
IMHO, it's mostly a solution in search of a problem. The internet backbone is fast enough to not be noticeable for end users. A centrally located server in the U.S. will have a maximum ping of 40 ms to anywhere in the U.S. That's faster than is perceptible to the end user. The only mainstream usage I can see is cloud gaming, where ping is that critical.
Establishing a connection can require several round trips, and the latency adds up. "Early termination" with a point of presence near your end user makes a pretty massive difference, especially if your user isn't on the same continent. Every additional 100ms costs Amazon 1% of their revenue https://news.ycombinator.com/item?id=273900
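A rough model of why connection setup amplifies RTT. The round-trip counts below assume a cold TCP + TLS 1.2 handshake with no session resumption or 0-RTT tricks, and the two RTT figures are illustrative, not measured:

```python
# Rough model of connection setup cost. Round-trip counts assume a cold
# TCP + TLS 1.2 handshake (no session resumption, no QUIC/0-RTT).
RTTS = {
    "dns": 1,        # resolver round trip (often cached in practice)
    "tcp": 1,        # SYN / SYN-ACK
    "tls": 2,        # TLS 1.2 full handshake
    "request": 1,    # GET + first byte of the response
}

def setup_ms(rtt_ms):
    """Total setup time: every round trip pays the full RTT."""
    return sum(RTTS.values()) * rtt_ms

print("distant origin, 80ms RTT:", setup_ms(80), "ms")   # 400 ms
print("nearby edge POP, 15ms RTT:", setup_ms(15), "ms")  #  75 ms
```

Five round trips turn an 80ms RTT into 400ms before the first byte arrives, which is exactly the gap early termination at a nearby POP collapses.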
I agree. I saw a discussion on here recently about someone working on "data storage at the edge" to complement their "compute at the edge" offering. You can argue that once you move compute and data storage to the edge, you've effectively moved the data center to the edge. At some point, if you put enough data at the edge, operating your own backbone makes economic sense -- and CDNs don't maintain backbones; they mostly buy paid peering and transit and try to leverage public peering wherever they can. I would even say edge compute is a solution in search of a revenue stream, and a hedge in a heavily commoditized market.
(2) You can achieve 100-130 ms latency with a "like one data center per continent" sort of network. You don't need edge computing on hundreds or thousands of edge sites to achieve that.
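The physics behind that ceiling, as a sketch: light in fiber travels at roughly two-thirds of c, about 200,000 km/s, so propagation delay alone sets a hard floor on RTT. The distances below are round illustrative figures, and real fiber paths are longer than great-circle routes, so these are lower bounds:

```python
# Back-of-the-envelope propagation delay. Signal speed in fiber is roughly
# 2/3 of c, i.e. ~200,000 km/s, or 200 km per millisecond. Real paths are
# longer than great-circle distance, so these are lower bounds.
FIBER_KM_PER_MS = 200.0

def min_rtt_ms(distance_km):
    """Minimum round-trip time from propagation delay alone."""
    return 2 * distance_km / FIBER_KM_PER_MS

for name, km in [("across the US", 4000),
                 ("US <-> Europe", 7000),
                 ("US <-> Australia", 13000)]:
    print(f"{name}: >= {min_rtt_ms(km):.0f} ms RTT")
```

That's where the "100-130 ms with one data center per continent" figure comes from: a single intercontinental hop already consumes most of the budget, and edge sites only help once you've paid that floor.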
They mention other cloud platforms as competition, and Azure has a CDN. I doubt Github would switch anytime in the near future, but the dangers posed to smaller companies by the consolidation under giants is interesting. What happens when your competitor doesn't just try to steal your clients, but can actually just acquire them?
They also mention one risk as their dependence on AWS, a competitor, and that if all the cloud providers blackballed them, they'd be in trouble.
I'm sure Azure would welcome competitors using their product, the same way AWS welcomes Netflix. A good analogy would be Amazon switching from Oracle to AWS-based solutions. In that case, the vendor does have something to worry about.
For the Github thing, you're right, it's a different situation.
Though they also have several media heavy companies besides Vimeo highlighted in their case studies. Including: Wistia, A&E, iHeartMedia, Shazam, 7Digital, and FuboTV
A lot of people (including Cloudflare) just give it away for free, making this really a space for enterprise plays, which, at their size, have the clout to push for better rates across many similar competitors, or to just run their own infrastructure.
I honestly just don't get the value add here. It really doesn't feel like an IPO play.
We're talking about an S-1 here. Where's that growth going to come from? Cannibalizing Akamai is not a long term IPO strategy.
Based on my experience with the other providers they were also, by a large margin, the most modern - it felt like moving from a 2008 integration to a modern, fully RESTful API with great documentation and decent UI.
This is all anecdotal, but they did combine a great technical platform with great support. If transit prices are the same or similar for all providers in that size category, they have to fight on features and support instead.
I think the easiest short term play is the loooong tail of CDN business that's out there that never would've bothered with Akamai because it's too expensive. Their partnership and bizdev folks are really smart about reaching out to small and mid market PaaS providers to set up coverage for all of the sites under their management. It only takes a few of them to take off in their niche to add up to a fairly significant amount of revenue.
unrelated: I'm a fan of Neocities, thank you for that :thumbsup:
> These developments, or the perception that any of them could occur, have had and may continue to have a significant adverse effect on global economic conditions and the stability of global financial markets, and could significantly reduce global market liquidity and limit the ability of key market participants to operate in certain financial markets. In particular, it could also lead to a period of considerable uncertainty in relation to the UK financial and banking markets, as well as on the regulatory process in Europe. Asset valuations, currency exchange rates, and credit ratings may also be subject to increased market volatility.
Add in the necessity of porting vcl for all of their customers, and the prospect of upgrading varnish is obviously pretty daunting.
I suspect they'd rather focus on their new webassembly-based configuration solution, rather than try to keep up with changes in VCL
The fastly engineers I know are nice people. They take care, they're smart and afaik they've stayed small and focussed as a group. They're active in operations groups, standards.
What's not to love?
Maybe it’s just a good time to strike while the market is frothy.
It's just anecdotal evidence, but I talked to a CEO who had just closed his A round with Kleiner Perkins in August or September 2008. When it came time to fund the deal, the partners at the VC firm made the capital call and the LPs couldn't fund. So Kleiner Perkins called the CEO and told him they had to cancel the deal.
The CEO had to make big cuts, couldn't pay rent on the building, etc. He eventually sold the company, but I wonder if it or others would have been IPOing around 2014-2015 if they had gotten the funding they needed.
Venture capital spending keeps rising, and a lot of it is coming from SoftBank https://www.recode.net/2017/10/11/16459856/softbank-biggest-...
A 7x multiple on $140m in sales might suggest around $1b (divide that by # shares to get a possible share price).
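The parent's arithmetic, spelled out. The revenue multiple comes from the comment above; the share count is a purely hypothetical placeholder, since the real figure would come from the S-1:

```python
# The parent's valuation arithmetic. The share count is a made-up
# placeholder for illustration; the real number is in the S-1.
revenue = 140_000_000
multiple = 7
valuation = revenue * multiple           # $980m, i.e. "around $1b"

shares_outstanding = 90_000_000          # hypothetical
print(f"implied price: ${valuation / shares_outstanding:.2f}/share")
```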
You'd be surprised how random these appear. I guess there's no science to it, and much more chance is involved than we would like to admit.
The market for cloud computing platforms, particularly enterprise grade products, is highly fragmented, competitive, and constantly evolving. With the introduction of new technologies and market entrants, we expect that the competitive environment in which we compete will remain intense going forward. Legacy CDNs, such as Akamai, Limelight, EdgeCast (part of Verizon Digital Media), Level3, and Imperva, and small business-focused CDNs, such as Cloudflare, InStart, StackPath, and Section.io, offer products that compete with ours.
That's an interesting statement. Supposedly, 10% of web requests on the internet route through Cloudflare. And I imagine their free tier has plenty of non-business use.
It should have been the ugly but correct “small-business-focused CDN”. If they didn’t want to double hyphenate then “small-business focused” would have been read properly by everyone. But you definitely parsed their “small[,] business-focused” correctly :).
"Fastly, Inc. is an American cloud computing services provider. Fastly's edge cloud platform provides a content delivery network, Internet security services, load balancing, and video & streaming services"
Just judging by that statement, I feel they're going to be eaten by AWS, GCP, and maybe Azure. However, it seems their focus may be on creating a viable business, rather than trying to spin an open-source project into a business (e.g., Docker Cloud). We already see that Docker is losing business to people using the OSS, but paying Amazon or Google for ECR and GCR, respectively.
That being said, there are some smaller somewhat related players, such as PagerDuty that seem to be off to a good start. Twilio's stock has performed well historically, too, as has Splunk. But these later companies seem to be solving problems that make them less direct competitors with the bigger players.
Our platform spans several markets from cloud computing and cloud security to CDNs. We segment the competitive landscape into four key categories:
•Legacy CDNs like Akamai, Limelight, EdgeCast (part of Verizon Digital Media), Level3, and Imperva (for security);
•Small business focused CDNs like InStart, Cloudflare, StackPath, and Section.io;
•Cloud providers who are starting to offer compute functionality at the edge like Amazon’s CloudFront, AWS Lambda, and Google Cloud Platform; and
•Traditional data center and appliance vendors like F5, Citrix, A10 Networks, Cisco, Imperva, Radware, and Arbor, as well as networks that offer a range of on-premise solutions for load balancing, WAF, and DDoS.
Maybe Fastly will give legacy players a run for their money.
AWS, GCP, and Azure have CDNs as well, but those are more typically used by existing customers to front things running there.
Fastly has deployments in 17 North American cities. AWS is in 5.
They're pretty different strategies. For every massive AWS datacenter on a continent, Fastly has at least 3 tiny ones -- inevitably closer to your customers, meaning lower RTT.
One related gripe I have with Firebase Hosting is that it refuses to renew certificates if you also add IPv6 addresses.