So, I'm the tech lead for Cloudflare Workers. In complete honesty, I did not even know we ran some sort of comparison benchmark with Compute@Edge until Fastly complained about it, nor did I know about our ToS clause until Fastly complained about it. I honestly don't know anything about either beyond what's publicly visible.
But as long as we're already mud slinging, I'd like to take the opportunity to get a little something off my chest. Fastly has been trumpeting for years that Compute@Edge has 35 microsecond cold starts, or whatever, and repeatedly posting blog posts comparing that against 5 milliseconds for Workers, and implying that they are 150x faster. If you look at the details, it turns out that 35 microsecond time is actually how long they take to start a new request, given that the application is already loaded in memory. A hot start, not a cold start. Whereas Workers' 5ms includes time to load the application from disk (which is the biggest contributor to total time). Our hot start time is also a few microseconds, but that doesn't seem like an interesting number?
We never called this out, it didn't seem worth arguing over. But excuse me if I'm not impressed by claims of false comparisons...
On a serious note, I've been saying for decades that benchmarks are almost always meaningless, because different technologies will have different strengths and weaknesses, so you usually can't tell anything about how your use case will perform unless you actually test that use case. So, I would encourage everyone to run your own test and don't just go on other people's numbers. It's great that Fastly has opened up C@E for self-service testing so that people can actually try it out.
This is the main problem with Compute@Edge, "for years".
They have been publicly advertising C@E for years, but it was a restricted private beta. In my opinion the state of C@E has historically been falsely advertised. You could not go to the website, sign up, put in your credit card details and use C@E. If you contacted support they told you it was not generally available; it was for existing customers, no more seats were left, it was at capacity.
After a bit of Twitter ranting, one of the C@E advocates at Fastly did reach out and got me test access, and he genuinely helped as much as he could. I did like what I saw, and in my personal testing found Fastly faster than Cloudflare Workers using Rust / Wasm / WASI. Also, one of my services was having issues in Cloudflare Workers with ByteBuffers sending binary data; I don't know why, but it just worked in Fastly's WASI runtime.
Regardless, today my preference would still be to use Cloudflare. You go to the Cloudflare website, sign up with minimal information, and are then good to go. Need to move outside of the test tier? Put in your credit card, click a button, done. Cloudflare's pricing is transparent and hugely attractive for Workers. Fastly has no clear pricing; they are applying their CDN pricing model with a $50 minimum fee, which you need to go and read in the small print. That is OK for enterprise clients, but they are obviously missing out on all the pay-as-you-go small-scale devs that Cloudflare and every other pay-for-what-you-use PaaS attract: devs who then take the product into their day jobs and become advocates at the places where the big contracts are signed.
Fastly need to decide if they are a CDN or an internet company with a CDN product. They seem undecided at the moment, have poor product structuring, and it's genuinely very hard to go put in your credit card and give them money! Cloudflare are quite clear that they are an internet technology company now, not a CDN, and they make it extremely easy for people to give them money in a few clicks.
I greatly respect Fastly and their technology, but they (and Varnish) are biased against small companies and make it impossible for me to sign up customers who would benefit from their services and one day might grow into big spending customers.
AWS has shown that the days of "min $5000/year" (as both these services have quoted me for individual products) can be over and you can be profitable. Companies who don't catch up with that are cutting off their future.
I wish someone would smack Okta over the head with this and get them to figure that part out.
Yes, they're expensive, but the per-use pricing was acceptable for what we wanted. I didn't realise there were (non-public) minimum spend amounts until we hit the end of our trial period and couldn't find any way to pay them other than to contact sales.
Sales wouldn't take the order for just a handful of users.
If you read the discussions in the financial forums, Fastly might be in a tricky place, since their biggest customer is assumed to be Amazon; and if one goes to their website https://www.fastly.com/partners/featured/ they show off all the biggest cloud providers and customers. It seems like Fastly are afraid to steal customers from their partners.
Cloudflare's pricing is sneaky. They are good for small and personal projects with their free tier, but as soon as you need anything non trivial you get hit with their $xxxx/mo minimum commitment, "contact us for pricing" Enterprise plan. On Fastly you pay $50/mo (peanuts for a business) and that's it for basically anything you might want from a CDN, VCL is endlessly flexible and free. That $50 buys you access to a featureset that is arguably beyond Cloudflare's Enterprise plan. You will need quite a lot of traffic to go over $50, so overall it works out significantly cheaper for an SME that wants to use anything but a very basic CDN.
It's also important to remember that a lot of cases are perfectly served by VCL; I believe most businesses won't even need C@E. Sure, most people would need to learn a bit about it, but it's amazing to have it at your disposal for free compared to a usual rigid configuration.
Besides, something they don't advertise, but which is now the main selling point for me, is support. Fastly is the only company I've had the pleasure of dealing with where, after shooting a technical question to their free-tier support, I got not just a technical answer but a piece of code that solved the problem I had. In an hour. With a link to a Fastly Fiddle demonstrating how it works. When I encountered what looked like a bug, a few hours later their support person filed a PR to their TF provider fixing the docs. It was and is a breath of fresh air compared to my experience with any other SaaS, Cloudflare included.
I think the only thing that could've swayed me is Cloudflare's Chinese network. It's gated behind their Enterprise plan and they won't hold your hands much, but if you need it it's there.
This is a MAJOR problem in our industry. Products that exist, but don't actually exist (this situation). "Platforms" that are only platforms if you've got a secret deal nobody tells you that you need to have (Twitter, Facebook). "Open" systems that close themselves off once you hit a very, very low threshold of use, and then force you to do a lot of not-very-open things to regain access... I could go on and on.
(I'll note that this is separate from the problem of AI-based fraud detection run amok.)
Most SaaS are structured in an effectively "non-competitive market" kind of way. Lots of information hiding (don't post prices), high costs of switching (vendor lock-in), and so on. These types of things stifle competition and lead to a pseudo-monopoly kind of state rather than a truly competitive market.
Anyway, transparent pricing should be required for all businesses IMO. The problem is particularly bad in healthcare, and a big contributor to high costs.
Of course, it will take a few decades for laws to catch up
In their annual reports Fastly have previously said C@E was available. They regularly released PR pieces of new functionality on C@E.
The reality was that it was in private beta and deep in active development. Their official annual reports and PR pieces, used by investors for investment decisions, were misleading, insinuating that a product was available when it was not. Fastly's annual reports were painting a misleading picture. What's worse is that the honest answer, "we are deep in development and now have several large existing customers onboard for a private beta", is nothing to be ashamed of putting out to your shareholders. Fastly were probably trying to hide the amount of time it has taken them to roll C@E out since it was first announced in early 2019.
I'm sure every piece of public information went through their legal department and was vetted, but at the same time, every piece of mandatory training I've done at publicly traded companies has me believing I could suffer huge legal consequences with the SEC for doing similar.
Are you going to remove the ToS clause or issue a correction on the blog post?
If I didn't state upfront that I was the tech lead of Workers, someone would (rightly) call me out for astroturfing.
The clause says "Unless otherwise expressly permitted in writing by Cloudflare, you will not and you have no right to: [...] (f) perform or publish any benchmark tests or analyses relating to the Services without Cloudflare’s written consent;"
IANAL, but this seems to very unambiguously prohibit benchmarking Cloudflare's services unless you have written permission. I know you don't want to get into an argument on HN, but could you like... bring it up to someone inside of CloudFlare who would be capable of changing it? You can point to this thread about how this clause is generating negative publicity.
 https://www.cloudflare.com/terms/ section 2.2
An unusual and excellent CEO stance. Now I like Cloudflare even more.
No JS required, just feed all web requests directly through them where they can see all first party cookies, encrypted contents, etc.
Not quite at the level of a public figure (Pres. Biden can't go around making flippant comments) but more than being just a private citizen or "I just work here". No amount of disclaimer can remove that, and for better or worse it's part and parcel with the job.
Without this in your T&Cs I could create the account for them in a couple of minutes. And avoid doing a screenshare to walk those who fail through the sign up process.
If you're an employee, yes. If you're a consultant, contractor, freelancer or similar, then you are a third party, since you act on behalf of your client (the first party). This is for UK law, and the distinction of first/third party is important when it comes to tax (see IR35 for the mess created).
Fastly made it sound like contacting Cloudflare is "impossible", yet here you and one of your top devs are.
I've seen you reply about ToS issues before (specifically over caching of non-html assets): https://news.ycombinator.com/item?id=20791605
You verbally allowed it in that thread but have you considered officially adding that into the next revision of your ToS too?
As for "stand-up" with Fastly, I believe the whole situation brings only negative consequences on both parties. I always become wary towards any service that posts comparisons with its competitors (or simply with services of similar nature).
Good luck and wise decisions to you.
This - I mean even the conversation prior to this message - surely constitutes "expressly permitted in writing by Cloudflare"?
I don't want the following to come off as unnecessarily argumentative, but regarding the ToS, I'm not a lawyer either, but my "ability to read English" interpretation of the section on "perform or publish benchmarks..." certainly sounds like it is prohibiting folks from doing their own side-by-side comparisons. Which is, of course, nonsense, because any engineer worth their salt would do their own analysis, even if they didn't publish it.
Just sounds to me like the CloudFlare lawyers got a little too aggressive to the point of absurdity, but I still think it's fair to call out CloudFlare for this.
Even without the ToS language, if you're really going to stress test a service, it's probably a good idea to give them a heads up, lest you get marked a bad actor.
I'm assuming this isn't the only overly restrictive clause in the contract. Maybe it's an anomaly in an otherwise respectful ToS.
We aren’t owed a historical explanation, and yet we’ll likely receive one with what I presume will be a TOS update blog post in a few weeks.
I feel like this is that moment where someone lays on the car horn because they want to be sure the other driver understands that they’re a bad person, and should feel bad about themselves. It’s not about making you right by their actions, it’s about making sure they know the depth of your anger at them.
That has little value here. It’s socially valuable in interpersonal interactions, but it’s a tire fire when left uncurbed at Internet scale, and becomes vitriolic and harmful to discourse.
I may have misunderstood your specific intentions and desires from the CEO, and if so, I apologize; but I stand by my point in the general sense for all of us.
The ToS explicitly say that performing any benchmark analyses or test relating to the services is not allowed without written permission from CloudFlare.
99% of the people reading and attempting to abide by the CloudFlare ToS are not lawyers, but as a rule contracts mean what they say when they say it clearly and unambiguously as this seems to.
The way I usually see people handle this, and what I do myself, is to both state your relation to the company and clarify whether you're speaking for the company. Ex: "I'm the tech lead for Cloudflare Workers, but speaking in my personal capacity..." or "I'm the tech lead for Cloudflare Workers (speaking only for myself) ..."
On top of that, Kenton is a frequent commenter here, and I've never gotten the "air of superiority" vibe from him where such verbosity would be necessary.
I think the nuance is that you are presenting yourself as someone with responsibility/authority/control over the subject of these benchmarks. As a comparison, consider wording that skirts taking up that mantle:
Full disclosure, I work for Cloudflare, but ...
Not trying to be argumentative, and I really don't have any hostile feelings or intent; I understand where you're coming from. Just providing my outside take on how the interaction appears. You're within your rights to defend your product; nobody seems to have a problem with that. But you also decided to throw mud on the pile, metaphorically. You admitted you aren't plugged into the back and forth, and that's fine. But this isn't news for Fastly, and it's hard to take your side in this discussion when your solution is "go do your own tests", which is exactly what Fastly would like to do. They clearly call attention to your ToS preventing them from doing that in the piece we're discussing here.
He did not. Profiling how your particular app and use case runs on a given serverless provider is not benchmarking.
It's the legal department... And the CEO already mentioned that they are removing it and he gave a valid response/reason.
It seems that they will actually benchmark Fastly in detail now (could be after another improvement week), which probably isn't what Fastly wanted.
Something definitely seems to be happening if you read their response, and I'm awaiting it with actual stats!
@dwwoelfel that's what you wanted? :)
It's nice that eastdakota responded here, but he had two weeks since the original tweet thread from Fastly's VP of Eng calling out the problems with their benchmarking. They didn't respond or retract the blog post in those two weeks.
As a cloudflare shareholder (and a fastly shareholder), I want Cloudflare to act ethically and either retract the blog post or issue a correction.
You are insinuating bad will/faith, and that's not the impression I observed.
It is unethical to leave that up after Fastly pointed out core issues with the benchmarking, like using a free tier that was rate-limited.
Incidentally, this means Fastly's blog post is currently displaying test results that compare the enterprise version of Compute@Edge against the free version of Workers. Granted, our bad for the ToS clause, but still.
Despite the strong language in their post, Fastly has not actually demonstrated that anything was intentionally biased or unfair in Cloudflare's test. They've only laid out their opinions as to what would make a more representative benchmark. That's a debate you can have about any benchmark, but that doesn't somehow make the original benchmark "unethical".
Cloudflare's free tier does not optimize for speed/performance, but for available (i.e. unused) datacenter capacity based on location. Which makes it less fast than their paid tiers.
Additionally, there will be an update soon as mentioned before, based on past comments.
There are other things that you ignore/are unaware of that are not even mentioned in Fastly's post. Eg. That cloudflare also optimizes their network for routing to denser cities instead of rural areas. That metric is not even mentioned by Fastly...
Note: I don't know any of their inner workings, as I don't work there. It's based on what I remember from their blog about their SDN and performance weeks. I suppose it's applicable to this scenario, if I got the details right.
I don't think Fastly has ever explicitly called out their 35us against Cloudflare's 5ms. (Please correct me if I am wrong.) They may have some marketing material about 35us being faster than the rest of the industry, but that doesn't imply Cloudflare, because there are lots of players in the industry. The first name that comes to mind would be AWS.
A Cloudflare blog post directly naming Fastly in a benchmark is completely different.
Second, Fastly was unhappy about this test and posted on Twitter for weeks, even quoting and mentioning @cloudflare.
But I agree everyone should run their own benchmarks.
But the same point applies when comparing against Lambda. It's not a cold start if the app code is already resident in memory in advance. Workers and Lambda also proactively load some apps in advance, and we don't call it a "cold start" in those cases.
> But I agree everyone should run their own benchmarks.
But they literally can't because the ToS forbids it.
Benchmarking accurately is hard, so it isn't totally unreasonable for CF to try to make sure they won't be subject to a "report" by someone whose tests are garbage, or by a competitor who's benchmarking poorly to make their own offering look better. Ideally, as far as CF are concerned, they'd be so far ahead of the competition it wouldn't matter, but presumably that wasn't the case when the ToS were written. The fact that the CEO posted to say the clause is being removed means we can all freely test in future.
This is how things are supposed to work. Someone does something, someone else reacts, and the first person updates their position. Expecting everyone to get things perfectly correct on the first try is nonsense.
I don't have the full TOS in front of me but wouldn't this violate it?
Then again, a Cloudflare employee just gave written consent in this post for us to perform benchmarks, so maybe we're all off the hook! :)
1.8 in https://aws.amazon.com/service-terms/
Here's some background from 2008 on benchmarking clauses and their precariousness: https://corporate.findlaw.com/business-operations/n-y-case-c...
Why do firms continue to make unenforceable claims? (My least favourite are restriction-of-trade clauses in employment contracts that are simply illegal, unless paid for, where I come from. Yet they appear all the time in contracts.)
It's legal 'chicken'.
The EU e.g. has 93/13/EEC, aka the "Unfair contract terms directive", which bans a lot of shenanigans in ToS (e.g. a mandatory location for legal disputes). Or just writing in your ToS that "you agree that contract parties are not bound by the EU GDPR", as some companies do, doesn't make it an enforceable clause. As another example, Germany recently passed a law mandating that automatic subscription renewals - such as mobile phone contracts, gym memberships or online services - can be cancelled every month (there was already a law limiting the initial subscription contract to a duration of at most 2 years).
Whether there are laws that specifically disallow general "benchmarking" I don't know, but I wouldn't be surprised if, at least in the EU, such a clause would be unenforceable. It sounds like a clearly unfair, one-sided advantage (the customer cannot check that the services actually provided match what was promised). Publication of such results is another matter.
Also, I am curious about what you said about Illinois? Sounds bonkers. I mean I could imagine criminal prosecution for a set of defined, deemed specially bad violations that constitute crimes, like "hacking" and "sabotage", but not for things like e.g. "failure to pay membership dues in time" and other civil matters. Then again, I remember that case of some guy in Florida who couldn't keep his lawn nice and green, and didn't have the money to replace the lawn again and again, so the HOA took him to court, he was ordered to replace the lawn (which he still couldn't afford) and ended up in prison for a couple of days for "contempt of the court" until some friends and neighbors replaced the lawn.
Sorry, what? Where? How's that even possible?
What legal reasoning exists to give companies this kind of unchecked power?
> According to the court, DeJohn's claim that he did not read those terms was irrelevant because, absent fraud (which was not alleged), "failure to read a contract is not a get out of jail free card."
At the very least, it is an indication of where they stand. They are free to take any customers they want, and if they say they don't want customers with a certain use case, that's good enough for me.
Or maybe it's entrapment!
Same as when accepting a job, never trust someone who says a section of their contract/terms won't be enforced.
I might need to start looking for another provider...
I find this statement egregious: benchmarks are probably the most important thing in producing quality software, and are often (outside of FAANG) the first thing to be ignored. I have had the displeasure of working with extremely poor software for the majority of my career, where a simple benchmark and some brain-power would have illuminated the issues days, weeks or months before production issues occurred.
I also see "benchmarks are useless" used a lot by vendors who perform poorly in said benchmarks. Or use useless benchmarks (like TTFB which can easily be fudged) which is just underhanded.
I meant they aren't great for comparing two very different technologies. I've looked at a lot of serialization benchmarks, having once maintained Protocol Buffers and later Cap'n Proto, and it's amazing how wildly different the results are depending on the shape of the data you're serializing.
Many engineers, product advocates, and sales engineers/architects build careers, playbooks, and talk tracks around debunking or handwaving others (and sometimes their own) benchmarks.
Making a sale, with your livelihood on the line, against a customer that is looking at products on a top-ten scorecard is a lot of pressure.
So, I would encourage everyone to run your own test and don't believe what either side posts.
Otherwise there would be no business for the Gartners and Accentures of the world if customers were actually savvy themselves.
Strange statement. Didn't Cloudflare start the mud slinging?
I have no shares in this. But Cloudflare publishing questionable benchmarks with competing services while prohibiting competitors publishing benchmarks with Cloudflare services is certainly not fair play. Reminds me of Oracle.
I imagine it's pretty normal for the front-facing blog/PR department to be a bit disconnected from development, but we can't entirely blame Fastly for "complaining", seeing as Cloudflare had already published a bit of a diss post.
Anyways, I think I once read about a phenomenon where corporate environments eventually reach a point where the outside world knows more about the company than the employees do. Not sure if this is actually an example of that or not.
Also I agree these blog posts are all kind of meaningless and people should do their own investigation if they're considering alternatives.
For us/me XX works better because of...
As a dev, it sounds to me like Fastly's whole purpose is wrapped up in attacking Cloudflare whereas Cloudflare is focused on innovating.
This then makes your response look accusatory and helps further a broader narrative post about Cloudflare's behavior here, which it seems your goal is to dispel, given your statement on whether engineering was involved.*
A suggestion for something actionable that'll make things better in this difficult moment: email product counsel about starting the process to override your EULA and enable Compute@Edge to do benchmarking. It'll take forever and it's the only way to have an objective, evolving conversation about benchmarks, which your leads will (at least, should) want after this happened.
* intentionally vague here, in order to give you space to delete and leave no record
The post's current title is "Lies, damned lies..." and the article's synopsis is:
> This nonsensical conclusion provides a great example of how statistics can be used to mislead. Read on for an analysis of Cloudflare’s testing methodology, and the results of a more scientific and useful comparison.
That is definitely mud-slinging.
And Kenton himself agrees:
"On a serious note, I've been saying for decades that benchmarks are almost always meaningless..."
Benchmarks can be used to mislead, which CF was clearly doing, and which Fastly pointed out with their "Lies, damned lies..."
My point about benchmarks in general was that unless your application is extremely similar to the one tested -- which is unlikely -- then these numbers don't really tell you much about how your own app will perform. This is a criticism I have of every benchmark that attempts to compare performance between very different technologies. I don't think it makes such benchmarks unethical, just not very useful.
But most importantly, kentonv's last paragraph.
At the end of the day, benchmarking for your use case is the only true north. Everything else, on all sides, is marketing fluff.
But the title of the post validates his opinion and his claim about fastly and cold starts can literally be found anywhere in their blog and the compute@edge landing page.
I think it's a valid and valuable comment.
Sure felt like mud slinging to me. Just the whole list of complaints they had at the start of the post was hard to get through.
They could just tell us what they did differently and show the result.
5 microseconds is a round trip distance of ~500 m in optical fibre, and is hence totally irrelevant for 100% of CloudFlare's end-users... unless they happen to live in a 2-block radius of a Cloudflare PoP and have their ISP providing them with Internet connectivity with direct fibre from the same data centre...
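The arithmetic behind that claim, as a quick sanity check (using the standard approximation that light in fibre propagates at roughly 2×10⁸ m/s, about two-thirds of c):

```javascript
// Back-of-the-envelope check: how far away can you be before a 5 us
// difference is swamped by propagation delay alone?
const FIBRE_SPEED_M_PER_S = 2e8; // ~speed of light in glass (~2/3 of c)
const deltaSeconds = 5e-6;       // the 5 microsecond difference

// Total distance covered in that time, halved because it's a round trip:
const oneWayMetres = (FIBRE_SPEED_M_PER_S * deltaSeconds) / 2;
console.log(oneWayMetres + ' m one way'); // ~500 m from the PoP
```

So a 5 us difference corresponds to being about 500 m of fibre from the PoP; any real last-mile path dwarfs it.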
Take some responsibility and ownership for the authority and soft power you wield as a very senior individual contributor.
You've been in this industry too long, and have risen too far to not feel a sense of personal investment and responsibility to 1) the benchmarks your company publishes about your product, and 2) the policies around public experimentation and transparency with that product.
Talk to your manager, talk to his manager, figure out who is responsible for both and FIX IT. You are the only major provider that has a DeWitt Clause. It's embarrassing.
* Fastly performance throttles their free accounts (ok, but not Cloudflare's fault, and I definitely don't want to get into your pricing game if you're going to start offering different tiers of serverless workers... part of Cloudflare's allure is its simplicity in pricing).
They do have two good points:
* Benchmarks between CDNs should be done from the same client locations (fine, but why did you then use your own set)
* Cloudflare should not prohibit benchmarking of their products (fair)
But this whole post read like a feeble attempt at misdirection, and made me distrust Fastly as a company. The complaints about their methodology aren't solved by using your own, even more flawed, methodology that doesn't even use the same language. You could've just emailed them and asked to work together on an apples-to-apples test instead of creating an even more flawed benchmark.
Second, when you’re measuring milliseconds for response latencies, the difference between JS taking 10us CPU and Rust-wasm taking 1us is negligible. And system overheads far outweigh the actual computation time.
Can you explain this? The JS intermediate representation is wildly different from WASM, not to mention WASM requires a max memory declaration and a lot of other things resulting from the different VMs.
For such a small program (write the current time to a socket), it's trivial for TurboFan to examine the types of the JS code at runtime. Escape analysis should also be trivial. So, at runtime, the optimizing compiler can (1) replace string lookups with very fast address dereferences, and (2) insert appropriate deallocations. My guess is that with these two optimizations the performance difference can be reduced significantly. (Of course there is still some overhead -- deoptimization checks, etc. -- but it's not nearly as big as between a 5 MB JS file vs the same amount of code in Rust.)
 E.g. https://v8.dev/blog/fast-properties
I think you're wrong. Small programs don't necessarily minimize the difference between languages. Small programs often exacerbate them. Often for weird, idiosyncratic reasons. Especially if you're measuring time-to-first-byte, and not letting the VM warm up.
But maybe this is evidence you're technically correct. The performance differences can be reduced. But we have every reason to assume nodejs will have way more overhead than rust at executing the same code.
Also, I don't see Rust + WASM on the linked benchmarks. WASM code still has to run in a VM and probably uses the Node.js runtime for syscalls. (Based on a quick search, it seems like most people compile to WASM and create Node bindings with wasm-bindgen.)
Third, my claim isn't that JS can reach Rust+WASM speeds; just that it's not as slow as you might expect because you do have type information.
Anyway, the whole point is moot because we're arguing over microsecond differences.
Because it is not meant as a comparison, but they are the ones defending against a benchmark run by their competitor?
TBF, people who care about benchmarks usually expect high loads and value performance enough to put sufficient money on the table. I'd see the free tier people having different priorities.
Sure, it's still valuable to know what the free account limits are, but it shouldn't be headline news (except if it's exceptionally fast...)
It would have been interesting and more honest to compare other language runtimes to see whether there’s a trend for the entire platform or simply one language runtime.
I think it's rather disingenuous for Cloudflare to publish their own benchmarks calling out competitors when they won't allow anyone to run their own tests and comparisons.
I never knew how true that anecdote was but that’s typically the boogeyman story that gets passed around for things like this
In the EU there are various ways in which ToS are limited in their power; whether that applies here I'm not sure.
But they are basically forbidding you from evaluating how well a product you are (or might be) paying for works. Depending on what they advertised, they might even prevent you from evaluating whether the advertised product characteristics hold up. Especially in the latter case, I would guess the ToS clause would be void.
That's because you don't have a good enough reason to want to ask for permission though. If you were spending $xx thousands on edge workers every year you'd just ask, and you'd rule CF out if they said no. But, very likely, they'd say yes.
That's a fake opening with the same effect as prohibiting something by ToS.
Who's to say the services won't coincidentally perform better after you've submitted your request to test?
I am sure they might cite some legalese about "avoiding putting arduous strain on the services which may cause disruption to our other users" as a justification, but that PR speak won't fly with me, since there is a separate section that already covers that.
I say this as a big fan of Cloudflare and I am sure they don't need to use such underhanded tactics to make their services look good.
Frankly, that's a distinction without a difference in this context, at least in my opinion.
The TOS clause, for reference: https://www.cloudflare.com/terms/#:~:text=(f)%20perform%20or... (ctrl-f "benchmark")
The Cloudflare terms prohibit benchmarking without their explicit permission.
Am I the only one having cognitive dissonance here? I was confused how they complained about not being able to perform benchmarks, but then… ran a benchmark? There must be some distinction here that I'm missing.
However, I see that the TOS says you may not "perform or publish" benchmarks. This seems to still violate the "or publish" part, even if Fastly avoided doing the performing. What would be the point of forbidding publishing if performing said benchmarks is already forbidden in the first place? The TOS seems to also forbid publishing the benchmark even when someone else performs the Cloudflare part of it.
Edit: It must be the TOS doesn't apply if you don't use the service.
From "Cloudflare Self-Serve Subscription Agreement" https://www.cloudflare.com/terms/ , "2.2 Restrictions. Unless otherwise expressly permitted in writing by Cloudflare, you will not and you have no right to: [...] (f) perform or publish any benchmark tests or analyses relating to the Services without Cloudflare’s written consent;"
From "ENTERPRISE SUBSCRIPTION TERMS OF SERVICE" https://www.cloudflare.com/enterpriseterms/ , "2.3. Restrictions and Acceptable Use. Customer must not: [...] (i) perform or publish any performance or benchmark tests or analyses relating to the Service, other than solely for Customer’s internal use;"
Generally, it sounds like the kind of thing that would seem obvious to a lawyer (we want to prohibit random people from badmouthing our product, right?) but stupid to engineers.
The best outcome would be if Fastly and Cloudflare both dropped any language limiting benchmarking and sharing of results.
If they want to protect their brands, the most common sense requirement would be "We require you to tell us, within X hours after publishing, of benchmarks you're publishing and send us a copy of the article in which you published them (or link if publicly accessible)."
> So instead, we ran the same tests (echoing headers and measuring TTFB via Catchpoint) against our own platform
Consider the following scenario: you call your ISP and tell them you're going to run a benchmark. The ISP then gives you the best service, at the expense of other customers' bandwidth, for the duration of the test. After the test, the ISP removes the QoS modification and reverts to the old behavior. That would produce a biased, unrealistic result.
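For context, the measurement the quoted test describes is time to first byte: the delay from sending the request to receiving the first response byte, independent of how long the full body takes. A toy sketch of such a probe (against a local echo server, purely for illustration; this is not Catchpoint's actual setup, and the header name is made up):

```python
import http.server
import socket
import threading
import time

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Echoes a request header back, like the benchmarked edge apps do."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("x-echo-user-agent", self.headers.get("user-agent", "none"))
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

# Local stand-in for the remote edge endpoint.
server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def measure_ttfb(host, port):
    """Seconds from sending the request to receiving the first response byte."""
    with socket.create_connection((host, port)) as sock:
        start = time.perf_counter()
        sock.sendall(b"GET / HTTP/1.1\r\nHost: x\r\n"
                     b"User-Agent: ttfb-probe\r\nConnection: close\r\n\r\n")
        sock.recv(1)  # first byte of the response headers
        return time.perf_counter() - start

ttfb = measure_ttfb("127.0.0.1", server.server_port)
print(f"TTFB: {ttfb * 1000:.2f} ms")
server.shutdown()
```

Against a real CDN, of course, most of that number is network RTT plus whatever the edge runtime adds, which is exactly why where and how you probe matters so much.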
As for the latter, that benchmark wouldn't reproduce and would require extensive configuration (which in a sense is just part of the negotiation between the customer and the vendor). If a company got caught faking benchmarks run by competitors, and it could be shown publicly that they did, that's pretty much the end of that company's public reputation.
Cloudflare have pulled enough shady stuff now that they've fallen out of my favor. Their generous free product bought them a lot of community goodwill but their real face has been showing the past few years.
There's a non-sketchy way for a company to do that kind of thing which avoids putting them in Oracle territory. Simply require that anyone publishing benchmarks (1) publish complete configuration information sufficient to allow the company and other interested third parties to reproduce the benchmarks, and (2) allows benchmarking of their products that they are benchmarking against yours under similar terms.
It is nicer of course to not have any benchmarking restrictions, but if you do that you are putting yourself at a disadvantage against companies like Oracle. They can benchmark against you, but you can't do the same against them.
Once one important player in a market goes down the Oracle route others tend to follow. But if they aren't asses they will follow with the kind of restriction with reciprocity I outlined above instead of an Oracle-like restriction.
One idea: the edge competitors could create a public (SourceHut?) project that runs various daily tests against themselves, similar to the JSON library benchmarks. Then allow each competitor to continuously tweak their settings to accomplish the task in the shortest amount of time.
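A minimal sketch of what that shared harness might look like: run the same task repeatedly per provider and report the median, which is more robust to outliers than the mean. The "providers" here are local stand-in functions, purely hypothetical; a real harness would hit each competitor's deployed test app over HTTP.

```python
import statistics
import time

# Hypothetical stand-ins for "run the task on provider X".
def provider_a_task():
    time.sleep(0.002)  # pretend provider A answers in ~2 ms

def provider_b_task():
    time.sleep(0.005)  # pretend provider B answers in ~5 ms

def benchmark(task, runs=20):
    """Time `task` `runs` times; return the median duration in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

results = {name: benchmark(task)
           for name, task in [("provider-a", provider_a_task),
                              ("provider-b", provider_b_task)]}
for name, ms in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: {ms:.2f} ms (median)")
```

Running it daily from many vantage points, with the configs public, is what would make the numbers worth trusting.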
Also: It would be nice to see a cost analysis. For years, IBM's DB2 was insanely fast if you could afford to pay outrageous hardware, software license, and consulting costs. I'm not in the edge business, but I guess there are some operators where you can just pay a lot more and get better performance -- if you really need it.
That said: this is definitely not the kind of behavior I expect from them and I’m hoping there’s a better reason than them not willing to submit their products to an impartial evaluation.
Then you have a lot of private data centers, e.g. I worked at a major bank that had its own datacenters (multiple around the world). Admittedly, these are super small compared to an AWS DC, but there's literally thousands of these around the world (I know for banking and government, but I assume other industries do this too).
A very harmful one - they are trying to centralize the web.
And kill Tor, while at it.
https://www.fastly.com/pricing/ does not make that distinction; it says it's for "Any size company looking to give Fastly a try." They make distinctions on paying for more bandwidth, not on giving you an inferior product performance-wise.
I think this is a non-sequitur. Bandwidth is a fundamental property of the performance of a system, if you can "pay for more bandwidth" then you are literally getting an inferior product when you do not pay.
Consider the implementation for "paying for more bandwidth". I assume that it is the same copper cabling hitting the same switches for fee paying and non-fee paying customers so the only way to "offer more bandwidth" is to implement some kind of quality-of-service metric on the network. This is explicitly giving the non-fee paying customer an inferior product.
Looks like it's just post-use monitoring that cuts off the account after it exceeds the trial limit, or starts billing per GB; it doesn't say "slower connection", so CF wouldn't have realized that a trial account would affect their benchmarks of Compute@Edge.
It is more evident here:
(To be clear, I agree with your definition, but it is not used)
>> [CLOUDFLARE TERMS] (f) perform or publish any benchmark tests or analyses relating to the Services without Cloudflare’s written consent;
That is not so different from Fastly's own sign-up process.
Unlike Cloudflare, I don't think Fastly is in business for solo devs, but for enterprises. And each enterprise has its own requirements.
If Fastly prefers to know customer requirements before they start using it, or to tailor things to them, what's wrong with that?
Meanwhile, Cloudflare prevents its OWN EXISTING customers from benchmarking, which is actually draconian.
"General CDN stuff" is a bit too vague so I'm not really sure what you're referring to. I'm also not really sure what "get past the published pricing" means or how that relates to features being "locked" behind sales calls.
I'm sincerely trying to understand your experience and any suggestions you have to understand how I can push internally for us to be better and I appreciate your time in any responses you provide.
Edit: maybe I'm wrong.
Vaguely reminiscent of the Glasgow Ice Cream Wars, I'd rather not buy from anyone involved
Simple solution is for Cloudflare to change TOS to allow benchmarking and for 3rd party to publish a reproducible benchmark suite.
However, TBH, benchmarking a distributed system like Cloudflare/Fastly is going to be REALLY HARD. There are thousands of servers involved, and reproducing the results might be impossible.
Why doesn't Fastly just *gasp* ask Cloudflare about performing a benchmark? If they say no, publish that.
Fastly does not publish their pricing. And you need to contact a rep to sign up for commercial use, which implies everyone gets a different deal. It's absurd to say that having to contact Cloudflare about benchmarking makes it impossible for them to do a comparison.
At some point, you either have to make free tiers worse, or deal with thousands of people using them to run the same benchmark over and over.
Cloudflare loves alliances: start a distributed benchmarking alliance! :)
Indeed. We already have comments claiming Fastly are dismissive and evasive while others claiming the fault of Cloudflare. The community is divided, which is quite usual on recent HN.
Let there be another holy war of CDN between Cloudflare and Fastly.
I don't see any division. It's clear to me who has experience to back up their words and who doesn't.
With some careful design to take advantage of it, I've gotten 5-10x faster page loads using Fastly. (The last time I pointed this out, people accused me of being a sockpuppet account. Feel free to Google me!)
And besides, even though it is slower than Node.js, it is still plenty fast (no matter that it's not as fast as they want). Btw, its startup is faster than Node's (maybe better PGO might help).
It's insane what they built, and it will bring the whole WASM community forward if they succeed (and I hope they do).
Post on HN: https://news.ycombinator.com/item?id=24897641
Sorry if I was mistaken.
While there was no acquisition involved, a whole group of folks working on WebAssembly at Mozilla (myself included) moved to Fastly last Fall. What I tried to emphasize is that instead of the projects at hand here being open source because they somehow had to be, we all joined Fastly because it's a place where this kind of project can be made open source (and be created in the first place!) :)
We’re at least moving towards objective comparisons, talking about numbers.
Cloudflare prohibiting benchmarking is not good, obviously.
Kind of? At minimum they made misleading representations. They used abbreviated explanations of how they're better and then expanded on what the test actually consisted of later on.
Restricting any comparison isn't satisfactory, since we care about both the supported languages and performance. Let there be writeups about both, and allow users to make up their own minds.
Beyond just those points, the whole Cloudflare blog post reads like an attempt at a performance sales kill sheet. (In other words, their sales may point to this as to why folks should choose Cloudflare Workers over Fastly Compute@Edge.)
Do you want to highlight any of those? RTT is mentioned in a post from "Cloudflare Research" which was just shared by their head of research.
> Beyond just those points, the whole Cloudflare blog post reads like an attempt at a performance sales kill sheet. (In other words, their sales may point to this as to why folks should choose Cloudflare Workers over Fastly Compute@Edge.)
There is nothing wrong with advertising. This comment section is just about keeping them honest.
I hope Fastly stops calling things "cold starts" that aren't.