Hacker News
Benchmarking static website hosting providers (savjee.be)
168 points by rencire 60 days ago | 65 comments

I must be a bit old-fashioned but I would love to see comparisons against running a simple VM with Nginx.

> Google Cloud’s regional and multi-regional buckets perform fairly alike. Interestingly, both are much faster than S3, which is a comparable service. Is Google doing some caching behind the scenes?

These are both incredibly complicated services and describing what they do as “caching” would be too much of a simplification.

Agreed. In many cases sites serve local customers and a CDN is overkill; simply hosting nearby with a half-decent hosting provider suffices. I've used OVH for a number of years for small sites with UK customers, a couple of bucks a month for a reasonably specced VPS.

Pinging it just now: 20ms, and since it's non-static, 50ms to fetch with curl. Fairly minuscule times.

A CDN is not the same as a hosting provider. Proximity to the CDN's edge locations likely explains much of the latency difference, as does the proximity of your origin server to Cloudflare's edge.

I ran a similar benchmark for time to first byte with a Heroku site behind Cloudflare, and since the Heroku edge locations varied wildly, latency varied wildly as well. CloudFront in front of an S3 bucket in the same region would likely be really fast. That said, when you get hit by a bot attack using your registration form to spam QQ emails, you'll be putting Cloudflare in front anyway, and that might be worth a benchmark as well.

One thing I would improve: benchmark of hosting provider speed and then benchmark the CDNs in front of specific hosting providers. If your hosting gives you a static IP to connect to your CDN, the speed will be less varied.

Re: DDoS protection:

AWS includes their standard anti-DDoS support for free with most AWS services exposed to public including Cloudfront. https://aws.amazon.com/shield/getting-started/

It does not seem to be at the level Cloudflare offers. The AWS advanced tier (Shield Advanced) runs $3,000+ per month per organization, and there's a bit of a fox-guarding-the-henhouse incentive in the free tier: attacks lead to higher AWS bills, while proper protection is another expensive subscription.

Cloudflare doesn’t cache HTML pages by default, only static files (https://support.cloudflare.com/hc/en-us/articles/200172516). You need to specifically tell Cloudflare to cache HTML pages via page rules, etc. (https://support.cloudflare.com/hc/en-us/articles/202775670).

Cloudflare Workers don’t cache HTML pages by default either, only static files, unless you write the Worker to cache HTML pages itself.

So if they tested the default index.html page, it would have been a Cloudflare cache miss/bypass, which may explain their results.
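One way to verify this is to look at the cf-cache-status response header Cloudflare attaches; the status values below are Cloudflare's documented ones, but the helper itself is just an illustrative sketch:

```python
# Sketch: interpret Cloudflare's cf-cache-status response header.
# "DYNAMIC" is what you typically see for HTML with no page rule,
# meaning the request was not eligible for caching at all.

SERVED_FROM_CACHE = {"HIT", "STALE", "UPDATING", "REVALIDATED"}
NOT_FROM_CACHE = {"MISS", "EXPIRED", "BYPASS", "DYNAMIC", "NONE"}

def served_from_edge_cache(headers: dict) -> bool:
    """Return True if the response came out of Cloudflare's cache."""
    status = headers.get("cf-cache-status", "").upper()
    return status in SERVED_FROM_CACHE

print(served_from_edge_cache({"cf-cache-status": "HIT"}))      # True
print(served_from_edge_cache({"cf-cache-status": "DYNAMIC"}))  # False
```

If the benchmark's responses all came back DYNAMIC or MISS, it was measuring origin latency, not Cloudflare's edge.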

A couple of months ago I did a similar benchmark of static hosts. Instead of relying on Pingdom I used Pulse by TurboBytes (which now seems dead), so the results came from end-user networks all over the world.


I put Cloudflare in front of my portfolio, which is hosted on GitHub Pages, so this made me do a bit of a spit take. I always just assumed that GH Pages was a sort of “free perk for using GitHub” and that it must surely be hosted on “meh” infrastructure that would benefit from having a CDN in front. This article basically says that analysis is upside down.

I’ll need to see more benchmarks with a range of methodologies before I go in and redo how my portfolio is deployed, but this sure got me started searching.

Github Pages uses Fastly (https://www.fastly.com/customers/github) as a CDN, which is pretty comparable to Cloudflare AFAIK.

Rob Pike's rules are still as relevant as ever, also for web dev:

Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.

Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.


I also looked into removing Cloudflare from my GitHub Pages-hosted website, but I need IPv6 support, and GitHub Pages doesn’t have that (1). Cloudflare proxies those requests to IPv4, so my website is accessible to everyone in the world.

(1) https://github.community/t/cannot-reach-any-github-io-page-v...

GitHub Pages had IPv6 for a while in 2017[1]; they removed it after changing the IPs to ones of their own.

They have been dragging their feet on enabling IPv6 even on github.com, and it continues to be a headache when working on an IPv6-only server.


I'd like to see a comparison among Netlify, Vercel, and Render.com as they all offer free hosting for static sites with similar usability.

It's a growing service category (free hosting for static sites deployed from GitHub). Are there others (besides GitHub Pages) I don't know about?

Shameless plug for my static website service: https://perspect.com/

Worth mentioning: my service is also wire-compatible with WordPress mobile clients.

There is also fast.io, from MediaFire. What I like is their Dropbox integration: your Dropbox folder becomes a website with a few clicks.

Neocities has a CLI that can be used with static generators like Jekyll (used by Github Pages).

GitLab Pages can use a GitHub repository as a source as well.

Cloudflare's slowness is extremely surprising to me. From my house in Greece, I get a one-millisecond ping to Cloudflare (devices in my house have higher pings), so I was assuming that would carry over to their caches.

Too bad about that, looks like I'm going to have to rethink my setup of having them cache my site on their edge.

You can use https://cloudflare-test.judge.sh/ to check from which Cloudflare nodes the sites are actually served.

In my experience, sites on Enterprise plan are always loading from the closest node, which is never used by sites on Free plan.

What’s funny is that for me in NYC metro area, all of the Free and Enterprise ones come from the closest location to me (EWR), but the Pro/Business ones are a wide mix, some EWR, some ORD, some YUL. The ones on the Toronto hosted pop are by far the longest ping time, yet the owners of said accounts seemingly paid for the privilege of slower response times. What am I missing?

> ...so I was assuming that would carry over to their caches.

I see mixed results with Cloudflare (Pro plan): sometimes the load times are as low as 50ms, at other times as high as 250ms; both for multiple hours on end, even when I am connecting to the same co-location.

I gave AWS CloudFront a try as well, and to my surprise their load times held consistently below 40ms throughout. The other thing is that, quite unfortunately, CloudFront turns out to be very expensive for my workloads.

I get 8ms even to my WiFi router, how can you have such a low response time?

My time is 20ms (Romania).

> I get 8ms even to my WiFi router, how can you have such a low response time?

That’s almost assuredly a sign of radio congestion on the channel you are on causing retransmits. It could also be a poor-quality AP, or you could have too many clients on your AP, but based on experience it’s most likely the first one (in-band interference causing lots of retransmits).

Yeah, I don't get it either. Even wired I don't think I've ever seen a ping time of under 10ms to anything on the broader Internet here in the UK and I have both FTTC and cable connections. POPs seem to be weird in the UK generally though, I often get geolocated to somewhere 100+ miles away.

No idea, bro. I use a wired desktop for the test, though, so WiFi latency isn't a consideration.

I get 10 ms on RO/RDS, wired. I blame their idiotic PPPoE.

I'm getting single digits from Finland:

    $ ping
    PING ( 56 data bytes
    64 bytes from icmp_seq=0 ttl=58 time=3.766 ms
    64 bytes from icmp_seq=1 ttl=58 time=4.120 ms
    64 bytes from icmp_seq=2 ttl=58 time=4.636 ms
    64 bytes from icmp_seq=3 ttl=58 time=3.587 ms
    64 bytes from icmp_seq=4 ttl=58 time=5.370 ms
    64 bytes from icmp_seq=5 ttl=58 time=3.286 ms
    64 bytes from icmp_seq=6 ttl=58 time=4.084 ms
    64 bytes from icmp_seq=7 ttl=58 time=3.766 ms
    64 bytes from icmp_seq=8 ttl=58 time=3.548 ms
    64 bytes from icmp_seq=9 ttl=58 time=6.760 ms
    64 bytes from icmp_seq=10 ttl=58 time=3.667 ms
    --- ping statistics ---
    11 packets transmitted, 11 packets received, 0.0% packet loss
    round-trip min/avg/max/stddev = 3.286/4.235/6.760/0.973 ms
And this is on wifi.
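For reference, ping's summary line can be recomputed from the per-packet times above (its stddev is the population standard deviation); a quick sketch:

```python
import statistics

# Round-trip times (ms) from the ping run above
rtts = [3.766, 4.120, 4.636, 3.587, 5.370, 3.286,
        4.084, 3.766, 3.548, 6.760, 3.667]

summary = {
    "min": min(rtts),
    "avg": round(statistics.fmean(rtts), 3),
    "max": max(rtts),
    "stddev": round(statistics.pstdev(rtts), 3),  # population stddev
}
print(summary)  # {'min': 3.286, 'avg': 4.235, 'max': 6.76, 'stddev': 0.973}
```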

I get ~2.2 ms on RDS, wired as well. I guess it depends on the area and how lucky you are with the equipment they use.

Frankly 10 ms is good enough.

It depends on how close CF's edge is to you; a normal traceroute from my home router is 7 hops, but if I'm on VPN, it's just 2 hops from the VPN exit.

That seems to imply that you are within about 100km of something on cloudflare's edge.

Yes, I have a suspicion they have an edge node in my city.

They publish a list of their POPs: https://www.cloudflare.com/network/

Ah, thanks! Yep, there's one here.

Seems like a worthy correction/addition to your top post that you have a lower ping to a CF pop in your city over cables than to in-house devices over WiFi. Note that it doesn’t serve your websites, though; you’ll have to ping a hosted page.

He obviously understands that, hence the “I was assuming that would carry over to their caches.”

Vaguely related: we use AWS S3 to poll for changes in a file from dozens of deployed devices regularly (polling is not the smartest way...). We get failures retrieving the file header many times a week. One could guess it's a connectivity problem on the device; most are on cellular networks, but the problem also occurs from devices connected to one of the top US universities' networks.

The devices send an alarm right away via AWS SNS when it happens, and we receive them, so we are forced to believe that S3's reliability is not that high: if it really were a general connectivity issue, we wouldn't get the alarm either. There is (light) retry for the S3 access, but none for the SNS alarm. These are authenticated accesses, so more points of failure are involved on the S3 side. We use several regions relatively local to the devices, and it happens in most if not all of them.

Am I understanding this right: you get several read failures per week from various devices trying to access a rarely/infrequently updated S3 object?

(Ouch if this is right)

Exactly. The error from

  aws s3api get-object --if-none-match ...

  Could not connect to the endpoint URL: "https://foo.s3.eu-central-1.amazonaws.com/dir1/dir2/bar.json"
At the "same" time SNS works.
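Since the SNS alarm gets through while the S3 call fails, a heavier retry with exponential backoff around the GetObject call would likely mask most of these transient connect failures. A minimal sketch (the backoff parameters and the `fetch_config` stand-in are illustrative, not from the thread):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.5):
    """Call fn(), retrying with exponential backoff on ConnectionError."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky endpoint: fails twice, then succeeds, like the
# intermittent "Could not connect to the endpoint URL" errors above.
calls = {"n": 0}
def fetch_config():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Could not connect to the endpoint URL")
    return {"status": 200}

print(with_retries(fetch_config, base_delay=0.01))  # {'status': 200}
```

In the real setup, `fn` would wrap the boto3/aws-cli GetObject call; the key point is that a couple of spaced-out retries turn "many failures a week" into an occasional log line.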

How was Cloudflare set up on the Workers and CDN side? If it’s not set to cache everything, the origin server will still need to be contacted, and each edge will need to contact the origin when its cache expires (the default TTL is 4 hours). This feels like it’s configuration-related. CF is very fast in my experience, but I haven’t done similar tests, so I can’t know for sure.

I would like to add Google App Engine CDN to the list. It is easy to use with Single Page Apps and also works with regular static websites. We have both use cases in the company I'm working at, and it works fine.

However custom domains (i.e. not appspot.com) have an increased latency that can be a huge issue in some regions, like Oceania.

Netlify results seem disappointing, I say that as a happy user of their service.

Netlify is an outlier in this comparison. For them, static site hosting is their core product, bread and butter if you will. To have such mediocre performance, as well as a relatively bad time to first byte is quite surprising.

This article compares it to CDN services, which specialize in static file hosting. Static site hosting is just static file hosting with a different marketing strategy and behind-the-scenes workflow.

The real disappointment here is Cloudflare, which manages to be among the slowest despite their supposed focus on performance.

Google Cloud Storage manages to be significantly faster than Cloudflare, while serving files publicly is "just a feature" (like for S3, which trails the pack).

How about both being a disappointment? Cloudflare because they are supposed to be all about performance, yet these benchmarks make clear that doesn't automatically mean their performance is actually good.

And Netlify is also a disappointment because they are supposed to be all about hosting static websites, and their performance is poor for doing just that.

I've had my fair share of issues with Cloudflare, but I'm pretty sure something is way off with the numbers in this post. My guess is that caching was off.

I recently moved my personal site from Netlify to GitHub Pages, just to be dependent on one less service—it was already being built from GitHub anyway.

Anecdotally I thought it seems snappier, so it’s cool to see that backed up.

I’m pretty amazed by what you get for free on GitHub Pages. I use the Eleventy static-site generator, and I’m using Actions to automatically rebuild my site every time I push. Works really slick.

It is kind of strange to compare AWS CloudFront (a CDN) with plain GCP buckets (an S3-like service). It would have been interesting to put a GCP load balancer in front of the bucket, because then you can enable the CDN option; that would be an equal setup to CloudFront.

I'd be interested to see comparisons with the "S3-alike" services like Wasabi [1] and BackBlaze's B2 [2]. Their selling point is that they're a lot cheaper, so performance comparisons would be interesting.

1: https://wasabi.com 2: https://www.backblaze.com/b2/cloud-storage.html

I don't think it's a good idea to serve a static website out of a Wasabi bucket - they might suspend your account if it receives any significant traffic.

Their pricing model is to be cheap by not applying egress bandwidth charges, on the understanding that egress will be low compared to the storage used.

The pricing FAQ says "If your monthly downloads (egress) are greater than your active storage volume, then your storage use case is not a good fit for Wasabi’s free egress policy", and "If your storage use case exceeds the guidelines of our free egress policy on a regular basis, we reserve the right to limit or suspend your service" [1].

[1]: https://wasabi.com/cloud-storage-pricing/pricing-faqs/

Thanks, that's very interesting!

A CDN doesn't actually host your website. Basically it caches your website's content and can help improve performance for your visitors. Even though Cloudflare is the most widely used free CDN, there are other CDNs you might want to look at: https://www.nets4.com/2020/07/free-cdn-providers.html

CDN providers often provide static website hosting as a separate service though. Cloudflare offers "Workers Sites".

So it seems like this is a complicated measurement of latency. The CDNs should win where they have a nearby endpoint.

A comparison of the additional latency (due to slow webservers etc.) seems like a more relevant thing to measure if we're comparing the services themselves, e.g. something like time to first byte MINUS network latency. Otherwise you already had your answer with the ping times.
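A sketch of that subtraction, assuming you've measured both a plain RTT (e.g. via ping) and time to first byte per provider; all the numbers here are made up for illustration:

```python
# Separate network latency from server-side overhead:
# overhead = time-to-first-byte - round-trip-time.
# (Rough: on a warm, already-established connection, one RTT
# is the theoretical floor for TTFB.)

def server_overhead_ms(ttfb_ms: float, rtt_ms: float) -> float:
    """Approximate time spent by the server itself, not the network."""
    return max(ttfb_ms - rtt_ms, 0.0)

# Hypothetical measurements: a nearby CDN edge vs. a distant origin.
samples = {
    "cdn-edge":    {"ttfb": 45.0, "rtt": 20.0},
    "distant-vps": {"ttfb": 160.0, "rtt": 140.0},
}
for name, s in samples.items():
    print(name, server_overhead_ms(s["ttfb"], s["rtt"]))
# cdn-edge 25.0
# distant-vps 20.0
```

In this made-up example the distant VPS has the worse TTFB but the faster server, which is exactly the distinction raw TTFB numbers hide.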

It would be interesting to know how Vercel stands.

Me too. I've uploaded the same website to pretty much every static host listed, and anecdotally, Vercel seems to get me pages noticeably faster.

I'd love to know whether that's because of my specific geography and whether it holds up globally.

I happened to monitor a vercel app using Sematext Synthetics for last 7 days. These are results - https://gist.github.com/sivasamyk/44645fbdf9798ead68360e18ad...

All measurements are made in AWS Lambda env from Frankfurt Germany.

Full disclosure - I work for Sematext

And to run the tests using updown.io instead of Pingdom. It has a great API for this kind of thing.

Cloudflare in front of S3 and Google Cloud Storage for me. Interesting that it's so slow. I did some CF performance tuning while I had a page on HN and found the tweaks improved the outcome hugely (getting the caching mechanism to actually kick in properly), so I'd want to check the cache hits and misses. Apologies if I missed that in the article.

Thanks for the data. I recently moved https://app.qvault.io from GH pages to Netlify. I liked the simplicity of pages, but Netlify had SSR features that I really needed :P

Was the page actually served from cache for Cloudflare CDN? Cloudflare does not cache html by default and the post does not include the configuration that was used or the response headers.

+1 Pretty cool analysis. Would be interesting to see how load also affects these numbers (e.g. connections/second).

Love it. Could you please include Fastly in your benchmark?

> The services were probed once every minute for 10 days

If a page is being hit 1,440 times a day for more than a week, without any content changes, wouldn't it end up being served from a caching server at the bigger hosts, with very little chance that it ever gets flushed from the cache? That's valid to test if that's the experience you want to measure, but if a website is only getting a few hundred views a week, you could see very different results in the real world.

That's an odd complaint. We are talking about static website hosting: stuff that only changes once per deployment. If your static site's cache expires after less than one minute, then the static site hosting provider is just bad at its job and can't even get very basic settings right.

A low expiry only makes sense when you are caching dynamic content like Hacker News or Reddit: you want caching because Reddit is a high-traffic site, but you also don't want your expiration time to delay content updates too much.
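To make the distinction concrete, typical Cache-Control headers for the two cases might look like this (illustrative values, not from the article):

```
# Deploy-versioned static assets: cache essentially forever
Cache-Control: public, max-age=31536000, immutable

# HTML of a static site: long edge TTL, refreshed on deploy
Cache-Control: public, max-age=3600

# Dynamic content (news front page): very short TTL
Cache-Control: public, max-age=30
```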
