> Google Cloud’s regional and multi-regional buckets perform fairly alike. Interestingly, both are much faster than S3, which is a comparable service. Is Google doing some caching behind the scenes?
These are both incredibly complicated services and describing what they do as “caching” would be too much of a simplification.
Pinging it just now: 20ms, and since it's non-static, 50ms to fetch with curl. Fairly minuscule times.
I ran a similar benchmark for time to first byte with a Heroku site behind CloudFlare, and since the Heroku edge location varied wildly, latency varied wildly as well. CloudFront in front of an S3 bucket in the same location would likely be really fast. That said, when you get hit by a bot attack using your registration form to spam QQ emails, you'll be putting CloudFlare in front anyway, and that might be worth a benchmark as well.
One thing I would improve: benchmark hosting-provider speed on its own, then benchmark the CDNs in front of specific hosting providers. If your host gives you a static IP to connect to your CDN, the speed will vary less.
AWS includes their standard anti-DDoS support for free with most AWS services exposed to the public, including CloudFront. https://aws.amazon.com/shield/getting-started/
Cloudflare Workers don’t cache HTML pages by default either, only static files, unless you write the Worker to cache HTML pages explicitly.
So if they tested on the default index.html page, it would have been a Cloudflare cache miss/bypass, which may explain their results.
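One way to verify this (a hypothetical sketch; the states classified here are the documented cf-cache-status values) is to inspect the cf-cache-status response header on the page you benchmarked:

```python
def cloudflare_cache_state(headers):
    """Classify a response by its cf-cache-status header.

    HIT means the response came from Cloudflare's edge cache;
    MISS/EXPIRED mean the origin was consulted (and the result cached);
    BYPASS/DYNAMIC mean Cloudflare deliberately did not cache it --
    HTML is treated as DYNAMIC by default.
    """
    status = headers.get("cf-cache-status", "").upper()
    if status == "HIT":
        return "served from edge cache"
    if status in ("MISS", "EXPIRED"):
        return "fetched from origin, now cached"
    if status in ("BYPASS", "DYNAMIC"):
        return "fetched from origin, not cached"
    return "no cache information"

# e.g. a header captured with `curl -sI` against the benchmarked page:
print(cloudflare_cache_state({"cf-cache-status": "DYNAMIC"}))
```

If the benchmark page showed DYNAMIC or BYPASS, every timed request paid the full trip to the origin, which would indeed explain the numbers.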
I’ll need to see more benchmarks with a range of methodologies before I go in and redo how my portfolio is deployed, but this sure got me started searching.
Rule 1. You can't tell where a program is going to spend its time. Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you've proven that's where the bottleneck is.
Rule 2. Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest.
They have been dragging their feet on enabling IPv6, even on github.com, and it continues to be a headache when working from an IPv6-only server.
It's a growing service category (free hosting for static sites deployed from GitHub). Are there others (besides GitHub Pages) I don't know about?
Worth mentioning: my service is also wire-compatible with WordPress mobile clients.
Too bad about that, looks like I'm going to have to rethink my setup of having them cache my site on their edge.
In my experience, sites on the Enterprise plan always load from the closest node, which sites on the Free plan never get to use.
I see mixed results with Cloudflare (Pro plan): sometimes the load times are as low as 50ms, other times as high as 250ms, each for hours on end, even when I am connecting to the same co-location.
I gave AWS CloudFront a try as well, and to my surprise its load times held consistently under 40ms throughout. Unfortunately, for my workloads, CloudFront turns out to be very expensive.
My ping time to it is 20ms (Romania).
That’s almost assuredly a sign of radio congestion on the channel you are on causing retransmits. It could also be a poor-quality AP, or you could have too many clients on your AP, but based on experience it’s most likely the first one (in-band interference causing lots of retransmits).
$ ping 184.108.40.206
PING 184.108.40.206 (184.108.40.206): 56 data bytes
64 bytes from 184.108.40.206: icmp_seq=0 ttl=58 time=3.766 ms
64 bytes from 184.108.40.206: icmp_seq=1 ttl=58 time=4.120 ms
64 bytes from 184.108.40.206: icmp_seq=2 ttl=58 time=4.636 ms
64 bytes from 184.108.40.206: icmp_seq=3 ttl=58 time=3.587 ms
64 bytes from 184.108.40.206: icmp_seq=4 ttl=58 time=5.370 ms
64 bytes from 184.108.40.206: icmp_seq=5 ttl=58 time=3.286 ms
64 bytes from 184.108.40.206: icmp_seq=6 ttl=58 time=4.084 ms
64 bytes from 184.108.40.206: icmp_seq=7 ttl=58 time=3.766 ms
64 bytes from 184.108.40.206: icmp_seq=8 ttl=58 time=3.548 ms
64 bytes from 184.108.40.206: icmp_seq=9 ttl=58 time=6.760 ms
64 bytes from 184.108.40.206: icmp_seq=10 ttl=58 time=3.667 ms
--- 184.108.40.206 ping statistics ---
11 packets transmitted, 11 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.286/4.235/6.760/0.973 ms
(Ouch if this is right)
aws s3api get-object --if-none-match ...
Could not connect to the endpoint URL: "https://foo.s3.eu-central-1.amazonaws.com/dir1/dir2/bar.json"
Anecdotally I thought it seems snappier, so it’s cool to see that backed up.
I’m pretty amazed by what you get for free on GitHub Pages. I use the Eleventy static-site generator, and I’m using Actions to automatically rebuild my site every time I push. Works really slick.
However, custom domains (i.e. not appspot.com) have increased latency, which can be a huge issue in some regions, like Oceania.
Netlify is an outlier in this comparison. For them, static site hosting is the core product, their bread and butter if you will. Such mediocre performance, along with a relatively bad time to first byte, is quite surprising.
The real disappointment here is Cloudflare, which manages to be among the slowest despite their supposed focus on performance.
Google Cloud Storage manages to be significantly faster than Cloudflare, while serving files publicly is "just a feature" (like for S3, which trails the pack).
And Netlify is also a disappointment because they are supposed to be all about hosting static websites, and their performance is poor for doing just that.
Their pricing model is to be cheap by not applying egress bandwidth charges, on the understanding that egress will be low compared to the storage used.
The pricing FAQ says "If your monthly downloads (egress) are greater than your active storage volume, then your storage use case is not a good fit for Wasabi’s free egress policy", and "If your storage use case exceeds the guidelines of our free egress policy on a regular basis, we reserve the right to limit or suspend your service".
A comparison of the additional latency (due to slow webservers etc.) seems like a more relevant thing to measure if we're comparing the services themselves, e.g. something like time to first byte MINUS network latency. Otherwise you already had your answer with the ping times.
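A rough sketch of that idea (the numbers and the round-trip count are illustrative assumptions): subtract the round trips the transport itself needs from the measured time to first byte, and what remains is roughly the service's own contribution.

```python
def service_latency_ms(ttfb_ms, rtt_ms, round_trips=3):
    """Estimate latency attributable to the service itself.

    A fresh HTTPS request spends roughly one RTT each on the TCP
    connect, the TLS handshake, and the request/response exchange
    (~3 RTTs total, give or take TLS version and connection reuse);
    whatever is left of the time to first byte is server-side work.
    """
    return ttfb_ms - round_trips * rtt_ms

# Two hypothetical hosts with the same 20 ms ping but different TTFB:
for name, ttfb in [("host-a", 80), ("host-b", 250)]:
    print(name, service_latency_ms(ttfb, 20), "ms of server-side latency")
```

With the same 20 ms ping, host-a would be attributing 20 ms to the service and host-b 190 ms, which is the difference this kind of benchmark should be surfacing.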
I'd love to know whether that's because of my specific geography and whether it holds up globally.
All measurements are made in an AWS Lambda environment in Frankfurt, Germany.
Full disclosure - I work for Sematext
If a page is being hit 1440 times a day for more than a week, without any content changes, wouldn't it end up being served from a caching server at the bigger hosts, with very little chance that it'll ever get flushed from the cache? It's valid to test that if that's the experience you want to test, but if a website is only getting a few hundred views a week, you could see very different results in the real world.
Low expiry only makes sense when you are caching dynamic content like Hacker News or Reddit. You want caching because Reddit is a high-traffic site, but you also don't want content updates delayed because your expiration time is too long.
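To make that concrete (an idealized sketch assuming one shared cache and steady traffic): with TTL-based expiry, roughly one request per TTL window reaches the origin, so even a short expiry absorbs almost all traffic on a busy site while keeping content at most one TTL stale.

```python
def origin_fraction(req_per_sec, max_age_s):
    """Fraction of requests reaching the origin when a shared cache
    expires entries every max_age_s seconds: roughly one origin fetch
    per TTL window; everything else is a cache hit."""
    per_window = req_per_sec * max_age_s
    return 1.0 / per_window if per_window > 1 else 1.0

# A busy site at 100 req/s with only a 60-second expiry:
print(origin_fraction(100, 60))  # roughly one request in 6000 hits the origin
```

So a high-traffic site can cap staleness at a minute and still shed ~99.98% of its origin load, whereas a low-traffic site gains almost nothing from the same short TTL.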