
Very good info. Didn't know B2 is cheaper than S3.

It’s cheap but it’s proving unacceptably slow for me - sometimes I see 2.5s TTFB for accessing tiny audio files in my region (Berlin, EU). Server uploads are also quite unreliable, had to write a lot of custom retry logic to handle 503 errors (~30% probability when uploading in batch).
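For anyone facing the same 503s, the custom retry logic can be as simple as exponential backoff with jitter. A minimal sketch — `upload` and `TransientError` here are hypothetical stand-ins for whatever B2 client call and 503 exception you actually use:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for an HTTP 503 from the storage API."""

def upload_with_retry(upload, *, max_attempts=5, base_delay=1.0):
    """Call `upload()` and retry transient failures with exponential backoff.

    `upload` is any callable that raises TransientError on a retryable
    failure (e.g. a 503). Delays grow as 1s, 2s, 4s, ... plus jitter.
    """
    for attempt in range(max_attempts):
        try:
            return upload()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            delay = base_delay * (2 ** attempt)
            time.sleep(delay + random.uniform(0, delay))  # backoff + jitter
```

With a ~30% per-request failure rate, 5 attempts bring the chance of a batch item failing outright down to roughly 0.3^5 ≈ 0.2%.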

Great for its intended use (backups), but I’ll be switching to an S3-compatible alternative soon - eyeing Digital Ocean Spaces or Wasabi...

Wasabi is great. Same price but much more usable because it follows the S3 API. All cloud storage tools work seamlessly with it.

B2's API is frustrating to use and has limited compatibility, and also throws errors that need to be constantly handled, as you found.

Wasabi also has a free egress plan, as long as you don't download more than your total stored data per month.

Wasabi is not the same price as B2.

B2 is:

- 0.5 cents/GB/mo
- 1 GB/day free egress, 1 cent/GB after
- generous free API call allowances, cheap after that

Wasabi is:

- $0.0059/GB/mo (18% higher)
- all storage billed for at least 90 days
- minimum of $5.99 per month (this doesn't include delete penalties)
- all objects billed as at least 4 KB
- free egress as long as usage is "reasonable"
- free API requests
- overwriting a file counts as a delete, i.e., incurs delete penalties


With HashBackup (I'm the author), an incremental database backup is uploaded after every user backup, and older database incrementals get deleted. Running simulations with S3 IA (30-day delete penalties), the charges were 19 cents/mo vs 7 cents/mo for regular S3, even though regular S3 is priced much higher per GB. So for backups to S3, HashBackup stores the db incrementals in the regular S3 storage class even if the backup data is in IA.

For Wasabi, there is no storage class that doesn't have delete penalties, and theirs are for 90 days instead of 30.

It used to be $0.0049 for the free-egress plan, so that's changed. They do have lower storage pricing if you are on a paid-egress plan, which is the same price as Backblaze.

Either way, Wasabi is about simplicity and doesn't have any concept of storage classes. It's true that there's a 90-day min storage fee involved but that's only an issue if you're deleting constantly.
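To make the minimum-duration billing concrete, here's a back-of-the-envelope sketch. The prices are the ones quoted in this thread; the function and its parameters are illustrative, not any provider's actual billing code:

```python
def billed_gb_months(size_gb, stored_days, min_days):
    """GB-months billed when a provider enforces a minimum storage duration.

    Objects deleted before `min_days` are still billed as if they had been
    stored for the full minimum - that's the "delete penalty".
    """
    return size_gb * max(stored_days, min_days) / 30.0

# A 100 GB backup increment deleted after only 10 days:
b2_like = billed_gb_months(100, 10, 0) * 0.005       # no minimum duration
wasabi_like = billed_gb_months(100, 10, 90) * 0.0059  # 90-day minimum
```

Under these assumed prices, the short-lived object costs roughly 10x more on the 90-day-minimum plan despite the similar per-GB rate - which is exactly why constant churn (like rotating backup incrementals) is the case to watch for.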

Those stats sound insane to me, and certainly don't reflect what I see.

I see 50ms or less TTFB, for images in the sub-200 KB range and for videos in the 500 MB+ range, from Australia where the internet is still terrible.

I've only ever had a single server upload fail me - and it occurred when an upload hit a major global infrastructure outage. In two years of regularly uploading 8 GB / 200 files a fortnight (at the least), I've never needed custom retry logic.

If you are seeing 50ms TTFB between B2’s only datacenter (in California) and Australia, there is something wrong with your methodology or you have discovered FTL communication.

I've been seeing pretty bad upload failures (probably around 30%) for uploading hundreds of 30-40 MB files per month to B2 from New Zealand since I started using B2 over a year ago.

And I'm not convinced it's connectivity issues, as I can SCP/FTP the same files to servers in the UK...

When I test using an actual software client (Cyberduck) to do the same thing to B2, I see pretty much the same behaviour: retries are needed, and the total upload size (due to the retries) is generally ~20% larger than the size of the files.

Interesting. I have a webm media website where I've migrated hundreds of thousands of videos about that size from S3 to B2, with thousands more added per month, with almost zero issues. I didn't even have/need retry logic until I was on horrible internet at a beach for a month, where long connections were regularly dropped locally.

Felt TTFB and download speed were great too, considering the massive price difference compared to S3. Though I also used Cloudflare Workers anyway to redirect my URLs to my B2 bucket, with caching.

How well can you cache the worker responses on CF? Can you prevent a worker from spinning up (and therefore incurring costs) after the first request for a given unique URL has been handled? Looking into now.sh for a similar use case (audio), but pondering how to handle caching in a bulletproof way, as I'm afraid of sudden exploding costs with "serverless" lambdas...

Had a similar experience with B2, Wasabi was much faster in my testing.

You're very welcome - I'm glad it was helpful. B2 is significantly cheaper than S3, especially when paired with Cloudflare for free bandwidth. If you're interested, my company Nodecraft talked about a 23TB migration we did from S3 to B2 a little while ago: https://nodecraft.com/blog/development/migrating-23tb-from-s...

How do the egress/ingress bandwidth costs compare? Outgoing bandwidth is expensive on AWS.

It's entirely free if you only use the Cloudflare-routed URLs thanks to the Bandwidth Alliance: https://www.cloudflare.com/bandwidth-alliance

How does CloudFlare themselves afford to give bandwidth for free? I understand that I can pay $20/mo for pro account but they also have a $0/mo option with fewer bells and whistles. What gives them the advantage to charge nothing for bandwidth?

Because we’re peered with Backblaze (as well as AWS). There’s a fixed cost of setting up those peering arrangements, but, once in place, there’s no incremental cost. That’s why we have agreements similar to the Backblaze one in place with Google, Microsoft, IBM, Digital Ocean, etc. It’s pretty shameful, actually, that AWS has so far refused. When using Cloudflare, they don’t pay for the bandwidth, and we don’t pay for the bandwidth, so why are customers paying for the bandwidth? Amazon pretends to be customer-focused. This is a clear example where they’re not.

Maybe Amazon is afraid of sabotaging Cloudfront and losing revenue coming from outgoing data transfers?

Thank you for clarifying. If I were to use a Google cloud service from a Cloudflare Worker would there be no bandwidth charges? That would change everything for us.

Bandwidth between GCP and Cloudflare isn't free, unlike with Backblaze, but the cost is reduced.


As AWS is the primary cash cow for Amazon, I doubt they would ever change that. Bandwidth fees are a key profit maker for them. On the other hand, AWS's crazy bandwidth pricing is probably pretty beneficial in driving customers towards you guys.

Their terms indicate the service should be used primarily for caching HTML. So if they find costly abusers, they could use this clause to get them to upgrade to a paid tier.

> 2.8 Limitation on Non-HTML Caching The Service is offered primarily as a platform to cache and serve web pages and websites. Unless explicitly included as a part of a Paid Service purchased by you, you agree to use the Service solely for the purpose of serving web pages as viewed through a web browser or other application and the Hypertext Markup Language (HTML) protocol or other equivalent technology. Use of the Service for the storage or caching of video (unless purchased separately as a Paid Service) or a disproportionate percentage of pictures, audio files, or other non-HTML content, is prohibited.


Wasabi has it free and (AFAIK) is cheaper than S3 and B2.

According to their documentation, Wasabi will move you onto their $0.005/GB storage, $0.01/GB download plan (same price as B2, without API charges) from their $0.0059/GB storage, free-egress plan if you download more than your total storage (e.g. with 100 TB stored, don't download more than 100 TB/month).
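That policy boils down to a ratio check between egress and stored data. A hedged sketch of the comparison, using the prices quoted above (the function names and plan logic are mine, not Wasabi's):

```python
def qualifies_for_free_egress(stored_gb, monthly_egress_gb):
    """Per the policy described above: free egress holds as long as
    monthly downloads don't exceed total stored data."""
    return monthly_egress_gb <= stored_gb

def monthly_cost(stored_gb, monthly_egress_gb):
    """Rough monthly cost under the two plans quoted in this thread
    (ignoring minimums and delete penalties)."""
    if qualifies_for_free_egress(stored_gb, monthly_egress_gb):
        return stored_gb * 0.0059                        # free-egress plan
    return stored_gb * 0.005 + monthly_egress_gb * 0.01  # paid-egress plan
```

For read-light workloads (backups, archives) the free-egress plan wins; once monthly egress exceeds storage, the $0.01/GB download charge starts to dominate.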


Any web service from amazon is usually on the pricey side of normal.

You get more features at a higher cost. Basically, B2 gives you storage, but not fine-grained access management, an endpoint internal to your VPC, or other extras.

It is, but I think they only have US datacenters.

For backups, media and archival use-cases it looks really good for the price if you can live with it being in the US.

If you are doing any large data processing using S3, you get the advantage of data locality; with VPC endpoints you can also bypass NAT gateway data charges and get much higher bandwidth.

> backups, media and archival use-cases it looks really good for the price if you can live with it being in the US

For these use cases S3 has lower pricing tiers (down to 0.1¢/GB-mo, matched by Azure and promised by GCP).

I think B2 is still only single-datacenter.

Same, that alone is a very useful takeaway for me :).
