Great for its intended use (backups), but I’ll be switching to an S3-compatible alternative soon - eyeing DigitalOcean Spaces or Wasabi...
B2's API is frustrating to use and has limited compatibility; it also throws errors that constantly need to be handled, as you found.
Wasabi also has a free egress plan, as long as you don't download more than your total stored data per month.
B2:
- 0.5 cents/GB/mo
- 1 GB/day free egress, 1 cent/GB after
- generous free API call allowances, cheap after that

Wasabi:
- $0.0059/GB/mo (18% higher)
- all storage billed for at least 90 days
- minimum charge of $5.99 per month
- this doesn't include delete penalties
- all objects billed as at least 4 KB
- free egress as long as it's "reasonable"
- free API requests
- overwriting a file counts as a delete, i.e., delete penalties apply
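To make the storage line items above concrete, here's a quick back-of-the-envelope comparison for a hypothetical 1 TB held for a full month (egress, API calls, and delete penalties ignored):

```ts
// Back-of-the-envelope monthly storage cost for 1 TB, using the prices listed above.
// Purely illustrative; ignores egress, API calls, and delete penalties.
const gb = 1000;
const b2 = gb * 0.005;                      // B2: $0.005/GB/mo
const wasabi = Math.max(gb * 0.0059, 5.99); // Wasabi: $0.0059/GB/mo, $5.99 monthly minimum
console.log(`B2: $${b2.toFixed(2)}/mo, Wasabi: $${wasabi.toFixed(2)}/mo`);
// -> B2: $5.00/mo, Wasabi: $5.99/mo
```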
With HashBackup (I'm the author), an incremental database backup is uploaded after every user backup, and older database incrementals get deleted. Running simulations with S3 IA (30-day delete penalties), the charges were 19 cents/mo vs 7 cents/mo for regular S3, even though regular S3 is priced much higher per GB than IA. So for backups to S3, HashBackup stores the db incrementals in the regular S3 storage class even if the backup data is in IA.
For Wasabi, there is no storage class that doesn't have delete penalties, and theirs are for 90 days instead of 30.
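As a rough illustration of why the delete penalties dominate here (this is not HashBackup's actual billing model, just a sketch with made-up object sizes and roughly current list prices), you can model each incremental as being billed for at least the minimum duration of its storage class:

```ts
// Sketch: monthly cost of daily db incrementals that are deleted after `retainDays`,
// under a storage class with a minimum billed duration ("delete penalty").
// Sizes and prices below are illustrative assumptions, not HashBackup's real numbers.
function monthlyCost(
  objectGB: number,        // size of each incremental, in GB
  retainDays: number,      // days each incremental is kept before deletion
  pricePerGBMonth: number, // storage price in $/GB/mo
  minBillDays: number      // minimum billed duration: 0 (Standard), 30 (IA), 90 (Wasabi)
): number {
  const billedDays = Math.max(retainDays, minBillDays);
  // ~30 incrementals uploaded per month, each billed for `billedDays` days of storage
  return 30 * objectGB * pricePerGBMonth * (billedDays / 30);
}

// Example: 0.1 GB incrementals kept for 3 days before deletion.
console.log("S3 Standard:", monthlyCost(0.1, 3, 0.023, 0).toFixed(3));    // pays for 3 days
console.log("S3 IA:      ", monthlyCost(0.1, 3, 0.0125, 30).toFixed(3));  // pays for 30 days
console.log("90-day min: ", monthlyCost(0.1, 3, 0.0059, 90).toFixed(3));  // pays for 90 days
```

With short-lived objects, the minimum-duration charge swamps the lower per-GB price, which is the same effect as the 19¢ vs 7¢ result above.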
Either way, Wasabi is about simplicity and doesn't have any concept of storage classes. It's true that there's a 90-day minimum storage fee involved, but that's only an issue if you're deleting constantly.
I see 50 ms or less TTFB for images in the sub-200 KB range and for videos in the 500 MB+ range, from Australia, where the internet is still terrible.
I've only ever had a single upload fail on me - and it occurred during a major global infrastructure outage. In two years of regularly uploading 8 GB / 200 files a fortnight (at the least), I've never needed custom retry logic.
And I'm not convinced it's a connectivity issue, as I can SCP/FTP the same files to servers in the UK...
When I test using an actual software client (Cyberduck) to do the same thing to B2, I see pretty much the same behaviour: retries are needed, and the total upload size (due to the retries) is generally ~20% larger than the size of the files.
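For anyone wondering what "custom retry logic" looks like in practice, here's a generic sketch (not Cyberduck's or B2's actual implementation): retry a failed upload attempt with exponential backoff and a little jitter. `uploadOnce` is a hypothetical stand-in for whatever client call actually performs the upload.

```ts
// Generic retry-with-exponential-backoff sketch for an upload that can fail transiently.
// `uploadOnce` performs a single upload attempt and throws on failure (hypothetical).
async function uploadWithRetry(
  uploadOnce: () => Promise<void>,
  maxAttempts = 5,
  baseDelayMs = 500
): Promise<void> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await uploadOnce();
      return; // success
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the last attempt
      // Exponential backoff with jitter to avoid hammering the API in lockstep.
      const delay = baseDelayMs * 2 ** (attempt - 1) + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```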
Felt TTFB and download speed were great too, considering the massive price difference compared to S3. Though I also used Cloudflare Workers anyway to redirect my URLs to my B2 bucket with caching.
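Something like that Worker setup can be sketched as follows; the bucket URL binding and cache TTL here are assumptions, not the actual config (types assume @cloudflare/workers-types):

```ts
// Minimal sketch of a Cloudflare Worker that proxies requests to a B2 bucket
// and caches responses at the edge.
export interface Env {
  B2_BUCKET_URL: string; // e.g. "https://f000.backblazeb2.com/file/my-bucket" (placeholder)
}

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);
    const originUrl = env.B2_BUCKET_URL + url.pathname;

    // Serve from the edge cache when possible.
    const cache = caches.default;
    const cached = await cache.match(request);
    if (cached) return cached;

    // Otherwise fetch from B2, set a cache TTL, and store the result at the edge.
    const origin = await fetch(originUrl);
    const response = new Response(origin.body, origin);
    response.headers.set("Cache-Control", "public, max-age=86400");
    ctx.waitUntil(cache.put(request, response.clone()));
    return response;
  },
};
```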
> 2.8 Limitation on Non-HTML Caching
> The Service is offered primarily as a platform to cache and serve web pages and websites. Unless explicitly included as a part of a Paid Service purchased by you, you agree to use the Service solely for the purpose of serving web pages as viewed through a web browser or other application and the Hypertext Markup Language (HTML) protocol or other equivalent technology. Use of the Service for the storage or caching of video (unless purchased separately as a Paid Service) or a disproportionate percentage of pictures, audio files, or other non-HTML content, is prohibited.
For backup, media, and archival use cases, it looks really good for the price if you can live with it being in the US.
If you are doing any large data processing using S3, you get the advantage of data locality; with VPC endpoints you can also bypass NAT gateway data charges and get much higher bandwidth (see the sketch below).
For these use cases S3 has lower pricing tiers (down to 0.1¢/GB-mo, matched by Azure and promised by GCP).
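For what it's worth, the gateway VPC endpoint for S3 is a one-off piece of setup; a sketch with the AWS SDK for JavaScript v3 looks roughly like this (all IDs and the region are placeholders):

```ts
// Sketch: create an S3 gateway VPC endpoint so instances in the VPC reach S3 directly
// instead of routing through a NAT gateway. Gateway endpoints for S3 carry no hourly
// or per-GB charge. VPC ID, route table ID, and region are placeholders.
import { EC2Client, CreateVpcEndpointCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "us-east-1" });

await ec2.send(
  new CreateVpcEndpointCommand({
    VpcEndpointType: "Gateway",
    VpcId: "vpc-0123456789abcdef0",             // placeholder VPC ID
    ServiceName: "com.amazonaws.us-east-1.s3",
    RouteTableIds: ["rtb-0123456789abcdef0"],   // S3 routes get added to these tables
  })
);
```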