This is a cool project and something I will probably use for some hobby projects.
I would caution against it for anything more than a hobby project as it violates the Cloudflare TOS:
> 2.8 Limitation on Non-HTML Caching
> The Service is offered primarily as a platform to cache and serve web pages and websites. Unless explicitly included as a part of a Paid Service purchased by you, you agree to use the Service solely for the purpose of serving web pages as viewed through a web browser or other application and the Hypertext Markup Language (HTML) protocol or other equivalent technology. Use of the Service for the storage or caching of video (unless purchased separately as a Paid Service) or a disproportionate percentage of pictures, audio files, or other non-HTML content, is prohibited.
For something small, they won't care. If your images make the front page of reddit, you might get shut down.
The main point of this article is to set up a Cloudflare cache-everything rule and use that caching to create a free image host. From the article:
> I'd heavily recommend adding a page-rule to set the "cache level" to "everything", and "edge cache TTL" to a higher value like 7 days if your files aren't often changing.
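For reference, here's a sketch of setting that up through the Cloudflare v4 API rather than the dashboard; the zone ID, API token, and hostname below are placeholders, not values from the article:

```typescript
const ZONE_ID = "your-zone-id";     // placeholder
const API_TOKEN = "your-api-token"; // placeholder

// Create a page rule matching the image host and apply the two settings
// the article recommends: cache everything, 7-day edge TTL.
async function createCacheRule(): Promise<void> {
  const resp = await fetch(
    `https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/pagerules`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        targets: [{
          target: "url",
          constraint: { operator: "matches", value: "img.example.com/*" },
        }],
        actions: [
          { id: "cache_level", value: "cache_everything" },
          { id: "edge_cache_ttl", value: 604800 }, // 7 days, in seconds
        ],
        status: "active",
      }),
    },
  );
  if (!resp.ok) throw new Error(`page rule creation failed: ${resp.status}`);
}
```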
The word on the street is that they will start throttling and contacting you once you hit several hundred TB per month. 
Of course this is still extremely generous, and the upgrade plans are usually still several orders of magnitude cheaper per GB than any cloud provider. But don't build a business or hobby project around Cloudflare providing unlimited free bandwidth forever.
(basically search HN for cloudflare + non-html)
Should Cloudflare later ban you for the practice, will the random support person you reach unpack the CEO's comments here, confirm that nothing changed internally to prevent your continued use, and advocate restoring your account for you?
If the perk saves you money, you put that money in savings. Once your budget expands to depend on that perk you are trapped, and when it goes away the pain will be noteworthy.
In words that are more applicable to a business case: You have to have a strategy for when the Too Good To Be True situation ends, because it will, and you have less control over when than you think you do.
Support could not help, and it took me months to empty a bucket that way.
There are lots of problem spaces where deletion is expensive, so it gets time-shifted to avoid peak system load. Some sort of reaper goes around tidying up as it can.
But I think by far my favorite variant is amortizing deletes across creates. Every call to create a new record pays the cost of deleting N records (if N are available). This keeps you from exhausting your resource, but also keeps read operations fast. And the average and minimum create time is more representative of the actual costs incurred.
Variants of this show up in real-time systems.
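A minimal sketch of that amortized-delete pattern (a toy in-memory store; the structure and the choice of N are made up for illustration):

```typescript
const DELETES_PER_CREATE = 2; // N: dead records reclaimed per insert

class AmortizedStore<V> {
  private records = new Map<string, V>();
  private pendingDeletes: string[] = []; // keys marked for later removal

  markDeleted(key: string): void {
    // O(1): just enqueue; the real work is paid for later by creates.
    this.pendingDeletes.push(key);
  }

  create(key: string, value: V): void {
    // Amortize: reclaim up to N dead records before inserting, so no
    // single operation ever pays for a huge backlog of deletes.
    for (let i = 0; i < DELETES_PER_CREATE; i++) {
      const dead = this.pendingDeletes.shift();
      if (dead === undefined) break;
      this.records.delete(dead);
    }
    this.records.set(key, value);
  }
}
```

With N >= 2 the backlog drains at least as fast as creates can add to it, so the resource can't be exhausted by deferred deletes.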
IMO Backblaze should have implemented an "Empty" button.
A single pass: paginate through all entries in the bucket without deleting anything, just to build up your index of files, then use that index to delete objects in parallel.
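Roughly like this, assuming B2's S3-compatible endpoint (the same two-phase shape works with the native b2_list_file_names / b2_delete_file_version calls; the endpoint and credentials are placeholders):

```typescript
import {
  S3Client,
  ListObjectsV2Command,
  DeleteObjectsCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({
  endpoint: "https://s3.us-west-002.backblazeb2.com", // placeholder region
  region: "us-west-002",
  credentials: { accessKeyId: "KEY_ID", secretAccessKey: "APP_KEY" },
});
const Bucket = "my-bucket";

async function emptyBucket(): Promise<void> {
  // Pass 1: paginate through every key, just building the index.
  const keys: string[] = [];
  let token: string | undefined;
  do {
    const page = await s3.send(
      new ListObjectsV2Command({ Bucket, ContinuationToken: token }),
    );
    for (const obj of page.Contents ?? []) keys.push(obj.Key!);
    token = page.NextContinuationToken;
  } while (token);

  // Pass 2: delete in parallel, 1000 keys per request (the API maximum).
  const batches: Promise<unknown>[] = [];
  for (let i = 0; i < keys.length; i += 1000) {
    const Objects = keys.slice(i, i + 1000).map((Key) => ({ Key }));
    batches.push(
      s3.send(new DeleteObjectsCommand({ Bucket, Delete: { Objects } })),
    );
  }
  await Promise.all(batches);
}
```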
> no way to empty a bucket.
Backblaze currently recommends you do this by writing a “Lifecycle rule” to hide/delete all files in the bucket, then let Backblaze empty the bucket for you on the server side in 24 hours: https://www.backblaze.com/b2/docs/lifecycle_rules.html
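The "empty the bucket" rule from those docs boils down to one small config (field names are from the linked page; the empty prefix matches everything):

```typescript
// Hide every file a day after upload, then delete it a day after hiding;
// Backblaze's servers do the actual work in the background.
const emptyBucketRule = {
  fileNamePrefix: "",            // applies to every file in the bucket
  daysFromUploadingToHiding: 1,  // hide files 1 day after upload
  daysFromHidingToDeleting: 1,   // delete hidden files 1 day later
};
```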
Great for its intended use (backups), but I'll be switching to an S3-compatible alternative soon - eyeing DigitalOcean Spaces or Wasabi...
B2's API is frustrating to use and has limited compatibility, and it constantly throws errors that need to be handled, as you found.
Wasabi also has a free egress plan, as long as you don't download more than your account's total stored data per month.
B2:

- 0.5 cents/GB/mo
- 1 GB/day free egress, 1 cent/GB after
- generous free API call allowances, cheap after that

Wasabi:

- $0.0059/GB/mo (18% higher)
- all storage billed for at least 90 days
- minimum charge of $5.99 per month
- this doesn't include delete penalties
- all objects billed for at least 4 KB
- free egress as long as it's "reasonable"
- free API requests
- overwriting a file is a delete, i.e., incurs delete penalties
With HashBackup (I'm the author), an incremental database backup is uploaded after every user backup, and older database incrementals get deleted. Running simulations with S3 IA (30-day delete penalties), the charges were 19 cents/mo vs 7 cents/mo for regular S3, even though regular S3 is priced much higher per GB. So for backups to S3, HashBackup stores the db incrementals in the regular S3 storage class even if the backup data is in IA.
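A toy version of that simulation, with parameters picked so the output matches the quoted figures (they're illustrative assumptions, not HashBackup's actual workload):

```typescript
const STD_PRICE = 0.023;  // S3 Standard, $/GB-month (us-east-1 era pricing)
const IA_PRICE = 0.0125;  // S3 Standard-IA, $/GB-month
const IA_MIN_DAYS = 30;   // IA bills every object for at least 30 days

// gbPerDay of db incrementals uploaded daily, each deleted after lifetimeDays.
function monthlyCost(
  gbPerDay: number, lifetimeDays: number, price: number, minDays = 0,
): number {
  const billedDays = Math.max(lifetimeDays, minDays); // the delete penalty
  return gbPerDay * 30 * price * (billedDays / 30);
}

console.log(monthlyCost(0.5, 6, STD_PRICE));             // ~$0.07 regular S3
console.log(monthlyCost(0.5, 6, IA_PRICE, IA_MIN_DAYS)); // ~$0.19 S3 IA
```

Even though IA's per-GB price is ~45% lower, the 30-day minimum makes short-lived objects cost nearly 3x more.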
For Wasabi, there is no storage class that doesn't have delete penalties, and theirs are for 90 days instead of 30.
Either way, Wasabi is about simplicity and doesn't have any concept of storage classes. It's true that there's a 90-day minimum storage fee involved, but that's only an issue if you're deleting constantly.
I see 50 ms or less TTFB, for images in the sub-200 KB range and videos in the 500 MB+ range, from Australia, where the internet is still terrible.
I've only ever had a single upload fail on me - and it occurred when the upload hit a major global infrastructure outage. In two years of regularly uploading 8 GB / 200 files a fortnight (at the least), I've never needed custom retry logic.
And I'm not convinced it's connectivity issues, as I can SCP/FTP the same files to servers in the UK...
When I test using an actual software client (Cyberduck) to do the same thing to B2, I see pretty much the same behaviour: retries are needed, and the total upload size (due to the retries) is generally ~20% larger than the size of the files.
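If you do end up rolling your own retry logic, the usual shape is exponential backoff with jitter. Just a sketch - Cyberduck and the official SDKs already do their own retrying, and uploadPart below is a stand-in for whatever B2 upload call you use:

```typescript
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts - 1) throw err; // out of attempts
      // Back off 1s, 2s, 4s, ... plus jitter so parallel uploads don't
      // all retry in lockstep.
      const delayMs = 1000 * 2 ** attempt + Math.random() * 1000;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// e.g. await withRetry(() => uploadPart(partNumber, bytes));
```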
TTFB and download speed felt great too, considering the massive price difference compared to S3. Though I also used Cloudflare Workers anyway to redirect my URLs to my B2 bucket with caching.
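Such a Worker can be tiny. A minimal sketch in the service-worker syntax (the bucket URL and TTL are placeholders, not my actual setup):

```typescript
addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  // Map the public path to the B2 file URL, e.g.
  // https://img.example.com/foo.png -> the bucket's friendly URL.
  const origin = `https://f002.backblazeb2.com/file/my-bucket${url.pathname}`;
  // Let Cloudflare cache the B2 response at the edge.
  return fetch(origin, {
    cf: { cacheEverything: true, cacheTtl: 604800 }, // 7 days
  });
}
```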
For backups, media and archival use-cases it looks really good for the price if you can live with it being in the US.
If you are doing any large data processing using S3 you get the advantage of data locality; with VPC endpoints you can also bypass NAT gateway data charges and get much higher bandwidth.
For these use cases S3 has lower pricing tiers (down to 0.1¢/GB-mo, matched by Azure and promised by GCP).
I wrote a simple uploader script that adds a random ID to each upload so they don't clash, but this will work fine regardless.
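The random-ID trick is essentially a one-liner; uploadToBucket below is a stand-in for whatever upload call you use:

```typescript
import { randomBytes } from "crypto";

// Prefix each upload with a random ID so two files with the same
// name can never clash in the bucket.
function uniqueKey(filename: string): string {
  return `${randomBytes(8).toString("hex")}-${filename}`;
}

// e.g. uniqueKey("cat.png") -> "3f9c2a1b7d4e8f06-cat.png"
// await uploadToBucket(uniqueKey("cat.png"), fileContents);
```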
You get 100,000 requests per month and up to 1,000 requests in a 10-minute window. So if you have a page with 10 images on it and 100 people visit that page within 10 minutes (10 × 100 = 1,000 requests), you will exhaust that window's allowance and all new visitors will get a 1015 error until it resets.
For paid plans you must pay at least $5/month, which includes 10 million requests; additional requests are 50 cents per million.
Google seems not to care.
Is this true? I can get 110 GB of cloud storage for about $0.55 per month (110 GB × $0.005/GB)? It sounds TGTBT.
One way they can achieve that pricing is by using consumer drives, instead of enterprise drives. See https://www.backblaze.com/blog/vault-cloud-storage-architect...
Backblaze is quite transparent about how they do things. They publish their drive reliability numbers (including brand/model numbers), storage pod design, and how their sharding/redundancy works.
Seems like most cloud storage vendors just say "We do object storage right *handwave* and we have lots of 9s". Backblaze says they shard your data into 20 pieces onto 20 servers and can recover with any 17 of those pieces. More details at https://www.backblaze.com/blog/reed-solomon/
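You can turn that 17-of-20 spec into a back-of-envelope durability number yourself. The per-shard failure probability here is an arbitrary assumption, not a Backblaze figure, and this ignores repair (rebuilding lost shards), which makes real durability far higher:

```typescript
function binomial(n: number, k: number): number {
  let r = 1;
  for (let i = 1; i <= k; i++) r = (r * (n - i + 1)) / i;
  return r;
}

// Data is lost only if more than `parity` of the n shards fail:
// P(loss) = 1 - sum_{k=0..parity} C(n,k) p^k (1-p)^(n-k)
function lossProbability(n: number, parity: number, p: number): number {
  let survive = 0;
  for (let k = 0; k <= parity; k++) {
    survive += binomial(n, k) * p ** k * (1 - p) ** (n - k);
  }
  return 1 - survive;
}

// 20 shards, survives any 3 losses, assumed 5%/shard failure rate
console.log(lossProbability(20, 3, 0.05)); // ~0.016 before any repair
```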
Sure, that's not enough redundancy for some, but at least you know what to expect and can plan accordingly. I've not seen any other cloud vendor do that. Please post URLs for similar info from other companies.
5x 1 TB for like 50 bucks. Also Skype minutes and Office software.
It's a much better deal than paying $80/year for 1 TB of OneDrive if you have 2+ users.