
Using Backblaze B2 and Cloudflare Workers for free image hosting - CherryJimbo
https://blog.jross.me/free-personal-image-hosting-with-backblaze-b2-and-cloudflare-workers/
======
InvaderFizz
Edit: See replies below. Cloudflare CEO says this use case is fine.

This is a cool project and something I will probably use for some hobby
projects.

I would caution against it for anything more than a hobby project as it
violates the Cloudflare TOS:

> 2.8 Limitation on Non-HTML Caching

> The Service is offered primarily as a platform to cache and serve web pages
> and websites. Unless explicitly included as a part of a Paid Service
> purchased by you, you agree to use the Service solely for the purpose of
> serving web pages as viewed through a web browser or other application and
> the Hypertext Markup Language (HTML) protocol or other equivalent
> technology. Use of the Service for the storage or caching of video (unless
> purchased separately as a Paid Service) or a disproportionate percentage of
> pictures, audio files, or other non-HTML content, is prohibited.

[https://www.cloudflare.com/terms/](https://www.cloudflare.com/terms/)

For something small, they won't care. If your images make the front page of
reddit, you might get shut down.

~~~
eastdakota
That’s only for our traditional service. For Workers the ToS is different.
Don’t see anything troubling about this project!

~~~
InvaderFizz
Workers are being used to do some URL rewriting.

The main point of this article is to use a Cloudflare cache-everything rule
and use that caching to create a free image host. From the article:

> I'd heavily recommend adding a page-rule to set the "cache level" to
> "everything", and "edge cache TTL" to a higher value like 7 days if your
> files aren't often changing.

~~~
steve19
The guy you replied to is the CEO of Cloudflare. If he says it's OK, then I'm
pretty sure it's OK!

~~~
eastdakota
^—- what he said.

~~~
InvaderFizz
In that case, thanks for the blessing. :)

~~~
zawerf
I am not saying not to trust the word of the CEO, but this exact use case of
using Cloudflare as an image host comes up a lot on HN.

The word on the street is that they will start throttling and contacting you
once you hit several hundred TB per month. [1][2][3][4][5][6]

Of course this is still extremely generous, and the upgrade plans are usually
still several orders of magnitude cheaper per GB than any cloud provider. But
don't build a business or hobby project around CF providing _unlimited_ free
bandwidth forever.

[1]
[https://news.ycombinator.com/item?id=20139191](https://news.ycombinator.com/item?id=20139191)

[2]
[https://news.ycombinator.com/item?id=19368684](https://news.ycombinator.com/item?id=19368684)

[3]
[https://news.ycombinator.com/item?id=13580113](https://news.ycombinator.com/item?id=13580113)

[4]
[https://news.ycombinator.com/item?id=12826389](https://news.ycombinator.com/item?id=12826389)

[5]
[https://news.ycombinator.com/item?id=5214480](https://news.ycombinator.com/item?id=5214480)

[6]
[https://news.ycombinator.com/item?id=19829740](https://news.ycombinator.com/item?id=19829740)

(basically search HN for cloudflare + non-html)

~~~
StavrosK
To be fair, I was expecting to be contacted for this unlimited service way way
waaay before hundreds of TB per month.

------
kaivi
Backblaze is cheap, but if you're uploading millions of files, beware -- there
is no way to just nuke/empty a bucket at the click of a button. If you're not
keeping filename references in an external database, you are left to
sequentially scan and remove files in batches of 1,000 in a single thread.

Support could not help, and it took me months to empty a bucket that way.

~~~
tty7
That doesn't really make any sense, Backblaze were not limiting you to a
single thread - you were...

~~~
kaivi
You do need access to an index/DB of all files in a bucket in order to delete
them in parallel. Otherwise you're stuck paginating with the B2 API.
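
For reference, that sequential paginate-and-delete loop looks roughly like this. It is a sketch of the B2 native API's pagination (b2_list_file_names returns at most 1,000 names per call plus a cursor; b2_delete_file_version removes one file version); `client` is a hypothetical wrapper around the authenticated HTTP calls, not a real SDK object:

```javascript
// Empty a bucket one page at a time. Each page yields up to 1000 files,
// and each file must be deleted with its own API call.
async function emptyBucket(client, bucketId) {
  let startFileName = null; // pagination cursor; null means "from the start"
  let deleted = 0;
  do {
    const page = await client.listFileNames(bucketId, startFileName, 1000);
    for (const f of page.files) {
      await client.deleteFileVersion(f.fileName, f.fileId);
      deleted++;
    }
    startFileName = page.nextFileName; // null once the listing is exhausted
  } while (startFileName !== null);
  return deleted;
}
```

With millions of files that is millions of serialized round trips, which is why it takes so long without an external index to fan the deletes out in parallel.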

~~~
hinkley
You need a DB of all of the _dead_ entries that need to be deleted, and that’s
a fine thing to have.

There are lots of problem spaces where deletion is expensive and so is
time-shifted so it doesn't align with peak system load. Some sort of reaper
goes around tidying up as it can.

But I think by far my favorite variant is amortizing deletes across creates.
Every call to create a new record pays the cost of deleting N records (if N
are available). This keeps you from exhausting your resource, but also keeps
read operations fast. And the average and minimum create time is more
representative of the actual costs incurred.

Variants of this show up in real-time systems.
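
A toy sketch of that amortization, with illustrative names (not from any particular system): deletions are queued cheaply, and every create reaps up to N of them.

```javascript
// Each create() pays the cost of up to `reapBatch` pending deletes, so
// cleanup work is spread evenly across writes instead of spiking.
class Store {
  constructor(reapBatch = 2) {
    this.records = new Map();
    this.deadQueue = []; // ids scheduled for deletion
    this.reapBatch = reapBatch;
  }
  markDead(id) {
    this.deadQueue.push(id); // cheap: just remember, don't delete yet
  }
  create(id, value) {
    // Amortized cleanup: reap up to N dead records, if any are available.
    for (let i = 0; i < this.reapBatch && this.deadQueue.length > 0; i++) {
      this.records.delete(this.deadQueue.shift());
    }
    this.records.set(id, value);
  }
}
```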

~~~
kaivi
My case was really simple. I was done with my ML pipeline and nuked the
database, but the pics in B2 remained, with no quick way to get rid of them or
to stop the recurring credit card charges.

IMO Backblaze should have implemented an "Empty" button.

------
ww520
Very good info. Didn't know B2 is cheaper than S3.

~~~
haywirez
It’s cheap, but it’s proving unacceptably slow for me - sometimes I see 2.5s
TTFB when accessing tiny audio files in my region (Berlin, EU). Server uploads
are also quite unreliable; I had to write a lot of custom retry logic to
handle 503 errors (~30% probability when uploading in batches).

Great for its intended use (backups), but I’ll be switching to an
S3-compatible alternative soon - eyeing DigitalOcean Spaces or Wasabi...
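
The kind of custom retry logic mentioned above can be sketched as a retry-with-exponential-backoff wrapper. `doUpload` is a hypothetical function that performs one upload attempt and returns its HTTP status; B2's own guidance is to treat 503 on upload as retryable (get a fresh upload URL, back off, try again):

```javascript
// Retry an upload a few times, with exponential backoff, whenever the
// server answers 503. Any other status is returned to the caller as-is.
async function uploadWithRetry(doUpload, maxAttempts = 5, baseDelayMs = 200) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await doUpload();
    if (status !== 503) return status; // success, or a non-retryable error
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
  throw new Error("upload failed after " + maxAttempts + " attempts");
}
```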

~~~
shakna
Those stats sound insane to me, and certainly don't reflect what I see.

I see 50ms or less TTFB, for images in the sub-200 KB range and for videos in
the 500 MB+ range, from Australia, where the internet is still terrible.

I've only ever had a single server upload fail me - and it occurred when an
upload hit a major global outage of infrastructure. In two years of regularly
uploading 8 GB / 200 files a fortnight (at the least), I've never needed
custom retry logic.

~~~
berkut
I've been seeing pretty bad upload failures (probably around 30%) for
uploading hundreds of 30-40 MB files per month to B2 from New Zealand since I
started using B2 over a year ago.

And I'm not convinced it's connectivity issues, as I can SCP/FTP the same
files to servers in the UK...

When I test using an actual software client (Cyberduck) to do the same thing
to B2, I see pretty much the same behaviour: retries are needed, and the total
upload size (due to the retries) is generally ~20% larger than the size of the
files.

~~~
hombre_fatal
Interesting. I have a webm media website where I've migrated hundreds of
thousands of videos of about that size from S3 to B2, with thousands more per
month, with almost zero issues. I didn't even have/need retry logic until I
was on horrible internet from a beach for a month, where long connections
were regularly dropped locally.

Felt TTFB and download speed were great too, considering the massive price
difference compared to S3. Though I also used Cloudflare Workers anyway to
redirect my URLs to my B2 bucket with caching.

~~~
haywirez
How well can you cache the Worker responses on CF? Can you prevent one
spinning up (and therefore incurring costs) after the first request for a
given unique URL is handled? Looking into now.sh for a similar use case
(audio), but pondering how to handle caching in a bulletproof way, as I'm
afraid of sudden exploding costs with "serverless" lambdas...
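
One caveat worth sketching here, not a definitive answer: on a Worker route, Cloudflare invokes the Worker (and counts the request) on every hit, so the invocation itself can't be skipped; what the Cache API does let you do is serve repeat requests from the edge cache instead of refetching the origin bucket. ORIGIN below is a hypothetical placeholder:

```javascript
const ORIGIN = "https://f000.backblazeb2.com/file/my-audio"; // placeholder

// Wrap an origin response with a long-lived Cache-Control header so both
// the edge cache and browsers hold onto it.
function withLongCache(response, maxAgeSeconds = 31536000) {
  const cached = new Response(response.body, response);
  cached.headers.set("Cache-Control", `public, max-age=${maxAgeSeconds}`);
  return cached;
}

async function handle(event) {
  const cache = caches.default; // Cloudflare's per-datacenter edge cache
  const hit = await cache.match(event.request);
  if (hit) return hit; // served from the edge, no trip to the bucket
  const url = new URL(event.request.url);
  const fresh = withLongCache(await fetch(ORIGIN + url.pathname));
  event.waitUntil(cache.put(event.request, fresh.clone()));
  return fresh;
}
```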

------
homero
I didn't realize workers had a free plan. I avoided trying it for a while.

~~~
StavrosK
They added it very recently.

------
StavrosK
Here's the script, edited to cache files forever:

[https://www.pastery.net/tdszux/](https://www.pastery.net/tdszux/)

I wrote a simple uploader script that adds a random ID to each upload so they
don't clash, but this will work fine regardless.

------
bitmedley
Using Cloudflare Workers to clean up the URLs is neat, but it seems like it
would be really easy to reach the limits of the free tier.

[https://workers.cloudflare.com/docs/reference/runtime/limits...](https://workers.cloudflare.com/docs/reference/runtime/limits/)

You get 100,000 requests per month and up to 1,000 requests in a 10-minute
timeframe. So if you have a page with 10 images on it and 100 people visit
that page within a 10-minute window, that's 1,000 requests - the entire burst
allowance - and all new visitors will get a 1015 error.

Paid plans start at $5, which includes 10 million requests; additional
requests are 50 cents per million.

~~~
CherryJimbo
You get 100,000 requests per day, not per month. The burst limits are
definitely a concern for heavy traffic, but for just $5 you can remove the
burst limits entirely, as you mention.

------
svnpenn
Wow, I was just able to drop Google Drive for basic storage with this and
rclone. Thank you!

~~~
anderspitman
Were you able to use Google Drive for web image hosting somehow?

~~~
rasz
images, pirated videos, you name it

[https://openloadmovies.bz/movies/men-in-black-international-2019/](https://openloadmovies.bz/movies/men-in-black-international-2019/)

Google seem not to care

~~~
svnpenn
I don't think it works:

[https://openloadmovies.bz/movies/blade-runner-2049-2017](https://openloadmovies.bz/movies/blade-runner-2049-2017)

~~~
rasz
The one I linked works fine and rides on Google Drive; other pirate streaming
sites often use YouTube as a backend, or even Google Docs!

------
hinkley
How are we feeling about Cloudflare stability versus AWS these days?

~~~
floatingatoll
They rarely (or never?) go down at the same time for any reason, other than
the standard Internet BGP drama that all providers are at risk of and have no
control over.

------
Markoff
> Backblaze has a 10GB free file limit, and then charges $0.005/GB/Month
> thereafter.

Is this true? I can have 110 GB of cloud storage for $0.50 per month? It
sounds TGTBT.

~~~
bufferoverflow
1TB OneDrive is $80/year ($6.66/month). So 110GB is $0.73/month. Not that far
from 50 cents.

~~~
Havoc
Protip - get an O365 Home instead.

5x 1TB for like 50 bucks. Also Skype minutes and office software.

~~~
yowmamasita
that's 50 bucks per month vs 80 bucks per year

~~~
dragonwriter
O365 Home is $99.99/yr (not $50/mo) and allows up to 5 users, each of whom
gets their own 1TB OneDrive allotment, evergreen desktop and mobile Office
software, Skype minutes, etc.

It's a much better deal than paying $80/year for 1TB of OneDrive if you have
2+ users.

------
ebg13
Ironically, none of the images on the page are loading right now because of
"Error 1101 Worker threw exception". So, you know, caveat emptor.

~~~
CherryJimbo
That was entirely my bad. I was moving a few things around.

------
gozzoo
From what I'm seeing, Workers are used only for URL rewriting. This could be
achieved much more simply with page rules.

~~~
CherryJimbo
Workers are also used for basic CORS headers, and stripping some other
unnecessary headers. They're definitely not required, but I don't believe you
can do URL rewriting with page rules; redirects, sure, but not rewriting.

------
jasonlingx
Some possibly simpler alternatives to consider: Google Photos, Netlify,
Gitlab/Github Pages

~~~
techntoke
There are actually a couple of pretty serious limitations to the Google
Photos API:

[https://tip.rclone.org/googlephotos/#limitations](https://tip.rclone.org/googlephotos/#limitations)

------
abafazi
Or just use Imgur, or Discord

~~~
bufferoverflow
Imgur deletes photos after a while.

~~~
CherryJimbo
And compresses them heavily.

