
I've been seeing pretty bad upload failures (roughly 30% of attempts) when uploading hundreds of 30-40 MB files per month to B2 from New Zealand, ever since I started using B2 over a year ago.

And I'm not convinced it's a connectivity issue, as I can SCP/FTP the same files to servers in the UK without any trouble...

When I test the same uploads to B2 with a desktop client (Cyberduck), I see much the same behaviour: retries are needed, and the total bytes transferred (because of those retries) end up ~20% larger than the files themselves.




Interesting. I run a webm media website where I've migrated hundreds of thousands of videos of about that size from S3 to B2, with thousands more added per month, with almost zero issues. I didn't even have (or need) retry logic until I spent a month on horrible beach internet where long-lived connections were regularly dropped on my end.
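
For anyone who does need it, retry logic for this sort of thing can stay very small. A minimal sketch in TypeScript, assuming a hypothetical uploadOne() that performs a single upload attempt against B2 (native API, S3-compatible client, whatever you use); the attempt count and delays are placeholders:

    // Retry a single-attempt upload with exponential backoff plus jitter.
    // uploadOne is a hypothetical stand-in for one B2 upload attempt.
    async function uploadWithRetry(
      uploadOne: () => Promise<void>,
      maxAttempts = 5,
    ): Promise<void> {
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          await uploadOne();
          return; // success, stop retrying
        } catch (err) {
          if (attempt === maxAttempts) throw err; // out of attempts, give up
          // Back off 1s, 2s, 4s, ... plus up to 500 ms of jitter.
          const delayMs = 1000 * 2 ** (attempt - 1) + Math.random() * 500;
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
    }

    // Usage (putFileToB2 is hypothetical):
    // await uploadWithRetry(() => putFileToB2(bucket, key, body));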

TTFB and download speed felt great too, considering the massive price difference compared to S3. Though I also used Cloudflare Workers anyway to redirect my URLs to my B2 bucket, with caching.
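
For reference, a Worker doing that kind of rewrite-plus-caching could look something like the sketch below (service-worker syntax; BUCKET_HOST, BUCKET_PATH and the TTL are placeholders for a real B2 download host, bucket path and cache lifetime):

    // Placeholders for a real B2 "friendly" download URL
    // (e.g. fNNN.backblazeb2.com/file/<bucket>).
    const BUCKET_HOST = "f002.backblazeb2.com";
    const BUCKET_PATH = "/file/my-bucket";

    addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith(handle(event.request));
    });

    async function handle(request: Request): Promise<Response> {
      const url = new URL(request.url);
      // Map the public URL path onto the B2 bucket path.
      url.hostname = BUCKET_HOST;
      url.pathname = BUCKET_PATH + url.pathname;

      // cacheTtl/cacheEverything are Cloudflare-specific fetch options that
      // ask the edge to cache the object so repeat requests don't hit B2.
      return fetch(url.toString(), {
        cf: { cacheTtl: 86400, cacheEverything: true },
      });
    }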


How well can you cache the Worker responses on CF? Can you avoid spinning one up (and therefore incurring costs) after the first request for a given unique URL has been handled? I'm looking into now.sh for a similar use case (audio), but pondering how to handle caching in a bulletproof way, as I'm afraid of sudden exploding costs with "serverless" lambdas...
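
My current thinking is something like the sketch below (Workers Cache API, with the B2 URL and TTL as placeholders), though as far as I understand it the Worker still runs on every matching request, cache hit or not, so the per-request Worker cost applies either way; what the cache saves is the repeat fetch to B2:

    addEventListener("fetch", (event: FetchEvent) => {
      event.respondWith(handle(event));
    });

    async function handle(event: FetchEvent): Promise<Response> {
      const cache = caches.default; // Cloudflare's per-datacenter edge cache
      const cached = await cache.match(event.request);
      if (cached) {
        return cached; // served from the edge, no trip to B2
      }

      // Placeholder B2 download URL; rewrite however your paths are laid out.
      const originUrl =
        "https://f002.backblazeb2.com/file/my-bucket" +
        new URL(event.request.url).pathname;

      const response = await fetch(originUrl);

      // Make headers mutable, set an explicit TTL, and store a copy for next time.
      const toCache = new Response(response.body, response);
      toCache.headers.set("Cache-Control", "public, max-age=86400");
      event.waitUntil(cache.put(event.request, toCache.clone()));

      return toCache;
    }

Compared with the cf.cacheTtl option in the sketch above, the Cache API trades a bit more code for explicit control over what gets stored and for how long.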



