
I use Backblaze now and once I get my NAS, I'll probably end up using a B2-based backup. But let's make an honest comparison. Backblaze does not replicate your data across data centers. The standard S3 storage class does ($0.023/GB). The comparable S3 storage class is One Zone-Infrequent Access ($0.01/GB). B2 still comes out ahead, but I wouldn't use either one for primary storage. For their suggested "3-2-1" backup strategy, sure.

Then again, just for backup, I could use S3 Glacier for $0.004/GB. That's cheaper than B2 and I get multi-AZ storage. The retrieval charges would be higher, but it's backup. If catastrophe struck and I lost my primary and my local backups, getting my data fast is the last thing I would worry about.
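A quick back-of-the-envelope comparison using the per-GB rates quoted in this thread (actual AWS and Backblaze pricing varies by region and over time, and excludes retrieval/egress charges):

```python
# Rough monthly storage cost for 1 TB (1000 GB) of backups, using the
# per-GB/month rates quoted in this thread. Retrieval and egress fees,
# which matter a lot for Glacier, are deliberately ignored here.
RATES_PER_GB = {
    "S3 Standard": 0.023,
    "S3 One Zone-IA": 0.010,
    "Backblaze B2": 0.005,
    "S3 Glacier": 0.004,
}

def monthly_cost(gb: float, rate_per_gb: float) -> float:
    """Storage-only cost for one month."""
    return gb * rate_per_gb

for name, rate in RATES_PER_GB.items():
    print(f"{name:>14}: ${monthly_cost(1000, rate):.2f}/month")
```

At 1 TB the spread is only a few dollars a month, which is why retrieval cost and convenience, not storage price, end up deciding the question.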

> Then again, just for backup, I could use S3 glacier for $.004/gb

Having done that in the past, I have to say that's just a million times less practical than basic S3-like storage. And if you want to automate that setup, Glacier is even worse.

Why do you say that?

I could see using something like rsync + CloudBerry (which maps S3 and makes it look like a network drive). Set it up to use One Zone-Infrequent Access, and then after x days use a lifecycle policy to move it to Glacier.
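That lifecycle rule can be sketched with boto3. This is a minimal sketch, not a tested deployment: the bucket name, prefix, and 30-day threshold are placeholders, and actually applying it requires AWS credentials.

```python
# Sketch of the lifecycle rule described above: objects are uploaded to
# ONEZONE_IA and transition to GLACIER after a set number of days.
# Bucket name, prefix, and day count are hypothetical placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-backups-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Applying it (requires boto3 and configured AWS credentials):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-backup-bucket",
#     LifecycleConfiguration=lifecycle_config,
# )
# New uploads would specify StorageClass="ONEZONE_IA" so they start
# in the cheaper tier before the transition kicks in.
print(lifecycle_config["Rules"][0]["Transitions"][0]["StorageClass"])
```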

My use case for backups is solely movies and music. For source code I use hosted git repos, for pictures Google Photos, and regular office documents are either in Google Docs or OneDrive.

Last time I used Glacier, it was a separate product from S3 and had its own API.

You had to upload pre-prepared "tapes" for backups. You couldn't mutate an existing backup, you had to create a new one. And frequently fetching and/or deleting existing "tapes" (backups) would cost you money (more so than the original cost of the backup).

That meant you couldn't just ZIP it all up, back up the latest version, and then delete the previous one to avoid being doubly charged for storage.

Basically, at archiving time you needed to determine what was already archived, create a new bundle containing only what was new, and archive only that. In the same spirit, a restore meant piecing together multiple such tapes into a full restore set.
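The bookkeeping being described amounts to a manual incremental backup: diff the current file set against a manifest of everything already on a "tape", and bundle only the difference. A hypothetical sketch (the file names and manifest format are made up for illustration):

```python
# Hypothetical sketch of the manual bookkeeping old-style Glacier forced
# on you: compare the current file set against a manifest of files
# already archived on previous "tapes", and bundle only what's new.
def new_files(current: set[str], archived_manifest: set[str]) -> set[str]:
    """Files present now that no previous tape contains."""
    return current - archived_manifest

archived = {"photos/2017.zip", "music/albums.zip"}      # union of old tapes
current = {"photos/2017.zip", "photos/2018.zip", "music/albums.zip"}

to_bundle = new_files(current, archived)
print(sorted(to_bundle))  # only this goes into the next tape

# A restore is the inverse problem: the full data set is the union of
# every tape's contents, so you must fetch and merge all of them.
```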

Absolutely terrible. It was like having traditional backup-software constraints, but none of the software support.

If Amazon has improved on that now, good for them, but I figured they probably had to if they wanted any users at all.

Honestly, I've never used the Glacier API directly. I've only used it as part of a lifecycle policy: objects were stored in S3, and I used the console to have AWS migrate the data after a certain amount of time.

My offsite backup would only be accessed in the case of catastrophic failure, i.e. my primary and local backup data are both unavailable. Data transfer does cost more, but if I had that type of catastrophe, worrying about getting my movies back for my Plex server would be of little concern. Everything I would actually care about (source code, photos, documents, etc.) is stored in other places.

That's another strike against Backblaze backups (as opposed to B2-based backups). When we were between residences last year (we left our apartment when the lease was up and stayed in an extended stay while waiting for our house to be built), my main computer was offline for five months. One more month and my Backblaze backup would have been deleted. I also forgot about the external drive and restarted my computer before reconnecting it, so my backup of that drive was erased from Backblaze as soon as I came back online. It wasn't catastrophic, but it was irritating. Luckily I have gigabit upload.
