Hacker News

Interesting: unlike Glacier, this is significantly cheaper than Backblaze B2, meaning I might have to reconsider how I do my backups again. Are there any good backup tools supporting this type of service?

I rely on Restic at the moment, which seems to need fast read access to data, but its incremental snapshotting is great. It'd be ideal if I could find something like that which supports these "cold storage" solutions.




One thing I do consider a real value-add for AWS Glacier, though, is their native support for offline media import/export. I.e., you can just send them a hard drive of your own for data loading, and pay to get a hard drive back out as well. As gigabit (or faster) class WAN slowly spreads this will someday become unnecessary, but right now in many, many places a company could easily have terabytes to back up with 10/1 ADSL as its best available connection. Even with faster connections, aggressive data caps are sadly not infrequent. Whether it's for the initial load, ongoing use, or faster recovery, sometimes there is still nothing like a multi-TB drive or two in the mail.

There are third parties that will do it for you (Iron Mountain is at least one), but that's an extra cost and Google takes no responsibility for it. I assume this is an example of a place where Amazon is able to leverage its holistic business, with a cloud service that can also take advantage of its physical logistics system. Google's service here is significantly cheaper and has some nice features, but even if it's not worth a $4 vs. $1.23 premium for Amazon, I could definitely see continuing to pay Amazon some premium ($2 vs. $1.23, say) for that alone anywhere with limited high-speed WAN availability.


Disclosure: I work on Google Cloud.

We also have a Transfer Appliance [1], which comes in two sizes (100 TB and just under 500 TB). We don't currently support shipping one filled up with your data for recovery/export, though.

[1] https://cloud.google.com/transfer-appliance/


Backblaze also offers that option. You can mail them up to 8 TB on an external HD and have it loaded into their system for $190, or up to 256 GB on a USB stick for $100. [1]

You can also request a "B2 Fireball" [2] from them. It's basically a small array that they mail to you for $550 with 70 TB of storage. You fill it up and send it back to them within the month, and they'll load the data into your account.

[1] https://www.backblaze.com/b2/cloud-storage-pricing.html (Bottom of the page)

[2] https://www.backblaze.com/b2/solutions/datatransfer/fireball...


For comparison, Amazon supports up to 16 TB in their basic service, with an $80 flat handling fee per storage device and then $2.50 per data-loading hour. Since they support 2.5"/3.5" SATA and external eSATA and USB 2.0/3.0 interfaces, and it's a pure sequential transfer, it's not much trouble to get close to maximum sequential speed, which even for decent spinning rust should allow a good half TB an hour at least. I've never tried an SSD so I'm not sure if they can saturate 6 Gbps, but as even a 32-hour transfer of 16 TB would only add another $80, it may not be generally relevant anyway.
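As a sanity check on those numbers, here's a rough cost model. The figures ($80 handling fee, $2.50/hour) are the ones quoted above, not official pricing:

```python
# Rough cost model for a mail-in disk import, using the figures quoted
# above ($80 flat handling fee per device, $2.50 per loading hour).
# These rates are assumptions from this thread, not official AWS pricing.

def import_cost(tb_to_load, tb_per_hour, handling_fee=80.0, per_hour=2.50):
    """Estimate the cost of mailing a drive in for data loading."""
    hours = tb_to_load / tb_per_hour
    return handling_fee + hours * per_hour

# 16 TB at ~0.5 TB/hour sequential: 32 hours of loading on top of the fee.
print(import_cost(16, 0.5))  # 80 + 32 * 2.50 = 160.0
```

So even at spinning-rust speeds, the loading hours only double the flat fee, which is why drive speed barely matters here.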

Amazon's equivalent to the B2 Fireball is "AWS Snowball" (amusingly enough; not sure if there is a bit of fun name-riffing between the two here), which has a service fee of $200 for the 50 TB device and $250 for the 80 TB device, with any on-site days after the first 10 at $15/day.

It's interesting how the pricing mix works out on this feature, though. Amazon offers lower potential ingress pricing depending on your use, but notably, if you kept the Snowball a whole month the pricing would get very close to the Fireball's: +20 days at $15/day brings the cost to $500 for the 50 TB unit (20 TB less than the Fireball) and $550 for the 80 TB unit (10 TB more), versus $550 for the 70 TB Fireball.
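Working through that month-long comparison explicitly (again using the thread's quoted rates as assumptions):

```python
# Snowball-style pricing as quoted above: a base service fee, the first
# 10 on-site days free, then $15 per extra day. Assumed figures, not
# official AWS pricing.

def snowball_cost(base_fee, days_on_site, free_days=10, per_day=15):
    """Total appliance cost for a given number of on-site days."""
    extra_days = max(0, days_on_site - free_days)
    return base_fee + extra_days * per_day

# Keeping the appliance for 30 days means 20 billable days:
print(snowball_cost(200, 30))  # 500 -- 50 TB unit
print(snowball_cost(250, 30))  # 550 -- 80 TB unit, same as the $550 Fireball
```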

Backblaze and Google are both much cheaper to get data out of, though; Amazon's Glacier and its descendant services remain very much deep-freeze focused.


What's the shipping costs on a Snowball or other appliances?


AWS's new Glacier Deep Archive is actually cheaper than Google's Ice Cold, at $1/TB/month.

https://aws.amazon.com/about-aws/whats-new/2019/03/S3-glacie...


Those retrieval costs, though...


Anyone know the retrieval cost for Ice Cold? I don't see it mentioned in the post.


If it's the same as their other storage, which isn't really clear... about $50/TB.


On B2 that's $10... yeah, might be reconsidering this...
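The gap is easiest to see when the per-GB retrieval fees are normalized to per-TB. The $0.05/GB and $0.01/GB rates below are assumptions inferred from the figures in this thread:

```python
# Converting per-GB retrieval fees to per-TB, using rates implied above:
# a Coldline-style retrieval fee of ~$0.05/GB vs. B2 downloads at
# ~$0.01/GB. Treats 1 TB as 1000 GB, as cloud billing usually does.
# Both rates are assumptions from this thread, not quoted pricing pages.

def per_tb(per_gb_rate, gb_per_tb=1000):
    return per_gb_rate * gb_per_tb

print(per_tb(0.05))  # 50.0 -- roughly the "$50/TB" mentioned above
print(per_tb(0.01))  # 10.0 -- B2's $10/TB
```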


I expect that Google will also charge for retrieval. Their egress is really really expensive.

There may also be a minimum storage period, like Amazon has.

Let's wait and see.


It looks like a tier below the existing Coldline and Nearline (7x cheaper for storage than the former). Both have a minimum storage period, so this one is likely to have one as well. Coldline and Nearline are more expensive than regular storage when fetching objects, which means ice cold storage is probably even more expensive when you restore (is it going to be 7x too, keeping the symmetry?).


Is their egress more expensive than Amazon's? Because when I had a look at that, it sure wasn't cheap either.


I've never tried it, but I know https://www.arqbackup.com/ supports Google Cloud.


The concept, idea, and flexibility of Arq are great, ideal even. The amount of control is nice. I wish it were open source.

The actual product is pretty painful when you need to do a recovery, especially if you don't know where the file lived on disk. I haven't tried the newer Arq Cloud Backup destination to see if it improves the search experience.

That said, my experience is from more than a year ago, and I would try it again if they were able to bring their search on par with current consumer backup offerings.


The place where this won't be as cheap as Backblaze is retrieval. Unless Google makes a big change, you'll still have to pay for network egress, which is obscenely priced: https://cloud.google.com/storage/pricing#network-egress


Borg Backup is mostly the same as Restic (regarding dedup / incremental backup) [1] and aggregates data into large chunks.

If you only back up from a single machine, it keeps a local cache of already-backed-up data. This has the large advantage that it basically only needs to push the delta to the remote, without doing any kind of synchronization to check what is already there.
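The local-cache idea can be sketched in a few lines: chunks are content-addressed, and a local set of already-uploaded chunk IDs decides what to push, with no round-trip to the remote. This is a toy illustration of the technique, not Borg's actual repository format:

```python
# Toy sketch of content-addressed dedup with a local chunk cache.
# Hash each fixed-size chunk, consult a local set of known chunk IDs,
# and upload only the chunks not seen before. Not Borg's real
# implementation -- just the idea behind its local cache.
import hashlib

uploaded = set()   # local cache of chunk IDs already on the remote
remote = {}        # stand-in for the remote store (chunk_id -> bytes)

def backup(data, chunk_size=4):
    """Push only previously unseen chunks; return how many were pushed."""
    pushed = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        chunk_id = hashlib.sha256(chunk).hexdigest()
        if chunk_id not in uploaded:   # decided locally, no remote query
            remote[chunk_id] = chunk
            uploaded.add(chunk_id)
            pushed += 1
    return pushed

print(backup(b"aaaabbbbaaaa"))  # 2: "aaaa" and "bbbb"; the third chunk is a dup
print(backup(b"aaaabbbbcccc"))  # 1: only "cccc" is new -- the delta
```

Note the remote only ever receives writes, which is why this model fits dumb or append-only storage.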

[1]: https://stickleback.dk/borg-or-restic/


"Borg Backup is mostly the same as Restic (regarding dedup / incremental backup)"

... with one very big difference: you can only point borg at an SSH host. You can't point borg at S3, B2, Glacier, etc.

rsync.net supports both borg and restic, but even the heavily discounted plans[1] are much more expensive than "cold storage" or Glacier, because they are live, random-access UNIX filesystems ...

[1] https://www.rsync.net/products/borg.html


Shameless plug: I built a backup service[1] just for Borg, and the large plan is $5/TB. Not as cheap as "cold storage", but still better than rsync.net and the same as B2.

Also worth pointing out that my storage is calculated after compression and deduplication. So depending on the data a Borg backup can be much smaller than the actual data.

1: https://www.borgbase.com


Interesting. I've been backing up to a storage node at Time4VPS. I have an older plan at about $15/quarter. https://billing.time4vps.eu/?affid=1881


True, which is kind of weird, because as far as I understand their respective "databases", borg would be better suited for arbitrary remote storage: it should basically only need an "upload file" command without any interactivity, except for its robustness checks and some additional flexibility (multiple backup sources, deleting data that is no longer needed).

Restic seems more designed from the ground up to use the existing power of a filesystem as a database, so it needs remote storage that offers quick interactivity (especially checking existing files), i.e. it's impossible to use something like Glacier as a backend.
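A toy illustration of why that interactivity matters: with no trusted local cache, the client has to ask the remote what it already holds before every backup. That listing is cheap on S3/B2 but impractical on write-only cold storage. This is an illustration of the general approach only, not Restic's real protocol:

```python
# Toy sketch of backup *without* a local cache: the client must LIST
# the remote to learn which chunks already exist before pushing the
# delta. Illustration only -- not Restic's actual repository protocol.
import hashlib

def backup_via_listing(remote, data, chunk_size=4):
    """List the remote, then push only chunks it doesn't already hold."""
    existing = set(remote)  # a LIST call: fine on S3/B2, painful on Glacier
    pushed = 0
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        chunk_id = hashlib.sha256(chunk).hexdigest()
        if chunk_id not in existing:
            remote[chunk_id] = chunk
            existing.add(chunk_id)
            pushed += 1
    return pushed

store = {}
print(backup_via_listing(store, b"aaaabbbb"))  # 2 new chunks
print(backup_via_listing(store, b"aaaacccc"))  # 1 -- the listing found "aaaa"
```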

It's not a problem for me, since I just back up to a local drive and (am planning to set up) synchronization to a remote dumb storage.


I need more information than this post provides before switching archival solutions.

Actually, since it’s google I likely wouldn’t consider them regardless.


I've been using Duplicati for some time. It works OK, not perfectly. I especially wish I could send backups to multiple locations (e.g. local/B2).


What tool do you use to do your backups? rclone?


Rclone only copies stuff. It doesn't compress, deduplicate, or version. Some backends do versioning, though.



