
They say[^1] that they can tolerate the destruction of any 2 nodes without data loss. I don't know how many simultaneous node failures standard Amazon S3 can tolerate.

[^1]: https://nimbus.io/architecture/

Amazon doesn't talk about their numbers either. The only thing they do say is that RRS (reduced redundancy storage) 'stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage, and thus is even more cost effective.'

This is at the main page: http://aws.amazon.com/s3/ (search for RRS)

Amazon says that S3 provides eleven nines (99.999999999%) durability of objects over a given year. So if you have 100 billion objects in S3, you should expect to lose on average 1 per year. Or, if you have 10,000 files, you should expect to lose 1 per 10 million years. In addition they say it can tolerate the simultaneous failure of two datacenters. Nimbus, with 3 copies total, appears much less redundant... but nobody knows how Amazon calculated their eleven-nines claim.
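The back-of-the-envelope arithmetic above can be checked directly (a sketch only: it assumes "eleven nines" means a flat 1e-11 annual loss probability per object, which is an interpretation, not Amazon's published model):

```python
# Eleven nines of durability, read as an annual per-object loss probability.
annual_loss_prob = 1 - 0.99999999999  # ~1e-11

def expected_losses_per_year(num_objects):
    """Expected number of objects lost per year, assuming independent losses."""
    return num_objects * annual_loss_prob

# 100 billion objects -> roughly 1 object lost per year.
print(expected_losses_per_year(100_000_000_000))

# 10,000 objects -> roughly one loss every 10 million years.
print(1 / expected_losses_per_year(10_000))
```

This treats losses as independent events, which real correlated failures (a datacenter fire, a bad software push) would violate; that's exactly why the calculation behind the eleven-nines figure matters.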
