Hacker News

80GB of redundant storage = 160GB at least. More than that, actually, because you can't count on any single node being available, so you need more than two copies.

This means you should expect to get more like 10-20GB of cloud storage per 100GB you commit; otherwise the cloud simply will not have enough space.
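The space accounting above can be sketched in a few lines (the replica counts are illustrative, not from any real system): if every file is stored r times across unreliable nodes, a user who commits C gigabytes can only be granted C / r gigabytes back.

```python
# Assumed model: each file is replicated `replicas` times across
# flaky nodes, so total committed disk must cover all copies.
def grantable_space(committed_gb: float, replicas: int) -> float:
    """Cloud storage the network can afford to grant per user."""
    return committed_gb / replicas

# With 5-10 replicas, committing 100 GB yields only 10-20 GB:
print(grantable_space(100, 5))   # 20.0
print(grantable_space(100, 10))  # 10.0
```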

And even with many nodes holding your data, there is a decent chance all of them are offline at the same time. You need many, many nodes before those odds become small enough. Which means the best solution is to use the storage you committed as one of the nodes, so one copy is always available to you. But at that point it has really become a cloud backup system rather than a cloud file system.
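The "all of them offline at once" argument is easy to quantify under a simplifying assumption of independent nodes (real peers are correlated, e.g. by time zone): if each node is offline with probability p, all r replicas are offline with probability p to the r.

```python
# Hypothetical availability model: nodes go offline independently
# with probability p_offline; your data is unreachable only when
# every replica is down at once.
def all_replicas_offline(p_offline: float, replicas: int) -> float:
    return p_offline ** replicas

# Home machines are offline a lot; say p = 0.5.
print(all_replicas_offline(0.5, 3))   # 0.125 -- unacceptable
print(all_replicas_offline(0.5, 20))  # under one in a million
```

This is why the replica count has to be large, or why keeping your own committed storage as one always-on replica sidesteps the problem entirely.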

You wouldn't store full copies, you'd stripe it across multiple machines using some type of error correction coding, like Reed-Solomon, which has less than 2x overhead.
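The overhead claim can be made concrete with a back-of-the-envelope comparison (the shard counts here are illustrative): a (k, m) Reed-Solomon scheme stores k data shards plus m parity shards on different nodes, any k of which suffice to reconstruct the file, so it tolerates m lost nodes at (k + m) / k overhead instead of the (m + 1)x cost of full copies.

```python
# Illustrative parameters, not from any specific system.
def erasure_overhead(k: int, m: int) -> float:
    """Storage overhead of a (k, m) Reed-Solomon layout."""
    return (k + m) / k

def replication_overhead(m: int) -> float:
    """Full copies tolerating m losses need m + 1 replicas."""
    return m + 1.0

# Tolerating 4 lost nodes:
print(erasure_overhead(10, 4))   # 1.4x
print(replication_overhead(4))   # 5.0x
```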

I'm thinking of it in terms of RAID.

You are talking about RAID5. However, RAID5 only tolerates a single disk failure; it is useless if more than one disk goes offline at the same time.

RAID1/10 is most useful when there's a higher chance of multiple disks failing at once, or when the odds of a double failure in your RAID5, while low, are unacceptable.
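A rough sketch of that trade-off, assuming independent disk failures with probability p over some window (real failures are correlated, so treat the numbers as illustrative only): RAID5 dies once any two disks fail, while RAID10 dies only when both disks of the same mirror pair fail.

```python
# Back-of-the-envelope array-loss probabilities; p is the chance a
# given disk fails during the window of interest.
def raid5_loss(n: int, p: float) -> float:
    """RAID5 loses data once 2 or more of its n disks fail."""
    survive = (1 - p) ** n + n * p * (1 - p) ** (n - 1)
    return 1 - survive

def raid10_loss(pairs: int, p: float) -> float:
    """RAID10 loses data if both disks in any mirror pair fail."""
    return 1 - (1 - p * p) ** pairs

# Six disks, p = 0.05 each:
print(raid5_loss(6, 0.05))    # ~0.033
print(raid10_loss(3, 0.05))   # ~0.007
```

Under these (assumed) numbers the RAID10 array is several times less likely to lose data, which is the "unacceptable odds" case above, at the cost of only getting half the raw capacity.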

Of course there are other factors at work when comparing RAID0/5/10, but this is a large part of it.
