
Just remember this is durability, not availability. Jeff states this clearly: "If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so."

Eleven nines of availability would mean the service is down or unresponsive for only about 0.0003 seconds per year ... indistinguishable from perfect.
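
For anyone checking the arithmetic, here's a quick back-of-the-envelope sketch (this assumes "eleven nines" means a 1e-11 annual failure probability per object, which is how I read the claim; Amazon's actual model may differ):

    # Back-of-the-envelope: what eleven nines would mean for
    # availability vs. durability (assumed interpretation, not AWS's math).
    SECONDS_PER_YEAR = 365 * 24 * 3600             # ~31.5 million seconds

    # Availability: fraction of the year the service may be down.
    downtime = SECONDS_PER_YEAR * (1 - 0.99999999999)
    print(f"downtime per year: {downtime:.4f} s")   # ~0.0003 s

    # Durability: expected object losses per year for 10,000 objects.
    losses_per_year = 10_000 * 1e-11
    print(f"one lost object every {1 / losses_per_year:,.0f} years")  # ~10,000,000 years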




I was actually thinking about this claim. It seems kind of unreasonable; the amount of data lost should be proportional to the size of the data, not the number of objects it is split into.


I just figured they're going off an average object size stat. Data size can be equally meaningless - a single bit of data loss might be catastrophic in a 10 GB file, or it might not be noticeable in a 1 KB file.


I am not sure that what you claim is meaningful. A single bit of data loss is a data loss, no matter what the file size is.

If you meant that you could fix the 1-bit error easily in the 1 KB case, since you have just 8K bits to check, then it makes much more sense. If you split the big 10 GB file into smaller 1 KB chunks (at which level error detection/correction is done), then the fault becomes much more manageable.
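
Something like per-chunk checksumming is what I had in mind. A rough sketch (1 KB chunks and CRC32 are just illustrative choices here, not anything S3 actually does):

    import zlib

    CHUNK_SIZE = 1024  # 1 KB chunks, as in the example above

    def chunk_checksums(path):
        # One CRC32 per chunk, so a corrupted chunk can be pinpointed
        # later instead of re-verifying the whole 10 GB file.
        sums = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                sums.append(zlib.crc32(chunk))
        return sums

    def find_bad_chunks(path, expected):
        # Indices of chunks whose checksum no longer matches.
        return [i for i, (a, b) in enumerate(zip(chunk_checksums(path), expected))
                if a != b]

Only the chunks flagged here need to be repaired or re-fetched, which is why chunking makes the fault manageable.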


> A single bit of data loss is a data loss, no matter what the file size is.

Sure, but in a lossy JPEG or a heavily compressed video file, a single flipped bit in one frame of a two-hour movie really isn't going to matter much.


Not all bits are created equal. A one-bit change in a mail-merge data file could put a lawsuit from a Mr. Whitman in your hands.



