Hacker News

In an old discussion regarding ECC/ZFS (in particular, whether hitting bad RAM while scrubbing could corrupt more and more data), user XorNot kindly took a look at the ZFS source and wrote:

"In fact I'm looking at the RAID-Z code right now. This scenario would be literally impossible because the code keeps everything read from the disk in memory in separate buffers - i.e. reconstructed data and bad data do not occupy or reuse the same memory space, and are concurrently allocated. The parity data is itself checksummed, as ZFS assumes it might be reading bad parity by default."

His full comment can be found here:

https://news.ycombinator.com/item?id=8294434
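The separate-buffer scheme the comment describes can be illustrated with a simplified sketch. This is not the actual ZFS code: it is a hypothetical single-parity (XOR, RAID-Z1-style) reconstruction in Python, with SHA-256 standing in for ZFS's checksums, showing the two properties claimed above: parity is checksum-verified before it is trusted, and reconstructed data goes into a freshly allocated buffer rather than overwriting anything read from disk.

```python
import hashlib

def checksum(buf: bytes) -> str:
    # Stand-in for ZFS's block checksums (fletcher4 / SHA-256).
    return hashlib.sha256(buf).hexdigest()

def reconstruct(stripe: list, parity: bytes,
                sums: list, parity_sum: str) -> list:
    """Rebuild the one failed column of a single-parity stripe.

    stripe: data buffers as read from disk, with None for the failed column.
    sums: expected checksums for each data column; parity_sum for the parity.
    """
    # Parity is verified like any other block: assume it may be bad.
    if checksum(parity) != parity_sum:
        raise IOError("parity block failed checksum; cannot reconstruct")
    missing = stripe.index(None)
    # The reconstructed column lives in a NEW buffer (copied from parity);
    # the buffers read from disk are never overwritten or reused for it.
    recon = bytearray(parity)
    for i, buf in enumerate(stripe):
        if i != missing:
            for j in range(len(recon)):
                recon[j] ^= buf[j]
    # Verify the result against its own checksum before returning it.
    if checksum(bytes(recon)) != sums[missing]:
        raise IOError("reconstructed block failed checksum")
    out = list(stripe)
    out[missing] = bytes(recon)
    return out
```

Because the disk buffers and the reconstruction buffer are concurrently allocated and distinct, a bad block read from disk can never be confused with (or silently replace) the rebuilt data, which is the point the comment makes about the real RAID-Z code.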



