
Error correction provides the same value as a checksum, and more: it can repair corrupted bits, not just detect them. (The tradeoff is that ECC codes are much larger and more expensive to compute.)
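For a feel of that tradeoff, here's a toy Hamming(7,4) codec in C (a minimal sketch, not how real NAND ECC is laid out): 3 parity bits ride along per 4 data bits, versus a CRC's few bytes per block, but the parity lets you fix a single flipped bit instead of just noticing it.

    #include <stdint.h>

    /* Encode 4 data bits into a 7-bit codeword: bit positions 1..7 hold
     * p1 p2 d1 p3 d2 d3 d4, so 3 parity bits per nibble of data */
    uint8_t hamming74_encode(uint8_t d) {
        uint8_t d1 = d & 1, d2 = d >> 1 & 1, d3 = d >> 2 & 1, d4 = d >> 3 & 1;
        uint8_t p1 = d1 ^ d2 ^ d4;
        uint8_t p2 = d1 ^ d3 ^ d4;
        uint8_t p3 = d2 ^ d3 ^ d4;
        return p1 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6;
    }

    /* Decode, correcting (not just detecting) a single flipped bit: the
     * recomputed parity checks spell out the error's position */
    uint8_t hamming74_decode(uint8_t c) {
        uint8_t s1 = (c ^ c >> 2 ^ c >> 4 ^ c >> 6) & 1;
        uint8_t s2 = (c >> 1 ^ c >> 2 ^ c >> 5 ^ c >> 6) & 1;
        uint8_t s3 = (c >> 3 ^ c >> 4 ^ c >> 5 ^ c >> 6) & 1;
        uint8_t syndrome = s1 | s2 << 1 | s3 << 2; /* 1-based bit position */
        if (syndrome) c ^= 1 << (syndrome - 1);    /* repair in place */
        return (c >> 2 & 1) | (c >> 3 & 2) | (c >> 3 & 4) | (c >> 3 & 8);
    }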

It's also worth noting that the CRC is used for power-loss detection and doesn't actually provide error detection for metadata blocks.

Checksumming data is a bit complicated in a filesystem, mostly because of random file writes. If you write to the middle of a file, you need to update the file's CRC, but updating that CRC may require reading back quite a bit of additional data to rebuild it.
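To make that concrete, a minimal sketch in C: with a single CRC per file, any in-place write forces a full re-read to rebuild the checksum.

    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC-32 (poly 0xedb88320), table-free for brevity */
    static uint32_t crc32_update(uint32_t crc, const uint8_t *buf, size_t len) {
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0xedb88320 & -(crc & 1));
        }
        return crc;
    }

    /* After a write anywhere in the file, rebuilding the file's one CRC
     * means re-reading every byte -- O(file size) work for a 1-byte write */
    static uint32_t recompute_file_crc(FILE *f) {
        uint8_t buf[256];
        uint32_t crc = 0xffffffff;
        size_t n;
        rewind(f);
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            crc = crc32_update(crc, buf, n);
        return crc ^ 0xffffffff;
    }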

To make random writes efficient, you could slice up the files, but that raises the question of how large to make the slices. Too large and random writes stay expensive; too small and the overhead of storing all the CRCs gets costly.
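Here's roughly what the sliced approach looks like (SLICE_SIZE and the in-RAM layout are assumptions for illustration; crc32_update is the helper from the sketch above). A write only recomputes the CRCs of the slices it overlaps:

    #include <stdint.h>
    #include <string.h>

    #define SLICE_SIZE 512  /* the tradeoff knob */

    uint32_t crc32_update(uint32_t crc, const uint8_t *buf, size_t len);

    struct sliced_file {
        uint8_t  *data;       /* file contents */
        uint32_t *slice_crcs; /* one CRC per SLICE_SIZE bytes */
        size_t    size;
    };

    /* Recompute only the overlapped slices' CRCs, at the cost of storing
     * size/SLICE_SIZE CRCs instead of one; assumes len > 0 and in-bounds */
    void sliced_write(struct sliced_file *f, size_t off,
                      const uint8_t *buf, size_t len) {
        memcpy(f->data + off, buf, len);
        for (size_t s = off / SLICE_SIZE; s <= (off + len - 1) / SLICE_SIZE; s++) {
            size_t start = s * SLICE_SIZE;
            size_t n = f->size - start < SLICE_SIZE ? f->size - start : SLICE_SIZE;
            f->slice_crcs[s] = crc32_update(0xffffffff, f->data + start, n)
                               ^ 0xffffffff;
        }
    }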

You could make the slice size configurable, but at this point we've kinda recreated the concept of a block device.

The block device representing the underlying storage already has configuration for this type of geometry: erase size / program size / read size. If we checksum (or ECC) at the block-device level, we get the added benefit of protecting metadata blocks as well. Most NAND flash parts already have hardware ECC for this exact purpose.
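A sketch of checksumming at that layer (raw_read/raw_prog are hypothetical stand-ins for a real driver, and the trailing-CRC framing is just one way to lay it out). Every block, file data and metadata alike, gets stamped on the way out and verified on the way in:

    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE 512  /* assumed program/read size of the device */

    /* Hypothetical raw driver hooks -- not a real API */
    int raw_read(uint32_t block, uint8_t *buf, size_t size);
    int raw_prog(uint32_t block, const uint8_t *buf, size_t size);
    uint32_t crc32_update(uint32_t crc, const uint8_t *buf, size_t len);

    /* Program a block with a trailing CRC; data and metadata both pass
     * through here, so everything on the device is covered */
    int checked_prog(uint32_t block, const uint8_t *buf) {
        uint8_t frame[BLOCK_SIZE + 4];
        uint32_t crc = crc32_update(0xffffffff, buf, BLOCK_SIZE) ^ 0xffffffff;
        memcpy(frame, buf, BLOCK_SIZE);
        memcpy(frame + BLOCK_SIZE, &crc, 4);
        return raw_prog(block, frame, sizeof frame);
    }

    /* Read a block and verify its CRC before handing it up */
    int checked_read(uint32_t block, uint8_t *buf) {
        uint8_t frame[BLOCK_SIZE + 4];
        uint32_t stored, crc;
        int err = raw_read(block, frame, sizeof frame);
        if (err) return err;
        crc = crc32_update(0xffffffff, frame, BLOCK_SIZE) ^ 0xffffffff;
        memcpy(&stored, frame + BLOCK_SIZE, 4);
        if (stored != crc) return -1; /* corruption detected */
        memcpy(buf, frame, BLOCK_SIZE);
        return 0;
    }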

TLDR: It's simpler and more effective to checksum at the block device level.

And for MCU development, simpler == less code cost.




> mostly because of random file writes.

That's not an issue in CoW filesystems, since you have to write out the whole modified block anyway. You might as well hash it.
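Right, something like this sketch (alloc_block/prog_block are hypothetical hooks, and crc32_update is the helper from upthread): the modified block is already sitting whole in RAM on its way to a fresh location, so the hash is one extra pass over data you were going to write anyway.

    #include <stdint.h>
    #include <stddef.h>

    #define BLOCK_SIZE 512

    /* Hypothetical hooks -- names assumed */
    uint32_t alloc_block(void);                              /* pick a free block */
    int      prog_block(uint32_t block, const uint8_t *buf); /* write it out */
    uint32_t crc32_update(uint32_t crc, const uint8_t *buf, size_t len);

    /* CoW never updates in place: the whole modified block is rewritten
     * elsewhere, so the random-write CRC problem never comes up */
    uint32_t cow_write(const uint8_t *modified, uint32_t *out_crc) {
        uint32_t dst = alloc_block();
        *out_crc = crc32_update(0xffffffff, modified, BLOCK_SIZE) ^ 0xffffffff;
        prog_block(dst, modified);
        return dst; /* the parent's pointer is updated to (dst, crc) */
    }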




