
According to the AnandTech article, they expect 16GB DIMMs to be certified soon, which would allow 128GB.


Considering the Haswell parts don't support ECC, how many random bit flips per hour will your 128GB of DDR RAM be taking, and how many instances of unrecoverable data loss per year does that translate into on average?
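For a rough sense of scale, here is a back-of-envelope sketch. The FIT rate used is an assumption, not a measurement of any specific DIMM; published field studies report a wide range (on the order of tens of thousands of FIT per Mbit in one well-known DRAM error study), so treat the output as an order-of-magnitude illustration only:

```python
# Back-of-envelope estimate of expected DRAM bit errors per hour.
# FIT = failures per 10^9 device-hours. The rate below is an assumed
# lower-bound figure per Mbit; real rates vary widely by module,
# altitude, and workload.

FIT_PER_MBIT = 25_000        # assumed error rate (FIT per Mbit)
RAM_GB = 128                 # total installed memory

mbits = RAM_GB * 8 * 1024                       # capacity in megabits
errors_per_hour = FIT_PER_MBIT * mbits / 1e9    # expected errors/hour

print(f"~{errors_per_hour:.1f} expected bit errors per hour")
```

Under that assumed rate, 128GB works out to a couple dozen expected bit errors per hour, which is why the question isn't hypothetical at this capacity.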


That's it, I'm running my RAM in a ZFS pool from now on! Now I just need 128GB of RAM to run ZFS... /s


You joke, but running ZFS on disk w/o ECC RAM is not a good idea. See https://groups.google.com/forum/#!topic/zfs-macos/qguq6LCf1Q...


I've heard this from several sources, but is it actually worse than any other FS? It seems obvious that data corrupted in RAM will still be corrupted when you write it to disk. I think the real worry is software RAID vs. hardware RAID, since hardware RAID platforms have ECC cache and software RAID should too.
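The "corrupted in RAM, corrupted on disk" point can be made concrete. A minimal sketch (using CRC32 as a stand-in for ZFS's block checksums, which are really Fletcher or SHA-256) of why end-to-end checksumming cannot catch a bit flip that happens before the checksum is computed:

```python
# Why a filesystem checksum can't catch pre-checksum RAM corruption:
# the checksum is computed over the already-corrupted buffer, so the
# bad data verifies as "good" on every subsequent read.
import zlib

buf = bytearray(b"important data")

# Simulate a bit flip while the buffer sits in non-ECC RAM,
# *before* the filesystem checksums and writes it.
buf[0] ^= 0x01

checksum = zlib.crc32(bytes(buf))  # checksum of the corrupted data

# Later reads: the corrupted block still matches its stored checksum.
assert zlib.crc32(bytes(buf)) == checksum
print("corrupted block passes checksum verification")
```

So the checksum faithfully protects whatever was in memory at write time, good or bad; no filesystem can do better than that without ECC upstream.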


It is worse: RAM corruption can cause the loss of the entire ZFS pool. There are no ZFS recovery tools available, so data recovery can be next to impossible, or very expensive.




Ummmmmm, twice the number I get from my current 64GB non-ECC system?



