
You can buy a 2 TB drive for $100. $200 and you've got 430 DVDs' worth of data redundantly stored. $300 and you've got local redundancy AND offsite backup.
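(Checking the arithmetic: 2 TB / 4.7 GB per single-layer DVD ≈ 425 discs, so ~430 DVDs is about right.)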

Even if they don't suffer 'bit rot' magnetically, hard drives use bearings whose lubrication can dry up and wear out when they sit idle for long periods.

You need to keep them spinning on a regular basis, and replace them as they begin to fail.


HDDs are also prone to silent bitrot, where the drive simply returns incorrect bytes for a sector, without any SMART errors. (Optical discs bitrot too, but so do HDDs.)

This is usually a precursor to SMART errors appearing in the near future, but unfortunately it can still result in corrupted replication and corrupted backups, as your backups would be backing up the rotten (corrupt) data.

I've witnessed this happen on both Seagate and WD drives, on systems with ECC memory. I can only suspect it is due to HDD manufacturers wanting to reduce their error rates and RMA rates: it may happen when the ECC bits in a sector are themselves corrupt, making the bitrot undetectable. Instead of returning an error (and becoming grounds for an RMA replacement), the HDD firmware may choose to return non-integrity-checked data, which will usually be correct but can also be corrupt.

It's why filesystems like ZFS and btrfs are so important.

My rough estimate, based on my own experiences and those on r/DataHoarder, is that 1 hardware sector (4 KB for most drives post-2011) will silently corrupt per 10 TB-year. Such corruption can be detected by checksumming filesystems like ZFS.

Usually the whole sector is garbage, which is not what you'd expect from cosmic-ray bitflips.

External flash storage like USB sticks and SD cards fares far worse. In my own experience, silent corruption occurs at more like 1 file per device per 2-3 years, irrespective of capacity. I've had USB sticks and SD cards return bogus data without errors many times. I only know because I checksum everything; otherwise I would have assumed the artefacts in my videos and photos came from the source.
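If you want this kind of detection without ZFS, a file-level checksum manifest is enough. A minimal sketch in Python (a hypothetical script; the manifest name and CLI are made up, and it's no substitute for a checksumming filesystem):

    import hashlib, json, sys
    from pathlib import Path

    MANIFEST = "manifest.json"  # assumed name

    def sha256(path, bufsize=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def build(root):
        # Record a checksum for every file under root, once, at archive time.
        sums = {str(p): sha256(p) for p in Path(root).rglob("*")
                if p.is_file() and p.name != MANIFEST}
        Path(MANIFEST).write_text(json.dumps(sums, indent=2))

    def verify():
        # Re-hash everything; a mismatch is silent corruption (or a real edit).
        sums = json.loads(Path(MANIFEST).read_text())
        for path, old in sums.items():
            if sha256(path) != old:
                print("CORRUPT:", path)

    if __name__ == "__main__":
        build(sys.argv[2]) if sys.argv[1] == "build" else verify()

Run "build" once after writing the archive, then "verify" on a schedule.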

If, in 2020, you are not using ZFS or btrfs for long-term archival, you are doing something wrong.

ext4, NTFS, APFS, etc. may be tried and tested, but they have no data checksumming, and that is a problem.


Interestingly, on my home ZFS raidz with three 4 TB hard drives, I have had to replace a drive a couple of times because a ZFS scrub was reporting silent corruption. They were consumer-grade SATA drives.
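For anyone wondering how the corruption shows up: a periodic scrub plus a status check. A rough cron-able sketch in Python (the pool name "tank" is an assumption):

    import subprocess

    POOL = "tank"  # assumed pool name

    def start_scrub():
        # Ask ZFS to re-read every block and verify it against its checksum.
        subprocess.run(["zpool", "scrub", POOL], check=True)

    def check():
        # Run after the scrub finishes; -x only prints pools with problems.
        out = subprocess.run(["zpool", "status", "-x", POOL],
                             capture_output=True, text=True).stdout
        if "is healthy" not in out:
            print(out)  # CKSUM counts point at the drive returning bad data

Call start_scrub() from one cron entry and check() from another a day later.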

However, at work I have backed up ~200 TB of data to a large server with RAID-6 and ext4, storing the backups as large .tar files with par2 checksums and recovery data, and regularly scrubbing the par2 data. I have yet to see any corruption whatsoever. These are enterprise-grade hard drives. This is the strongest evidence I have seen yet that enterprise-grade drives are actually better than consumer-grade ones, rather than just being re-badged.
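The scrub itself is nothing fancy. Roughly this, via Python's subprocess (a sketch; the archive names are made up, and -r10 means 10% recovery data):

    import glob
    import subprocess

    # One-time, after writing each archive: create 10% recovery data.
    subprocess.run(["par2", "create", "-r10", "backup-2020-01.tar"], check=True)

    # Periodic scrub: re-verify every archive against its par2 checksums.
    for index in glob.glob("*.tar.par2"):
        if subprocess.run(["par2", "verify", index]).returncode != 0:
            # Damaged blocks can be rebuilt from the recovery volumes.
            subprocess.run(["par2", "repair", index], check=True)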


Enterprise drives have different firmware, especially from an ECC and integrity perspective. From a price/perf standpoint, though, shucking consumer-grade drives with ECC wins.

Thanks. What are the drives at your workplace?

I actually have no idea. I didn't have any part in purchasing that particular system, I don't have root, and all the drives are hidden behind a RAID controller. Sorry.

How do you know they are enterprise drives then?

I have a home "NAS" (an openSUSE server) where my main /data partition is xfs, but it mounts a btrfs backup partition, rsyncs to it, and takes a snapshot.

I should really get around to converting the main drive to btrfs, but this works well.
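It's essentially two commands from a nightly job. A rough sketch of the idea (paths and the snapshot naming are illustrative, not my actual setup):

    import datetime
    import subprocess

    SRC = "/data/"           # main xfs partition (illustrative path)
    DST = "/backup/current"  # btrfs subvolume mirroring it (illustrative)

    # Mirror the live data onto the btrfs partition.
    subprocess.run(["rsync", "-a", "--delete", SRC, DST + "/"], check=True)

    # Freeze tonight's state as a read-only snapshot; nearly free thanks to CoW.
    snap = "/backup/snap-" + datetime.date.today().isoformat()
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", DST, snap],
                   check=True)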


Does proper use of ZFS also require ECC memory?

ZFS protects you from disk errors. ECC protects you from memory errors. Using one or the other is safer than using neither. Using both together is even safer.

100% yes. With non-ECC RAM you will always get bad bits eventually. With ZFS this is especially bad because it can corrupt your checksums or your ZFS metadata, which means either silently corrupting your data, or corrupting ZFS itself and losing your entire zpool (akin to losing a RAID array).

Maybe not: that ZFS needs ECC is "common wisdom", but the disaster scenario appears to be unlikely. See:

https://news.ycombinator.com/item?id=14207520

https://news.ycombinator.com/item?id=8293025


This is FUD perpetuated by a certain individual on the FreeNAS forum.

Ideally you would stagger the drive purchases in time, and make sure to hook the drives up and check them every so often (once a year?).

Much like the other commenters I'm no expert on the topic, but I think you'd have to be incredibly unlucky to have a mechanical failure on 3 drives at once from lack of use, especially if they were from different manufacturing batches.


I'm no expert on the issue, so correct me if I'm wrong, but I've heard modern HDDs use fluid bearings and aren't susceptible to drying up.

My setup (laptop, Windows 8.1 Pro): an external 4 TB disk, assigned the letter L (for Library). I have Acronis running once per month, dumping a 70-80 GB .tib file onto L. L also has a backup folder with everything I've got (setup files, books, photos, every audiobook/video I need such as trainings, etc.). The whole backup is ~2 TB.

Now get Carbonite (not affiliated, I just like the unlimited-space backup) and have it back up your key laptop folders (Docs, Images, Desktop, etc.) and your L drive. I don't remember how much it costs ($6-10?/mo), but I have stopped worrying since then. I get a monthly .tib file for my system and an "instant" backup for everything else. So even if my laptop is stolen I can set up a new one (the .tib may be useless, but I can open it to see what software I had and take the config files/folders over to the new system).

I don't remember how much the disk was, but it didn't hurt my wallet, and the ~$100 (?) per year for Carbonite (I had CrashPlan before) definitely doesn't hurt my wallet either.
