Presumably the major cause of data degradation over archival time-scales is bit-rot, from either component failure or media corruption. What are the sources of bit-rot and component failure? Can these failure modes be accelerated to provide a rough benchmark for how components fail over longer time-scales?
>The Samsung 840 Series started reporting reallocated sectors after just 100TB, likely because its TLC NAND is more sensitive to voltage-window shrinkage than the MLC flash in the other SSDs. The 840 Series went on to log thousands of reallocated sectors before veering into a ditch on the last stretch before the petabyte threshold. There was no warning before it died, and the SMART attributes said ample spare flash lay in reserve. The SMART stats also showed two batches of uncorrectable errors, one of which hit after only 300TB of writes. Even though the 840 Series technically made it past 900TB, its reliability was compromised long before that.
It'd be interesting to run these same tests on enterprise-grade drives as well.
Edit: You meant that the 840 Pro is still going, I see.
TLC cells should sustain 1-1.5K program/erase (P/E) cycles
MLC: 3-5K P/E cycles
eMLC: 10-30K P/E cycles
A 256GB eMLC SSD rated for 10K P/E cycles should be able to sustain 2.56PBW, which is pretty much what the 840 Pro 256GB with MLC managed in the test.
Also, enterprise SSDs usually come with heavy overprovisioning: a raw 1TB drive typically exposes only 800GB of usable space.
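As a rough sanity check on those figures, here's a small sketch of the arithmetic; the write-amplification factor and the 1TB/800GB overprovisioning split are illustrative assumptions, not vendor specs:

```python
# Rough endurance estimate: rated host writes = capacity * P/E cycles / write amplification.
# All figures below are illustrative assumptions, not datasheet values.

def rated_endurance_tb(capacity_gb, pe_cycles, write_amplification=1.0):
    """Host terabytes the NAND can absorb before hitting its P/E rating."""
    return capacity_gb * pe_cycles / write_amplification / 1000.0

# 256GB eMLC at 10K P/E, ignoring write amplification -> ~2,560 TB (2.56 PBW)
print(rated_endurance_tb(256, 10_000))                          # 2560.0

# Same drive with an assumed write amplification of 3 for small random writes
print(rated_endurance_tb(256, 10_000, write_amplification=3))   # ~853 TB

# Overprovisioning: a raw 1TB drive exposing 800GB keeps
# (1000 - 800) / 800 = 25% of its flash as spare area for wear leveling.
print((1000 - 800) / 800 * 100)                                  # 25.0
```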
I had a few 720GB Fusion-io ioDrives with 1.1PBW written and 0 reallocated sectors, and those were rated for 10PBW.
Granted, I am a power user with a lot of small writes from software-development-related activities, but it still strikes me that everyone else is under the impression that SSDs last forever. That's certainly not my experience, and the story is similar for a couple of my buddies.
Between us we've got a fair few Crucial drives running pretty much 24/7 (in my case at home: the system drive in a Windows desktop that never turns off, and a pair in RAID1 for the system and core-VM volumes in a server), plus a selection from other manufacturers. IIRC the ones in the machine at the office are by Samsung.
Other people I know have had similar experiences. Most of the failures we've seen were early on, which means it was either down to luck or quality has improved over time. I wouldn't say I find SSDs any less reliable than traditional drives, though when they do fail it's more often a case of "just dying without warning" than any other failure mode.
1) Corsair Performance Pro 128GB; Marvell controller; Toshiba toggle-NAND memory;
Used for 2.5 years; about 7-8TB written.
2) Corsair Neutron GTX 120GB; LAMD controller; Toshiba toggle-NAND memory;
Used for 1.5 years; about 4TB written.
Both are up and running 24/7 most of the time and are usually filled to 60-80%; one holds the system partition, the other the remaining software and games. Both SSDs also ride through about 10-20 power outages per year.
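For perspective, a back-of-the-envelope sketch of how much rated endurance those totals represent; the 3K P/E figure is an assumption taken from the MLC range quoted earlier, not a spec for these particular drives:

```python
# Write totals and capacities are from the post above; the P/E rating is assumed.
drives = {
    "Corsair Performance Pro 128GB": {"capacity_gb": 128, "written_tb": 8},
    "Corsair Neutron GTX 120GB":     {"capacity_gb": 120, "written_tb": 4},
}
assumed_pe_cycles = 3000  # mid-range MLC figure, assumed

for name, d in drives.items():
    full_drive_writes = d["written_tb"] * 1000 / d["capacity_gb"]
    used_pct = full_drive_writes / assumed_pe_cycles * 100
    print(f"{name}: ~{full_drive_writes:.0f} full-drive writes, "
          f"~{used_pct:.1f}% of a {assumed_pe_cycles} P/E budget")
# -> roughly 63 and 33 full-drive writes: about 2% and 1% of the budget,
#    ignoring write amplification.
```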
I haven't seen a lifetime-writes counter like this on any rotational drives. I actually find that puzzling, as it's a very interesting stat to track.
Please do not depend on ID 241 (commonly Total_LBAs_Written) - its meaning and units may vary with the manufacturer.
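If you do want to poll these counters, here's a minimal sketch that shells out to smartctl (assumes smartmontools 7.0+ for the --json flag and root privileges; attribute IDs and units are vendor-specific as noted above, and NVMe drives use a different JSON layout):

```python
import json
import subprocess

# Wear-related ATA SMART attributes of interest; names and units differ between vendors.
ATTRS = {
    5:   "Reallocated_Sector_Ct",
    187: "Reported_Uncorrect",
    241: "Total_LBAs_Written",  # sometimes reported in GiB or 32MiB units instead of LBAs
}

def wear_attributes(device="/dev/sda"):
    """Return raw values of the attributes above for an ATA/SATA drive."""
    out = subprocess.run(
        ["smartctl", "-A", "--json", device],
        capture_output=True, text=True, check=True,
    ).stdout
    table = json.loads(out).get("ata_smart_attributes", {}).get("table", [])
    return {a["id"]: a["raw"]["value"] for a in table if a["id"] in ATTRS}

if __name__ == "__main__":
    for attr_id, raw in sorted(wear_attributes().items()):
        print(f"{attr_id:3d} {ATTRS[attr_id]}: {raw}")
```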