

The SSD Endurance Experiment: Two petabytes - geoffgasior
http://techreport.com/review/27436/the-ssd-endurance-experiment-two-freaking-petabytes

======
gerbal
So, we know these SSDs will survive an ungodly number of reads and writes. Is
there a way to test how they will survive as archival media?

Presumably the major cause of data degradation in archival time-scales is bit-
rot from either component failure or media corruption. What are the sources of
bit-rot and component failure? Can they be accelerated to provide a rough
benchmark for component failure over longer time-scales?

~~~
toomuchtodo
Heat is typically used to simulate aging; other than that, just time.
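That thermal acceleration is usually modeled with an Arrhenius factor. A sketch, with assumed numbers: the activation energy Ea depends on the failure mechanism, and the 1.1 eV used here is just a commonly cited value for NAND data retention, not a property of any specific drive.

```python
import math

# Arrhenius acceleration factor: how much faster a thermally-activated
# aging mechanism proceeds at an elevated stress temperature versus the
# normal use temperature. Ea (activation energy) is an assumption here;
# it varies by failure mechanism.

K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=1.1):
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_B_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# Baking a drive at 85C to stand in for storage at 40C: each hour in the
# oven corresponds to well over a hundred hours on the shelf.
print(acceleration_factor(40, 85))
```

With these assumed numbers the factor comes out around 170x, which is why a few weeks of baking can stand in for years of shelf time.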

------
knd775
Wow. And the 840 is still going error-free. That's pretty impressive.

~~~
_DadeMurphy
Where did you see that the 840 is still going error-free? It says that the 840
maxed out its reallocated sectors at around 900TB and veered into a ditch
right before the petabyte threshold.

It'd be interesting to run these same tests on enterprise grade drives as
well.

Edit: You meant that the 840 Pro is still going, I see.

~~~
buryat
840 has TLC, 840 Pro MLC

TLC cells should sustain 1-1.5K Program-Erase cycles

MLC 3-5K P/E cycles

eMLC 10-30K P/E cycles

SLC >100K

256GB eMLC SSD with 10K PE should be able to sustain 2.56PBW, which is pretty
much what the 840 Pro 256GB with MLC was able to sustain in the test.

Also, enterprise SSDs usually come with huge overprovisioning: a raw 1TB drive
usually ships with 800GB of usable space.

I had a few 720GB Fusion ioDrives with 1.1PBW and 0 reallocated sectors, and
these were rated for 10PBW.
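That back-of-the-envelope endurance figure is just capacity times rated P/E cycles. A sketch that ignores write amplification and overprovisioning, both of which move the real-world number considerably:

```python
# Rough theoretical write endurance: capacity * rated P/E cycles.
# Ignores write amplification and overprovisioning, which shift the
# real-world figure considerably.

def endurance_pbw(capacity_gb, pe_cycles):
    """Total petabytes written before the rated P/E budget is exhausted."""
    total_gb_written = capacity_gb * pe_cycles
    return total_gb_written / 1_000_000  # GB -> PB

# 256GB eMLC at 10K P/E cycles -> 2.56 PBW, matching the figure above.
print(endurance_pbw(256, 10_000))
# 256GB TLC at 1K P/E cycles -> roughly a tenth of that.
print(endurance_pbw(256, 1_000))
```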

------
kayoone
For some reason my SSDs have never lasted very long. I've been using consumer-
grade SSDs since 2009, and among those which failed are a SuperTalent
UltraDrive 128GB, an Intel X25-M, and a Crucial M400. Now I use a Samsung 840
Evo, which is actually a replacement since the first one died after just a
couple of weeks.

Granted, I am a power user with a lot of small writes because of software
development activity, but it still strikes me that everyone else is under the
impression that SSDs last forever. That's certainly not my experience. The
story is similar for a couple of my buddies.

~~~
dspillett
My experience with SSD reliability, and that of my
friends/colleagues/contacts, has been similar to spinning-metal drives,
though with a smaller sample set thus far. I've used a number of drives at
home and work and had two fail: one just died, and the other started reporting
write errors (one Sandisk and one OCZ; I forget the exact models and which
failed which way).

Between us we've got a fair few Crucial drives running pretty much 24/7 (in my
case at home: the system drive in a Windows desktop that never turns off, a
pair in RAID1 for the system + core VMs volumes in a server), and a selection
from other manufacturers. IIRC the ones in the machine at the office are by
Samsung.

Other people I know have had similar experience. Most of the failures we've
experienced were early on, which either means it was down to luck or quality
has improved over time. I wouldn't say I find SSD to be any less reliable than
traditional drives, though when they do fail it is more often that they "just
die without warning" than other failure modes.

------
Yizahi
I'm using:

1) Corsair Performance Pro 128GB; Marvell controller; Toshiba toggle-NAND
memory; used for 2.5 years; about 7-8TB written.

2) Corsair Neutron GTX 120GB; LAMD controller; Toshiba toggle-NAND memory;
used for 1.5 years; about 4TB written.

Both are up and running 24/7 most of the time, and both are usually filled to
60-80%; one holds the system partition, the other the remaining software and
games. Both SSDs suffer about 10-20 power outages per year.

------
aperrien
As a side question, how can you tell how many writes a regular drive has
sustained? Say, like the one in my desktop or laptop?

~~~
McGlockenshire
SSDs tend to report this as SMART ID 241, total LBAs written.

I haven't seen this on any rotational drives. I actually find that puzzling,
as it's a very interesting stat to track.
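On drives that do report attribute 241, the raw value can be read with `smartctl -A /dev/sdX`, and converting it to terabytes is simple under the common 512-bytes-per-LBA convention. That convention is an assumption: some vendors report attribute 241 in other units (e.g. 32MiB chunks), so check the drive's documentation.

```python
# Convert SMART attribute 241 (Total LBAs Written) to terabytes.
# Assumes the common 512-byte-per-LBA convention; some vendors use
# different units for this attribute, so this is a sketch rather than
# a universal rule.

LBA_BYTES = 512

def lbas_to_tb(total_lbas):
    return total_lbas * LBA_BYTES / 1e12

# Example: a raw value of 4,000,000,000 LBAs is about 2 TB written.
print(lbas_to_tb(4_000_000_000))
```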

~~~
rincebrain
It's an interesting stat to track, but it was never a physical endurance
metric for spinning drives, which is probably why it didn't appear until SSD
vendors started tracking it. As this experiment makes much more obvious, for
flash it's a very serious thing to keep track of.

~~~
grogers
I thought the main endurance metric for spinning disks was total bytes read or
written, i.e. time where the head is right up close to the platter. It
doesn't seem too far-fetched to track total read and write IOs and/or bytes.

~~~
pflanze
I believe (but can't cite any sources for this) that unless the head is parked
(which happens only when the disk goes to sleep, or at least after somewhat
extended periods of inactivity), HDD heads are always right above the platter,
regardless of whether something is being read or written, the head is being
moved, or there is a (short) pause in activity. Assuming the number of
repolarisations of the magnetic substrate is not the limiting factor, total
parking/sleep time should correlate fairly strongly with lifetime (though the
correlation might even be negative when the parked periods are too short, due
to the cost of parking), while total bytes read/written should correlate only
weakly, by way of being correlated with the former. I am not a storage device
specialist.

------
n0body
Glad I have the 840 Pro, although the sample size of 1 for each drive
essentially makes these tests meaningless.

~~~
wtallis
These tests are only meaningless if you completely misinterpret what they're
testing. It's not a test of the overall reliability of the drives. They're
just testing the write endurance (and occasionally the data retention). The
wear leveling and garbage collection algorithms will have zero variance
between different drives of the same model, so there's no need for a large
sample of controllers. And each drive itself constitutes a large sample of
flash memory so any random variation in the lifespan of individual NAND cells
is already averaged out.
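That averaging-out argument can be illustrated with a toy simulation (the numbers are invented for illustration, not real NAND data): when the effective sample is hundreds of thousands of cells, the per-drive mean lifetime barely moves even though individual cells vary wildly.

```python
import random

# Toy model of the averaging argument: individual cell lifetimes vary a
# lot, but wear leveling spreads writes over the whole drive, so what
# matters is the mean over a huge population of cells. All numbers here
# are invented for illustration.
random.seed(0)

def mean_cell_lifetime(n_cells, mean=3000.0, spread=1000.0):
    """Mean lifetime (P/E cycles) over a simulated population of cells."""
    return sum(random.gauss(mean, spread) for _ in range(n_cells)) / n_cells

# Three simulated "drives" of 200,000 cells each: despite the per-cell
# spread of ~1000 cycles, each drive-level mean lands within a few cycles
# of 3000.
drives = [mean_cell_lifetime(200_000) for _ in range(3)]
print(drives)
```

The standard error of the mean shrinks with the square root of the cell count, which is why one drive already behaves like a large flash sample.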

