The original blog post linked from the article is https://blog.korelogic.com/blog/2015/03/24#ssds-evidence-sto...
It actually raises a point for a specific use-case: when a computer with an SSD is taken for analysis (e.g. seized by police as evidence in a court case), the SSD is kept powered off and may lose its data, and thus be worthless as evidence, after some time if it is not kept at a normal temperature (around 25°C).
It's not a normal use-case, and in that scenario it may actually be to the user's benefit to have an SSD, in the hope that the data will self-destruct with complete plausible deniability.
All they found is that there's a correlation between load, the operating temperature at which data was written, and the ambient temperature in which the device is kept while switched off.
This means that at an active operating temperature of about 40°C (which is normal for most consumer drives) and a storage temperature of 30°C (which is pretty much the maximum storage temperature under residential conditions), your drive will retain data for at least 52 weeks.
That might be a normal storage temperature, but it certainly doesn't seem like a maximum. For instance, during the summer, an attic or similar unconditioned enclosed storage area can easily exceed 40°C during the day.
> your drive will retain data for at least 52 weeks.
That seems far too short. I'd like to see similar data for mechanical drives and USB disks, but given the way flash storage works, I'm surprised that the powered-off lifetime is that short; I'd have expected decades. I'm curious what the actual failure mode is; what physically happens to the drive to make it lose data?
I also find the charts in that presentation interesting: data retention really increases when the data is written at a higher temperature? So for data retention, it's better for the drive to run hot?
The failure mode is that there are only about 50 electrons in a NAND flash charge trap these days. They can leak out on their own, and given enough time they will; once they do, the cell no longer holds the voltage it should, so you can't read back the bit value you stored in it. After enough such bits go bad, the ECC can no longer fix the block, and you've lost the ability to read the original data.
It should be noted that the more worn out the drive is, the easier it is for electrons to escape the cells. That's why a new drive retains data perfectly well while an old drive can barely hold onto it.
E.g. in Intel's product spec for the DC S3500 series, the data retention parameter is specified as: "3 months power-off retention once SSD reaches rated write endurance at 40 °C".
> According to a recent presentation by Seagate's Alvin Cox, who is also chairman of the Joint Electron Device Engineering Council (JEDEC), the period of time that data will be retained on an SSD is halved for every 5 degrees Celsius (9 degrees Fahrenheit) rise in temperature in the area where the SSD is stored.
In the "early" days of SSD's they didn't offered that much better performance especially in the enterprise storage arena than highend SAS/FC drives.
With improvements in controllers, the development of dedicated protocols for SSDs, and the use of PCI Express as a cheaper alternative to FC, coupled with ever-increasing storage density, SSDs are pretty much set up to take over the entire storage market, even in the enterprise.
In consumer-grade computing you only see mechanical disks in entry-level notebooks (SSHDs, usually with 16-32 GB of SSD storage for the OS and cache) and in home network-attached storage.
1 TB consumer SSDs can already be bought under the $400 mark, and the closest mechanical drives in terms of performance (still well below the SSD), the "professional grade" 10-15K RPM drives, cost about the same at that capacity. It's probably just a question of 2-3 years before mechanical drives are almost completely written off the market. They already don't make much sense for enterprise storage, since SSDs provide a cheaper alternative to high-performance storage even before you count the savings on electricity and cooling, and soon they won't make sense for everyday consumers either.
Is it becoming necessary to switch to a filesystem that ensures data integrity with redundancy and regular checksums?
Filesystems are not magic data-restoration fairies. Checksums just tell you when your data is gone; they don't bring it back. Always mirror or erasure-code, and store the copies in separate locations.
I presume corporate environments are assumed to have better data-integrity practices than your average Joe Homeowner.
With this in mind, I'm actually surprised that the SSDs lasted as long as they did. I've been using SSDs for about 6 years now, and the longest any of them has survived so far is less than 18 months, at much lower loads than what they tested with. Since late 2008 I've always run two SSDs in RAID 0 as my main system drive, with not much load on them other than games, some work stuff, and the OS.
At the point where the drive has no more spare blocks to relocate failed blocks to, I toss it and replace it (and I do the same if read errors come close to critical). For the first couple of years the SSDs only lasted about a year (my first Intel pair lasted something like 5 months, but those were shitty controllers); this time, with the Samsung 840 EVOs, I have a feeling they might actually last the whole 2-year span of my desktop.
The SSD is in fact better here, because it uses its internal resources to optimize for things other than retention without power.
In any case, the article is terrible because it takes testing criteria for extreme conditions (don't store your SSDs at 55°C) and fearmongers as if that were the norm.