These were made by SanDisk (SanDisk Optimus Lightning II) and the number of hours is between 39,984 and 40,032... I can't be precise because they are dead and I am going off of when the hardware configurations were entered into our database (which could have been before they were powered on) or when we handed them over to HN, and when the disks failed.
Unbelievable. Thank you for sharing your experience!
Edit: here's why I like this theory. I don't believe that the two disks had similar levels of wear, because the primary server would get more writes than the standby, and we switched between the two so rarely. The idea that they would have failed within hours of each other because of wear doesn't seem plausible.
But the two servers were set up at the same time, and it's possible that the two SSDs had been manufactured around the same time (same make and model). The idea that they hit the 40,000 hour mark within a few hours of each other seems entirely plausible.
Mike of M5 (mikiem in this thread) told us today that it "smelled like a timing issue" to him, and that is squarely in this territory.
This morning, I googled for issues with the firmware and the model of SSD and got nothing. But now I am searching for "40000 hours SSD" and getting a million relevant results. Of course, why would I have searched for 40,000 hours?
This kind of thing is why I love Hacker News. Someone runs into a strange technical situation, and someone else happens to share their own obscure, related anecdote, which just happens to precisely solve the mystery. Really cool to see it benefit HN itself this time.
Interesting how something so specifically and unexpectedly devastating, yet known for such a long time without any serious public-awareness effort from the companies involved, is referred to as a "bug".
It makes you lose data and need to purchase new hardware; where I come from, that's usually referred to as "planned" or "convenient" obsolescence.
Depends on who exactly we are talking about as having the intent...
Both planned and convenient obsolescence are beneficial to device manufacturers. Without proper accountability for that, it only becomes a normal practice.
I wonder if it might be closer to 40,032 hours. The official Dell wording [1] is "after approximately 40,000 hours of usage". 2^57 nanoseconds is 40031.996687737745 hours. Not sure what's special about 57, but a power of 2 limit for a counter makes sense. That time might include some manufacturer testing too.
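For anyone who wants to sanity-check the arithmetic, here's a minimal sketch (the nanosecond counter and the 2^57 limit are just this comment's guess, not anything confirmed about the actual firmware):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Hypothetical: power-on time tracked in nanoseconds,
           breaking once the count reaches 2^57. */
        uint64_t limit_ns = 1ULL << 57;
        double hours = (double)limit_ns / 1e9 / 3600.0;
        printf("2^57 ns = %.6f hours\n", hours);  /* ~40031.996688 */
        return 0;
    }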
It might not be nanoseconds exactly, but some power-of-two number of nanoseconds going into an appropriately small container seems likely. For example, a 62.5 MHz counter going into 53 bits breaks at the same limit. Why 53 bits? That's where things start to get weird with IEEE doubles: adding 1 no longer fits into the mantissa, so the number doesn't change. So maybe someone was doing a bit of floating-point math to figure out the time or schedule the next event? Anyway, it was very likely some kind of clock math that wrapped or saturated and broke a fundamental assumption.
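To illustrate the saturation idea, a tiny sketch (the 62.5 MHz tick rate and the conversion through a double are assumptions for the example, not known firmware behavior): once a tick count reaches 2^53, a double can no longer distinguish consecutive values, so naive floating-point math on the counter stops advancing.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* Assume a 62.5 MHz tick counter: 16 ns per tick, so
           2^53 ticks = 2^57 ns, i.e. roughly 40,032 hours. */
        uint64_t ticks = 1ULL << 53;

        /* If firmware pushes the counter through a double, precision
           runs out exactly here: adding one more tick is lost. */
        double t = (double)ticks;
        printf("t == t + 1.0 ? %s\n",
               t == t + 1.0 ? "yes - the clock appears frozen" : "no");
        return 0;
    }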
53 is indeed a magic value for IEEE doubles, but why would anybody count an inherently integer value with floating-point? That's a serious rookie mistake.
Of course there's no law that says SSD firmware writers can't be rookies.
A lot of companies have teams dedicated to hardware that don’t give a shit about it. And their managers don’t give a shit.
Then the people under them who do give a shit, because they depend on those servers, aren't allowed to register with HP etc. for updates, or to apply firmware updates, because of "separation of duties".
And that they were sold by HP or Dell, and manufactured by SanDisk.
Do I win a prize?
(None of us win prizes on this one).