The broken ramps clearly use significantly less material and are weaker as a result. If you look carefully you can also see what looks like stress whitening (a slightly lighter colour) forming on the bottom half of the cracked one, as well as on both halves of the one below it.
"We must assume that this is an error in the design of the Seagate Grenada hard drive installed in the Time Capsule (ST3000DM001 / ST2000DM001 2014-2018). The parking ramp of this hard drive consists of two different materials. Sooner or later, the parking ramp will break on this hard drive model, installed in a rather poorly ventilated Time Capsule."
Mixing drives from different manufacturers may help, but you really shouldn't rely on RAID alone. For commercial use you could duplicate the data across multiple NAS systems, but at home you probably aren't going to do that. The simplest approach is to accept that RAID is not a backup, and to store the data somewhere else as well :-)
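For the "somewhere else" part, even a dumb periodic one-way copy to an independent device is a real improvement over RAID alone. A minimal Python sketch, with made-up paths (a real setup would more likely use rsync or a proper backup tool):

    import shutil

    def mirror(src: str, dst: str) -> None:
        # naive one-way mirror: copies everything under src into dst,
        # overwriting what's already there (never deletes extras in dst)
        shutil.copytree(src, dst, dirs_exist_ok=True)

    # run this on a schedule, against a physically separate device
    mirror("/volumes/nas/photos", "/volumes/usb-backup/photos")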
The system reported a failure, so we scheduled the drive for replacement, brought up the hot spare, and started the parity resync. A little while later there was a second drive failure, and we told the data center folks to tell the tech to hurry up. While the tech was headed to our cage there was a third drive failure, and the array was toast. We were able to restore from backup, but the data was a day old.
Lessons were: Mix drives from different production batches (we couldn't mix manufacturers because of the leasing contract). Have a backup that you can restore from. Parity resync operations while the array is in use will put more stress on the drives than production use alone will, and will kill any (remaining) weak drives.
Initially, I drove my HDDs offsite... and I keep another offsite backup in my car.
I've been maintaining an offsite backup at a friend's house that I rotate out whenever I visit. The car would be a good extra. Encryption is required, of course, but I'm already doing that.
I'd give them credit, but I forgot where I got that idea from ;) I don't encrypt mine, though, because the data has a higher chance of recovery that way.
So even though my NAS supports RAID 5/6, I still go with single-disk volumes, so that in the event of a disk failure I only have to replace the failed disk and, in the worst case, lose only the data on it. In practice I don't lose much, because my data doesn't have to live in the same volume, and the disks are already fast enough that my home network is the bottleneck.
However, since I now had to replace 2 disks, I decided an upgrade was in order; the disks had been in there for 10 years and I had wanted more than 1 TB for a while.
It turns out a 4 TB array built from three 2 TB disks costs about half as much as one built from two 4 TB disks.
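For concreteness, a quick sketch of the arithmetic, assuming RAID 5 for the three-disk array, a mirror for the two-disk one, and made-up prices (plug in real ones):

    # hypothetical per-drive prices, for illustration only
    price_2tb, price_4tb = 60.0, 180.0

    usable_raid5 = (3 - 1) * 2       # three 2 TB disks in RAID 5 -> 4 TB usable
    cost_raid5 = 3 * price_2tb       # 180

    usable_mirror = 4                # two 4 TB disks mirrored -> 4 TB usable
    cost_mirror = 2 * price_4tb      # 360

    print(cost_raid5 / usable_raid5, cost_mirror / usable_mirror)  # cost per usable TB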
What I meant by disk failures was real disk failures, where there is no way to get data off the disk short of opening it up and reading the platters directly. In your case all of your data would be a total loss, since RAID 0 offers no redundancy and cannot tolerate even a single disk failure.

You don't seem to have that much data, so migrating a 4 TB array wouldn't be too hard. Things are completely different when you have to rebuild, or even worse migrate, a much bigger array, say 4x10 TB. Even if everything goes well, it takes a few days just to copy the data somewhere else, and if there are cascading failures along the way there is really not much you can do. I can promise you that once you've been through that nightmare, you will never want to do it again.

Taking that cost and effort into account, whenever I need to build a personal NAS I always go with 2 or 3 of the biggest-capacity NAS/enterprise HDDs available and leave at least one bay vacant, so that I can easily add space without touching the existing drives. Even if the NAS eventually fills up, I can simply replace the smallest drive with a bigger one and copy the data back.
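To put a number on "a few days": a back-of-the-envelope sketch, assuming ~150 MB/s of sustained sequential throughput (plausible for large HDDs, and optimistic for a degraded array):

    usable_bytes = (4 - 1) * 10e12    # 4x10 TB in RAID 5 -> ~30 TB usable
    rate = 150e6                      # assumed sustained throughput, bytes/second
    days = usable_bytes / rate / 86400
    print(f"~{days:.1f} days of pure copying")   # ~2.3 days, before any hiccups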
People use RAID 10 because single-drive failures are cheap to recover from (you copy one disk to another), you can survive some two-drive failures, and the logical arrangement is simpler: it's just stripes where each stripe member has a duplicate.
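A minimal sketch of that layout, with illustrative names and chunking rather than any particular implementation:

    def raid10_location(lba: int, num_pairs: int, chunk: int):
        # Blocks are grouped into chunks, chunks are striped across the
        # mirror pairs, and each pair holds two identical copies.
        stripe_chunk = lba // chunk
        pair = stripe_chunk % num_pairs                   # which mirror pair
        offset = (stripe_chunk // num_pairs) * chunk + lba % chunk
        return (2 * pair, 2 * pair + 1, offset)           # both copies, same offset

    # e.g. 4 disks = 2 mirror pairs, 128-block chunks
    print(raid10_location(1000, num_pairs=2, chunk=128))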
When you connect multiple similar devices together mechanically, their vibrations can sum catastrophically.
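As a toy illustration: two drives at 7200 rpm each vibrate at roughly 120 Hz, and when the phases line up the combined peak approaches twice a single drive's amplitude (the exact frequencies here are made up):

    import math

    f1, f2 = 120.0, 120.5   # two nearly identical spindle frequencies, in Hz
    dt = 1e-4               # sample interval, seconds
    sig = [math.sin(2 * math.pi * f1 * i * dt) + math.sin(2 * math.pi * f2 * i * dt)
           for i in range(40000)]                    # 4 s, one full beat cycle
    print(f"combined peak: {max(sig):.2f}x one drive")   # approaches 2.0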
Ah, it’s mechanical resonance I’m thinking of.
I personally have maybe 5 disks: two I purchased and three scavenged from systems they outlived. Not really gonna learn which ones are good and bad with that kind of sample, am I?
I bought 8 of them and had 5 fail in 2 years. Fortunately the failures were never synchronized enough to kill a RAID.