My small datacenter results mimic Backblaze's too: dead and dying Seagates all over the place. So I notified management that we will only be purchasing Hitachi drives from now on. I have a Backblaze pod that I recently converted to FreeBSD & ZFS. I love the drive density the Backblaze design offers but I HATE the lack of physical notification when a drive dies.
Most other file servers have front-facing drive caddies, usually with LEDs to indicate disk access or errors. This is great because you can walk into the datacenter and SEE which disk has failed. With the Backblaze system you get /dev/DriveID but no idea where in the array that particular disk physically sits.
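The usual workaround is to record each drive's serial number and physical slot when you build the pod, then on failure pull the serial off the dead device (e.g. `smartctl -i /dev/daX`) and look it up. A minimal sketch of that lookup — the serials and slot labels here are made up for illustration, not any real pod's layout:

```python
# Sketch: map a failed drive's serial number to its physical slot.
# SLOT_MAP is hypothetical -- you record it yourself at build time.
# In practice the serial comes from `smartctl -i /dev/daX` on the
# device that ZFS reports as faulted.

SLOT_MAP = {
    "Z1F0ABCD": "row 1, port 3",
    "Z1F0EFGH": "row 2, port 7",
    "Z1F0IJKL": "row 3, port 1",
}

def locate_drive(serial: str) -> str:
    """Return the physical slot for a drive serial, or a warning."""
    return SLOT_MAP.get(serial, f"serial {serial} not in slot map -- update it!")

if __name__ == "__main__":
    print(locate_drive("Z1F0EFGH"))  # row 2, port 7
```

Low-tech, but it beats pulling 45 drives one at a time to find the dead one.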
You definitely don't want to use drives from the same manufacturing run (batch) on the same array, since they are the most likely to fail all at the same time.
Second to that, you probably don't want to go single-source for your drives -- maybe run a mix of Hitachi and WD.
Because then a single firmware bug could wipe you out. Once the drives all hit 4500 hours or 700 bad blocks or some other trigger, they die. It's happened before.
Could also be caused by bad grease, or shoddy bearings, or pretty much anything.
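A quick sanity check for that kind of concentration risk is to tally how many drives in the array share the same model and firmware combo and flag anything over some fraction. A sketch, assuming you've already pulled model/firmware/serial per drive (via smartctl or similar); the drive data and threshold below are illustrative:

```python
from collections import Counter

# Sketch: flag arrays where too many drives share one model+firmware
# combo -- a single firmware bug could take them all out at once.
# Drive records are illustrative; in practice gather them per device.

def firmware_risk(drives, threshold=0.5):
    """Return (model, firmware) combos present in more than
    `threshold` of the array's drives."""
    counts = Counter((d["model"], d["firmware"]) for d in drives)
    return [combo for combo, n in counts.items() if n / len(drives) > threshold]

drives = [
    {"model": "ST3000DM001", "firmware": "CC24", "serial": "Z1F01"},
    {"model": "ST3000DM001", "firmware": "CC24", "serial": "Z1F02"},
    {"model": "ST3000DM001", "firmware": "CC24", "serial": "Z1F03"},
    {"model": "HDS5C3030ALA630", "firmware": "MEAOA580", "serial": "MJ001"},
]

print(firmware_risk(drives))  # [('ST3000DM001', 'CC24')]
```

Close-together serial numbers are a rough proxy for same-batch drives, so eyeballing those at purchase time helps too.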