There is something funny in the name here, given that a black dwarf is what's left after a star has gone through a meltdown, and I think that's a serious risk with that many drives in such a tight enclosure.
He seems to have that factor under control, though; 90 watts under load really isn't that bad.
TLER is well worth the 50% premium in many cases. With desktop firmware, my experience has been that fully half the time when a disk fails, it doesn't fail cleanly; it just retries forever. Linux MD and 3ware hardware RAID (the only hardware RAID I've tried) both hang waiting for the drive to fail. (The 3ware keeps resetting the drive, but it will sit there and suck for two days just like MD.)
Back when I used desktop drives, I had several outages caused by a bad disk in a mirror. I'd log into the box, manually fail it out of the RAID, and we'd be good, but customers tend to get mad when their I/O hangs for hours on end.
It only takes one hung mirror in production, for me at least, to pay for a whole lot of 'enterprise' firmware drives.
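For what it's worth, the manual fail-out described above looks roughly like this with Linux MD. The array (/dev/md0) and partition names are invented examples, not details from the post, and the guard makes the sketch a no-op on machines without an MD array:

```shell
# Hypothetical sketch: kick a wedged disk out of an MD mirror by hand.
# Device names are examples only.
if [ -e /dev/md0 ] && command -v mdadm >/dev/null 2>&1; then
    mdadm /dev/md0 --fail /dev/sdb1     # mark the hung member as failed
    mdadm /dev/md0 --remove /dev/sdb1   # detach it from the array
    mdadm /dev/md0 --add /dev/sdc1      # add a replacement; resync begins
    md_sketch=ran
else
    echo "no MD array here; commands shown for reference only"
    md_sketch=skipped
fi
```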
I just bought a 1.5 TB Samsung HD154UI for $99 that supports ERC. There are reports that P-model variants of the WD 15EADS no longer support TLER, but I have several S- and R-model variants of that drive that do, including one manufactured in March, so it's still possible to get WD consumer drives that support TLER/ERC; you just have to be picky about the variant.
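If you want to check whether a particular drive honors ERC before trusting it in an array, smartmontools can query and set the SCT Error Recovery Control timers. A sketch, assuming smartctl is installed; the device path is an example, and the guard skips the commands where it doesn't apply:

```shell
# Sketch: query/set SCT Error Recovery Control, the vendor-neutral
# equivalent of WD's TLER. /dev/sda is an example device.
DEV=/dev/sda
if command -v smartctl >/dev/null 2>&1 && [ -b "$DEV" ]; then
    smartctl -l scterc "$DEV"          # show current read/write recovery limits
    smartctl -l scterc,70,70 "$DEV"    # cap both at 7.0 s (units of 100 ms)
    erc_sketch=ran
else
    echo "smartctl or $DEV not available; commands shown for reference"
    erc_sketch=skipped
fi
```

A drive that reports "SCT Error Recovery Control command not supported" here is one of the variants that will retry forever.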
I would also be concerned about the single point of failure in his fan. If it seizes up for any reason, the temperature in that thing is going to skyrocket very quickly.
Personally, after I outgrow my current RAID5 -- similar to the one I built here (http://www.linuxjournal.com/article/6558), but using SATA drives now -- I'm going to switch over to mirrored large drives instead of RAID5. As long as I don't start videotaping every second of the kids' lives I should be ok.
Looks nice :D
I hope he has better results with RAID5 than I did, though. I reverted to a dual-drive NAS for stability after running RAID5 servers for a long time.
Somehow IDE and SATA drives and RAID cards don't cut it for me. I have lost my data so freaking often to corrupt arrays that it's not fun anymore.
The first server I had used a Promise SuperTrak SX6000 card (with 4 × 80 GB drives) that gave me nothing but problems: very poor performance and a corrupt array about once every two months. It was running on a standard non-server mobo with a decent but standard PSU, though, so one of those components might have caused the problems.
The second was running on an Areca ARC-1210 card (with 8 × 320 GB drives) on a Tyan Thunder dual-Xeon server mobo with an 850 W Tagan server PSU, behind a 2200 VA UPS. Even then the array seemed to drop out quite often, and I spent hundreds of hours in total (updating drivers/firmware/OSes, recovering lost data, replacing parts and cables, moving the server, etc.) trying to get it to run decently.
Eventually I just sold the whole damn thing and bought a simple Conceptronic CH3SNAS with two 1.5 TB drives for the cash the old server brought in. I haven't had a single problem in the whole year I've had it and am very happy with the purchase.
The performance isn't remotely comparable (600 MB/s vs. 20 MB/s), but even the slow NAS is sufficient for me (it streams HD movies just fine, and my backups aren't terabytes big anyway ;) ). Plus the power usage is literally a tenth, it's very quiet, and it's a lot easier to carry around and take with me if I need to.
Maybe I just fail at fileservers, or maybe I just had bad luck with two bad cards, but I'm not convinced a simple, affordable RAID5 solution means your data is safe.
But I must admit I wouldn't mind having that self-made NAS at home, though; it looks cool :D
Maybe one day I'll try again... ;)
Funny. I've used an ARC-1260 in a 10 × 500 GB RAID6 array for several years and it has never gone down. I've RMA'd probably four drives to WD under warranty.
I've also had two software 5 × 1 TB RAID5 arrays for about two years and they've never even lost a drive. Apparently WD improved their RE drives between the 500 GB and 1 TB versions; the difference in failure rate is huge.
Have you had good luck with this? I'm running ZFS on an OpenSolaris server, but I hate Solaris and would like to switch to Linux. I never got the impression that ZFS-on-FUSE was stable enough to trust with all of my data.
I put my homedir under zfs-fuse for a while. When I migrated to JFS-on-RAID5 (3 × 160 GB) some months later, a small but nontrivial fraction of my symlinks had turned into zero-length files with 0000 permissions. Various things sort of randomly broke, which was interesting. (Mostly unimportant things, such as ssh not being able to read authorized_keys.)
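A quick way to hunt for that specific kind of damage after a migration is to look for zero-length files with no permission bits set. A toy sketch against a throwaway scratch directory (file names invented):

```shell
# Toy reproduction of the damage check: build a scratch tree containing
# one zero-length, permissionless file, then find files in that state.
tmp=$(mktemp -d)
echo data > "$tmp/healthy"
: > "$tmp/damaged"                 # zero-length file
chmod 0000 "$tmp/damaged"          # no permission bits, like the broken symlinks
suspects=$(find "$tmp" -type f -size 0 -perm 0000 -exec basename {} \;)
echo "suspect files: $suspects"    # prints: suspect files: damaged
rm -rf "$tmp"
```

Pointing the same find at a real homedir would list candidates for things that were once symlinks.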
I use ZFS under FreeBSD 8-RELEASE in a 6 × 1.5 TB raidz2 (dual-parity) array and it is rock solid. The benefit of running ZFS under FreeBSD rather than Linux is that you don't need FUSE; ZFS is really well integrated with FreeBSD's filesystem layer.
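For anyone curious, creating a pool like that is a one-liner. The pool and disk names below are invented examples, and the guard skips the commands on machines without ZFS tools:

```shell
# Sketch: a six-disk, dual-parity raidz2 pool like the one described.
# "tank" and the ada* disk names are invented examples.
if command -v zpool >/dev/null 2>&1; then
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
    zpool status tank              # verify the vdev layout
    zfs_sketch=ran
else
    echo "no ZFS tools installed; commands shown for reference"
    zfs_sketch=skipped
fi
```

raidz2 survives any two simultaneous drive failures, which is why it's a popular choice at six or more disks.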
Were you doing the RAID using the card's drivers (BIOS softraid/fakeraid), or using regular software RAID?
It seems odd that the controller would drop drives like that, but I would never trust the "RAID" features of low-end interface cards ... they're barely competent at exposing the drives as bare block devices to the OS as it is.
I've had decent enough luck with the real hardware-RAID cards from Dell, but they are expensive, and if I were building a new server today, given the price of CPU cores, I'm not sure it would be at all worth the cost and single-point-of-failure risk. I've never had a card fail, but if one did, that would suck. Back in the SCSI/Pentium II era they were fairly nice, though; I have a PERC still running in a closet, doing RAID5 across five 74 GB SCSI drives. It's probably about time to pull the plug on it... those five drives probably burn through their replacement cost in electricity every few months.
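That last claim is easy to sanity-check with back-of-envelope arithmetic. The wattage and electricity price below are assumptions, not figures from the post:

```shell
# Assumed numbers: five older SCSI drives at ~15 W each, running 24x7,
# billed at 15 cents per kWh.
watts=75                                  # 5 drives x ~15 W (assumption)
kwh_month=$((watts * 24 * 30 / 1000))     # energy used per month, in kWh
cents_month=$((kwh_month * 15))           # monthly cost in cents (assumption)
echo "about $kwh_month kWh, or \$$((cents_month / 100)) a month"
```

At roughly $8 a month under those assumptions, five small drives do chew through the price of a cheap replacement disk within a year, if not quite "every few months".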
I purchased a NAS array from http://lime-technology.com/ and couldn't be happier with it: 15 drives in a standard form-factor case running a custom Slackware distro off an internal USB key. Only one drive is needed for parity, and you can swap drive sizes as much as you want, just as long as the parity drive is as large as your largest data drive.
It runs its own flavor of RAID called unRAID, a RAID4 derivative. No data is striped across the drives, so if a failure can't be repaired from parity, the loss is contained to the single drive with issues. The servers are great for the price; check them out.
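The parity scheme it inherits from RAID4 is simple enough to demonstrate: the parity drive holds the bytewise XOR of the data drives, so any single lost drive can be rebuilt by XOR-ing the survivors with parity. A toy sketch with invented byte values standing in for two tiny "drives":

```shell
# Toy model: two 3-byte "data drives" plus a parity "drive" holding
# their bytewise XOR; then rebuild drive 1 after "losing" it.
d1="104 101 108"
d2="112 97 114"
parity=""
set -- $d2
for b in $d1; do parity="$parity $((b ^ $1))"; shift; done
echo "parity:$parity"
rebuilt=""
set -- $parity
for b in $d2; do rebuilt="$rebuilt $((b ^ $1))"; shift; done
echo "rebuilt:$rebuilt"        # matches drive 1: 104 101 108
```

With more data drives the idea is the same: parity is the XOR of all of them, and a missing drive is the XOR of everything that's left.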
The video is quite amazing, but I think he made several crucial mistakes: never drill, file, or arc-weld anything with tiny, sensitive electronic parts around! A bit of metal on one of the PCBs is enough to ruin the day.
The other thing I noticed is that he used the welder without full protection on both hands: the arc gives off UV, on top of the hot metal flying around, which is dangerous.
A funny thing is the construction/design evolving as he goes along; I do that sometimes myself, without designing it first in CAD.
I just bought one of these http://www.amazon.co.uk/gp/product/B001E03444/ and swap out the drives as I need them, plus two Fractal 120 mm fans (incredibly quiet yet powerful, highly recommended) on the sides to keep the heat away. I chose that instead of building a similar frame, since a frame for 25 drives (that's what I have :-) ) would need some serious additional cooling and power, whereas the swapping solution doesn't.
Nice job, though I shudder to think about the vibration noise made by 8 hard-mounted drives in an aluminum enclosure... I'd be surprised if it didn't sound like a mouse stuck in a coke can full of nuts (the threaded, metal kind, that is... ;-)
I wondered about that also. I was a little surprised to see that he didn't (appear to) put in any sort of shock mounts or buffers between the side plates and the drives themselves.
He seems to know what he's doing, though, so perhaps it was a purposeful tradeoff in order to use the sides as heat sinks? By mounting the drives without shockmounts, you can transfer heat directly from the drives to the enclosure chassis. In terms of preventing drive failure that might be more important than vibration reduction.
If the magnetic fields were strong enough to hurt a drive a few centimeters away, they would ruin the other platters in the same drive, and the data on the platter a few millimeters away.
Drives get as close as they can in ordinary servers and cases as well.
It would be more common to worry about heat issues, although there's a Google study floating around the net which found that heat plays a much smaller role in hard disk failure than commonly thought.