He seems to have that factor under control, though; 90 watts under load really isn't that bad.
Reminds me of this: http://www.clustercompute.com/ , no longer in service because it got replaced by a single gpu card.
I'm aware of what a white dwarf is. Black dwarfs do not currently exist (we think) but are definitely a possibility: think of them as the burned-out ashes of a white dwarf, which is an intermediate stage.
For variety there are also red, orange, brown and yellow dwarfs, it's quite a family.
Back when I used desktop drives, I had several outages caused by a bad disk in a mirror. I'd log into the box, manually fail it out of the RAID, and we'd be good, but customers tend to get mad when their I/O hangs for hours on end.
It only takes one hung mirror in production, for me at least, to pay for a whole lot of 'enterprise' firmware drives.
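For mdadm software RAID, manually kicking a misbehaving disk out of a mirror looks roughly like this (a sketch; `/dev/md0` and `/dev/sdb1` are assumed device names, adjust for your array):

```shell
# Assumed device names -- substitute your own array and member disk.
# Mark the misbehaving disk as failed so the mirror stops waiting on it:
mdadm /dev/md0 --fail /dev/sdb1
# Remove it from the array:
mdadm /dev/md0 --remove /dev/sdb1
# After swapping in a replacement, add it back and let it resync:
mdadm /dev/md0 --add /dev/sdb1
# Watch the rebuild progress:
cat /proc/mdstat
```

The problem described above is that a drive without ERC can stall for minutes on a bad sector before the kernel ever decides it has failed, so you may be doing the `--fail` step by hand long after I/O has already hung.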
Until recently, you often needed a vendor-supplied MS-DOS executable to enable or modify ERC features on consumer drives, but smartmontools supports it as of v5.40, thanks to this guy:
I just bought a 1.5TB Samsung HD154UI for $99 that supports ERC. There are reports that P-model variants of the WD 15EADS don't support TLER anymore, but I have several S- and R-model variants of that drive that do, including one that was manufactured in March; so it's still possible to get WD consumer drives that support TLER/ERC, you just have to be picky about the variant.
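For reference, the smartmontools interface for ERC looks like this (a sketch; `/dev/sda` is an assumed device name, and 70 -- i.e. 7.0 seconds, since the units are 100 ms -- is the conventional TLER-style timeout, not a value from this thread):

```shell
# Query the drive's current error-recovery-control settings
# (requires smartmontools >= 5.40, as noted above):
smartctl -l scterc /dev/sda
# Set both read and write recovery timeouts to 7.0 seconds,
# so a failing sector gets reported to the RAID layer quickly
# instead of the drive retrying internally for minutes:
smartctl -l scterc,70,70 /dev/sda
```

Note that on some consumer drives the setting does not survive a power cycle, so it may need to be reapplied at boot.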
Never again! We even had to stop using them as long term storage disks just in case... (though it appears heat was the main issue for them)
I would also be concerned about the single point of failure in his fan. If that seizes up for any reason, the temp in that thing is going to skyrocket very quickly.
Personally, after I outgrow my current RAID5 -- similar to the one I built here (http://www.linuxjournal.com/article/6558), but using SATA drives now -- I'm going to switch over to mirrored large drives instead of RAID5. As long as I don't start videotaping every second of the kids' lives I should be ok.
It runs its own version of RAID called unRAID, which is a RAID4 derivative. No data is striped across the drives, so if a problem isn't repairable from parity, the data loss is contained to the single drive with issues. The servers are great for the price; check them out.
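The parity scheme unRAID inherits from RAID4 is plain byte-wise XOR across the data disks, with parity on a dedicated drive. A toy Python sketch (made-up block values, nothing from unRAID itself) of how a dead disk's contents come back from the survivors plus the parity disk:

```python
from functools import reduce

# Toy data blocks standing in for three data disks
# (in unRAID, whole drives play this role).
disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# RAID4-style dedicated parity: XOR of all data disks, stored on one drive.
parity = xor_blocks(disks)

# Disk 1 dies; XOR-ing the parity with the surviving disks
# recovers exactly its original contents.
recovered = xor_blocks([disks[0], disks[2], parity])
print(recovered == disks[1])  # True
```

This is also why the failure containment claim holds: each data drive carries a normal filesystem, so even an unrecoverable double failure only loses the drives that actually died.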
The other thing I noticed is that he used the welder without full protection on both hands: besides the hot metal flying around, the arc gives off UV, which is dangerous.
A funny thing is the way the construction is designed as it goes along; I do that sometimes myself, without drawing it up first in CAD.
He used dead drives during the construction, so this isn't a big deal.
The first server I had used a Promise card (SuperTrak SX6000) with 4 × 80 GB drives that gave me nothing but problems (very poor performance and corrupt arrays about once every 2 months). It was running on a standard non-server mobo with a decent but standard PSU, though, so one of those components might have caused the problems.
The second was running an Areca ARC-1210 card (with 8 × 320 GB drives) on a Tyan Thunder dual-Xeon server mobo with an 850 W Tagan server PSU, and was put behind a 2200 VA UPS. Even then the array seemed to drop out quite often, and I've spent hundreds of hours in total (updating drivers/firmware/OSes, recovering lost data, replacing parts and cables, moving the server, etc.) trying to get it to run decently.
Eventually I just sold the whole damn thing and bought myself a simple Conceptronic CH3SNAS with two 1.5 TB drives for the cash I got for the old server. I haven't had a single problem in the whole year I've had it and am very happy with the purchase.
The performance isn't remotely comparable (600 MB/sec vs 20 MB/sec), but even the slow NAS is sufficient for me (it streams HD movies just fine and my backups aren't TBs big anyway ;) ). Plus the power usage is literally a tenth, it's very quiet, and it's a lot easier to carry around and take with me if I need to.
Maybe I just fail at fileservers, maybe I just had bad luck with 2 bad cards, but I'm not convinced a simple affordable RAID-5 solution means your data is safe.
But I must admit I wouldn't mind having that self-made NAS at my home tho, it looks cool :D
Maybe one day I'll try again.... ;)
I've also had 2 software 5 × 1 TB RAID-5 arrays for about 2 years and they've never even lost a drive. Apparently WD improved their RE drives between the 500 GB and 1 TB versions; the difference in failure rate is huge.
You could try making a RAIDZ volume and see how it goes.
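A minimal zpool invocation for that (a sketch; the pool name and da0–da3 device names are placeholders, not from this thread):

```shell
# Create a single-parity RAIDZ vdev from four disks --
# survives any one disk failure, like RAID-5 but with
# ZFS checksumming and no write hole:
zpool create tank raidz da0 da1 da2 da3
# Check health and the redundancy layout:
zpool status tank
```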
Are you using 64-bit? I heard ZFS loves 64-bit.
It seems odd that the controller would drop drives like that, but I would never trust the "RAID" features of low-end interface cards ... they're barely competent at exposing the drives as bare block devices to the OS as it is.
I've had decent enough luck with the real hardware-RAID cards from Dell, but they are expensive, and if I were building a new server today, given the price of CPU cores, I'm not sure it would be at all worth the cost and SPOF risk. I've never had a card fail, but if it did, that would suck. Back in the SCSI/Pentium II era they were fairly nice though -- I have a PERC still running in a closet, doing RAID5 across five 74 GB SCSI drives. Probably about time to pull the plug on it though ... those five drives probably burn through their replacement cost in electricity every few months.
I just bought one of these http://www.amazon.co.uk/gp/product/B001E03444/ and swap out the drives as I need them, plus two Fractal 120 mm fans (incredibly quiet yet powerful, highly recommended) on the sides to keep the heat away. I chose that instead of building a similar frame, since a frame for 25 drives (that's what I have :-) ) would need some serious additional cooling and power, whereas the swapping solution doesn't.
He seems to know what he's doing, though, so perhaps it was a purposeful tradeoff in order to use the sides as heat sinks? By mounting the drives without shockmounts, you can transfer heat directly from the drives to the enclosure chassis. In terms of preventing drive failure that might be more important than vibration reduction.
Can you actually put one drive physically on top of another like that, long term, without any ill effects from one magnetic field to another?
Drives get as close as they can in ordinary servers and cases as well.
It would be more common to worry about heat issues, although there's a Google study floating around the net whose finding is that heat plays a much less significant role in hard disk failure than commonly thought.
Good design; it's not often that a NAS actually looks great.
That's appalling for 8 drives!
It's fast enough.