Homemade 16TB NAS - BlackDwarf (willudesign.com)
63 points by Ghost_Noname 2815 days ago | 45 comments

There is something funny about the name here, given that a black dwarf is what remains after a star has burned out, and I think a meltdown is a serious risk with that many drives in such a tight enclosure.

He seems to have that factor under control though, 90 watts under load really isn't that bad.

Reminds me of this: http://www.clustercompute.com/ , no longer in service because it was replaced by a single GPU card.

It's all about airflow. Those Green Power drives don't use more than 5 W each, so as long as you have good airflow around all of them it won't be a problem.

These stars are called white dwarfs.


I'm aware of what a white dwarf is. Black dwarfs do not currently exist (we think) but are definitely a possibility; think of them as the burned-out remains of a white dwarf, which is an intermediate stage.

For variety there are also red, orange, brown and yellow dwarfs, it's quite a family.

Uh oh... I see Green Power HDDs. We used them for a while in big RAID configurations and their failure rate is bad.

Always use Black HDDs for servers, man (if using WD)!

Why not use the RAID Edition drives while you're at it? Supposedly better MTBF, a longer warranty, and the to-me-unquantifiably-better TLER feature.

TLER is well worth the 50% premium in many cases. With desktop firmware, my experience has been that fully half the time when a disk fails, it doesn't properly fail; it retries forever. Linux MD and 3ware hardware RAID (the only hardware RAID I tried) both hang waiting for the drive to fail. (The 3ware keeps resetting the drive... but it will sit there and suck for two days, just like md.)

Back when I used desktop drives, I had several outages caused by a bad disk in a mirror. I could log into the box and manually fail it out of the RAID and we'd be good, but customers tend to get mad when their I/O hangs for hours on end.

It only takes one hung mirror in production, for me at least, to pay for a whole lot of 'enterprise' firmware drives.

Many (most?) new consumer drives have support for ERC (the standard term for what WD calls "TLER"), which is part of the ATA-8 standard. See here for the spec (ERC is described in section 8.3.4):


Until recently, you often needed a vendor-supplied MS-DOS executable to enable or modify ERC features on consumer drives, but smartmontools supports it as of v5.40, thanks to this guy:


I just bought a 1.5TB Samsung HD154UI for $99 that supports ERC. There are reports that P-model variants of the WD 15EADS don't support TLER anymore, but I have several S- and R-model variants of that drive that do, including one that was manufactured in March; so it's still possible to get WD consumer drives that support TLER/ERC, you just have to be picky about the variant.
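For anyone who wants to check their own drives: with smartmontools v5.40 or later, ERC can be queried and set from the command line. A sketch, with `/dev/sda` as a placeholder device name:

```shell
# Read the current error-recovery-control timers (only works if the drive supports ERC)
smartctl -l scterc /dev/sda

# Set read and write recovery limits to 7 seconds (values are in tenths of a second),
# so a bad sector is reported to the RAID layer instead of being retried forever
smartctl -l scterc,70,70 /dev/sda
```

Note the setting is typically lost on power cycle, so it usually goes in a boot script.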

huh. neat. where are you finding out what is and is not supported? user forums? or do drive data sheets actually have useful info?

It's purely anecdotal. HardForum is a good source of information because of its enthusiast user base. A large group of home NAS builders post there.

It was a side effect of over-ordering (someone typed 100 instead of 10 on a weekly order...) a load for another part of our business.

Never again! We even had to stop using them as long-term storage disks, just in case... (though it appears heat was the main issue for them)

But mauve has more RAM

That's a nice project and all; too bad he's likely to have it go TU on him when his first drive fails.


I would also be concerned about the single point of failure in his fan. If it seizes up for any reason, the temperature in that thing is going to skyrocket very quickly.

Personally, after I outgrow my current RAID5 -- similar to the one I built here (http://www.linuxjournal.com/article/6558), but using SATA drives now -- I'm going to switch over to mirrored large drives instead of RAID5. As long as I don't start videotaping every second of the kids' lives I should be ok.

Doesn't look like there's enough space for ventilation. Unless I'm missing something, heat could be a serious problem with this setup.

His other mods are actually more impressive to me, especially http://www.willudesign.com/CinematographTop.html and http://www.willudesign.com/TheDeskTop.html.

Agreed. Storage, even big NAS, just does not do it for me anymore.

I purchased a NAS array from http://lime-technology.com/ and couldn't be happier with it. 15 drives in a standard form factor case running a custom Slackware distro off an internal USB key. Only one drive is needed for parity, and you can swap drive sizes as much as you want, just as long as the parity drive is as large as your largest data drive.

It runs its own flavor of RAID called unRAID, which is a RAID 4 derivative. No data is striped across the drives, so if a failure is not repairable by parity, your data loss is contained to the single drive with issues. The servers are great for the price; check them out.
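The single-parity idea behind unRAID can be sketched in a few lines. This is a toy illustration of byte-wise XOR parity (not unRAID's actual code): the parity drive holds the XOR of all data drives, and any one lost drive can be rebuilt from the survivors plus parity.

```python
from functools import reduce

def build_parity(drives):
    """Parity block = byte-wise XOR of every data drive's block."""
    return [reduce(lambda a, b: a ^ b, column) for column in zip(*drives)]

def rebuild(missing_index, drives, parity):
    """Reconstruct a lost drive by XOR-ing parity with all surviving drives."""
    survivors = [d for i, d in enumerate(drives) if i != missing_index]
    return [reduce(lambda a, b: a ^ b, column) for column in zip(parity, *survivors)]

# Three tiny "drives" of 4 bytes each
data = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
parity = build_parity(data)

# Pretend drive 1 died: its contents come back from parity + the other two drives
assert rebuild(1, data, parity) == [5, 6, 7, 8]
```

This also shows why only one simultaneous drive failure is recoverable with a single parity drive: the XOR equation has exactly one unknown to solve for.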

The video of him building the machine is really worth watching:


The video is quite amazing, but I think he made several crucial mistakes: never drill, file or arc solder anything with tiny, sensitive electronic parts around! A bit of metal on one of the PCBs is enough to ruin the day.

The other thing I noticed is that he used the welder without full protection on both hands: the arc gives off UV, besides the hot metal flying around, which is dangerous.

A funny thing is the way the construction/design evolves as it goes along; I do that sometimes myself, without designing it first in CAD.

"never drill, file or arc solder anything with tiny and sensitive electronic parts around!"

He used dead drives during the construction, so this isn't a big deal.

Looks nice :D I hope he has better results with RAID 5 than I did, though. I have reverted to a dual-drive NAS for stability after having RAID 5 servers for a long time. Somehow IDE and SATA drives and RAID cards don't cut it for me; I have lost my data so freaking often to corrupt arrays that it's not fun anymore.

The first server I had used a Promise (SuperTrak SX6000) card (with 4 × 80 GB drives) that gave me nothing but problems (very poor performance and corrupt arrays about once every 2 months). It was running on a standard non-server mobo with a decent but standard PSU, so one of those components might have caused the problems, though.

The second ran an Areca ARC-1210 card (with 8 × 320 GB drives) on a Tyan Thunder dual-Xeon server mobo with an 850 W Tagan server PSU, behind a 2200 VA UPS. Even then the array seemed to drop out quite often, and I have spent hundreds of hours in total (updating drivers/firmware/OSes, recovering lost data, replacing parts and cables, moving the server, etc.) trying to get it to run decently.

Eventually I just sold the whole damn thing and bought a simple Conceptronic CH3SNAS with two 1.5 TB drives for the cash I got for the old server. I have never had a single problem in the whole year I've had it and am very happy with the purchase. The performance is not remotely comparable (600 MB/s vs 20 MB/s), but even the slow NAS is sufficient for me (it streams HD movies just fine and my backups aren't terabytes big anyway ;) ). Plus the power usage is literally a tenth, it's very quiet, and it's a lot easier to carry around and take with me if I need to.

Maybe I just fail at file servers, maybe I just had bad luck with two bad cards, but I'm not convinced a simple affordable RAID 5 solution means your data is safe.

But I must admit I wouldn't mind having that self-made NAS at my home, though; it looks cool :D Maybe one day I'll try again... ;)

Funny. I've used an ARC-1260 in a 10 × 500 GB RAID 6 array for several years and it has never gone down. I've RMA'd probably 4 drives to WD under warranty.

I've also had two software 5 × 1 TB RAID 5 arrays for about 2 years and they've never even lost a drive. Apparently WD improved their RE drives between the 500 GB and 1 TB versions; the difference in failure rate is huge.

Next time, do try ZFS on FUSE (a Linux userspace filesystem); I have a 1 TB setup and it works quite well.

You could try making a RAIDZ volume and see how it goes.
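For context, creating a RAIDZ volume is a one-liner on a ZFS-capable system. A sketch, with placeholder device names:

```shell
# Create a single-parity raidz pool named "tank" from three whole disks
# (device names are examples; use your actual disks)
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Verify pool layout and health
zpool status tank
```

The pool is mounted at /tank by default, with no separate mkfs step needed.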

Have you had good luck with this? I'm running ZFS on an OpenSolaris server, but I hate Solaris and would like to switch to Linux. I never got the impression that ZFS on FUSE was stable enough to trust with all of my data.

I use Nexenta (my ZFS build is described here: http://news.ycombinator.com/item?id=794640 and here: http://blog.barrkel.com/2009/03/zfssolaris-as-nas.html). The nice thing about Nexenta is that it has a Debian/Ubuntu-style userland (the default install is console-only, I think, but that's the way I like it).

I put my home directory under zfs-fuse for a while. When I migrated to JFS-on-RAID5 (3 × 160 GB) some months later, a small but nontrivial fraction of my symlinks had turned into 0-length files with 0000 permissions. Various things sort of randomly broke, which was interesting. (Mostly unimportant things, such as ssh not being able to read authorized_keys.)

I use ZFS under FreeBSD 8-RELEASE in a 6 × 1.5 TB raidz2 (dual-parity) setup and it is rock solid. The benefit of running ZFS under FreeBSD rather than Linux is that you don't need FUSE, as ZFS has been really well integrated with FreeBSD's filesystem layer.

Would you mind writing up a tutorial? I am interested in ZFS under FreeBSD; I have 8-RELEASE downloaded and ready to go. Still working on getting the hardware up and running for it, though.

Are you using 64-bit? I heard ZFS loves 64-bit.

The problem wasn't really the filesystem, I think. The card actually threw the disks out of the array and then somehow refused to put them back in.

Were you doing the RAID using the card's drivers (BIOS softraid/fakeraid), or using regular software RAID?

It seems odd that the controller would drop drives like that, but I would never trust the "RAID" features of low-end interface cards ... they're barely competent at exposing the drives as bare block devices to the OS as it is.

I've had decent enough luck with the real hardware-RAID cards from Dell, but they are expensive and if I were building a new server today, given the price of CPU cores, I'm not sure it would be at all worth the cost and SPOF risk. I've never had a card fail but if it did, that would suck. Back in the SCSI/PentiumII era they were fairly nice though -- I have a PERC still running in a closet, doing RAID5 across 5 74GB SCSIs. Probably about time to pull the plug on it though ... those five drives probably burn through their replacement cost in electricity every few months.

You should look at using software RAID. The performance is not as bad as it once was. Plus you don't risk losing the array to a dead controller, and those tend to be proprietary.
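A minimal Linux software-RAID setup with mdadm looks something like this; device names and the disk count are placeholders for whatever spare drives you have:

```shell
# Build a 4-disk RAID 5 array in software; no proprietary controller involved,
# so the array is readable on any Linux box you move the disks to
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# Watch the initial sync progress, then put a filesystem on the array
cat /proc/mdstat
mkfs.ext4 /dev/md0
```

That portability is the real win over hardware RAID: if the motherboard dies, any other machine can assemble the array with `mdadm --assemble --scan`.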

"Eventually I just sold the whole damn thing..."

Buyer beware.

Software RAID 6 FTW.

HN is very harsh!

Impressive, but I'll second the heating remark.

I just bought one of these http://www.amazon.co.uk/gp/product/B001E03444/ and swap out the drives as I need them. + two Fractal 120mm fans (incredibly quiet yet powerful, highly recommended) from the sides to keep the heat away. I chose that instead of building a similar frame, since a frame for 25 drives (that's what I have :-) ) would need some serious additional cooling and power, whereas the swapping solution doesn't.

Nice job, though I shudder to think about the vibration noise made by 8 hard-mounted drives in an aluminum enclosure... I'd be surprised if it didn't sound like a mouse stuck in a coke can full of nuts (the threaded, metal kind, that is... ;-)

I wondered about that also. I was a little surprised to see that he didn't (appear to) put in any sort of shock mounts or buffers between the side plates and the drives themselves.

He seems to know what he's doing, though, so perhaps it was a purposeful tradeoff in order to use the sides as heat sinks? By mounting the drives without shockmounts, you can transfer heat directly from the drives to the enclosure chassis. In terms of preventing drive failure that might be more important than vibration reduction.

At 4 platters per drive, I bet the room doesn't need heating in the winter.

Can you actually put one drive physically on top of another like that, long term, without any ill effects from one magnetic field to another?

If the magnetic fields were strong enough to hurt a drive a few centimeters away, they would ruin the other platters in the same drive, and the data on the platter a few millimeters away.

Drives get as close as they can in ordinary servers and cases as well.

It would be more common to worry about heat issues, although there's a Google study floating around the net and their findings are that the role of heat in hard disk failure is much less significant than commonly thought.

Well, he's 1/10 of the way to filling it already. (The video project made while building the NAS is 1.5TB)

Good design, not often a NAS actually looks great.

The Black Dwarf looks exactly like a plastic 5" floppy storage box I once had. But it'll store orders of magnitude more data!

> 88MB/s write and 266MB/s read

That's appalling for 8 drives!

It's hardware RAID running on a single cheap controller. Use case is network-access only, so speed was not a concern for this build.

Assuming it's connected via GigE, it's 70% wirespeed write and 213% wirespeed read, and that's without accounting for any sort of network overhead.
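For anyone checking the arithmetic: gigabit Ethernet tops out at 125 MB/s (1000 Mbit/s ÷ 8) before protocol overhead, so the measured numbers work out like this:

```python
# Gigabit Ethernet line rate in MB/s (decimal units, ignoring protocol overhead)
GIGE_MBPS = 1000 / 8  # 125.0

write_speed = 88   # MB/s, measured on the build
read_speed = 266   # MB/s, measured on the build

print(f"write: {write_speed / GIGE_MBPS:.0%} of wire speed")  # write: 70% of wire speed
print(f"read:  {read_speed / GIGE_MBPS:.0%} of wire speed")   # read:  213% of wire speed
```

So writes already use most of the link, and reads could saturate two bonded gigabit ports.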

It's fast enough.
