
Homemade 16TB NAS - BlackDwarf - Ghost_Noname
http://www.willudesign.com/BlackDwarfTop.html
======
jacquesm
There is something funny about the name here, given that a black dwarf is what
results after a star has burned itself out, and I think a melt-down is a
serious risk with that many drives in such a tight enclosure.

He seems to have that factor under control, though; 90 watts under load really
isn't that bad.

Reminds me of this: <http://www.clustercompute.com/>, no longer in service
because it got replaced by a single GPU card.

~~~
ntoshev
These stars are called white dwarfs.

~~~
jacquesm
<http://en.wikipedia.org/wiki/Black_dwarf>

I'm aware of what a white dwarf is; black dwarfs do not currently exist (we
think) but are definitely a possibility. Think of them as the burned-out ashes
of a white dwarf, which is an intermediate stage.

For variety there are also red, orange, brown, and yellow dwarfs; it's quite a
family.

------
ErrantX
Uh oh... I see Green Power HDDs. We used them for a while in big RAID
configurations and their failure rate is bad.

~~~
invisible
Always use Black HDDs for servers, man (if using WD)!

~~~
lutorm
Why not use the RAID Edition drives while you're at it? Supposedly better
MTBF, a longer warranty, and the (to me unquantifiably better) TLER feature.

~~~
lsc
TLER is well worth the 50% premium, in many cases. With desktop firmware, my
experience has been that fully half the time when a disk fails, it doesn't
properly fail; it retries forever. Linux md and 3ware hardware RAID (the only
hardware RAID I tried) both hang waiting for the drive to fail. (The 3ware
keeps resetting the drive... but it will sit there and suck for two days, just
like md.)

Back when I used desktop drives, I had several outages caused by a bad disk in
a mirror. I'd log into the box and manually fail it out of the RAID and we'd
be good, but customers tend to get mad when their I/O hangs for hours on end.

It only takes one hung mirror in production, for me at least, to pay for a
whole lot of 'enterprise' firmware drives.
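
For reference, the manual step of failing a disk out of a Linux md array looks
something like this (array and device names are placeholders):

    mdadm /dev/md0 --fail /dev/sdb1
    mdadm /dev/md0 --remove /dev/sdb1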

~~~
dhess
Many (most?) new consumer drives have support for ERC (the standard term for
what WD calls "TLER"), which is part of the ATA-8 standard. See here for the
spec (ERC is described in section 8.3.4):

[http://www.t13.org/Documents/UploadedDocuments/docs2007/D169...](http://www.t13.org/Documents/UploadedDocuments/docs2007/D1699r4a-ATA8-ACS.pdf)

Until recently, you often needed a vendor-supplied MS-DOS executable to enable
or modify ERC features on consumer drives, but smartmontools supports it as of
v5.40, thanks to this guy:

<http://www.csc.liv.ac.uk/~greg/projects/erc/>
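
With a recent smartmontools you can query or set ERC from the command line; a
sketch (the drive path is a placeholder, and the timeout values are in tenths
of a second):

    smartctl -l scterc /dev/sdX          # show current ERC read/write timeouts
    smartctl -l scterc,70,70 /dev/sdX    # cap recovery at 7 seconds each way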

I just bought a 1.5TB Samsung HD154UI for $99 that supports ERC. There are
reports that P-model variants of the WD 15EADS don't support TLER anymore, but
I have several S- and R-model variants of that drive that do, including one
manufactured in March, so it's still possible to get WD consumer drives that
support TLER/ERC; you just have to be picky about the variant.

~~~
lsc
huh. neat. where are you finding out what is and is not supported? user
forums? or do drive data sheets actually have useful info?

~~~
dhess
It's purely anecdotal. HardForum is a good source of information because of
its enthusiast user base. A large group of home NAS builders post there.

------
bcl
That's a nice project and all; too bad he's likely to have it go TU on him
when his first drive fails.

[http://www.zdnet.com/blog/storage/why-raid-5-stops-working-i...](http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162)
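
The article's argument, as a back-of-envelope sketch in Python (assuming this
build's eight 2TB drives and the commonly quoted consumer-drive URE rate of 1
in 10^14 bits):

    # RAID-5 rebuild: every surviving drive must be read end to end.
    drives, size_tb = 8, 2
    bits_read = (drives - 1) * size_tb * 1e12 * 8
    # Chance of reading every bit without an unrecoverable read error:
    p_clean = (1 - 1e-14) ** bits_read
    print(f"{p_clean:.0%}")   # roughly 33%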

I would also be concerned about the fan being a single point of failure. If it
seizes up for any reason, the temp in that thing is going to skyrocket very
quickly.

Personally, after I outgrow my current RAID 5 -- similar to the one I built
here (<http://www.linuxjournal.com/article/6558>), but using SATA drives now
-- I'm going to switch over to mirrored large drives instead of RAID 5. As
long as I don't start videotaping every second of the kids' lives, I should be
ok.

------
MikeCapone
Doesn't look like there's enough space for ventilation. Unless I'm missing
something, heat could be a serious problem with this setup.

------
aeontech
His other mods (<http://www.willudesign.com/CinematographTop.html> and
<http://www.willudesign.com/TheDeskTop.html>) are actually more impressive to
me.

~~~
mitchellhislop
Agreed. Storage, even big NAS, just does not do it for me anymore.

------
res0nat0r
I purchased a NAS array from <http://lime-technology.com/> and couldn't be
happier with it. Fifteen drives in a standard form factor case running a
custom Slackware distro off an internal USB key. Only one drive is needed for
parity, and you can swap drive sizes as much as you want, as long as the
parity drive is as large as your largest data drive.

It runs its own flavor of RAID called unRAID, a RAID 4 derivative. No data is
striped across the drives, so if a failure isn't repairable from parity, the
loss is contained to the single drive with issues. The servers are great for
the price; check them out.
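
The parity idea is plain XOR, as in RAID 4; a toy sketch in Python
(illustrative only, not unRAID's actual code):

    # parity block = XOR of the corresponding block on every data drive
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for b in blocks:
            for i, byte in enumerate(b):
                out[i] ^= byte
        return bytes(out)

    data = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]  # three data drives
    parity = xor_blocks(data)
    # lose drive 1, rebuild it from the survivors plus parity
    assert xor_blocks([data[0], data[2], parity]) == data[1]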

------
sdfx
The video of him building the machine is really worth watching:

<http://www.youtube.com/watch?v=BatakM9iAik>

~~~
slug
The video is quite amazing, but I think he made several crucial mistakes:
never drill, file, or arc-weld anything with tiny, sensitive electronic parts
around! A stray bit of metal on one of the PCBs is enough to ruin the day.

The other thing I noticed is that he used the welder without full protection
on both hands: besides the hot metal flying around, the arc gives off UV,
which is dangerous.

A funny thing is the design-as-you-go construction; I do that sometimes
myself, without drawing it up first in CAD.

~~~
chunkbot
"never drill, file or arc solder anything with tiny and sensitive electronic
parts around!"

He used dead drives during the construction, so this isn't a big deal.

------
StarLite
Looks nice :D I hope he has better results with RAID 5 than I did, though. I
have reverted to a dual-drive NAS for stability after running RAID 5 servers
for a long time. Somehow IDE and SATA drives and RAID cards don't cut it for
me. I have lost my data to corrupt arrays so freaking often that it's not fun
anymore.

The first server used a Promise (SuperTrak SX6000) card with 4 × 80 GB drives
and gave me nothing but problems (very poor performance and corrupt arrays
about once every 2 months). It was running on a standard non-server mobo with
a decent but standard PSU, though, so one of those components might have
caused the problems.

The second ran an Areca ARC-1210 card with 8 × 320 GB drives on a Tyan Thunder
dual-Xeon server mobo with an 850 W Tagan server PSU, behind a 2200 VA UPS.
Even then the array seemed to drop out quite often, and I have spent hundreds
of hours in total (updating drivers/firmware/OSes, recovering lost data,
replacing parts and cables, moving the server, etc.) trying to get it to run
decently.

Eventually I just sold the whole damn thing and bought a simple Conceptronic
CH3SNAS with two 1.5 TB drives with the cash from the old server. I have never
had a single problem in the whole year I've had it and am very happy with the
purchase. The performance is not remotely comparable (600 MB/s vs 20 MB/s),
but even the slow NAS is sufficient for me (it streams HD movies just fine and
my backups aren't TBs big anyway ;) ). Plus the power usage is literally a
tenth, it's very quiet, and it's a lot easier to carry around and take with me
if I need to.

Maybe I just fail at file servers, maybe I just had bad luck with two bad
cards, but I'm not convinced a simple, affordable RAID 5 solution means your
data is safe.

But I must admit I wouldn't mind having that self-made NAS at home; it looks
cool :D Maybe one day I'll try again... ;)

~~~
sandGorgon
Next time do try ZFS on FUSE (ZFS as a Linux userspace filesystem) - I have a
1 TB setup and it works quite well.

You could try making a RAIDZ volume and see how it goes.
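
Creating a single-parity RAIDZ pool is a one-liner (pool name and device paths
are placeholders):

    zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd
    zpool status tank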

~~~
pielud
Have you had good luck with this? I'm running ZFS on an OpenSolaris server,
but I hate Solaris and would like to switch to Linux. I never got the
impression that ZFS on FUSE was stable enough to trust with all of my data.

~~~
enneff
I use ZFS under FreeBSD 8-RELEASE in a 6 × 1.5 TB raidz2 (dual-parity) setup
and it is rock solid. The benefit of running ZFS under FreeBSD rather than
Linux is that you don't need FUSE, as ZFS has been really well integrated with
FreeBSD's filesystem layer.

~~~
silasb
Would you mind writing up a tutorial? I am interested in ZFS under FreeBSD; I
have 8-RELEASE downloaded and ready to go. Still working on getting the
hardware up and running for it, though.

Are you using 64-bit? I've heard ZFS loves 64-bit.

------
DCoder
Impressive, but I'll second the heating remark.

I just bought one of these <http://www.amazon.co.uk/gp/product/B001E03444/>
and swap out the drives as I need them, plus two Fractal 120 mm fans
(incredibly quiet yet powerful, highly recommended) on the sides to keep the
heat away. I chose that instead of building a similar frame, since a frame for
25 drives (that's what I have :-) ) would need some serious additional cooling
and power, whereas the swapping solution doesn't.

------
lutorm
Nice job, though I shudder to think about the vibration noise made by 8 hard-
mounted drives in an aluminum enclosure... I'd be surprised if it didn't sound
like a mouse stuck in a coke can full of nuts (the threaded, metal kind, that
is... ;-)

~~~
Kadin
I wondered about that also. I was a little surprised to see that he didn't
(appear to) put in any sort of shock mounts or buffers between the side plates
and the drives themselves.

He seems to know what he's doing, though, so perhaps it was a purposeful
tradeoff in order to use the sides as heat sinks? By mounting the drives
without shockmounts, you can transfer heat directly from the drives to the
enclosure chassis. In terms of preventing drive failure that might be more
important than vibration reduction.

------
ck2
At 4 platters per drive, I bet the room doesn't need heating in the winter.

Can you actually put one drive physically on top of another like that, long
term, without any ill effects from one magnetic field to another?

~~~
jodrellblank
If the magnetic fields were strong enough to hurt a drive a few centimeters
away, they would ruin the other platters in the same drive, and the data on
the platter a few millimeters away.

Drives get as close as they can in ordinary servers and cases as well.

It would be more common to worry about heat issues, although there's a Google
study floating around the net and their findings are that the role of heat in
hard disk failure is much less significant than commonly thought.

------
lukeqsee
Well, he's 1/10 of the way to filling it already. (The video project made
while building the NAS is 1.5TB)

Good design, not often a NAS actually looks great.

~~~
Luyt
The Black Dwarf looks exactly like a plastic 5.25" floppy storage box I once
had. But it'll store many orders of magnitude more data!

------
wendroid
> 88MB/s write and 266MB/s read

That's appalling for 8 drives!

~~~
sp332
It's hardware RAID running on a single cheap controller, and the use case is
network access only, so speed was not a concern for this build.

