
36TB FreeNAS Home Server Build - sengork
https://ramsdenj.github.io/server/2016/01/01/FreeNAS-Server-Build.html
======
KaiserPro
There is a lot of cargo cult about this.

Firstly, the claim that FreeBSD has wider testing is utter trash. In terms of
TBs installed, ZFS on Linux >> FreeNAS/FreeBSD.

The amount of money behind ZoL is now surprisingly large.

Also, extensive memtesting of ECC RAM is pointless: ECC RAM has checksumming
built into the chip, which will tell you if there is a memory error and correct
it if possible.

As for the claim that most drives fail in the first hours:
[https://www.backblaze.com/blog/how-long-do-disk-drives-last/](https://www.backblaze.com/blog/how-long-do-disk-drives-last/)
The evidence says otherwise. Infant mortality is a thing, but it's a matter of months, not
hours. (Internal QA grabs most of the ones that die within a few hours.)

The best way to combat simultaneous failure is to mix hard drive types; this
makes it much less likely that a single fault class will be triggered on all
disks.

~~~
Amezarak
> Also, extensive memtesting of ECC RAM is pointless: ECC RAM has checksumming
> built into the chip, which will tell you if there is a memory error and
> correct it if possible.

Yes, but if the chip is getting lots of memory errors, correctable or
uncorrectable, I probably don't want it. I don't want to build my system and
put it into production and then find out.

~~~
KaiserPro
You'd pick this up straight away; ECC RAM detects memory errors in real time.
The reason you run memtest is to systematically go through all the bits
and flip them in a known pattern.

You can check that known pattern to see if the bit changed properly.

With ECC this is redundant, as each byte (or word, or some unit of memory) has
a hardware checksum. So using ECC RAM in normal operation will indicate whether
there are errors.

~~~
Amezarak
Yes, I understand that.

The reason to run memtest on ECC memory is to verify that every bit of memory
is good _before_ you finish building the system, installing an OS, and putting
it into production. If you have, say, 32GB of memory, there's no guarantee
you'll hit those bad bits for ECC to log an error until you're doing something
memory intensive two weeks later.

The primary use case of ECC is to handle random bit flips from cosmic rays and
whatnot, not to mitigate bad hardware. If you have bad memory, replace it,
don't rely on ECC. The only way to test memory is to run something like
memtest, where you read and write from every bit in memory.
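The idea can be sketched in a few lines. This is a toy stand-in for illustration only: a real memtest runs on bare metal against physical addresses, which user-space code can't do, and the `FaultyRAM` class is an invented fixture to show what a stuck bit looks like to a pattern pass.

```python
# Toy sketch of a memtest-style pattern pass: write known patterns to
# every byte, read them back, and report any address that didn't hold
# the value. Illustrative only; real memtest86 runs on bare metal.

PATTERNS = [0x00, 0xFF, 0xAA, 0x55]  # all zeros, all ones, alternating bits

def pattern_test(mem) -> list:
    """Return the sorted list of addresses that failed any pattern."""
    bad = set()
    for pattern in PATTERNS:
        for addr in range(len(mem)):
            mem[addr] = pattern
        for addr in range(len(mem)):
            if mem[addr] != pattern:
                bad.add(addr)
    return sorted(bad)

class FaultyRAM:
    """Hypothetical stand-in 'DIMM' with one stuck-high bit at one address."""
    def __init__(self, size, stuck_addr, stuck_mask):
        self._mem = bytearray(size)
        self._stuck_addr, self._stuck_mask = stuck_addr, stuck_mask
    def __len__(self):
        return len(self._mem)
    def __setitem__(self, addr, value):
        self._mem[addr] = value
    def __getitem__(self, addr):
        value = self._mem[addr]
        if addr == self._stuck_addr:
            value |= self._stuck_mask   # the fault: this bit always reads as 1
        return value

print(pattern_test(bytearray(4096)))              # healthy memory: []
print(pattern_test(FaultyRAM(4096, 1234, 0x08)))  # stuck bit found: [1234]
```

The point of the alternating 0xAA/0x55 patterns is that together they drive every bit through both states, so a bit stuck in either direction fails at least one pattern.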

If you're OK with finding out a month later that ECC is logging a hundred
uncorrectable errors an hour whenever you do anything memory intensive, then
sure, don't bother memtesting. If you would rather deal with it beforehand,
then run memtest.

~~~
KaiserPro
but you have hardware monitoring linked to an alerting framework right?

~~~
Amezarak
Yes, but again, you may not get that alert and find out that the memory module is
bad until months from now. It all depends on where the chip is bad, how
much is bad, how much load is usually on the system, etc.

I (and I would guess most people) don't want to find out a month (or more,
potentially much more) from now that the memory module is bad, I want to know
_now_ so I can RMA it and get it over with without having to take a running
system down and apart.

That's the problem that memtest solves, ECC does not solve the same problem at
all.

------
pfarnsworth
I used to build my own stuff, but now I buy Synology. It's really convenient,
has great UI and support, a bunch of different packages you can install, etc.

Although I ran into a problem a couple of months after setting it up where the
NAS became so slow, it was unusable. I saw that my IOWait was 100%, so I
figured it was a disk, but nothing indicated a problem. Eventually I was able
to get to a log that showed that one of the drives had some weird error
messages, so I bought a new drive from Amazon that came the next day, pulled
the drive, and replaced it and instantly everything was okay again.

I would have expected something on Synology's status programs to show a
problem, but they were all green, so that was annoying.

~~~
emsy
The prices for the larger ones are ridiculous though. $800 for a 5-bay NAS
without HDDs is a lot. You could easily build two or more NASes with the same
specs for that money.

Also, having used a Synology, I think it's a niche product. It's too powerful
for people who simply want to put files and backups on their network. But for
people who like to tinker and control, it's too restricted. You can basically
only install software that has been ported to Synology. Getting it to do
automated tasks the way you want can be annoyingly complicated. I would
recommend it to an advanced customer who needs features like ownCloud or VPN
without wanting the burden of maintaining a system.

~~~
briffle
I was pricing out the parts in this article, and it was up near $2500 from
Amazon: a $250 motherboard, 4x $60 8GB DIMMs, etc.

~~~
emsy
I was talking about the 5-bay Synology model.

------
esaym
And the word 'watt' is nowhere in the article. I'd be curious about the yearly
power costs.

I've been eyeing the Lenovo ThinkServer TS140:
[http://amzn.com/B00FE2G79C](http://amzn.com/B00FE2G79C) with a Xeon
E3-1225v3.

Some comments state that it has an idle draw of less than 40 watts, which is
hard to believe. My dual-core Intel Atom box idles at about 50 watts (of
course there is no difference between idle and full-load draw with the Atom;
it's just super slow either way...)

~~~
sosuke
I must assume I'm wrong in this calculation, but 40 watts seems pretty cheap.
[http://www.wolframalpha.com/input/?i=40+watts+in+kwh+per+month](http://www.wolframalpha.com/input/?i=40+watts+in+kwh+per+month)
At a 3.3 cents per kWh rate that's just under $1 per month, right? On the top
tier of my local utility it would still be under $4 per month.
[http://www.austinenergy.com/wps/portal/ae/rates/residential-rates/residential-electric-rates-and-line-items](http://www.austinenergy.com/wps/portal/ae/rates/residential-rates/residential-electric-rates-and-line-items)
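The arithmetic checks out. A quick sketch, using the 40 W figure from the thread and the rates quoted here and below (plug in your own $/kWh):

```python
# Monthly cost of a constant electrical load. 730 h is an average month
# (8760 h / 12). Rates shown are the ones quoted in this thread.

def monthly_cost(watts: float, dollars_per_kwh: float, hours: float = 730.0) -> float:
    """Dollars per month for `watts` drawn continuously."""
    kwh = watts / 1000.0 * hours      # 40 W for a month is about 29.2 kWh
    return kwh * dollars_per_kwh

for rate in (0.033, 0.06, 0.14):
    print(f"40 W at ${rate:.3f}/kWh: ${monthly_cost(40, rate):.2f}/month")
# roughly $0.96, $1.75 and $4.09 per month, matching the
# "just under $1" and "under $4" figures above
```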

~~~
esaym
I think most Americans pay between $0.20 and $0.40 per kWh.

I am lucky, though, that here it is only $0.06. But in summer months it goes to
$0.14 for every kWh over 600. Staying under 1000 kWh a month is hard to do.

~~~
slavik81
Speaking of summer, if you're trying to cool the house with an air
conditioner, you can multiply the effective wattage of the computer by 3.

~~~
esaym
True, that is a good point.

------
psophis
There's a very active subreddit[0] that discusses a lot of stuff like this.
Worth checking out if you've considered having a server in your home.

[0] [https://reddit.com/r/homelab](https://reddit.com/r/homelab)

~~~
TheCowboy
This guy also does his own periodic FreeNAS system builds:
[http://blog.brianmoses.net/2015/01/diy-nas-2015-edition.html](http://blog.brianmoses.net/2015/01/diy-nas-2015-edition.html)
[http://blog.brianmoses.net/2015/05/diy-nas-econonas-2015.html](http://blog.brianmoses.net/2015/05/diy-nas-econonas-2015.html)

~~~
agumonkey
Oh the irony. I have an HP N54L MicroServer that I used to boot BSD from the
exact same USB key (SanDisk Cruzer Fit). One day the boot hung badly, a cold
reboot fubared some data, and I had to make a spare boot key, which didn't have the
ZFS disk config. I failed to reimport the ZFS pool properly, wiping the root
nodes off the drive. 3TB of mostly inaccessible data, sleeping. I still hope
that I find the time and brain resources to write a program to reconstruct the
metadata from the fs nodes still on disk. Analysis of the ZFS sources gave some hints
about magic numbers and other patterns that could help scan for and infer node
positions.

Anyway, as always, be cautious. And when too many things are down, don't
fiddle.

------
voltagex_
I'm regretting building my NAS. It's expensive, the storage is small (16TB of
drives gave me ~7.9TB of usable space), and any question or doubt about your
configuration prompts responses like this:
[http://blog.ociru.net/2013/04/05/zfs-ssd-usage#comment-1722341810](http://blog.ociru.net/2013/04/05/zfs-ssd-usage#comment-1722341810)

~~~
windowsworkstoo
> the storage is small

What did you expect here? 50% is a typical trade-off these days in disk arrays.

~~~
benjaminl
You go from 16 terabytes to 14.5 tebibytes right out of the gate. And then if
you use RAID-Z2 (RAID 6), which uses 2 disks for parity, on a four-disk set,
half of your drives will be used for parity. This was of course the right
call, because with the size of current drives RAID-Z1 (RAID 5) is really
risky.

Personally, on my ZFS-based home-built server I use mirroring, due to the
well-publicized difficulty of increasing the size of ZFS's vdevs. That has
the same 50% usable-space reduction.
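The two haircuts described above can be put in code. This is rough math only: it treats RAIDZ overhead as exactly the parity disks, while real pools lose a bit more to metadata, slop space, and padding.

```python
# Marketing terabytes (10^12 bytes) vs tebibytes (2^40 bytes),
# then subtracting the parity disks' worth of capacity.

TIB = 2 ** 40

def raw_tib(n_disks: int, tb_per_disk: float) -> float:
    """Advertised capacity converted from TB to TiB."""
    return n_disks * tb_per_disk * 1e12 / TIB

def usable_tib(n_disks: int, tb_per_disk: float, parity_disks: int) -> float:
    """Capacity left after removing the parity disks (rough estimate)."""
    return raw_tib(n_disks - parity_disks, tb_per_disk)

print(f"{raw_tib(4, 4.0):.2f} TiB raw")           # 16 TB -> ~14.55 TiB
print(f"{usable_tib(4, 4.0, 2):.2f} TiB usable")  # RAIDZ2 on 4 disks -> ~7.28 TiB
```

So a four-disk RAIDZ2 set loses ~9% to the TB/TiB conversion and then half of what's left to parity, which is why 16 TB of drives lands near 7.3 TiB usable.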

~~~
voltagex_
Yep, I've gone to a pair of vdevs too. I'm calming down now, even if I had to
format an external drive to ZFS just to get it to mount (yes, yes, I know, USB
is terrible and I'm going to lose all my data). I would _not_ recommend
FreeNAS for a home user.

~~~
benjaminl
You are making me very glad that I went with Ubuntu rather than a NAS oriented
OS like FreeNAS.

~~~
fsckin
My first NAS build used FreeNAS. For a simple Samba box, it's just about
perfect. However, I wanted my NAS to run a few more services than the ones
that were easily available and pre-built "blessed" plugins.

I had a low-spec CPU in the NAS and went through hell and back to
build/install/jail a few packages from source (along with their dependencies).
It was a steep learning curve and wasn't terribly fun.

When it came time to add a second NAS, I chose Ubuntu and ZFS on Linux. It's
been running like a champ for well over a year without a single hiccup. I
don't think I've even built a single package ever. Best of both worlds, in my
opinion.

------
dr_ick
If you are into this type of thing, check out the storage forum on [H]ardForum:
[http://hardforum.com/forumdisplay.php?f=29](http://hardforum.com/forumdisplay.php?f=29)

Specifically, the showoff thread:
[http://hardforum.com/showthread.php?t=1847026](http://hardforum.com/showthread.php?t=1847026)

------
click170
This post covers everything except the cost. The cost will vary from region to
region and country to country, but it would have been nice to get a ballpark
figure for what this cost.

~~~
skeletonjelly
And power consumption!

~~~
Viper007Bond
I have 3 NASes but my largest is a similar build with 7 drives (6x6TB WD Red +
an old WD Black for the OS). It pulls about 70 watts.

Full specs here:
[https://pcpartpicker.com/user/Viper007Bond/saved/xwP9TW](https://pcpartpicker.com/user/Viper007Bond/saved/xwP9TW)

------
colindean
The author probably bought all of those hard drives at once, from the same
vendor. They're very likely in the same batch.

What if something goes bad with a drive? Well, ZFS to the rescue. Maybe even
two.

What if the whole batch was bad?

I've built my NAS with not quite as much storage but more drives, using drives
from different vendors and different batches from different manufacturers.

~~~
rsync
It is good to be concerned about batching disk drives, as you imply.

However, given that the problems that arise with a bad batch are physical
ones, properly burning in the drives does, I think, alleviate those concerns.

We burn ours in for 5-6 days[1] before we put them into production and history
shows this weeds out the bad ones. If there was an entirely bad batch, we
would catch it that way.

In my opinion, you're far, far more likely to find yourself with fake-new drives than
with an actual "bad batch". We see that all the time from Amazon sellers that
claim the drives are brand new, but SMART says otherwise...

[1] with a zero tolerance policy for even the tiniest deviation from normal in
the SMART output. Even a blip and that drive is out.

~~~
rgbrenner
_If there was an entirely bad batch, we would catch it that way._

You're a very optimistic guy. That test would not have caught the IBM Deathstars
or the more recent Seagate 3TB Barracuda failures.

I lost 6 of the 8 drives in my NAS due to that last one.. luckily not all at
the same time.

------
lewisl9029
I personally think SilverStone's DS380 would make for a better case for
something like this. 8x hot-swap 3.5in bays + 4x 2.5in bays in M-ITX form
factor.

[http://www.silverstonetek.com/product.php?pid=452](http://www.silverstonetek.com/product.php?pid=452)

It's what I'm using right now for my server and I love it. I have it filled up
with 6 drives and haven't had any issues with heat so far. Can't say the same
about the ASUS P9A-I motherboard I'm using it with though...

~~~
adamst85
I second the DS380; I had an HD failure the other day and it was easy to hot-swap.
+1 for the case, -1 for Seagate Barracudas.

------
sosuke
$2,870.15 from Amazon right now which isn't as bad as I expected. That is a
great build you've put together. After losing a 3TB drive of thankfully
replaceable data I have been eyeing a similar setup but not as intense.

[https://amzn.com/w/MHNNS9EDAORX](https://amzn.com/w/MHNNS9EDAORX)

Side note: I would love to have a list or something on Amazon, because the wish
list isn't right. A purchase list, perhaps? It doesn't include the quantity by
default when clicking Add to Cart. I had thought about adding in the Amazon
Associates code but I've never actually had that make any money.

------
jimmcslim
Looking forward to FreeNAS 10 when it is available. Thinking about rebuilding
my HP N54L microserver currently running Windows Server 2012 R2 with a
'virtual' NAS, Ubuntu + ZFS, running under Hyper-V (yes, this is unnecessarily
complicated).

It would be great if whatever virtualisation is built into FreeNAS supports
the AMD Turion II the N54L uses but support for AMD virtualisation sometimes
seems a bit spotty (not supported in SmartOS for example).

~~~
voltagex_
I found the FreeNAS community to be a bit difficult to work in - less than
32GB of ECC RAM? Using something other than RAIDZ2 with 6+ high performance
drives? You're terrible, and you'll lose all your data!

~~~
Amezarak
The FreeNAS community is definitely not very friendly and they'll jump on you
if you have any problem whatsoever and you didn't follow one of their
recommendations, but let's not misquote them. The FreeNAS forum folks say 8GB
minimum ECC memory and don't suggest "high performance" drives, merely drives
with firmware hypothetically optimized for NAS applications.

RAIDZ-2 is recommended on lots of drives for the same reason RAID6 is, so
that's not something unique to the FreeNAS community.

[https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/](https://forums.freenas.org/index.php?threads/hardware-recommendations-read-this-first.23069/)

------
voltagex_
I wonder if the author has run into many problems with FreeNAS, or with the
terrible community. I've seen posts on the forum where the problem was Samba
not authing correctly, but the first response is "You don't have enough
RAM".

At least they're patching the SSH CVE from today, but it's not just a _pkg
upgrade_; it's a tarball that upgrades the whole root drive.

------
jonathankoren
When I built my home NAS, I started with FreeNAS, but it was a pain. The
insistence on ECC RAM, the lack of USB support[*], a community that seemed
focused exclusively on office solutions, and a hermetically sealed
distribution were all killers. I switched to the Linux-based
OpenMediaVault[1] and all my problems went away. It uses the same UI as
FreeNAS, but defaults to EXT4, supports USB backups, and lets me do anything I
want on it. It's great.

[*] Multiple independent requests about backing up the NAS to a USB enclosure
were met with a refrain of "USB drives are crap, so you're stupid for using
them. You should back up your NAS to another NAS, that you never move." Fuck
you. I know the limits of my failure model.

[1] [http://www.openmediavault.org/](http://www.openmediavault.org/)

------
bryanlarsen
I know that ZFS is cool and LVM isn't, but I literally just finished repairing
my LVM-based home NAS, and it left me with a good feeling about LVM. Overall,
a stack of md, LVM and XFS is a lot more complicated than ZFS, but each piece
is more understandable in isolation.

~~~
skeletonjelly
ZFS is a filesystem which does waaaay more than LVM does as a container. It's
not just that it's "cool". Rapid filesystem snapshots, checksums, dedupe, do
some research and you'll see why it's recommended.

~~~
bryanlarsen
LVM does snapshots & checksums. Dedupe and compression require huge amounts of
memory and cause problems with databases. I've used ZFS before, and I switched
back.

~~~
voltagex_
I have a ~8TB NAS on an Atom C2758 system. I've loaded less than 4TB so far
and FreeNAS is really getting on my nerves. Am I "protected" from a single
drive failure with LVM? Can I grow the storage size by swapping out one drive
at a time? If so, I might just go to Debian and be done with it, although
FreeBSD 10's bhyve hypervisor looks really good.

~~~
benjaminl
No, LVM doesn't provide any protection by itself; you need to layer mdadm below
LVM to get any.

Even with mdadm, it doesn't provide anywhere near the sort of protection
that ZFS does. Due to pervasive checksumming of data, ZFS handles the bit rot
and corruption that a dying disk produces much better than the traditional RAID
that mdadm provides. For example, if your disks are mirrored or RAIDed and a
disk doesn't report a read error, mdadm will pass the data back to the
OS. Since the data isn't checksummed, there is no way for it to know whether it
needs to read from the mirrored disk or the parity drives instead.
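The difference can be sketched in a few lines. The helper names here are hypothetical and sha256 merely stands in for ZFS's real per-block checksums (fletcher4 or sha256, stored in the parent block pointer), but the arbitration logic is the point: with a stored checksum, a read can tell which mirror copy is the good one.

```python
import hashlib

def checksum(block: bytes) -> bytes:
    # stands in for the per-block checksum ZFS records at write time
    return hashlib.sha256(block).digest()

def read_mirrored(copies, expected_cksum):
    """Return the first copy whose checksum matches the stored one.
    A checksum-less mirror has no way to arbitrate between copies
    that both read back without an I/O error."""
    for copy in copies:
        if checksum(copy) == expected_cksum:
            return copy  # real ZFS would also rewrite the bad copy ("self-healing")
    raise IOError("all copies failed checksum; restore from backup")

good = b"important data"
stored = checksum(good)               # recorded when the block was written
bit_rotted = b"importent data"        # silent corruption, no read error reported
assert read_mirrored([bit_rotted, good], stored) == good
```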

~~~
bryanlarsen
\- as of RHEL 6.3, LVM supports raid4/5/6 without mdadm. It has supported
raid1 (mirroring) and raid0 (striping) for much longer.

\- any LVM or mdadm mode with parity contains a functional checksum. To use it
for data integrity, do a regular scrub. You should be doing a regular scrub
with ZFS anyways, so ZFS's checksum on read doesn't add much except for
slowing things down.

~~~
Ded7xSEoPKYNsDd
> any LVM or mdadm mode with parity contains a functional checksum. To use it
> for data integrity, do a regular scrub.

That doesn't work. Scrubbing the RAID can detect errors, but when they occur,
the block layer has no idea which copy is the correct one. I haven't verified
for LVM, but at least for mdraid, Linux explicitly does not make any attempts
at recovering a 'correct' block even in cases where there is more than one
copy. It just randomly picks a winner and overwrites the other versions. You
still want to scrub for the error detection.
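This point fits in a few lines of illustrative Python: with RAID5-style XOR parity, every single-block corruption produces the same symptom (stripe inconsistent), so the block layer has nothing to tell it which block to rewrite.

```python
# XOR parity in miniature: a scrub can detect that a stripe is
# inconsistent, but corruption in either data block looks identical,
# so the layer can't know which copy is the correct one.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def scrub_ok(d0: bytes, d1: bytes, parity: bytes) -> bool:
    """A stripe is consistent when d0 XOR d1 equals the stored parity."""
    return xor_blocks(d0, d1) == parity

d0, d1 = b"AAAA", b"BBBB"
parity = xor_blocks(d0, d1)

print(scrub_ok(d0, d1, parity))       # True: clean stripe
print(scrub_ok(b"AXAA", d1, parity))  # False: corruption detected...
print(scrub_ok(d0, b"BXBB", parity))  # False: ...but indistinguishable from
                                      # corruption in the other block
```

ZFS avoids this by keeping an independent checksum per block, so it knows which of the candidate reconstructions is the right one.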

~~~
bryanlarsen
I've seen it work. It knows which block has the error because the disk
reported the error. It then rewrites the sector with the correct data, the
disk remaps the sector, and you see "read error detected, corrected" or some
such in your kernel logs.

------
grubles
ECC RAM should be the minimum requirement if using ZFS.

~~~
colechristensen
>If your system supports it and your budget allows for it, install ECC RAM.

Some threads overstate the necessity of ECC RAM (or understate it), but the
above is the best advice. It may be wiser to invest your money in backup
resources than in more expensive RAM.

~~~
grubles
The aforementioned quote should be: "If you value your data, you should use
ECC RAM. Especially so, if you use a checksumming filesystem such as btrfs or
zfs."

There is no use investing money in offsite backups if the onsite backup is
corrupted already from faulty RAM.

------
PhantomGremlin
What I would like is more discussion of choosing FreeBSD vs FreeNAS.

The author was inexperienced and so chose FreeNAS for "ease of use". But what,
other than a GUI, does FreeNAS really provide? I've never read a detailed
explanation. The forums on freenas.org don't seem to address this fundamental
question. Everything seems to be predicated on the choice already having been
made, nothing helps people make the choice in the first place.

Perhaps FreeNAS is more aggressive than FreeBSD about patching storage related
bugs?

Can anyone point to a detailed discussion about choosing vanilla FreeBSD vs
FreeNAS?

~~~
devonkim
FreeNAS is distributed and supported primarily as an all-in-one appliance. The
boot process is generally expected to run off of USB flash drives because,
similar to how VMware ESXi works, it is typically run on systems where
installation to a local disk is not only a waste of space but potentially
dangerous (mounting your USB device read-write wears it out faster than
running it read-only with RAM disks mounted). I generally build my file
servers so that each disk is part of the data pool and the boot device is a
USB flash drive, either one that shipped with the computer from the OEM (HP,
Dell, Cisco, etc.) or one I imaged myself and put onto the USB port on the
motherboard itself (not the rear or front ports, which are typically avoided
in server hardware for security reasons at least).

It comes with a lot of features in the web interface that any decent FreeBSD
admin could install and manage themselves, and many out-of-the-box settings
are optimized for situations common to SOHO file servers. This buys a bit of
time and makes it easier to maintain for others who may not necessarily be
FreeBSD gurus.

There are a few tunings (sysctl stuff) and customized options specific to ZFS
servers that FreeNAS offers as well. For example, most ZFS users don't have an
encrypted scratch partition created on each drive in their ZFS vdevs, but
FreeNAS creates them for you by default as a strong recommendation unless you
explicitly turn it off with a slightly obscure setting in the web GUI.

~~~
PhantomGremlin
Thank you for your comments. That helps.

I'm putting together a home NAS and am leaning toward FreeNAS. Someone else
here mentioned waiting for FreeNAS 10, but it probably won't have any "gotta
have it" features above what FreeNAS 9.3 already has.

The FreeNAS Mini (not mentioned in the article) seems a bit underpowered (Atom
processor) but it's a turnkey solution for $1000 plus disks. I might go that
way rather than trying to screw together a box by myself.

------
grawlinson
I'd rather have a rack-mount than that case. In my opinion, it would
be a bit easier to replace faulty hard drives.

Otherwise, it seems quite neat!

~~~
wandererer
As someone with a similar build (FreeNAS; 8x3TB RAIDZ2 main array; 3TB
scratch/hot spare; 2TB scratch; and a 500GB SSD for jails):

I would love a rackmount for drive accessibility, but I can't justify 10x the
case cost and the additional engineering to make it as quiet and clean as my
Fractal Design R4. Rackmount stuff just isn't designed for either of those; it
assumes high-static-pressure fans and pre-filtered air. (That said, I'd
totally be willing to pay ~$500 if such a case existed.)

~~~
SirMaster
10x the cost?

I picked up a 24 bay supermicro case on eBay for $265. This came with 24
hotswap bays and even a SAS2 expander backplane so I only have to connect 1
cable from the motherboard SAS controller to the case, and all 24 disks just
work.

As for making it quiet: that consisted of buying an $11 fan wall,
installing 3 120mm Noctua PWM fans in it, and writing a simple script to
govern the fan speeds based on the HDD temperatures, keeping them in check
with the minimum fan speeds necessary.
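The decision logic behind a script like that is simple. This is a hedged sketch, not SirMaster's actual script: reading HDD temperatures (e.g. via smartctl) and writing PWM duty (e.g. via ipmitool or /sys/class/hwmon) are hardware-specific, so only the temperature-to-speed mapping is shown, with made-up threshold values.

```python
# Linear fan ramp driven by the hottest drive: minimum airflow below
# LOW_TEMP, full speed above HIGH_TEMP, proportional in between.

MIN_DUTY, MAX_DUTY = 30, 100       # percent; the floor keeps some airflow
LOW_TEMP, HIGH_TEMP = 30.0, 45.0   # degrees C; ramp between these (example values)

def fan_duty(hdd_temps_c) -> int:
    """PWM duty percentage for the given set of drive temperatures."""
    hottest = max(hdd_temps_c)
    if hottest <= LOW_TEMP:
        return MIN_DUTY
    if hottest >= HIGH_TEMP:
        return MAX_DUTY
    frac = (hottest - LOW_TEMP) / (HIGH_TEMP - LOW_TEMP)
    return round(MIN_DUTY + frac * (MAX_DUTY - MIN_DUTY))

print(fan_duty([28.0, 29.0]))   # 30: cool drives, minimum speed
print(fan_duty([33.0, 37.5]))   # 65: warm, partial speed
print(fan_duty([46.0, 40.0]))   # 100: hot, full speed
```

Run in a cron job or loop, this keeps the fans at the quietest speed that still holds the drives under the target temperature.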

It's sitting in my bedroom next to my bed and I can hardly hear it.

I will concede the dust issue, but I have a normal air filter in my room and
don't normally have dust issues with my computers. Maybe I clean them out once
a year in the spring if they need it.

------
post_break
This is all great info. The only thing I shudder at is that there is one huge
single point of failure. I've learned one thing when building huge dumb
storage devices, build two and mirror them. I've got 32TB of storage mirrored
so if one hits the fan I've got an exact copy.

~~~
beachstartup
i would recommend you also do snapshots. mirroring will just replicate a bad
fuckup.

as for OP, i share your concerns - i'm skeptical of the reliability of
consumer gear in an application like this, especially in the absence of an
actual backup solution (maybe he doesn't care).

he calls it a backup server, but it's actively serving files.

~~~
pfarnsworth
Yes, I agree, mirroring will help when a disk goes bad, but it won't help if
you accidentally delete data. This is the one that people tend to forget a
lot, which is ironically the original reason for having backups.

~~~
foobarian
I have two sets of drives on my home server and keep them mirrored using a
nightly rsync. It's infrequent enough that I can recover from accidents.

~~~
XorNot
This is why people should use ZFS (or BTRFS): snapshots. My home server runs
minutely, hourly, daily and monthly snapshot jobs. Cryptolocker could have a
field day, and the most likely scenario is that at worst I'd lose a day of
data, more likely maybe 15 minutes' worth.
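The retention logic behind a tiered snapshot setup like that can be sketched as below. The tier names and counts are illustrative (tools such as zfs-auto-snapshot and sanoid implement this for real), and the actual destroy step would shell out to `zfs destroy pool/dataset@snap`.

```python
# Keep only the newest N snapshots per tier; everything older gets
# destroyed. Retention counts here are example values.

from datetime import datetime, timedelta

RETENTION = {"minutely": 60, "hourly": 24, "daily": 31, "monthly": 12}

def prune(snapshots):
    """Map tier -> snapshot timestamps down to the newest N per tier."""
    return {tier: sorted(stamps)[-RETENTION[tier]:]
            for tier, stamps in snapshots.items()}

# 90 days of hourly snapshots collapse to the newest 24:
start = datetime(2016, 1, 1)
hourly = [start + timedelta(hours=i) for i in range(90 * 24)]
kept = prune({"hourly": hourly})
print(len(kept["hourly"]))   # 24
```

The tiers overlap deliberately: the minutely tier bounds worst-case loss, while the daily and monthly tiers give you a way back from a mistake you only notice weeks later.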

------
gravypod
What I hate about FreeNAS is that it is not permissive about what kinds of disks
you put in. I bought a Drobo 5N just because I was able to slap in any drives I
wanted, in any size or configuration, and it would just work.

When FreeNAS can handle that, automatically and on the fly, I'll switch to
that.

------
vitriol83
interesting article. a few more points which may be of interest

\- in addition to raid it's worth having automated off-site backup. the best
solution i could find is duplicity, as it's encrypted and supports a bunch of
backends.

\- freebsd supports full disk encryption using geli. with some work it's
possible to make it boot (only) from a usb key, giving some protection if the
server is stolen. I believe newer versions of Intel Atom support hardware AES
acceleration, so this isn't a large overhead.

\- if the memory requirements of ZFS are too large (which, to be honest, for a
soho application they are!), then you can use UFS together with freebsd's
software raid1 (gmirror)

------
legulere
But why?

I'll assume the media storage mentioned in the article is for illegal content.
For the cost of that home server you could very likely legally watch everything
and actually support financing the creation of new stuff. Even if that's not
true, how many of the movies you've watched do you watch more than once? And 24
TB? How do you find time to watch that much stuff?

~~~
alternize
_> you could very likely legally watch everything_

this might be true if you live in the US, but in some other countries it is
still hard to actually buy movies or tv shows.

as for the legality itself: in some countries downloading media itself for
personal use is not illegal.

for example in switzerland, there is a media tax/fee included in the price of
every device that _potentially_ could store pirated materials. the income from
this tax is then distributed to media producers and artists. in return,
copying copyrighted material is tolerated for private use and consumption...
this excludes redistribution and uploading.

~~~
legulere
> this might be true if you live in the US, but in some other countries it is
> still hard to actually buy movies or tv shows.

I actually live in Germany, and you also can't get everything here, so this is
true for me as well. Sometimes you simply have to accept that something is not
available. It's not like there's not enough media out there.

> in some countries downloading media itself for personal use is not illegal.

But offering the files for download is still illegal. That's all just
semantics, though, and doesn't matter that much. What I find more important is
the moral issue.

> for example in switzerland, there is a media tax/fee included in the price
> of every device that potentially could store pirated materials. the income
> from this tax is then distributed to media producers and artists. in return,
> copying copyrighted material is tolerated for private use and consumption...

We have a very similar fee in Germany, and it's there to allow normal copying
for personal use (something that would be called fair use in the US). I guess
the Swiss fee is there for the same reason, and not to allow people to get the
majority of their media for no additional money. The fee most likely is not
enough to finance the media producers and artists. Why spend it on hardware
when you can give it to the people who produce what you enjoy?

------
sqldba
Didn't seem to list the total price.

------
xupybd
Wow, what on earth are you doing (at home) that you need that much space, backed
up with that level of reliability?

~~~
jvolkman
Having fun.

~~~
foobarian
Two words: "kids" "videos".

Between me and my wife we have our phones, an SLR, and then there are other
people's phones. Whenever they fill up I dump them onto my file server and
delete from the phone; it's amazing how fast the terabytes get filled up. Nice
problem to have, I guess.

~~~
FraKtus
Same here, and shooting pictures of kids in raw format needs space :-)

------
unethical_ban
The OP bought 6x6TB drives. I truly hope they didn't configure it as a
36TB zpool. That should be RAIDZ2 at the very least. Heck, I have 4x2TB drives
and I am running RAIDZ2. A 2TB drive took 24 hours to rebuild the data onto the
replacement disk. It would probably be linear, so 72 hours to rebuild a 6TB
drive, during which time the other drives are doing tons of reads.

~~~
daurnimator
The article says they're using RAIDZ2

~~~
unethical_ban
I stand corrected; I see that now. The HN submission is in error, then.

~~~
SirMaster
Why is it in error? It's 6x6=36TB. Parity is still storing data. Who says the
data storage total has to count only user data?

My ZFS pool is 12x4TB in RAIDZ2, and when I query `zpool list` I get:

    NAME  SIZE   ALLOC  FREE
    tank  43.5T  25.3T  18.2T

The size reported by ZFS itself is 43.5TiB, which is close to the 48TB that
12x4TB is.

~~~
unethical_ban
Eh, I suppose. When my friends and I discuss NAS size, we quote it in "usable
space". So you have a 40TB-usable NAS; I have a 4TB-usable NAS with 4x2TB in
RAIDZ2.

