
How I Store My 1's and 0's: ZFS + Bargain HP Microserver = Joy - mocko
http://mocko.org.uk/b/2012/06/17/how-i-store-my-1s-and-0s-zfs-bargain-hp-microserver-joy/
======
zdw
I have this same hardware. A few notes:

\- For best performance with ZFS, you want a lot of RAM, and this unit will
take 8GB of ECC RAM. You want ECC for data integrity in memory, as ZFS does
nothing to prevent in-memory data corruption (there's an article on this here
[pdf]:
[http://research.cs.wisc.edu/wind/Publications/zfs-corruption-fast10.pdf](http://research.cs.wisc.edu/wind/Publications/zfs-corruption-fast10.pdf))

\- You probably want a few mirrors, not RAID-Z, if performance is an issue
(see the sketch after these notes).

\- For stability alone, you're better off with FreeBSD or the Illumos-kerneled
distros (which have run ZFS for years, and have it in their mainline kernels)
rather than Linux (which will never have ZFS in mainline, for licensing
reasons).

\- You can get an IPMI card for this unit if you want remote manageability.

\- There's an internal USB port if you want to boot off of that. It's kind of
handy.
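
On the mirrors-vs-RAID-Z note above, a minimal sketch of a pool of mirrors -
pool and device names here are placeholders, not from the article:

    # two striped two-way mirrors; random I/O scales with the number of vdevs
    zpool create tank mirror ada1 ada2 mirror ada3 ada4
    zpool status tank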

~~~
dedward
Good points - people do need to remember that ZFS was designed for going big.
It does nothing to prevent memory corruption by design.... it was assumed
(required?) that your server would use ECC RAM.

You also need to ensure you have adequate RAM for various caching, and CPU
power as well for checksum calculation, among other things. It's not a
lightweight FS. (In-memory corruption is an issue with all filesystems - just
somewhat more so with ZFS, because it was designed specifically to assume you
had reliable RAM.) You want RAM to store the hash cache or whatever too....

One could chuck an SSD in there for cache if memory is a limit; that should
speed things up drastically (see the sketch below).

And as with all raid-like systems - you want an appropriate number of hot-
spares, cold spares, and a system that monitors it and acts appropriately,
especially if you're going with huge drives on slow busses.

You want regularly scheduled scrubs, not too many snapshots, probably disable
atime (noatime) to speed up those scrubs, and compression probably off....

Dedup I'm still on the fence about - I leave it off; I can only see specific
situations where it would be truly useful (dedup+verify).
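
To make the cache and tuning suggestions above concrete, a rough sketch -
pool and device names are placeholders:

    zpool add tank cache ada5       # SSD as a read cache (L2ARC)
    zfs set atime=off tank          # the noatime equivalent
    zfs set compression=off tank
    zpool scrub tank                # run on a schedule, e.g. from cron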

~~~
zdw
Unless you're using enterprise-class SSDs with SLC flash and ultracaps to
guarantee that writes make it to flash in the event of power loss, you're
risking your data by using one as ZIL.

Cheap consumer-class SSDs are generally fine for L2ARC.
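
Worth spelling out, since the two roles are added differently: a "log" vdev
holds the ZIL (the risky one), a "cache" vdev is L2ARC. A sketch with
placeholder names:

    zpool add tank log gpt/slog      # ZIL: wants an SSD with power-loss protection
    zpool add tank cache gpt/l2arc   # L2ARC: a cheap consumer SSD is fine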

------
kijin
I've been hearing awesome things about ZFS for years now. Unfortunately, it
can never be part of the Linux kernel due to licensing issues, so we're stuck
with "your 1's and 0's are being held by a pre-1.0 version of a filesystem
invented by a dead company". How far has btrfs come in its support of ZFS-like
features? Still need a few more years? I'll be switching as soon as it's
marked stable.

I wonder if the performance might be better if a good 16GB USB stick was used
for the OS drive instead of an old laptop drive? The OS needs a lot of random
access, but doesn't take up much space.

I also wonder why the author went with Ubuntu 10.04 LTS instead of 12.04 LTS,
which would give him two more years of peace of mind. It's been a few weeks
since 12.04 came out, so it's pretty stable. It does get kernel updates more
often than I'd prefer, though, and GNOME 2 is gone.

~~~
sho_hn
I find your assessment of the Linux filesystem situation to be inaccurately,
and surprisingly, negative. "Stuck with" implies little potential for change,
when there are several filesystems which are being developed at a swift pace,
solving tough problems with vigor and ingenuity. btrfs is coming along just
fine. I can't think of any place with more filesystem development going on
than Linux.

I mean, let's just not leave it at idle claims:

btrfs changes in 3.4:
[http://kernelnewbies.org/LinuxChanges#head-556161b206bf626d6...](http://kernelnewbies.org/LinuxChanges#head-556161b206bf626d6c84f9973dbdc3c8ef15bd07)
3.3:
[http://kernelnewbies.org/Linux_3.3#head-1f03b4babafb1049bea3...](http://kernelnewbies.org/Linux_3.3#head-1f03b4babafb1049bea35793c2c0fb91fae48cd4)
3.2:
[http://kernelnewbies.org/Linux_3.2#head-f0a922e9c0ce6f48810d...](http://kernelnewbies.org/Linux_3.2#head-f0a922e9c0ce6f48810dbe204f89a69eab8034eb)
3.0:
[http://kernelnewbies.org/Linux_3.0#head-3e596e03408e1d32a7cc...](http://kernelnewbies.org/Linux_3.0#head-3e596e03408e1d32a7cc381d6f54e87feee22ee4)

RAID5/6 barely missed 3.5, but should be in the pull request for 3.6 (other
changes for 3.5: <https://lkml.org/lkml/2012/6/1/160>).

What's going on in XFS: <http://lwn.net/Articles/476263/>

Coverage from the 2012 filesystem/storage summit, day 1:
<http://lwn.net/Articles/490114/> Day 2: <http://lwn.net/Articles/490501/>

2011 summit, day 1: <http://lwn.net/Articles/436871/> Day 2:
<http://lwn.net/Articles/437066/>

Ext4 is still pretty active for a "done" filesystem, too: The last 12 months
saw work on online resizing, support for bigger block sizes, a cleanup of
mount options, ...

~~~
kijin
Sorry if I came across as negative. The "stuck with" was a reference to ZFS's
status as a pre-1.0 PPA, not a reference to Linux filesystems as a whole.

On the other hand, the only in-kernel Linux filesystem that can match ZFS's
feature set (for example, resizing a RAID array while the filesystem is
online) seems to be btrfs, which probably won't be marked stable for at least
another year or two. The latest developments to XFS and ext4, though
interesting, aren't particularly relevant if you're looking to build a server
like what the article describes.

Still, thanks for the many interesting links!

~~~
maccam94
I'll just note that it is the implementation of ZFS that is pre-1.0, not the
filesystem itself. The ZFS filesystem itself has been production-ready for
several years now. That said, I might use the current Linux implementation for
my non-critical data, but for anything important I'd stick to
Solaris/OpenIndiana. There's also a decent implementation on FreeBSD, but I'm
not a fan of that OS.

~~~
gcb
All the bugs happen in implementation.

~~~
maccam94
Agreed. My point was simply that you shouldn't write off the filesystem
entirely just because some of the implementations aren't mature.

------
mrb
Agreed, ZFS rocks.

I started using OpenSolaris fileservers at home in 2007. I have been through
multiple upgrades since then: 5x500GB, 7x750GB, and currently 7x1.5TB (all
raidz).

I am just about to upgrade to my 4th server with 6x3TB in raidz2, which will
be running FreeBSD this time.

Due to the sheer number of drives and continuous 24/7 operation, I have
experienced 4 drive failures over the years. Every time, ZFS handled the
failure gracefully, and I was able to swap the drive and rebuild the array
without a hitch.
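
For anyone curious, the swap-and-rebuild boils down to something like this
(the Solaris-style device name is a placeholder):

    zpool replace tank c1t3d0    # resilver onto the freshly swapped drive
    zpool status tank            # watch the resilver progress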

I take daily rotating snapshots - incredibly useful when you accidentally
delete stuff.
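
A daily rotation can be a couple of cron'd one-liners along these lines (pool
name and expiry date are placeholders, not an actual script):

    zfs snapshot -r tank@`date +%Y-%m-%d`    # take today's snapshot
    zfs destroy -r tank@2012-06-03           # expire the oldest one by name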

I also run weekly scrubs, which have allowed me to witness 2 or 3 silent data
corruptions which were automatically self-healed by ZFS (CKSUM column in zpool
status).

I mostly use the file server to share videos & photos via NFS, and to store
encrypted backups of my laptop. It has become so useful and practical that I
started to use it as an iSCSI server as well to boot some diskless Windows
test machines for GPU projects.

All in all, ZFS deserves all the praise you hear.

------
steve8918
I used to build my own file servers but 2-3 years ago I bought a ReadyNAS
ProBusiness at home and I love it. Granted it cost me about $1500 at the time,
but it has made my life so much easier. I'm at the stage of my life where I'd
rather pay extra and save time.

It supports almost everything out of the box, and there's very little
configuration. It has 6 hot-swappable bays, and it allows for automatic
expansion using their proprietary system, X-RAID 2. I currently have 4 500 GB
drives, and 2 1 TB drives, and if I want to expand it, I just buy another 1 TB
drive and swap out a 500 GB drive.

It also supports streaming protocols, including ReadyDLNA, so I can play
movies directly off my PS3. It also seamlessly supports Time Machine for my
Mac laptops.
I really do love this thing.

~~~
taybin
How do you back it up?

~~~
chrishas35
Not sure about the ReadyNAS ProBusiness, but my NV+ has a USB port that I can
plug an external drive into; push the backup button, and in just a little bit
I'll have a drive suitable for taking off-site. The FrontView web interface
has a simple way to configure what the backup button does. Very simple.

It also runs Linux under the hood, and I've been able to configure CrashPlan
for off-site backups of some key files too.

~~~
steve8918
I use the USB port as well and connect a USB drive to it. But I am a bit more
paranoid, and I do a bit-by-bit comparison of every file that I copy, after I
do the backup.

------
colione
I have about the same setup, but I built my server from scratch, have 6 _2TB,
and I run FreeBSD 9 (plus an SSD for the OS). OS-reliability-wise and in ZFS
maturity, FreeBSD > Ubuntu. You'll have virtualisation through VirtualBox if
you'd like it too, but I prefer to separate the storage and virtualisation
platforms (do one thing etc). In some ways _BSD is easier and more logical in
its setup and administration, plus you'll have better documentation on the
site and a higher signal-to-noise ratio in the forums, if you need help.

~~~
forgotusername
Despite being a heavy computer user I'm positively struggling to fill 1TB even
after 2 years of hoarding HD rips, so I'd love to know what you use 62TB for.
Also wondering about idle power consumption.

~~~
colione
Bah, formatting error. 6 times 2TB, not 62TB. Everything runs off an Atom
board, with modded cooling for the bridge and CPU (fanless, really awesome,
copper heat sinks). The discs are WD Caviar Greens, low power, and they are
attached with rubber to silence them. There are two slow-spinning 90 mm fans
in front of the disks, one 120 mm in the quiet Zalman PSU, and one 90 mm at
the back. I built it in an HTPC chassis.

------
apaprocki
Serious question.. does no one use SmartOS[1] for this? I wouldn't feel
entirely secure running ZFS on Linux when I could just as easily run SmartOS
and get the "real" ZFS.

[1] <http://smartos.org/>

~~~
zdw
I run OpenIndiana (and OpenSolaris before that) on mine, and I tend to agree -
Linux ZFS is unlikely to ever be as mature as either the Illumos kerneled
distros (OpenIndiana, Nexenta, SmartOS), or FreeBSD (and derived systems like
FreeNAS), both of which ship ZFS in the kernel as a primary filesystem.

~~~
sciurus
How many of the Illumos or FreeBSD developers are working on ZFS? I'm curious
if those implementations actually have any more manpower than the ZFS on Linux
project.

~~~
bcantrill
The companies that are betting on ZFS -- Delphix, Nexenta and Joyent (and a
bunch more that are less public about their work) -- are overwhelmingly
(indeed, exclusively, to the best of my knowledge) on illumos and FreeBSD. Of
these, Delphix in particular is of note because of the original ZFS core team
members working there: Matt Ahrens (the co-inventor of ZFS), Eric Schrock and
George Wilson -- not to mention important ZFS contributors like Adam Leventhal
and Chris Siden.[1]

So illumos remains the repository of record for ZFS -- with a close
relationship with those working on ZFS on FreeBSD. While the Linux port is
certainly a Good Thing, it does not reflect a shift in the epicenter of ZFS
development...

[1] <http://www.youtube.com/watch?v=-zRN7XLCRhc#t=43m54s>

------
rryan
This is almost identical to my backup server except I have 3x2TB drives in a
RAID-Z pool. I agree with all the author's "reasons this is awesome" except my
#1 reason is data integrity.

ZFS with RAID-Z does block-level checksumming and automatic healing as you
access your data. Combine that with a weekly scrub (touches every block so any
silent bit flips are healed) and I can do away with my fears that my precious
bits are rotting away.
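
For reference, the weekly scrub is a single system crontab entry; "tank" is a
placeholder pool name:

    # /etc/crontab: scrub every Sunday at 03:00
    0 3 * * 0  root  /sbin/zpool scrub tank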

~~~
Gibheer
ZFS uses block-level checksumming for every block. The auto-healing kicks in
when there is a second copy or another way to rebuild the block, e.g. RAID-Z.
So you already get auto-healing when using a mirror.

~~~
dedward
As long as you have ECC RAM. If not, you're at greater risk than with most
other filesystems (which are also at risk) - ZFS was designed with the
requirement that RAM is reliable.

------
conradev
I have been using FreeNAS, which is a slimmed down version of FreeBSD meant to
run off of a USB stick. It has a web interface to set up and manage ZFS, and
can take regular ZFS snapshots, among other things. It also includes Netatalk
and other software to share the disks over the local network.

I use it as a Time Capsule for my MacBook. Hooked up to gigabit ethernet when
docked, backups are a breeze.

~~~
windexh8er
Ditto - FreeNAS FTW. A new RC of 8.2 was just released in the past week and
it's awesome. The web interface was moved over to Django in the 8.x line
(from 7.x). It's minimal, but feature-rich and "just works".

For my home setup I actually run FreeNAS in an ESXi environment. I have 4
different physical disks that I carve up space on and allocate to the FreeNAS
VM. This allows me to snapshot upgrades on the base OS and when I'm testing an
upgrade I can disconnect the ZFS pool - validate the upgrade went fine, and
then reconnect the pool for a ZFS upgrade (if needed). The nice thing about
this approach is you can physically move the system around very easily if all
you need to ship is the disk store - and since I have one beefy box for
virtualization at home, my storage system is contained within it, which makes
power and space a bit more efficient.
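
Presumably the disconnect/reconnect dance is plain pool export/import, along
these lines:

    zpool export tank    # detach before testing the base-OS upgrade
    zpool import tank    # reattach once the upgrade checks out
    zpool upgrade tank   # only if you want the newer on-disk version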

My suggestion to those who are considering this: if you stick about
$500-$1000 into a BYOD system, you can generally get a high-end quad-core
system with 32GB of RAM and 3-5TB of disk space (with an SSD boot). At that
rate I would carve up a few TB for backup and SAN (FreeNAS) and the rest
would be for on-box VMs. The FreeNAS VM doesn't need more than 1 proc and
about 2-4GB of RAM if you're dealing with a lot of file transfer; you can
easily get away with 2GB.

Long story short: ESXi + FreeNAS = a 1-box solution for most at-home geeks.
My motivation was that I was starting to have "box sprawl" and power
consumption was getting a bit out of hand. I run pfSense on this box as well,
though a low-power physical system acts as the primary gateway device on my
network; the pfSense VM takes over for failover when I do upgrades. Far
better than any SOHO gear you can buy for far too much $$$.

------
aes256
I have a couple of the HP ProLiant MicroServers, one of which is set up as a
NAS server with a RAID-Z array on 5x 2TB drives (running Oracle Solaris 11
Express in order to remain at the cutting edge of ZFS development).

At ~£150 after rebate (was around £120 when I bought mine) the MicroServer is
an absolute steal, and ZFS is a dream to administer. Truly a match made in
heaven.

------
ajtaylor
The ability to add new, differently sized disks with RAID-Z is the killer
feature of ZFS. I wonder how Linux ZFS performance & stability compare with
the FreeBSD ports? In the past, I've considered using something like FreeNAS
for my home storage needs but the ZFS support wasn't ready last time I looked
(1+ years ago?).

~~~
StavrosK
I also wonder the same. I was considering using BTRFS because it's better
integrated, but if ZFS is more stable/mature, I'd go with that without even
thinking. Has anyone used it for some amount of time?

~~~
blinkingled
I used zfs-native on Ubuntu and Fedora a year or so ago for less than a month
and found it to be unusable - I couldn't even copy my data from the source to
the backup ZFS disks; it just went into a loop with high CPU. That may have
changed a bit with later releases, but I just don't think getting ZFS to
scale and run reliably on a different OS is going to happen anytime soon,
given how much effort and skill it would take.

What I am looking at doing is getting/building a Solaris-compatible box for
my backup needs - that is a daunting task. But if I could do that, I could
run one of the OSS variants of Solaris - Joyent SmartOS, Nexenta etc..

~~~
StavrosK
Ah, there go my hopes, dashed. Thanks for the input, I'll try it on a disk I
don't need and see if it's unusable, thanks again.

~~~
rewtraw
I'm currently running a backup box nearly identical to the OP, and haven't had
a problem yet (built it over a year ago). ZFS runs like a dream on Ubuntu,
even with a slew of oddly sized disks (1TB + 2x2TB + 3TB) at 90% capacity.
I've had my SATA card come loose, and ZFS just locks down the FS to r/o so no
damage is done. And unlike many other file systems, ZFS's checking utility
actually gives you human-readable results if there is an error (e.g.
"/foo/bar is corrupt", not just cryptic messages), and tells you exactly what
you should do to repair the data.

I recommend at least trying ZFS out in a VM, I guarantee you'll be impressed
by the versatility.

~~~
StavrosK
I'll definitely give it a shot, thanks. Does anyone know what btrfs lacks
compared to zfs, other than stability?

~~~
jpadkins
For a file system that is a big 'other than'.

~~~
StavrosK
Sure, but I already know ZFS is stable and btrfs is less so.

------
trvrprkr
From the article: "Why not Debian or CentOS? Cool, go that way if you prefer
them. But personally I am in luuuuuurve with the Ubuntu ZFS PPA."

With Debian, you can use the PPA as-is. This requires adding it to your
/etc/apt/sources.list and manually adding the signing key with apt-key.
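
Roughly like this, assuming the zfs-native PPA - double-check the exact path
and metapackage name for your release, and the key ID below is a placeholder:

    # append to /etc/apt/sources.list
    deb http://ppa.launchpad.net/zfs-native/stable/ubuntu lucid main

    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys <KEYID>
    apt-get update && apt-get install ubuntu-zfs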

Something else the author doesn't directly address is that ZFS on Linux is
really only usable on 64-bit systems. Funny things may happen if you use the
32-bit version, such as kernel OOPSes when doing something as simple as ls
-a.

I've had nothing but great experiences with running this on my home NAS.

~~~
Wilya
Note that the 32-bit thing isn't completely a Linux issue. ZFS seems pretty
much designed with 64-bit in mind.

On the FreeBSD side, there is a whole section on tuning [0] for i386 users.
Some of it might translate to Linux (at least the concepts and things to
watch).

[0] <http://wiki.freebsd.org/ZFSTuningGuide>
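
The gist of that guide for i386 is capping kernel memory and the ARC in
/boot/loader.conf; the values below are illustrative only, not
recommendations:

    # /boot/loader.conf
    vm.kmem_size="512M"
    vm.kmem_size_max="512M"
    vfs.zfs.arc_max="160M"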

------
joshu
The last non-server HP products I have purchased have died. A desktop for my
mother, a laptop for the inlaws, and a small PC for myself.

I am terrified by the quality of their non high-end server gear.

~~~
currywurst
Great to know that someone else shares this apprehension. I cannot believe
how callous HP is in developing their consumer products.

It first started with our HP laptop that died multiple times over the
warranty period, and then some friends' machines as well... the central issue
in almost all the cases: bad thermal management.

~~~
icebraining
Mine runs hot, but it's going strong after almost three years, even though it
has an AMD CPU with a particularly high TDP compared to others in the same
segment (Atoms).

------
dantheta
Just a quick note on those HP microservers - I have one, and I'm fond of it,
but the e-SATA port doesn't support hotplug and the inbuilt ethernet doesn't
support jumbo frames (at least, under versions of Linux that I've tried).

The e-SATA thing probably doesn't matter too much, but for anyone looking to
run iSCSI or even higher-throughput NFS, the absence of jumbo frames may be a
more important consideration.

------
fiatmoney
As you scale up your storage, I'd recommend you switch away from RAID-Z to a
pool of mirrors (essentially RAID10). It becomes easier to add or upgrade
pairs of disks with differing capacities (eg, a pair of 1TB, a pair of
2TB...), and in the event of failure you have more than a snowball's chance of
being able to rebuild the array before you have another disk go.

~~~
huggah
I bet a snowball _would_ have difficulty rebuilding the array before another
disk went. I like your version better.

------
swdunlop
Love ZFS, hate having to use it in FreeBSD or Illumos, so I like seeing
someone successfully using it in Linux even if licensing concerns keep me from
doing it. When I saw that he was using "WD Caviar Green" hard drives, though,
I cringed.

These "green" hard drives tend to be very aggressive about parking heads and
spinning down the platters. While this is fine if a disk is going to sit idle
for a long period of time, in cases like an OS partition and memory buffering,
these drives start destroying themselves spinning down and cranking up several
times a minute.

We had 4 out of 16 fail one week after burn-in in a raidz configuration. We
had plenty of hot spares for various reasons of paranoia, and they didn't go
all at once, so we recovered and replaced the entire batch with
Constellations.

The only positive note was that we are now firmly in love with zpools and
zfs. It made egregious hardware failure a manageable problem.

~~~
luser001
Interesting.

Can you please point me to a more authoritative source for this? I also had a
"green" drive fail surprisingly soon. How did you confirm that they spin up
and down "several times a minute" by themselves? Are you sure it wasn't your
misconfigured OS telling them to do that?

Since green drives are also slower, I just assumed they traded off energy for
data transfer speed.

~~~
swdunlop
You can look for yourself using smartctl -- they still respond after the
mechanical failure. Aside from pasting internal emails (not happening :)),
you'll have to google for yourself on this one. We didn't do anything novel
with these drives at a hardware level to make them fail, and we've got dozens
of Constellations in an identical configuration that have had no failures
after months of load.
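
For anyone wanting to check their own drives, the tell-tale SMART attribute
is the load-cycle counter, e.g.:

    smartctl -A /dev/ada0 | egrep 'Load_Cycle_Count|Start_Stop_Count'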

It's not transfer speed that they seem to trade off for, it's access times.
That may have been part of the problem -- our reads are extremely random over
a very broad range of sectors. This isn't exactly a log or video server usage
model. :)

------
filmgirlcw
We have almost the exact same setup except we use FreeBSD instead of Ubuntu --
mostly because when I set up this server, it was before ZFS support was
reliable in Linux.

I can't remember the exact specs on our box, but it was a similar microserver
that we've got 4 drives RAIDed in. It started at 1TB per drive, but we've
since upgraded to 2TB drives.

This replaced a FreeNAS setup that I ran out of the closet in my home office
for years when I lived in Atlanta. That server was great, but it was loud,
ran so hot the closet was seriously 20 degrees hotter than the office, and
was an electricity glutton. When we moved to New York City last year, we
decided to consolidate to a small unit for size/heat/power.

Highly recommended to anyone who needs a media server, general file server
and fast-access VM/local environment.

ZFS is absolutely the way to go.

------
rapind
I've been using an unRAID setup (on an old workstation with plenty of bays)
for a couple years now and I've been happy so far. No failures yet though so I
haven't really put it to the test. The HP microserver definitely looks nice.

I just use it for storage though. No VMs etc. And I'm not overly concerned
about access speeds so long as I can play HD video off it (which I can).

<http://lime-technology.com/>

------
cheeseprocedure
Be extremely cautious when using WD Green drives in a RAID configuration.

They may be quiet/inexpensive, but they also don't ship with Time-Limited
Error Recovery enabled
(<http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery>). This makes it
much more likely they'll drop out of an array. Some models can have this
option flipped on; some cannot.
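
On models whose firmware allows it, the error-recovery timeout can be queried
and set with smartctl's SCT commands (values are in tenths of a second):

    smartctl -l scterc /dev/sda          # query current read/write timeouts
    smartctl -l scterc,70,70 /dev/sda    # set both to 7 seconds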

~~~
luser001
Thanks a lot! This is the most useful thing I've read in this thread. I own
two green drives.

I installed two Caviar Black (b/c of the 5-yr warranty) drives in a RAID array
a few months ago. The RAID edition drives aren't even significantly more
expensive. I wish I'd known this earlier.

This is a bummer.
[http://wdc.custhelp.com/app/answers/detail/a_id/1397/p/227,2...](http://wdc.custhelp.com/app/answers/detail/a_id/1397/p/227,283/session/L3RpbWUvMTMyMTQzOTc4NS9zaWQvdVhvYmpmSms%3D)

Edit: But on another page, WD says that Caviar drives can be used in consumer
RAID configs. Hmm.
[http://wdc.custhelp.com/app/answers/detail/a_id/996/related/...](http://wdc.custhelp.com/app/answers/detail/a_id/996/related/1/session/L2F2LzEvdGltZS8xMzQwMDM1MzExL3NpZC9yZFJZaC0taw%3D%3D)

------
X-Istence
I've got a similar setup, but a custom build: 2 IDE hard drives in a zmirror
for the OS install - I am running OpenIndiana (I absolutely love the
stability of the OS) - and then 5 hard drives in a raidz, with a 6th drive
sitting in standby (when I set this up, raidz2 wasn't available yet) to
automatically take over in case of a failure.

This machine has now been chugging along for a long time. It stores about 4 TB
of personal backups (all my machines back up to it over the network), and
various other things such as projects, media files, photos. ZFS is rock
solid. I've had drives fail and the standby drive take over without my
noticing a thing.

I've got 4 GB of memory in this machine and I can get write speeds over the
network of 80 MB/sec using consumer grade drives, and read speeds over the
network of around 120 MB/sec (I easily saturate my Gbit network).

I wouldn't store my backup bits on any other file system. I've had
non-recoverable failures with various Linux-based RAIDs/file systems, and
I've used UFS on FreeBSD in the past and had data silently corrupted.
End-to-end checksumming is absolutely fantastic!

~~~
luser001
What network file system do you use to make ZFS available to your other
computers? NFS and Samba?

------
jamesu
I grabbed an HP MicroServer a few months ago after my old, slow NAS died. One
of the best purchases I've made this year.

Initially I tried using Ubuntu Server on it, but there were a few problems
with it. I also tried FreeNAS, but beyond the basic "share files and use ZFS"
it didn't really offer much in the way of customisation.

So instead I decided to just put Windows Home Server on it, since all I
wanted to do was share files, use basic RAID, and run virtual machines for
testing with the minimum of fuss. Windows RDC works fantastically out of the
box. For VMs I just used VirtualBox. I stuck FreeNAS in a VM and left it
running for Time Machine - perfect.

I also stuck an SSD I had lying around in the optical bay, so I have 4 drives
available in the bays dedicated to storing data.

Despite not using Linux or ZFS, I'm quite happy with this current setup.

------
__alexs
> Normally we fear expanding a home server because resizing a RAID always
> means copying terabytes of data off somewhere else, wiping the lot and
> constructing a new one including the original disks.

This is just wrong. Only last week I resized my RAID5 array from 2 drives
(basically RAID1) to 4 drives. Reshaping and resizing the ext4 partition was
all done online with minimal performance impact. The only bit that took a long
time was the reshaping, and even then it managed to add a new 2TB drive in
only a few hours.

Even upgrading the entire array to use a larger size of disk is pretty easy.
You do need all the disks to be the same size though :(
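
For reference, the online reshape described here boils down to something like
the following (device names are hypothetical):

    mdadm --add /dev/md0 /dev/sde1             # add the new disk as a spare
    mdadm --grow /dev/md0 --raid-devices=4     # reshape onto it (the slow part)
    resize2fs /dev/md0                         # grow ext4 while still mounted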

~~~
jrk
It's also wrong on the opposite front: ZFS cannot change geometries on the
fly. I am a fairly big fan of ZFS, but the ability to mix disk sizes and
change numbers of disks in a RAIDZ set is not why--it effectively cannot do
either.

------
jalada
Best takeaway for me from this: RAID-Z on Linux is now ready. Good to know!

------
Havoc
Cool. I plan on doing something similar in future. One thing though:

Running the FS off a USB stick is a bad idea. No wear leveling, so it's only
a question of time before it bombs out.

~~~
insn
Mount it read-only, put temporary files into ramdisks.
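
i.e. something like this in /etc/fstab (a sketch; devices and mount options
will vary):

    /dev/sda1   /         ext4    ro,noatime   0  1
    tmpfs       /tmp      tmpfs   defaults     0  0
    tmpfs       /var/log  tmpfs   defaults     0  0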

~~~
bengl3rt
Sadly, flash also suffers from read-disturbs. USB sticks can die just from
being read a lot.

Plus, sometimes you want logs for troubleshooting... small SSDs are cheap.

------
ZeWaren
Well, being able to send incremental backups of ZFS pools between systems is
IMHO one of the best features of the system. Also, snapshots are very handy.
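
For anyone who hasn't seen it, an incremental send/receive looks roughly like
this (pool, dataset and host names are hypothetical):

    zfs snapshot tank/data@tuesday
    zfs send -i tank/data@monday tank/data@tuesday | \
        ssh backuphost zfs receive backup/data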

------
newman314
Does anyone have recommendations for similar hardware with 5 or 6 drives?

On a different note, the number of possible distributions is a tad confusing.
Can someone recommend a distribution (say FreeNAS, or an Illumos-based one)
with the most up-to-date ZFS support and ongoing updates? I'm primarily
looking for something that is hassle-free to maintain and update. FreeBSD- or
Solaris-based is okay.

~~~
Auguste
Not sure if money's an issue for you, but I don't think you'll find anything
with 5-6 drives at this price. At AUD $280, it's cheaper than a lot of
2-drive NASes out there.

------
cpg
I'm biased (I started Amahi), but try the Amahi server <http://www.amahi.org>

We looked into using ZFS, as there has been some demand for it, but all the
licensing and the people around it were hard to deal with (non-responsive, to
be precise). It would be cool, though.

~~~
mrb
Expand the _first_ mention of the HDA acronym on the homepage.

You could have based your product on FreeBSD to use ZFS. There would have been
no licensing issue or people to deal with since the BSD license allows
modification and redistribution.

------
cpg
Oh, and BTW, there is also Greyhole, which we integrated in Amahi. It makes a
large redundant store out of a JBOD, with replication on multiple selectable
spindles, etc. etc. <http://greyhole.net>

------
madrona
I am looking to replace my Acer Windows Home Server box with something else,
so this article is very timely. Thanks for posting it.

Is the HP Microserver the best computer in its class, or are there other good
competitors?

------
res0nat0r
I've got 12TB in my unRAID box, which has been working perfectly for me for a
couple of years now.

<http://lime-technology.com/>

------
0x0
That's interesting. I've got two QNAP TS-419P boxes running Debian on armel.
I had no idea that similar, AND x86-64 based, hardware was available for much
cheaper!

------
s800
nas4free, a FreeBSD 9-based fork.

------
mathnode
I shall stick to XFS until BTRFS has made enough improvements. That's right,
everyone: XFS has been here for a long time and is doing great in production.

------
LoneWolf
Am I the only one having trouble reading the font of the post? On a 24-inch
monitor running at 1920x1080 with Chrome on Windows 7, it's not exactly easy
to read.

------
guelo
What surprised me when I first started messing with home file servers was that
even with the server plugged into the WiFi router's switch I still couldn't
directly play high-resolution video on my laptops. You end up having to try to
get some clunky streaming transcoding solution which only works for some file
formats and requires a heavy duty CPU.

~~~
mrb
That's because your laptop's wireless connection quality is sub-par. The
fileserver might be on the LAN, but the laptop's wireless is going to be your
bottleneck.

------
rcthompson
So, just how stable is ZFS on Linux these days? Anyone else with experience
care to share?

------
gringomorcego
I've heard that the performance of Linux ZFS is terrible, but I think that
was a Phoronix report...

Also, does anyone know how well DTrace was ported to Linux? The last thing I
remember reading said it was half-assed.

~~~
mrsteveman1
The old FUSE driver used to perform pretty poorly, but I haven't used it in a
while.

I've been using the native Linux kernel driver (obviously not in-tree, but
easy to install) and it's fantastic; I see no slowdowns at all. Whatever
performance hit there is, if any, is worth the benefits :)

------
drivebyacct2
Wow. This looks like an incredible deal. After planning to buy a new
ultrabook soon, I'm thinking of ditching my monstrous desktop and going back
to a laptop + server setup. This looks perfect!

... No international shipping? I'm gonna cry.

------
gcb
No redundant power supply?

------
rabiyh
Rabiyh

------
bobowzki
Incredibly annoying blue lines...

------
kba
Please stop using fonts with serifs for the web. It's a strain on the eye.
Great article, though.

