I've learned the hard way that the 'R' in traditional RAID truly does stand only for "redundant", not "reliable". Reliability in traditional RAID is predicated on complete, catastrophic failure of a drive, such that it is either working wholly and completely or failing wholly and completely.
In a traditional RAID, for any failure mode in which a drive or its controller starts to report bad data before total failure, the bad data is propagated like a virus to the other drives. The corruption returned by a failing drive is lovingly and redundantly replicated to the other drives in the RAID.
This is the advantage of ZFS (or BTRFS): blocks of data are checksummed on write and verified on read, so corruption can be isolated and repaired. Yay for reliable data.
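A scrub is the usual way to exercise that verification across every block. A minimal sketch, assuming a ZFS pool named tank and a BTRFS filesystem mounted at /mnt/data (both names hypothetical):

    # ZFS: read and verify every block, repairing from redundancy where possible
    zpool scrub tank
    zpool status -v tank    # shows progress and any checksum errors found

    # BTRFS equivalent
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data

Either way, the scrub can only repair what it has a good copy of, so it needs mirroring or parity underneath to actually fix anything.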
Unfortunately, the three-drive filesystem lasted two weeks before it became unmountable. The only thing that let me mount it was finally running btrfsck. I was left with 57 unrecoverable errors, and lots of lost data.
I would not recommend running BTRFS in RAID5 or RAID6 just yet. Stick with mirroring, if you want to use it, and rebalance to RAID5/6 later on when it's more stable.
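That conversion can be done in place with a balance filter once you trust the raid56 code. A minimal sketch, assuming two disks and a mount point of /mnt/data (hypothetical names); many people keep metadata on raid1 even after converting the data:

    # start out mirrored, for both data and metadata
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/data

    # later: convert the data profile to raid5 with an online rebalance
    btrfs balance start -dconvert=raid5 /mnt/data
    btrfs balance status /mnt/data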
Edit: to anyone not up on btrfs, its features are closely tied to the kernel version it's run under. For example, raid56 scrub, device replace, and the recovery and rebuild code were not available prior to kernel 3.19.
I also believe the only way to use the RAID5/6 modes before they were stable was to explicitly compile the kernel with them enabled. It wasn't something you could do accidentally.
I didn't have much data to submit. No kernel panics, no useful error messages, nothing beyond it saying it wouldn't mount. One could read the tea leaves from the filesystem as it sat, but such data spelunking could take a while on an 8TB partition, and I wanted to get the disks back into use.
I didn't notice the corruption until after I had unmounted it, so scrubbing it wasn't an option.
I recently got myself a new home server and decided to use FreeBSD and ZFS, so the question is settled for me for the moment. I still hope Btrfs gets there in the not-too-distant future, though.
As far as I can tell, the only consumer NAS maker that offers BTRFS is Netgear.
What I do is have my ZFS in two "layers" (each of them 4 disks in raidz2, i.e. resilient against any two failures), and replace a whole layer at a time. So I started with 4x500GB drives for 1TB of capacity. Then I added 4x1TB drives, total capacity 3TB. Then I replaced the 500GB drives with 2TB drives, total capacity 6TB (and throwing away the 500GB disks, so "losing" 1TB). I'm shortly going to replace the 1TB drives with 4TB drives in the same way.
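In zpool terms that growth path looks roughly like the following. A sketch with hypothetical pool and device names; autoexpand lets a vdev grow once every disk in it has been replaced:

    # layer 1: four 500GB disks in raidz2 (~1TB usable)
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # layer 2: add four 1TB disks as a second raidz2 vdev (~3TB total)
    zpool add tank raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

    # replace a layer: swap each 500GB disk for a 2TB disk, one at a time
    zpool set autoexpand=on tank
    zpool replace tank /dev/sda /dev/sdi    # repeat for sdb, sdc, sdd
    # when the last disk is replaced, the vdev expands to the new size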
I use an HP MicroServer Gen8; it's lovely hardware. For software I run NAS4Free from a USB stick in the internal port.
: by "manually" I mean that the machine using obnam for backups takes one or the other group as its destination, depending on the day.
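That alternation is simple to script from cron. A minimal sketch with hypothetical repository paths, using epoch-day parity to pick the destination:

    #!/bin/sh
    # alternate between two backup destination groups by day parity
    day=$(( $(date +%s) / 86400 ))
    if [ $(( day % 2 )) -eq 0 ]; then
        repo=/srv/backup/group-a    # hypothetical path
    else
        repo=/srv/backup/group-b    # hypothetical path
    fi
    obnam backup --repository "$repo" /home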
If you go your own route, look at the SilverStone DS380B server case. Pair it with a mini-ITX server board with as many SATA ports as you can get, maybe some ECC RAM and a server-grade Intel processor. I wish I knew the best software route to go. I've considered Amahi in the past and of course FreeNAS. I suspect even a couple of commercial packages could be sweet to have.
My Setup: openSUSE (BTRFS with their Snapper is awesome)
Two redundant drives.
CrashPlan for off site backup
I also run my IRC client (WeeChat) on the box with glowing-bear.org as the front end, and this is the BEST thing ever. I run my printer from the server and have personal RStudio and Jupyter Notebook servers on it. I love it.
If you haven't had a failed disk yet, consider yourself lucky! :-)
This way you get the expandability of RAID with the checksumming and snapshots of ZFS.
I haven't seen it done; it's just a theory, hence why I asked. I'm just not sure whether ZFS needs to see actual disks, or whether it can work on top of any block device, like an md RAID.
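ZFS is happy to take any block device, so the layering is mechanically possible. A minimal sketch with hypothetical device names: build an md RAID5, then create a single-device pool on it. The catch is that md hides the individual disks, so ZFS can detect corruption via its checksums but has no redundant copy of its own to heal user data from (unless you pay for one with copies=2):

    # assemble a RAID5 array from four disks
    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # single-vdev pool on top of the md device
    zpool create tank /dev/md0

    # optional: store two copies of each block so ZFS can self-heal,
    # at the cost of half the space
    zfs set copies=2 tank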