
> Other software RAID solutions like Linux MDADM lets you grow an existing RAID array with one disk at a time.

His issue isn't with ZFS, it's that most parity raid (raidz, raidz2, raid5, raid6, etc) doesn't support safely rebalancing an array to a different number of disks.

With mirrors, the things he describes aren't an issue, especially in a home server. You can start with one disk, mirror it when you're ready; then add additional vdevs of mirrored pairs extending your pool as necessary. Or upgrade two disks to grow a vdev.
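For reference, that grow-by-mirrors workflow looks roughly like this (device names are placeholders; a sketch from memory, verify against your platform's zpool man page before running anything):

```shell
# Start with a single-disk pool (no redundancy yet).
zpool create tank /dev/sda

# When you're ready, attach a second disk to turn that vdev into a mirror.
zpool attach tank /dev/sda /dev/sdb

# Extend the pool by adding another mirrored pair as a new vdev.
zpool add tank mirror /dev/sdc /dev/sdd

# To grow an existing mirror vdev, replace each member with a larger
# disk (zpool replace, one at a time), then let the vdev expand.
zpool set autoexpand=on tank
```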

http://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-...




The article you link to is disingenuous (IMHO): it talks about performance during a rebuild, but then says this: "When you replace and resilver a disk in a mirror vdev, your pool is again minimally impacted – you’re doing simple reads from the remaining member of the vdev, and simple writes to the new member of the vdev." If you're only doing "simple reads", then your storage array isn't serving any writes, in which case why are you concerned about performance at all?

Each "Home NAS" is going to have very different requirements (to each his own). If you are concerned about reliability and want to be protected against a dual-drive failure, you're better off with RAIDZ2 than with a bunch of striped mirrors: with RAIDZ2 any two drives can fail, whereas with an array of mirrors you still have the nagging chance that the second disk in your degraded vdev fails and you lose the entire pool.
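To make the dual-failure comparison concrete, a quick back-of-the-envelope for a hypothetical 6-disk pool (the pool size is an arbitrary assumption for illustration):

```shell
# 6-disk pool, two disks fail at roughly the same time.
# RAIDZ2: any 2 of the 6 can fail, so all combinations survive.
# 3 mirror pairs: the pool is lost only when both failures land
# in the same pair: 3 fatal combinations out of C(6,2) = 15.
total=$((6 * 5 / 2))
fatal=3
echo "striped mirrors: ${fatal}/${total} two-disk failures are fatal (20%)"
```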

Chances are that your home NAS does not have a 4-hour disk replacement SLA.

Also, blindly recommending mirrors for everything may not be the best option for a lot of home users who don't want to lose half of their raw capacity to redundancy.


Linux mdadm supports re-striping RAID5 arrays in place. I've done it; it's fun. If your raid doesn't support that, it's deficient.
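The reshape goes roughly like this (device names are placeholders; a sketch from memory, so double-check against the mdadm man page, and keep a backup, since a reshape runs for hours and a mid-reshape power cut is risky):

```shell
# Existing 3-disk RAID5 array at /dev/md0; add a fourth disk as a spare.
mdadm --add /dev/md0 /dev/sde

# Reshape in place from 3 to 4 devices; mdadm re-stripes all existing
# data across the new layout while the array stays online.
mdadm --grow /dev/md0 --raid-devices=4

# Finally grow the filesystem sitting on top, e.g.:
resize2fs /dev/md0
```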


The reason that ZFS has such trouble with this notion is that a lot of its internal architecture depends on "block pointers" (offsets of data into a given vdev) being immutable - this is, among other things, why CoW snapshots are so cheap. And attempting to rewrite all of these without having any gaps in consistent on-disk state is...challenging, to say the least.

Sun apparently had done most(?) of the work on BP rewrite (as it's referred to) internally, but performance sucked rather badly, and the work was never shipped by Sun/Oracle, nor did it appear in anything released before the F/OSS code sharing ceased. [1]

(Performance sucked, in particular, for similar reasons to why dedup performance has such a high penalty on ZFS - they end up storing a lookup table that gets hit for every block that gets written after you turn on dedup, and so if that table stops fitting in RAM, it's a bad time - let alone the issue of making sure the disk backing the DDT storage is sufficiently performant...)
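The scale of that lookup table (the DDT) is easy to underestimate. A rough back-of-the-envelope, assuming ~320 bytes per DDT entry (a commonly cited ballpark; the real per-entry cost varies by implementation) and a 128 KiB average block size:

```shell
# 10 TiB pool of unique data, 128 KiB average record size.
pool_bytes=$((10 * 1024 * 1024 * 1024 * 1024))
recordsize=$((128 * 1024))
entry_bytes=320   # assumed per-entry cost; varies by implementation

entries=$((pool_bytes / recordsize))
ddt_mib=$((entries * entry_bytes / 1024 / 1024))
echo "~${ddt_mib} MiB of RAM just to keep the DDT resident"
```

That works out to roughly 25 GiB of RAM for a 10 TiB pool, which is exactly why the table "stops fitting in RAM" so easily.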

[1] - http://www.listbox.com/member/archive/182180/2012/01/sort/th...


> If your raid doesn't support that, it's deficient.

Nowadays, RAID5 is deficient.


That's another topic entirely. I'm just saying if you're going to support it, support it correctly.



