
3-Way Disk Mirrors with ZFS on Linux - bhoey
http://www.bhoey.com/blog/?p=27
======
psophis
A much easier way to test ZFS configurations is to use files instead[0]:

    
    
      $ mkfile 128m /home/user/disk1
      $ mkfile 128m /home/user/disk2
      $ mkfile 128m /home/user/disk3
      $ zpool create tank mirror /home/user/disk1 /home/user/disk2 /home/user/disk3
    

[0]: https://flux.org.uk/tech/2007/03/zfs_tutorial_1.html
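
If you go this route, it's easy to verify the result and clean up afterwards
(a minimal sketch, assuming the pool name "tank" from the example above):

      $ zpool status tank    # the three files should appear as a single mirror vdev
      $ zpool destroy tank   # tear the throwaway pool down when finished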

~~~
bhoey
Author here, thanks for the heads up. It doesn't look like mkfile is included
with the Jessie base system I was using to test. For those wishing to use this
method, the xfsprogs package contains xfs_mkfile, which takes the same
arguments as mkfile in the parent's example.
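
For example (a sketch; assumes xfsprogs is installed and follows the parent's
Solaris-style syntax):

      $ sudo apt-get install xfsprogs
      $ xfs_mkfile 128m /home/user/disk1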

~~~
psophis
mkfile is standard on Solaris. On Debian, use fallocate:

    
    
      $ fallocate -l 128MB /home/user/disk1

~~~
NeutronBoy
Is there an advantage to using fallocate over, say, dd to write random data or
zeroes to a file?

~~~
asimilator
fallocate doesn't do any IO; it's much faster.

From the fallocate man page:

> For filesystems which support the fallocate system call, preallocation is
> done quickly by allocating blocks and marking them as uninitialized,
> requiring no IO to the data blocks. This is much faster than creating a file
> by filling it with zeros.
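
A quick way to see the difference (a rough sketch; timings depend on the
filesystem and disk):

      $ time fallocate -l 128M disk-a                     # allocates blocks, writes nothing
      $ time dd if=/dev/zero of=disk-b bs=1M count=128    # actually writes 128 MiB of zeros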

------
voidz
One thing to note: the '-o ashift' option is a per-vdev property according to
the manual. So, when replacing a vdev, be sure to specify this option again if
you do not use the default:

      $ zpool replace -o ashift=12 tank old new
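
One way to double-check the value afterwards (a sketch; zdb's exact output and
whether you need extra flags like -e or -U varies by platform):

      $ zdb -C tank | grep ashift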

~~~
jlgaddis
I realize this article is aimed at Linux, but on FreeBSD you can simply set

    
    
      vfs.zfs.min_auto_ashift=12
    

in _/etc/sysctl.conf_ to handle this automatically.
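
The same tunable can also be set on a running system (as root; it only affects
vdevs created after the change):

      # sysctl vfs.zfs.min_auto_ashift=12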

------
radiowave
Another nice property of ZFS mirroring is the read performance. Since data
integrity is verified by checksums rather than by reading the data back from
all disks and comparing it, the disks within a ZFS mirror are able to serve
read requests in parallel.

------
booi
Also, standard Linux RAID (mdadm) will allow 3 or more disks in RAID 1 and
will read from all of them for better performance. ZFS has a lot of overhead
compared to mdadm+LVM, but it depends on your specific use case.
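
For reference, creating a 3-way mdadm mirror looks something like this (a
sketch with placeholder device names):

      $ sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd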

