"If you're looking for raw, unbridled performance it's hard to argue against a properly-tuned pool of ZFS mirrors. RAID10 is the fastest per-disk conventional RAID topology in all metrics, and ZFS mirrors beat it resoundingly—sometimes by an order of magnitude—in every category tested, with the sole exception of 4KiB uncached reads."
However, the test rig is using rusty spindles and is generally very low-fi: there's no hardware RAID controller, and the only drives are SAS 6Gb/s 7200rpm 12TB Seawolfs. I'd hardly call that a relevant test of ZFS vs. every other RAID setup. A comparison rig with a hardware RAID controller and some tiered storage with a DRAM cache, etc., would be a little more fair.
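For what it's worth, the 4KiB uncached-read case the article singles out is exactly the kind of thing where methodology matters. A minimal fio job along these lines (device path and parameters are illustrative, not the article's actual setup) is roughly what you'd need to keep the page cache out of the picture:

```ini
; illustrative fio job: 4KiB random reads, page cache bypassed
[global]
ioengine=libaio
direct=1          ; O_DIRECT, so reads aren't served from the page cache
bs=4k
time_based=1
runtime=60

[randread-4k]
rw=randread
filename=/dev/sdX ; hypothetical target device, destructive on raw devices
iodepth=16
numjobs=4
```

Even then, queue depth, job count, and whether you hit a raw device or a filesystem all move the numbers substantially, which is part of why cross-topology comparisons are so easy to get wrong.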
The other big issue, IMO: in every scenario one would consider high-performance storage (cloud architecture, HPC, specialized write-heavy DB apps, to name a few), the app stack is probably heavily invested in some distro of Linux, which has no native in-tree ZFS drivers because the CDDL license Oracle inherited from Sun is incompatible with the GPL.
FreeBSD is the next obvious choice, since its ZFS implementation supposedly kicks ass, but there are significant development and architecture issues there.
Doing storage (or any) performance metrics is something of a black art, and this reads like a cute dalliance with some SOHO stack you'd find at your mom's local accounting firm.
Still, I love ZFS and the BSDs but I think this article... well, sucks.
Wasn't FreeBSD rebasing its ZFS support onto ZFS on Linux?