
Why did Apple drop ZFS? - chaostheory
http://storagemojo.com/2009/08/31/why-did-apple-drop-zfs/
======
iigs
Perhaps the reason is that ZFS is actually not a big win for Apple's core
product market(1): laptops and single-user workstations (and even that is a
grudging concession). From the original article:

 _Manage storage, not disks. You can put all your disks in a pool and specify
the redundancy level. ZFS takes care of the rest._

Essentially all of Apple's products are single disk. There is virtually no win
there.

 _No more silent data corruption._

The checksumming is a nice feature, and it's desperately needed as disk
capacity continues to explode while bit-error rates stay the same; in
practice, though, the biggest risk is probably bugs in the filesystem code
itself:

http://www.techcrunch.com/2008/01/15/joyent-suffers-major-downtime-due-to-zfs-bug/

 _Easy snapshots._

This is slick and gives the UNIX guy in me happy dreams but Time Machine
really stole the thunder on this for regular users. It's just easier to
explain to an end user "go to your backup and drag the files back" instead of
the way that snapshots work in ZFS.
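
For reference, the ZFS restore workflow being contrasted here looks roughly
like this (dataset and snapshot names are hypothetical):

```shell
# List the snapshots that exist for a dataset
zfs list -t snapshot -r tank/home

# Snapshots are exposed read-only under a hidden .zfs directory, so
# recovering one file is a plain copy out of a point-in-time view:
cp /tank/home/.zfs/snapshot/monday/report.doc ~/report.doc

# Or roll the whole dataset back to the snapshot (this discards
# everything written since the snapshot was taken):
zfs rollback tank/home@monday
```

Powerful, but you can see why "drag the files back" is the easier pitch.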

 _High performance software RAID built-in._

Again, single-disk devices can't benefit from this.

 _Transparent compression on the fly._

I challenge anyone to come up with a big win for this in the common Mac case.
Most common and data-intensive file formats (media, archives, office
documents) are already compressed, so the filesystem has little left to
squeeze out.
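
That said, compression in ZFS is just a per-dataset property, and the
filesystem reports how much it actually saves; a quick sketch of checking
that claim (pool and dataset names are hypothetical):

```shell
# Compression is enabled per dataset:
zfs set compression=on tank/home

# ZFS reports the achieved ratio; for already-compressed media
# (JPEG, H.264, zip archives) it stays near 1.00x:
zfs get compression,compressratio tank/home
```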

(1) Considering Apple's server products, Xsan is actually very slick, and in a
lot of cases the flexibility of SAN storage outweighs the benefits you would
get from ZFS's multi-device integration. The JBOD model demonstrated by
Thumper + ZFS can be incredibly inexpensive but doesn't fit many workloads in
practice, including (I would expect) the video-processing business, which is
probably Apple's biggest server market.

~~~
spydez
_Essentially all of Apple's products are single disk. There is virtually no
win there._

Well, if you're data-paranoid, like me, you can set your ZFS filesystem to
keep two copies of every block on different parts of the disk. Then if your
single disk has a sector or two go kaput, ZFS notices the checksum mismatch
on read and transparently serves up the other copy.
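
A sketch of that setup, assuming a hypothetical pool named tank with a
dataset tank/home:

```shell
# Store two copies of every block in this dataset, even on one disk.
# Note this only protects data written after the property is set:
zfs set copies=2 tank/home

# Periodically walk the pool and verify every checksum, repairing
# from the duplicate copy wherever a block fails verification:
zpool scrub tank
zpool status -v tank
```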

I've also heard of other improvements for single-disk, and SSD specifically,
but alas, I can't find citations right now.

 _Time Machine really stole the thunder on this for regular users._

Time Machine's "snapshots" suck. Entirely new file every time a bit changes?
Really? My 500 gig external disk just filled up. And it's only backing up a
250 GB drive. And I only started using Time Machine a few months ago. Virtual
machines & TrueCrypt volumes that change often are a royal pain with TM. ZFS,
on the other hand, can store 4 months' worth of backups of a 1TB RAID array on
my fileserver in way south of a gig.

Also, ZFS send and ZFS recv are awesome ways to stream a filesystem snapshot
over a network to, say, a Time Capsule. If you were so inclined.
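
A sketch of that replication flow, with hypothetical pool, snapshot, and
host names:

```shell
# Take a point-in-time snapshot, then replicate it over SSH:
zfs snapshot tank/home@2009-08-31
zfs send tank/home@2009-08-31 | ssh backuphost zfs recv backup/home

# Later runs only need to ship the delta between two snapshots:
zfs snapshot tank/home@2009-09-07
zfs send -i tank/home@2009-08-31 tank/home@2009-09-07 \
  | ssh backuphost zfs recv backup/home
```

The incremental send is what makes this cheap enough to run on a schedule.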

~~~
voidmain
Actually, it's worse than that. Time Machine's "snapshots" aren't snapshots -
they don't represent a point in time at all. A Time Machine backup can
complete and nevertheless be unrestorable. On filesystems that support
snapshots - including Windows', via Volume Shadow Copy - you can actually do
a reliable backup, but not on OS X.

------
st3fan
Apple just announced a press event on 9/9/9, which is 6/6/6 upside down. This
is an important number in UNIX filesystem design. I have a feeling something
ZFS-related will happen on that date.

~~~
mikedouglas
Don't give the analysts any ideas.

------
philwelch
Simplest explanation: couldn't be finished in time to a sufficient level of
quality, therefore it was deferred for a later release.

Second simplest explanation: licensing issues.

Both of these explanations are very very boring, and I haven't seen anything
to suggest that any other plausible explanation is very interesting.

~~~
rbanffy
The simplest explanation doesn't hold much water. I've never heard of anyone
losing data on ZFS, but there are whole businesses built on the certainty that
people will lose data on HFS+. It could be that everyone using ZFS today knows
very well what they are doing, and the same can't be said of the average Mac
user, but still, ZFS seems rock solid.

~~~
philwelch
ZFS the file system, probably. Putting a reliable and fast implementation of
ZFS into the OS X codebase may be a different story, as is working out a good
UI for migrating users. ZFS is pretty complex; I wouldn't shrug off the
challenge of implementing it well.

------
nathanb
Well, the BSD port of ZFS has a reputation for being less reliable and more
difficult to administer due to missing features and tool support, so possibly
Apple ran into similar problems. The NetApp lawsuit can't help, either.

~~~
ajross
Really? I hadn't heard that. What are the problems (especially reliability
ones) with ZFS outside Solaris? I always assumed it was more or less a
straight port. ZFS doing its own RAID implies to me that it sits relatively
tightly on top of a simple block device layer.

The NetApp issue, though, does seem plausible to me. At the very least, you'd
want to see some level of indemnity from either NetApp or Oracle before
shipping this on millions of devices.

~~~
iigs
For a while my FreeBSD 7 ZFS filesystem would come up empty every time I
rebooted -- as if it had been reformatted by the boot scripts. That was a
pretty big bummer and put me off using it seriously for a few years. The
Solaris ZFS bug linked above was enough FUD for me to give up on it entirely.

------
jsz0
Apple tends to stay away from half-baked ideas. It probably just wasn't ready,
and the microscopic market share of OS X Server would have made a major shift
in priorities hard to justify. They never planned to bring ZFS to the OS X
client in 10.6, so this doesn't really affect many people. I would expect an
updated SL-compatible ZFS read/write beta on ADC at some point before 10.7.

