
FreeBSD and ZFS - liuw
http://freebsdfoundation.blogspot.com/2016/02/freebsd-and-zfs.html
======
lomnakkus
Yes, happily the ZFS license (CDDL) and the *BSD licenses seem to be
compatible. Yay for the BSDs of the world!

Unfortunately, most of the OSS/Free Software world is running a Linux kernel
whose license (GPLv2) _isn't_ compatible with the CDDL according to current
thinking by most of the people who know about these things. End of story,
AFAICT. (I mean there's a theoretical possibility of relicensing the Linux
kernel, but given the contributor profile it's probably impossible in any
practical sense. AFAICT it would be _far_ more likely/practical for ZFS to be
re-licensed under a GPL-compatible license if only the corporate overlords
were so inclined.)

EDIT: Just a minor edit: I didn't mean that it's "unfortunate" that most of
the OSS/Free Software world is running Linux. It works pretty damn well. My
"unfortunately" remark was merely about the fact that licenses may be (or are)
incompatible.

~~~
noinsight
> it would be far more likely/practical for ZFS to be re-licensed under a GPL-
> compatible license if only the corporate overlords were so inclined.

I feel like that ship has sailed since there's Btrfs for Linux too - which
also originated at Oracle.

I've always been curious why they kept working on Btrfs after acquiring Sun
since they could've just relicensed ZFS to GPL/whatever.

Then again, I've also always been curious why people still keep clamoring for
ZFS on Linux when Btrfs exists now, which, as far as I understand, is
comparable to ZFS and a fresh implementation (for what that's worth).

~~~
usefulcat
Bear in mind that many of those who are interested in ZFS are interested in it
specifically because they care a lot about _not losing data_.

With that in mind, let's see what the btrfs wiki has to say about the
stability of btrfs:

"Is btrfs stable? Short answer: Maybe."

[https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable....](https://btrfs.wiki.kernel.org/index.php/FAQ#Is_btrfs_stable.3F)

Maybe it's just me, but I'm more than a little concerned by the fact that
btrfs has been in development for over 8 years and still doesn't seem to be
considered safe to use for storing anything I really care about.

------
random_upvoter
A while back I was considering FreeNAS, and in the documentation/forums I read
dire warnings about running ZFS with less than 8 gigabytes of RAM. Apparently,
this could actually put your data at risk.

I was a bit nonplussed by this. The least I would expect from a file system is
that it degrades gracefully. I wonder if anyone knows more about this?

~~~
the_ancient
I would have to see proof that it was low RAM that caused the data loss.

The recommendation for higher amounts of memory comes from the way ZFS caching
works. There are two caches, the ARC and the L2ARC. The L2ARC has to be
defined on an SSD and is optional. The ARC will always exist, and ZFS will use
every bit of free memory you have (unless you change the configuration to
limit it) to build the cache, simply because RAM is faster than disk.

So the only scenario I could conceive of that would result in data loss is if
you are writing files to the ARC faster than ZFS can commit them to disk, but
that seems unlikely, and ZFS should not allow this anyway.
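For reference, limiting the ARC as mentioned above is a one-line tunable. A
minimal sketch for FreeBSD, where the knob is `vfs.zfs.arc_max` (in bytes);
the 4 GiB value is purely illustrative, not a recommendation:

```shell
# /boot/loader.conf -- cap the ZFS ARC at 4 GiB (example value, in bytes)
vfs.zfs.arc_max="4294967296"
```

On ZFS-on-Linux the equivalent module parameter is `zfs_arc_max`, e.g.
`options zfs zfs_arc_max=4294967296` in `/etc/modprobe.d/zfs.conf`.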

For enterprise workloads the recommended amount of memory is 1GB per TB of
storage. That is for best performance, however: my home server is 35TB and ran
for about 2 years with 8GB, and I recently upgraded the system to a new
processor, mainboard, and 16GB of memory; I have had no issues with either
setup. I have about 2-3 movies streaming from it, with about 3-4
devices/computers connected for network file storage.

The more RAM you give ZFS the better performance you get (to a point) but I
believe this issue is massively over talked about and hyped.
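As a back-of-the-envelope check of that rule of thumb against the numbers
above (the helper name here is made up, purely for illustration):

```python
# Rule-of-thumb check: "1 GB of RAM per TB of storage" for ZFS.
# recommended_ram_gb is a hypothetical helper, not a real ZFS tool.
def recommended_ram_gb(storage_tb: int, gb_per_tb: int = 1) -> int:
    """RAM the enterprise rule of thumb would suggest for a pool."""
    return storage_tb * gb_per_tb

# The 35 TB home server above: the rule asks for 35 GB of RAM,
# yet it reportedly ran fine for years on 8 GB, then 16 GB.
print(recommended_ram_gb(35))  # 35
```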

~~~
leonroy
> The more RAM you give ZFS the better performance you get (to a point) but I
> believe this issue is massively over talked about and hyped.

Definitely not hyped in the enterprise space, as you said - I think some home
users see that warning of 1GB per 1TB though and get put off.

In my experience, we were running multiple vSphere NFS volumes on several
large 16-bay FreeNAS systems. We started with 16x 1TB disks and 16GB of RAM.
It wasn't until we installed 32GB of RAM that performance became acceptable.
Ideally we should really have put in 64GB of RAM, but alas, Intel's damn Ivy
Bridge E3 range maxed out at 32GB.

I should stress we were only running 20 virtual machines in total so nothing
massively onerous, but that extra RAM made all the difference.

~~~
the_ancient
>We started with 16x 1TB disks and 16GB of RAM.

What type of drives, and did you have an L2ARC and a ZIL?

There are a lot of reasons why you may have needed the additional RAM. If you
were using off-the-shelf 7200 RPM (or worse, 5400 RPM) NAS drives then I am
not shocked; those drives are for storage, not for VMs. You need at least 10K
RPM drives for VM storage.
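For anyone wondering what adding those looks like, a minimal sketch using real
`zpool` syntax; the pool name `tank` and the device names are placeholders:

```shell
# Add a dedicated ZIL (SLOG) device and an L2ARC cache device to an
# existing pool. "tank", ada2, and ada3 are placeholder names.
zpool add tank log ada2      # SSD for the separate intent log (sync writes)
zpool add tank cache ada3    # SSD for the L2ARC read cache
zpool status tank            # confirm the new log and cache vdevs
```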

> Definitely not hyped in the enterprise space, as you said - I think some
> home users see that warning of 1GB per 1TB though and get put off.

It still kinda is, but 90% of the questions I see posted in various places are
about Home and Test Labs, not Enterprise Deployments.

You do not need 32GB of RAM for a home file server that is likely serving
fewer than 10 clients.

------
ksec
Offtopic, but the next post on the blog is about FreeBSD and RISC-V. What I
find interesting is that they both come from UC Berkeley.

------
cm3
FWIW HammerFS dedup doesn't require much RAM.

------
web007
This feels very self-serving.

There's no need for BSD to comment on their "longstanding relationship" with
ZFS if they're only saying "we have it and it works within our framework"
versus adding something constructive to the dialog on its licensing in Linux.
It's awesome that the BSD and ZFS licenses are compatible and that it works
well. It just has nothing to do with the headlines they are addressing.

~~~
Sanddancer
No, the FreeBSD Foundation is showing that different entities, with different
licenses, can cooperate. As the SFLC mentioned in their original posting,
combining ZFS with the kernel feels like little more than damnum absque
injuria. In order for any judgment to be laid, some sort of injured party has
to be shown. Who exactly is injured by integrating a kernel module of
primarily CDDL code into the Linux kernel, which itself already includes many
binary-only files whose licensing is questionable?

~~~
pilif
_> Who exactly is injured by integrating a kernel module of primarily CDDL
code into the Linux kernel_

Oracle, as the sole copyright holder of ZFS, might be, because they lose
potential Solaris sales when ZFS is distributed with Linux this way. At least
they could argue that before some court.

~~~
cyphar
Oracle was the original copyright holder before the OpenSolaris fiasco, after
which Illumos was forked.

~~~
pilif
There are still a lot of lines in a lot of files that were written by Sun
employees and are now owned by Oracle. So in the same way a kernel developer
has potential standing to sue Canonical over their potential GPL violation,
Oracle has the same potential standing to sue Canonical over their potential
CDDL violation.

It's just that Oracle is ever so much more likely to sue and actually win
(see: the Google APIs-are-copyrightable case).

