Hacker News | mphalan's comments

Apple | C/C++ Engineer | Full-Time | ONSITE (Prague, CZ) | apple.com

Our Always On Platform (AOP) team in CoreOS is looking for a new member. We manage the runtime environment of Apple's AOP cores and associated sensors on Apple's SoCs. Our ideal candidate would have strong systems experience and good knowledge of C/C++. The job is on-site in our Prague office (Apple offers a hybrid work model). The official job posting can be found here: https://jobs.apple.com/cs-cz/details/200588940/


In England (and Ireland), one tends to say “the 4th of July”. “July 4th” is an Americanism.


Both "the 4th of July" and "July 4th" are common in American English.


Yes, and only "the 4th of July" is common in UK English.


Only because that's a holiday. The 23rd of March sounds a bit formal here compared to March 23rd, and sounds like it refers to an event or holiday. Wedding invitations might sometimes use that style, for example.


“Independence Day” is the relevant Americanism.


I've not followed closely, but have you got any references for that ("the more advanced filesystem")? From what I can see, Oracle's ZFS still has some features that OpenZFS lacks (on-disk encryption support being one of the most obvious), and there's been lots of development in the last few years. https://blogs.oracle.com/zfs/entry/welcome_to_oracle_solaris...

Disclaimer: I work for Oracle on Solaris (but not ZFS).


You can see the full list of (larger) improvements since the Oracle ZFS/OpenZFS split here: http://open-zfs.org/wiki/Features

But basically: tons of improvements to the L2ARC and to volume management (async destroy is a godsend, for example).
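As a concrete example of async destroy: the command returns almost immediately and space is reclaimed in the background, trackable via the pool's `freeing` property (pool and dataset names here are placeholders):

```shell
zfs destroy -r tank/olddata   # returns quickly; deletion continues asynchronously
zpool get freeing tank        # bytes still pending background reclaim
```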

Lack of encryption and clustering support is a bit of a bummer, but both are easily worked around. The performance improvements in the OpenZFS branch, however, are very dramatic and well worth running it over the Oracle branch where possible.

There is also all the inflight stuff here on the main page: http://open-zfs.org/wiki/Main_Page


"You can see the full list of (larger) improvements since the Oracle ZFS/OpenZFS split here: http://open-zfs.org/wiki/Features"

I searched that page (and this HN discussion) for the word "defrag" and got nothing. That's a problem.

Does the Oracle version of ZFS have defrag, or have it in the pipeline?

It is not reasonable to expect folks to just never, ever, exceed 85% pool utilization. Further, most of those folks don't realize that there's no coming back from it - exceed 85% even for a day and you have a performance penalty forever.

"Oh it's no problem, just export the pool and recreate it"

Sure ... let's buy another $20k of equipment and duplicate a 250 TB pool just because we had a usage overrun (which wasn't really a usage overrun at all) one weekend.

Finally: think of the economics of this unofficial limit on your pools ... you probably aren't running a pure stripe, right? Maybe you're running raidz3? So you already gave up three drives for data protection ... did your cost accounting also subtract another 15-20% from available storage space?
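To put rough numbers on that accounting (a hypothetical 12-drive pool of 10 TB disks in a single raidz3 vdev; all figures are illustrative):

```shell
drives=12; size_tb=10                       # hypothetical hardware
raw=$((drives * size_tb))                   # 120 TB purchased
after_parity=$(( (drives - 3) * size_tb ))  # 90 TB after raidz3 parity
usable=$((after_parity * 85 / 100))         # 76 TB under the informal 85% ceiling
echo "raw=${raw}TB parity-adjusted=${after_parity}TB usable=${usable}TB"
echo "effective yield: $((usable * 100 / raw))% of purchased capacity"
```

Under these assumptions only about 63% of the capacity you paid for is safely usable.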

ZFS needs defrag.
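Since no defrag exists, the practical recourse is monitoring. A sketch, assuming a reasonably recent OpenZFS (`tank` is a placeholder pool name); note that the FRAG column reports metaslab free-space fragmentation, not file fragmentation, but it climbs as pools fill:

```shell
# Free-space fragmentation and utilization per pool:
zpool list -o name,size,alloc,free,frag,cap
# The same figure as a pool property:
zpool get fragmentation tank
```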


ZFS defragmentation is a very difficult topic, mainly because it's very hard to do transactionally.

A lot of what makes ZFS good is its CoW implementation, and that implementation is simplified by the invariant that a block, once written, will never change. The main missing feature that would allow defragmentation is referred to as "block pointer rewrite" (BPR). Effectively, it would let you copy a block, apply any other transformation to it, and then transactionally update all pointers to that block. This is very hard once you factor in all the things that could possibly point to a block, including many snapshots, clones, etc.

So the long and the short of it is the situation isn't great. Will we ever see BPR? Maybe. Is it still a really good filesystem even with this limitation? Definitely.


Just how bad is this "permanent" performance hit? Do you have numbers?

I've seen pools that have climbed above 90%, and maybe they've suffered permanent speed degradation, but they still run fast enough to mostly saturate a 10GbE connection, so... not a problem for me?


We (rsync.net) do not have any numbers, but we know what happened when we broke 85% on a zpool that contained a single vdev ... things went to shit. Luckily, we expanded that zpool with two other vdevs and it sort of balanced things out and rescued it ... meaning, enough IO happens on the other two vdevs to make the zpool viable.

However, the effects scared us enough that we will never let it happen again ... and that is a fairly severe economic and provisioning penalty (i.e., chop off 15% of your inventory on top of the three drives per vdev you already lost to parity).
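For reference, the kind of expansion described above looks roughly like this (device names and vdev widths are made up):

```shell
# Add two more raidz3 vdevs to an existing pool; new writes are
# spread across all vdevs, easing pressure on the nearly-full one.
zpool add tank \
  raidz3 da12 da13 da14 da15 da16 da17 \
  raidz3 da18 da19 da20 da21 da22 da23
zpool list -v tank    # per-vdev size/alloc to confirm the effect
```

Note that existing blocks stay where they are; only new allocations favor the emptier vdevs, which matches the "sort of balanced things out" behavior described above.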


Oracle's ZFS encryption is susceptible to watermarking attacks: http://lists.freebsd.org/pipermail/freebsd-hackers/2013-Sept...

The "more advanced" claim is certainly disputable but OpenZFS has a larger and rapidly growing user base. The ZFSonLinux and OpenZFSonOSX ports in particular are bringing loads of new users to the table, and that means more testing, more contributors, and in the long run more features. (I've also become an occasional ZoL contributor that way.)


>on disk encryption

Given you could just run (open)ZFS on top of LUKS, it's probably the better security position to run an audited, established encryption product than to consider a layer thrown on top of ZFS a feature.
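A minimal sketch of that layering (dm-crypt/LUKS beneath the pool; `/dev/sdb` and `tank` are placeholders):

```shell
cryptsetup luksFormat /dev/sdb          # one-time: set up LUKS on the disk
cryptsetup open /dev/sdb crypt0         # unlock -> /dev/mapper/crypt0
zpool create tank /dev/mapper/crypt0    # ZFS sits on the decrypted mapping
```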


There are major benefits to be had from moving the encryption layer on top of the volume management/storage EDAC layers which ZFS provides: in particular, it'd be nice to be able to scrub a locked dataset. I think (but haven't seen this firsthand) that Oracle's implementation offers that benefit.


You can still scrub a dataset with GELI (https://www.freebsd.org/cgi/man.cgi?geli%288%29)

GELI creates a virtual block device that works great with ZFS; you get it all: self-healing, checksumming, etc.


It looks like GELI goes below zfs just like dm-crypt. That doesn't allow you to scrub a locked dataset (one where the key is not in memory, possibly unknown). You could use zvols, of course, but that loses some (but not all) of the full-stack zfs benefits.


No, you create virtual block devices that need to be "unlocked" before you can start using ZFS. If you need to scrub the dataset, you unlock all the virtual block devices, start ZFS, and then run the scrub command.

The whole point is to run ZFS on top of virtual block devices which are encrypted with GELI and "unlocked".
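On FreeBSD, that workflow would look roughly like this (assuming a keyed GELI provider on `ada0` and a pool named `tank`; device and key paths are placeholders):

```shell
geli attach -k /root/ada0.key /dev/ada0   # unlock -> /dev/ada0.eli appears
zpool import tank                         # pool was created on the .eli device
zpool scrub tank                          # scrub sees the decrypted blocks
zpool status tank                         # watch scrub progress
```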


This is kind of a poor defence. ZFS built-in encryption would be a great feature, but since the source got leaked, implementing it is a bit of a legal minefield.


I thought FreeNAS uses OpenZFS, and that definitely gives me encryption. Am I missing something?

