- you can have different keys for all your "partitions" without having to pre-allocate them
- you can send encrypted incremental snapshots to a host that doesn't have the key (for backups for instance)
- it's using authenticated encryption (as opposed to LUKS with XTS)
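For what it's worth, all three points map onto ZoL 0.8 commands. A minimal sketch, assuming a hypothetical pool named `tank` and a backup host `backuphost`:

```sh
# Per-dataset keys, created on demand (no pre-allocation):
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/projects
zfs create -o encryption=aes-256-gcm -o keyformat=passphrase tank/scratch

# AES-GCM is an authenticated (AEAD) mode, unlike the XTS mode LUKS defaults to.

# Send an incremental snapshot in raw, still-encrypted form, so the
# receiving host never needs (or sees) the key:
zfs send --raw -i tank/projects@mon tank/projects@tue \
    | ssh backuphost zfs receive backuppool/projects
```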
> you can have different keys for all your "partitions" without having to pre-allocate them
I imagine it's possible to create multiple encrypted directory hierarchies with different keys as they are needed.
> you can send encrypted incremental snapshots to a host that doesn't have the key (for backups for instance)
You can just rsync the encrypted directory hierarchy.
> it's using authenticated encryption (as opposed to LUKS with XTS)
From my quick search, do I understand correctly that "authenticated encryption" basically means applying a keyed hash (MAC) or signature to the ciphertext or plaintext to make sure it wasn't altered?
Possibly, but ZoL/ZFS offers an integrated solution without any setup, etc. That's very appealing for most people.
As an admin I really don't want to have to write a big pile of scripts to do what I could trivially do with ZFS by just setting some zpool or zfs options.
EDIT: I don't know why, but people who haven't "lived" it keep underestimating just how much the convenience of "integration" means. This is not a slight, I just think that it's one of those things that you have to experience to appreciate just how much of a difference it makes.
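To make "just setting some zpool or zfs options" concrete, these are the kinds of one-liners that replace a pile of scripts elsewhere (pool and dataset names hypothetical):

```sh
zfs set compression=lz4 tank/data     # transparent compression
zfs snapshot tank/data@nightly        # instant, atomic snapshot
zpool set autoreplace=on tank         # rebuild onto hot spares automatically
zpool scrub tank                      # verify every checksum in the pool
```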
Storage is unforgiving and you absolutely want the simplest thing to set up - for data safety.
The problem is that when you go to do recovery, maybe years after the thing was set up, it turns out the software has moved on and works differently. Meanwhile your storage media are in a state where at least one has failed and maybe more are dodgy. If you have set something up with a few different interacting layers, it is super easy to do the wrong thing and lose data.
I’ve lost data to “clever” storage setups before and now I stick to the “happy path” (only run the most common, well tested configurations).
I don't have that much experience with ZFS/ZoL's (truly) edge cases, but it's very reassuring that they had the right mindset when designing the tools.
Isn't this possible already by encrypting the incremental snapshots with a key of your choice, and then just storing that on the remote host?
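It is; a sketch of that approach, with a hypothetical pool name and a symmetric GPG passphrase:

```sh
# Encrypt the incremental stream before it leaves the host; the remote
# side only ever stores ciphertext.
zfs send -i tank/data@snap1 tank/data@snap2 \
    | gpg --symmetric --cipher-algo AES256 \
    | ssh backuphost 'cat > /backups/data-snap2.zfs.gpg'

# Restoring needs the passphrase, wherever the key lives:
gpg --decrypt /backups/data-snap2.zfs.gpg | zfs receive tank/data
```

The difference with native encryption's raw sends is that here the plaintext stream exists on the sending host, and you have to receive the whole stream before you can read anything back.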
Has anyone tried it out more recently? Any difference?
Along the same line, there's also work going on to make L2ARC persistent across reboots (which would be wonderful for starting up VMs hosted on zvols).
SSDs don't rely on TRIM, so you don't strictly need it, but it still has an impact on the performance and longevity of the SSD. Depending on the usage pattern, the effect can be large or nonexistent.
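For reference, TRIM support is one of the things the 0.8 release this thread is about adds to ZoL; assuming a pool named `tank`:

```sh
zpool trim tank              # one-off manual TRIM of eligible vdevs
zpool set autotrim=on tank   # issue TRIMs continuously as space is freed
```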
> 5.0 removes the ability from non-GPL modules to use the FPU or SIMD instructions
Alternatively, we may expect distros like Ubuntu to simply revert that change so as not to break their own packages.
> My tolerance for ZFS is pretty non-existant.
Greg seems like a really pragmatic, nice and agreeable chap. Bet he’s a big hit at parties.
If they want to earn the ire of the Linux kernel developers, sure. But they won't do that, especially if they employ kernel developers. That's a scummy thing to do.
It could have a neat name too, like “gplbridge” or whatever (all associations with license trolls only partially intended). That way they wouldn’t have to remove optimized code for Linux 5+ users.
Dirty, utterly pointless, but it would get the job done, right?
Personally I think one has a better time arguing that kernel modules simply interface with the kernel rather than creating a derivative work, thus making any module independent of the license of the kernel. The case law around "interfacing" is conflicting, but at least a few cases have gone in favor of the accused.
(IIRC, the original title of this submission had " with Linux 5.0 support" at the end, indicating why this release is newsworthy, but I'm not seeing it anymore.)
Prior to that we used proprietary RAID hardware that, while fast, did not have checksumming and was frankly a nightmare to manage.
If I had it to do over, I'd choose ZFS again.
ZFS works well for basic data storage and snapshots, but the deduplication has a ton of overhead and is extremely slow on hard drives. And there is no way to make a plain copy-on-write copy of a file without using the full deduplication mode.
I haven't tried either on Windows directly; I've accessed them as a file share via a VM.
ZFS is more stable, but has the downsides of not allowing expansion of volumes and of not being upstreamed.
Btrfs has the option to expand volumes and is native to Linux, but has a whiff of instability, most notably the persisting RAID 5 write hole.
I haven't found a good choice yet. Though I think I'd go btrfs at the moment.
What it doesn’t allow is for RAID sets to be expanded by replacing their disks with larger ones one at a time. This can, however, be done with mirrored sets: add new larger disks to the mirror set, wait for resilvering, remove the older smaller disks, done.
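The mirror procedure above, sketched with hypothetical device and pool names:

```sh
zpool set autoexpand=on tank   # let the pool grow once all disks are bigger
zpool attach tank sda sdc      # new larger disk sdc joins sda's mirror
zpool attach tank sdb sdd
# wait for "zpool status tank" to report resilvering complete, then:
zpool detach tank sda          # drop the old smaller disks
zpool detach tank sdb
```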
0.8 will also bring the ability to remove vdevs, and so effectively shrink a pool, through a similar (but slightly different) process.
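A sketch of that 0.8 removal, assuming a pool `tank` with a top-level vdev named `mirror-1` (removal works for plain and mirror vdevs, not raidz):

```sh
zpool remove tank mirror-1   # evacuate data off the vdev, then drop it
zpool status tank            # shows the removal/remapping progress
```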
(Unless this is a change that will be brought about in 0.8, I haven’t followed that closely).
No longer true. I’ve expanded volumes on Ubuntu with ZoL.
I imagine either should work fine over Samba (if that's what you have in mind for Windows). I know there's some work being done to get ZFS working on Windows, and I don't believe there's anything of the sort for btrfs.
They have their own RHEL derivative, would it make life easier for them as well (having an external community committed to keeping ZFS in the kernel tree)?
I really hate to say this, but I wish IBM had bought Sun instead of Oracle. Even better I really wish Sun had managed to pivot and stay an independent company.
Oracle will now only support the current LTS versions of the Oracle JDK (as opposed to OpenJDK), for something like two years, basically requiring people to migrate to LTS releases relatively quickly.
If you can’t (larger orgs and others with requirements that delay those moves), then you’ll need to pay Oracle for a license.
Java (the Oracle/Sun JDK) has never really been free. Specifically, while I never worked with it, the embedded JDK has always required a purchased license for distribution. OpenJDK, on the other hand, is a free, OSS-licensed distribution of Java.
So for them, it's moving from a freely available JDK supported for many years, to a JDK that's changing every 6 months.
And the dust is far from settling on which alternative OpenJDK build is stable, reliable and free, long term.
I know that companies should pay for it, but things don't really work like that, see the OpenSSL fiasco.
On the contrary, they've open sourced the entire JDK for the first time ever, and are now offering the same JDK either completely free or with paid support, rather than the mixed free/commercial JDK as before. (I work at Oracle on OpenJDK, but speak only for myself)
Now, if OpenJDK is kept at parity with the Oracle JDK, that could work.
Didn't VLC do this? It's not impossible.
Oracle Linux uses Btrfs, and Oracle as a company puts more resources into Btrfs than they do ZFS.
They've been committed to Btrfs for almost a decade now, and they're continuing to improve that filesystem and make it solid for their use-cases and their customers.
Basically, when serving NFS from a Linux-ZFS the client can read/write/seek fine, but on-demand paging sometimes fails (the same way it is supposed to fail when the file has been deleted server-side, but without any activity on the server).
So, for example, if your /usr is on NFS, emacs startup results in a bus error. The error rate increases with server uptime.
I tested ext2fs with the same Linux kernel on the server and the same clients: the problem goes away. If I use roughly the same ZFS version serving the same files from FreeBSD-current, the error also goes away.
I cannot be the only one seeing this?
I'm guessing from the way you're spinning things that you're not OK with this switch? Personally I don't care where FreeBSD pulls their ZFS sources from as long as it's stable, and I have full confidence in them that it will be.
I absolutely loooove ZoL and run it on all cluster nodes. Proxmox works beautifully with it. I really hope ZFS for Linux and the kernel can get along, because ZoL greatly enriches Linux usability in storage needs.
And secondly, version numbers are often a meaningless gauge of when something is production ready. In any production environment you'd need to thoroughly test any new technology (or even point releases of existing tech) before deploying to prod, because regression bugs and undocumented behaviour are a real thing.
As an aside, I've been running ZFS since shortly after its release and have run it across 4 distinct operating systems in that time. It's honestly been one of the best pieces of engineering I've used, to the extent that ZFS has saved me from total data loss (ignoring, for the moment, backups) on at least two separate occasions.
I use ZFS.
I am mocking open source developers' terror of calling something a 1.0 release.
While they say release numbers aren't an indication of quality, their actions tell us that version numbers matter a lot, and that's why they are scared of calling something a 1.0 release.
Given that ZoL doesn't support reflink (ficlone/ficlonerange; see also https://github.com/zfsonlinux/zfs/issues/405), and that ZFS is a CoW filesystem, that's a pretty good reason not to have a 1.0 release.
In other words, there is no way to use a key feature without enabling deduplication (which is exactly the feature that requires heaps of RAM).
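Concretely, this is what the missing reflink support looks like on a ZFS mount (hypothetical file names); on btrfs or XFS the same command makes an instant CoW clone:

```sh
cp --reflink=always bigfile.img bigfile-clone.img
# on ZFS this fails with "Operation not supported" instead of cloning
```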
This "scared" and "terror" talk is a bit unnecessary, as the only one interested in the numbers in a version number is someone selling it.
Linus is mocking this very fact by arbitrarily raising the kernel version.
Wow time flies! Feels like only yesterday I was reading about the inception of that project.