
A detailed look at Ubuntu’s new experimental ZFS installer - c89X
https://arstechnica.com/information-technology/2019/10/a-detailed-look-at-ubuntus-new-experimental-zfs-installer/
======
e12e
Somewhat related, I recently watched:

"By the numbers: ZFS Performance Results from Six Operating Systems and Their
Derivatives"
[https://www.youtube.com/watch?v=HrUvjocWopI](https://www.youtube.com/watch?v=HrUvjocWopI)

Slides: [https://callfortesting.org/log/vBSDcon-2019-Dexter-BytheNumbers.pdf](https://callfortesting.org/log/vBSDcon-2019-Dexter-BytheNumbers.pdf)

And I was surprised that ZFS for Windows (and Hyper-V core) appear to be alive
and coming along.

Anyone using ZFS on Windows? Are they also building off of zfsonlinux - and if
so, how's support for encryption coming?

Finally having a shared, read-write, encrypted home that works safely on both
Linux and Windows would be great!

~~~
kop316
I used ZFS on Linux with an encrypted / for a while on Debian. It worked
really well, but unfortunately when I upgraded from Jessie to Buster, the
upgrade did not recognize how I had set things up and broke the system. I
ended up having to reinstall.

~~~
oarsinsync
Did you take a snapshot before performing the upgrade, and did you try to
restore the previous snapshot before the reinstall?

Curious if snapshots can protect you even as far as OS upgrades gone wrong.
I'd assume they can, if all filesystems are ZFS + snapshotted.
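
In theory it'd be something like this (pool and dataset names invented, not
from kop316's setup):

    # recursive snapshot of every dataset before the upgrade
    zfs snapshot -r rpool@pre-upgrade

    # if the upgrade goes wrong, roll each dataset back
    # (-r also destroys any snapshots taken after pre-upgrade)
    for ds in $(zfs list -H -o name -r rpool); do
        zfs rollback -r "$ds@pre-upgrade"
    done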

~~~
kop316
To be honest I didn't feel like tinkering with it, I just reinstalled. Right
now it is on ext4.

My issue was that for some reason Linux could no longer find the encrypted
partition to decrypt, so I couldn't even get the system up to a point where I
could work with it.

~~~
oarsinsync
Ah that's a shame. If you'd had a bootable USB stick or some pre-boot
environment to use, you probably could have rolled back from a snapshot. At
least, that's my understanding of the theory.

Your experience has definitely not discouraged me from wanting to ZFS my root
partitions though. If only to provide the potential / theoretical ability to
roll back a snapshot if necessary!
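
In theory the recovery itself is short - assuming a pool named rpool and a
live USB with ZFS support:

    # import the pool under an alternate root, without mounting anything
    zpool import -N -R /mnt rpool

    # roll the root dataset back to the last known-good snapshot
    zfs rollback -r rpool/ROOT/ubuntu@known-good

    # export cleanly and reboot into the restored system
    zpool export rpool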

~~~
kop316
To be honest, it was laziness on my part, I could have probably figured it out
and fixed it. The snapshot ability was incredibly nice, don't get me wrong. I
really liked it on my system. I may switch back in the near future.

I can post my "script" when I get home if you want to try it. I use "script"
very loosely - it's really just my notes of the commands I ran, in order, to
get ZFS/dm-crypt working.
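
From memory, the core of it was roughly this (device names are placeholders;
the real notes have all the partitioning steps too):

    # LUKS container on the target partition, then open it
    cryptsetup luksFormat /dev/sda2
    cryptsetup open /dev/sda2 cryptroot

    # the pool goes on top of the decrypted mapper device
    zpool create -o ashift=12 -O compression=lz4 \
        -O mountpoint=none -R /mnt rpool /dev/mapper/cryptroot
    zfs create -o mountpoint=/ rpool/root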

------
abrowne
Does anyone have a good sense of whether ZFS is something I might want on a
(workstation or home) laptop, power usage included? Or is it more for a
desktop/server?

~~~
wil421
Why would you put ZFS on a laptop? How many drives can you fit into a laptop?

I’m running FreeNAS at home with 6x4TB WD Red drives. All my computers back up
to FreeNAS. My Plex server reads/writes all its media from it as well.

I’m going to switch FreeNAS to a Linux server with ZFS at some point.

~~~
pm7
> Why would you put ZFS on a laptop? How many drives can you fit into a
> laptop?

Checksums, compression, snapshots (so easy and fast incremental backups using
send/receive).

ZFS is much more than RAID. Also, even with only one drive we can increase our
chances a bit using copies=2.
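
All of that works fine on a single-disk laptop, e.g. (dataset and host names
are examples):

    # keep two copies of every block, even on one disk
    zfs set copies=2 rpool/home

    # transparent compression
    zfs set compression=lz4 rpool/home

    # incremental backup: snapshot, then send only the delta
    zfs snapshot rpool/home@today
    zfs send -i @yesterday rpool/home@today | \
        ssh backuphost zfs receive -F tank/laptop-home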

------
chronogram
I think it's neat that the possibility of a ZFS root on Ubuntu now exists, but
I just don't see myself using it. Will anyone, and if so, for what purpose?

I'm not in the home NAS building sphere, but wouldn't you rather have your
root be a SquashFS image on an SD card or similar, with the storage section
being ZFS?

As for my laptop, I guess the snapshots are nice. I've never had a situation
with my laptop where I wished I had snapshots, and even if something did
break, I only care about the files I have, which I back up. So snapshots would
just be an extra safety net, I suppose.

~~~
solatic
> but wouldn't you rather have your root be a SquashFS image on an SD card or
> similar, with the storage section being ZFS?

Read-only root filesystems are one of those things that everybody ought to do,
yet are usually impractical. Unfortunately, most software doesn't really
follow the Linux Filesystem Hierarchy Standard, so you get software that
modifies itself under /usr (which is supposed to be read-only), ever-growing
logs and caches stored outside /var/log and /var/cache where they can't be
managed automatically, etc.
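
The kernel side of it is trivial - an /etc/fstab along these lines (a sketch;
devices and filesystems are examples) is all it takes, and it's the software
above it that can't cope:

    # / is immutable; anything that must be written lives elsewhere
    /dev/sda2  /      ext4   ro,noatime          0  1
    /dev/sda3  /var   ext4   rw,noatime          0  2
    /dev/sda4  /home  ext4   rw,noatime,nosuid   0  2
    tmpfs      /tmp   tmpfs  rw,nosuid,nodev     0  0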

You generally find squashfs roots where the distro is specialized and not
general-purpose - for a recent example, see Talos.

~~~
AnIdiotOnTheNet
The OSs that insisted upon scattering application files all over the hierarchy
only have themselves to blame for this situation. At some point in the past 30
years thought could have been given to the idea that applications are separate
from the platform they run on, but UNIX just doesn't think that way.
Everything must be part of the same gigantic Goldberg-esque state so everything
can catch fire and burn down at once as soon as there's a conflict of any
kind.

Anyway, the fix is pretty obvious: put each application in its own namespace,
thus its view of "/usr" isn't the same as the immutable OS "/usr" and it is
free to write whatever it wants without causing any trouble.
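
Linux can even express this today with mount namespaces - a rough sketch, with
illustrative paths:

    # run the app in a private mount namespace where /usr is an
    # overlay: reads come from the real /usr, writes land in the
    # app's own upper directory instead
    unshare --mount sh -c '
      mount -t overlay overlay \
        -o lowerdir=/usr,upperdir=/app/usr-upper,workdir=/app/usr-work \
        /usr
      exec /app/bin/start
    '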

~~~
solatic
> Everything must be part of the same gigantic Goldberg-esque state so that
> everything can catch fire and burn down at once as soon as there's a
> conflict of any kind.

Well, that's exactly the job of competent package managers, to make sure that
packages fully stated their dependencies (so that the package manager would
forbid you from getting into an illegal state wherein a required dependency
was not installed or the wrong version of the dependency was installed), and
installed correctly according to the LFHS so that there would be no conflicts.

/opt isn't new. Plenty of people decided that they could throw everything into
a folder under /opt, including copies of all dependencies, with no references
outside of that folder, and that would be enough. And it might have been
enough if it wasn't for sysadmins in charge of uptime tearing their beards out
over /opt constantly filling up and causing multi-product outages when it did.

The whole point of the LFHS is that it represented a contract along the
boundary between developers (who needed to bundle libraries, save state, have
access to a cache, etc.) and operators (who worried about disks and associated
costs - why would you ever spend a dime backing up /var/cache? Why would you
ever use a fast disk for /etc?). When this contract was followed, systems
worked splendidly. When the contract was broken, the people who broke it shot
themselves in the foot, much the way anyone writing C eventually learns how
many footguns are in the language.

Of course, today, with the price of resources having crashed through the
floor, and containerization having more or less reached maturity, this is all
more or less an academic, historical, moot point. But dynamically linked
libraries weren't stupid, they were an artifact of an age where statically
linking libraries was horrendously uneconomical and the idea of namespacing
anything was enough to get laughed out of the room by people wondering how you
would expect to pay for any of it.

~~~
AnIdiotOnTheNet
> Well, that's exactly the job of competent package managers, to make sure
> that packages fully stated their dependencies (so that the package manager
> would forbid you from getting into an illegal state wherein a required
> dependency was not installed or the wrong version of the dependency was
> installed), and installed correctly according to the LFHS so that there
> would be no conflicts.

Package managers are an over-engineered solution to a problem that only really
exists if you rely on package managers. Ever had your apt database break on
you? It's a joy, let me tell you. Nothing ever gets installed or removed
again. Blowing up your system and requiring your OS to be reinstalled is
ridiculous, especially since it's a problem that didn't even use to exist: OSs
were once stored on _ROMs_.

> /opt isn't new. Plenty of people decided that they could throw everything
> into a folder under /opt, including copies of all dependencies, with no
> references outside of that folder, and that would be enough.

Sure, as long as you never move it from /opt. UNIX devs' addiction to hard-
coding fixed paths is difficult to break.

> And it might have been enough if it wasn't for sysadmins in charge of uptime
> tearing their beards out over /opt constantly filling up and causing multi-
> product outages when it did.

Is that really a problem outside of Linux, which famously defined its base-
system as "the kernel" and therefore necessitated that any such install would
need to bring along all its own dependencies because it couldn't rely on
_anything_ existing except the kernel?

> Of course, today, with the price of resources having crashed through the
> floor, and containerization having more or less reached maturity, this is
> all more or less an academic, historical, moot point. But dynamically linked
> libraries weren't stupid, they were an artifact of an age where statically
> linking libraries was horrendously uneconomical and the idea of namespacing
> anything was enough to get laughed out of the room by people wondering how
> you would expect to pay for any of it.

Yet we still live with this bullshit today anyway and the Linux community
especially eschews any attempt to move on. This despite some of the same
people who created UNIX creating Plan 9, which rejected dynamic linking and
embraced namespacing over _twenty years ago_!

------
dual_basis
My biggest complaint with ZFS is the lack of flexibility. If you set up two
disks in a mirrored configuration, that's the end of it - those disks will be
mirrored forever. If you get another drive and want to expand storage while
still having redundancy, i.e. a RAID5-type setup, you can't. You can increase
storage with a pair of drives, by mirroring them first and then adding them to
the zpool, but this is not an optimal configuration from a safety standpoint.
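
The only sanctioned growth path looks like this (pool and device names are
examples):

    # grow a mirrored pool: bolt on a second mirror vdev
    zpool add tank mirror /dev/sdc /dev/sdd

    # or swap in bigger disks one at a time, resilvering in between -
    # but there is no converting a mirror into raidz
    zpool replace tank /dev/sda /dev/sde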

I loved the flexibility of BTRFS in this regard - you could change the
duplication level at any point. BTRFS seemed to abstract the storage away
beautifully - just give us your drives, any combination of sizes, tell us how
many copies of your data you want to keep and we'll handle the rest. It wasn't
perfect, but I loved this approach, and I feel like it could go even further,
e.g. calculate failure probabilities and then just tell it how low you want
your risk to be. If it can't do that with your current drive capacities, it
could tell you what it needs to accomplish it, and then rebalance the data
once you add the drives. Got an SSD? Throw it in the pool and let it optimize
for caching. I know bcachefs has that particular aspect covered, but there's a
lot of other basic features that still need to be implemented.
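
For example, changing the redundancy of a mounted, live filesystem is just
(device and mount point are examples):

    # add a disk, then convert data and metadata to raid1 on the fly
    btrfs device add /dev/sdc /mnt
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt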

~~~
lathiat
They finally added this to Oracle Solaris ('closed source' zfs) in 11.4
([http://blog.moellenkamp.org/archives/50-How-to-remove-a-top-level-vdev-from-a-ZFS-pool.html](http://blog.moellenkamp.org/archives/50-How-to-remove-a-top-level-vdev-from-a-ZFS-pool.html))

And there is work on it for OpenZFS:
[https://www.youtube.com/watch?v=Njt82e_3qVo](https://www.youtube.com/watch?v=Njt82e_3qVo)

For zfsonlinux, as far as I understand, you can currently add and remove
striped/mirror vdevs, but you cannot remove raidz vdevs (ref:
[https://github.com/zfsonlinux/zfs/issues/9129](https://github.com/zfsonlinux/zfs/issues/9129))
- so there is definitely work left to be done in this area.
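
The stripe/mirror case does work today on 0.8, something like (pool name is an
example):

    # evacuate and remove a top-level mirror vdev; its data migrates
    # to the remaining vdevs (raidz vdevs can't do this yet)
    zpool remove tank mirror-1
    zpool status tank    # shows the evacuation progress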

As an aside, mdadm has surprisingly fantastic features in this area and can
reshape pretty much any array type into any other. I was impressed by that.
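
e.g. reshaping a 3-disk RAID5 into a 4-disk RAID6 in place (device names are
examples):

    # add a fourth disk, then change level and width in one reshape
    mdadm --add /dev/md0 /dev/sde
    mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
        --backup-file=/root/md0-reshape.backup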

------
aasasd
Strangely, ZFS, which is rather old, keeps popping up in the news, while I
never hear about Btrfs these days. It had some hype five or so years ago -
what became of it?

~~~
jdhawk
Synology, which is Linux-based, is really high on btrfs; otherwise I have not
seen a lot of use cases.

~~~
u02sgb
It doesn't use many of the advanced features though, e.g. RAID is done using
standard Linux mdraid with a btrfs filesystem on top of it.

~~~
pnutjam
btrfs pools can use different sized disks, which can be useful.
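
e.g. raid1 over three differently-sized disks just works (device names are
examples):

    # btrfs allocates chunks so every block ends up with two copies
    # on two different devices, regardless of the disks' sizes
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd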

------
drudru11
It would be great to know if I could boot with mirrored and encrypted disks.
What happens if one of the disks fails? Is there an easy way to boot from the
other disk (i.e. do they replicate the MSDOS/UEFI boot partition)?
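
My understanding is that ZFS (or mdraid) mirrors the data, but the EFI system
partition has to be duplicated by hand - roughly this, assuming identical
partition layouts (device names are examples):

    # copy the EFI system partition to the second disk
    dd if=/dev/sda1 of=/dev/sdb1 bs=1M

    # register a firmware boot entry pointing at the copy
    efibootmgr --create --disk /dev/sdb --part 1 \
        --label "ubuntu (mirror)" --loader '\EFI\ubuntu\shimx64.efi'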

