
Understanding ZFS storage and performance - jimmcslim
https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
======
beagle3
This is an excellent article. I've been following ZFS for a long time and
haven't had a chance/reason to use it yet.

Nothing in this article was new to me, but it is so far the best intro to ZFS
that I've seen.

~~~
karatestomp
I'm using it on my home fileserver now, as of about six months ago. It's the
first time I've used it.

Impressions:

1) If anything goes wrong, "how do I get to my files" is worrisomely complex.
If not for how many times I've read praise of it from people who seem to know
what they're doing, I'd have written it off as a great way to work really hard
at eventually losing all my stuff.

2) The UI feels like using git, and not in a good way. It works, but instead
of writing the one command that does the thing you want, you always seem to be
writing three commands that (not-at-all-obviously) add up to the thing you
want; see the sketch after this list. Nothing at all about it feels like
managing disks or filesystems traditionally.

3) It doesn't like external disks very much. I'm using it with some anyway,
but it's not a great situation. Notably, if they change designation (say, you
switch USB ports) it freaks out. There are ways around this, but all the
examples you see on the web have you doing it the "wrong way" (at least on
Linux), so if you set up your mirroring and such with those, you're stuck with
a working but fragile setup, and the only option is to break all of it, with
unclear consequences, in order to do it "right" (see 1 and 2 for why I
hesitate to do this).
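
A concrete example of that second point, with hypothetical pool and device
names throughout: the traditional way is mkfs plus a mount, but here it's a
pool, then a dataset, then a property.

    # create a mirrored pool (names are made up)
    zpool create tank mirror /dev/sdb /dev/sdc
    # create a dataset inside it
    zfs create tank/media
    # and only then tell it where to mount
    zfs set mountpoint=/srv/media tank/media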

The data integrity assurance, checksumming, and auto-repair from a second
source are great. I... kind of hate everything else about it. If I could get
_just_ those things from a more traditional filesystem, that'd be really nice.
I imagine it's one of those things that's fine if you live in it, but touching
it every now and then on a hobbyist basis is something I'll happily abandon as
soon as I can get the parts I care about more simply somewhere else.
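
To be fair, the repair part really is simple. This is a sketch with a
hypothetical pool name, but it's the whole routine:

    # walk every block and verify checksums, repairing from the mirror copy
    zpool scrub tank
    # report progress and any errors it found or fixed
    zpool status -v tank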

~~~
aleph-
For 3, how are you adding the disks to the pool? Not as /dev/sdX, right?

There are some easy symlinks under /dev/disk/ you can use.

I tend to use either by-uuid/ or by-label/ myself.
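
Something like this, with a made-up pool name and made-up device links (by-id/
shown here, but by-uuid/ and by-label/ work the same way; ls -l
/dev/disk/by-id/ shows what you actually have):

    # create the mirror against stable names rather than /dev/sdX
    zpool create tank mirror \
        /dev/disk/by-id/usb-WD_Elements_AAAA-0:0 \
        /dev/disk/by-id/usb-WD_Elements_BBBB-0:0

Those links follow the physical drive around, so reshuffling USB ports doesn't
confuse the pool.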

~~~
karatestomp
Yeah, UUID would be the way to go. That it doesn't do that anyway when you
address them as /dev/sdX, like most of the examples online do, seems odd. I
was pretty surprised to find the pool was tied to the system ordering if you
add disks the user-friendly way: you've got to do the translation to UUID
yourself when adding the disk.

Since restoring a disk that had changed its system-ordered address meant
resilvering the whole damn thing (even though the data was already the same?),
I was reluctant to do that a third time just to switch over to UUIDs. For now
I just don't touch the disks. Eventually I'll replace the externals with
internals, and then I'll take care of it.

EDIT: That's exactly what I mean: examples strongly favor /dev/sdX (or
similar), but _in fact_ you want to use UUIDs, and the zfs command-line tools
don't warn you, let alone default to transparently using UUIDs unless told
otherwise, either of which would be a clear improvement.
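
(For anyone else in this spot: from what I've read, you can re-point an
existing pool at the stable names without rebuilding or resilvering anything,
roughly like so, with a hypothetical pool name:

    # export the pool, then re-import it scanning the stable-name directory
    zpool export tank
    zpool import -d /dev/disk/by-id tank

I just haven't worked up the nerve to try it yet; see 1 above.)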

