From the first part alone, much of the 'sin' is likely non-unique inode numbers across the subvolumes of a Btrfs filesystem. I could understand partitioning ranges of new numbers for performance; the non-uniqueness just seems silly.
Ranges aren't possible because there aren't enough bits: Btrfs uses 64-bit values for both the subvolume ID and the per-subvolume inode number, so the two can't be packed into a single 64-bit st_ino without capping one or the other.
ZFS doesn't have this problem because each dataset (the equivalent of a subvolume) is treated as a separate mountpoint. This can be annoying with NFS because you need to export and mount each dataset separately.
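For example (pool and dataset names made up):

    # Each dataset is a separate filesystem with its own mountpoint:
    zfs create tank/projects
    zfs create tank/projects/src   # child dataset, a separate filesystem

    # Exporting only the parent leaves the child invisible to clients;
    # you either export every dataset or use an option like Linux's crossmnt.
    exportfs -o rw '*:/tank/projects'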
> This can be annoying with NFS because you need to export and mount each dataset separately.
I use ZFS on multiple servers and appreciate this approach. The problem is that you're going to feel pain one way or the other; given that, make sure the pain is obvious and predictable.
If you're using subvolumes/datasets, you will have to deal with a problem at some point: either you manually export multiple NFS volumes (ZFS), or you risk inode-uniqueness issues (Btrfs).
I'd much rather have the problems be excruciatingly obvious. I can script generating a config file for exporting many datasets (and I have done so). I can't really deal with non-unique inodes in the same manner.
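The gist of such a script, with a made-up pool name, network, and paths:

    #!/bin/sh
    # Regenerate an exports fragment covering every dataset under tank/home.
    zfs list -H -r -t filesystem -o mountpoint tank/home |
    while read -r mnt; do
        printf '%s 10.0.0.0/24(rw,no_subtree_check)\n' "$mnt"
    done > /etc/exports.d/zfs.exports
    exportfs -ra   # apply the regenerated export table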
Overall I agree, but on our build server the person who can add new branches is not the same person who can add entries to the autofs map, so we don't use one dataset per branch, even though that would make other things a lot easier.
That’s true. In my case, I made a dataset for each user’s home directory, and each one was then exported over NFS.
To automate the export, I have a single script that can be executed with sudo on that system to manage both creating the dataset and exporting it.
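It looks roughly like this (the pool layout, quota, and network are made-up placeholders; ZFS’s sharenfs property is one way to handle the export side):

    #!/bin/sh
    # create-home.sh <username>: run via sudo. Creates the dataset and lets
    # ZFS manage the NFS export itself through the sharenfs property.
    user="$1"
    zfs create -o quota=50G "tank/home/$user"
    zfs set sharenfs='rw=@10.0.0.0/24' "tank/home/$user"
    chown "$user": "/tank/home/$user"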
So, there are still ways, if you’re able to use sudo.
My LDAP and NFS servers are separate too. I control both, so it’s a bit different, but I still have it set up so that account creation and home-directory creation/export are handled in one script.
The account creation script (on the LDAP server) makes an SSH call to the NFS server to run the script that creates and exports the home directory. That SSH key has no passphrase and is restricted to running only that single command. The remote account could be root, or another user that calls sudo.
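Concretely, that restriction is OpenSSH’s forced-command mechanism in authorized_keys; the script path and key below are placeholders:

    # ~/.ssh/authorized_keys on the NFS server, for the automation account.
    # command= forces this script regardless of what the client asks to run;
    # the requested command is still visible in $SSH_ORIGINAL_COMMAND.
    command="/usr/local/sbin/update-home-export.sh",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... ldap-automation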
So, some of what you’re looking for can be done with SSH tricks… but only if both sides are comfortable with that setup. The benefit is that each side manages their own scripts: the NIS admin writes the script on their side, and the NFS admin on theirs. You just need to establish the workflow that works best for you.
Our cluster is completely self-contained, so with sudo and SSH restrictions, I’m not concerned about security issues with this setup.