Added a small clarification:
"The intuition here is that directories are a shared resource for their direct children and must therefore serialize concurrent directory-modifying operations, causing contention. In brief, file creation or deletion cannot occur at the same time within one directory."
> The NVMe protocol has extremely deep queues [1] and leaving them empty means leaving performance on the table. I think you'll find it surprisingly difficult to saturate the PCIe bus with just one core: PCIe 7 will support 512GB/s
is disk IO bottlenecked by NVMe/PCIe limits, or by disk iops limits?
> Unix FSs tend to store file_name to inode mappings per directory, so creating a new mapping in one directory doesn't interfere with another directory.
again, you're handwaving on what "interfere" means
do "inode mappings" represent resources with no shared resource constraints?
is reading one "inode mapping" as fast as you can with one core independent from reading a different "inode mapping" as fast as you can with a separate core?
> is disk IO bottlenecked by NVMe/PCIe limits, or by disk iops limits?
Note that I'm out of my depth here, so this is all speculation.
Until we hit SSD hardware limitations (which I'd guess happens around PCIe 6), I'm pretty sure those are the same thing. One read/write iop moves 4KiB, so if your PCIe bandwidth is limited to N GB/s, there are only so many iops you can physically send to or receive from the SSD, regardless of how many iops the SSD itself could process. So currently we're bottlenecked by PCIe, but I doubt that will continue to be the case.
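To put rough numbers on that (back-of-the-envelope, assuming a PCIe 5.0 x4 drive with ~16 GB/s per direction): 16 GB/s ÷ 4 KiB ≈ 3.9 million iops, so no drive on that link can exceed ~3.9M 4KiB iops per direction, whatever its controller is rated for.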
> again, you're handwaving on what "interfere" means
It depends on how the file system is implemented, but my guess would be that a lock is acquired on the directory's inode or on its block cache entry.
> do "inode mappings" represent resources with no shared resource constraints?
Those are the contents of the directory. The problem is not reading them, but changing them.
when you interact with a file or directory, the mapping of that logical thing to disk inodes (which are the physical 4kb whatevers you speak of) is managed for you by the fs cache
that cache coalesces and serializes access to disk, it does all of this "locking" you're referring to, and it's very smart
it seems like you're writing code assuming this intermediating layer does not exist?
> The intuition here is that directories are a shared resource for their direct children and must therefore serialize concurrent directory-modifying operations, causing contention.
why do you think this is true?
i've never heard of anything like it
directories are inodes on a file system, they are in no way "shared resources for their direct children", and there is no concept of a "directory-modifying operation" which contends with operations on any file (or directory) which is a "child" (subdir, sub-file) of that directory
your claim is totally bizarre to me, afaik it's nonsensical, but i guess i could be mistaken
A directory is a file like anything else; it contains a map of names to inodes. If you're adding or removing mappings (creating or deleting files), then clearly some synchronization must occur or the contents of that file will be garbage. In theory you could get away with a very small critical section that locks only bytes N through M of the file, but then how do you deal with disk block alignment (two N-M ranges landing on the same disk block still have to take turns) and how do you deal with I/O errors (the write to the first N-M range fails while the second succeeds, leaving a hole full of garbage)?
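To make that concrete, here's a toy model of the idea (my own sketch, not any real file system's code): the directory is one shared name-to-inode map guarded by one lock, so two creates serialize even when they touch different names.

    use std::collections::HashMap;
    use std::sync::Mutex;

    struct Directory {
        // name -> inode number; one lock guards the whole mapping.
        entries: Mutex<HashMap<String, u64>>,
    }

    impl Directory {
        // Concurrent create/unlink calls serialize on the same lock,
        // even when they touch completely different names.
        fn create(&self, name: &str, inode: u64) -> bool {
            self.entries.lock().unwrap().insert(name.to_owned(), inode).is_none()
        }

        fn unlink(&self, name: &str) -> bool {
            self.entries.lock().unwrap().remove(name).is_some()
        }
    }

Two separate Directory values, on the other hand, have independent locks, which is exactly the per-directory scaling I'm describing.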
Also no need to theorize: run the benchmark I linked for yourself. It clearly shows a massive advantage to having each thread work with its own directory.
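If you'd rather not build that project, here's a minimal std-only sketch of the same experiment (my own toy, not the linked benchmark; the thread/file counts are arbitrary):

    use std::fs;
    use std::thread;
    use std::time::{Duration, Instant};

    const THREADS: usize = 8;
    const FILES_PER_THREAD: usize = 10_000;

    // Create FILES_PER_THREAD empty files per thread, either all in one
    // shared directory or in one private directory per thread.
    fn run(per_thread_dirs: bool) -> Duration {
        let root = std::env::temp_dir().join(format!("dir-bench-{per_thread_dirs}"));
        let _ = fs::remove_dir_all(&root);
        fs::create_dir_all(&root).unwrap();
        let start = Instant::now();
        let handles: Vec<_> = (0..THREADS)
            .map(|t| {
                let dir = if per_thread_dirs { root.join(t.to_string()) } else { root.clone() };
                thread::spawn(move || {
                    fs::create_dir_all(&dir).unwrap();
                    for i in 0..FILES_PER_THREAD {
                        // Each create adds a name -> inode mapping to `dir`.
                        fs::File::create(dir.join(format!("{t}-{i}"))).unwrap();
                    }
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        let elapsed = start.elapsed();
        let _ = fs::remove_dir_all(&root);
        elapsed
    }

    fn main() {
        println!("shared dir:      {:?}", run(false));
        println!("per-thread dirs: {:?}", run(true));
    }

If I'm right, the per-thread-dirs run wins noticeably on a Unix FS; if the synchronization really is invisible, the two numbers should be nearly identical.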
> A directory is a file like anything else that contains a map of names to inodes. If you're trying to add or remove mappings (create or delete files), then clearly some synchronization must occur or the contents of the file will contain garbage.
this synchronization is handled for you by the fs, specifically the fs cache
inode alignment and errors are managed by this intermediating layer
your benchmarks are not demonstrating what you think they are demonstrating
Plus accompanying benchmark: https://alexsaveau.dev/blog/projects/performance/files/fuc/f...
---
> file operations are (almost always) io bound
This is a common misconception. It was presumably true a decade ago, but PCIe bandwidth has been doubling roughly every 3 years: https://arstechnica.com/gadgets/2022/06/months-after-finaliz...
The NVMe protocol has extremely deep queues [1] and leaving them empty means leaving performance on the table. I think you'll find it surprisingly difficult to saturate the PCIe bus with just one core: PCIe 7 will support 512GB/s. Assuming a single core can produce 64 bytes (an entire cache line!) per cycle running at 5GHz, you're still only at 320/512=62.5% saturation. This napkin math is a little BS, but my point is that individual cores are quickly going to be outpaced by bandwidth availability.
> and totally unclear how directories represent an "interference" boundary
To add a bit more color here, it depends on how your file system is implemented. I believe Windows stores every file's metadata in a global database, so segmenting operations by directory yields no benefits. On the other hand, Unix FSs tend to store file_name to inode mappings per directory, so creating a new mapping in one directory doesn't interfere with another directory.
[1]: https://en.wikipedia.org/wiki/NVM_Express#Comparison_with_AH...