Fair point, though I'm not sure I agree. MPMC channels underpin pretty much every general task scheduler (take a peek inside tokio or rayon for example). And SPSCs are quite useful for designing custom pipelines. Though I agree that SPMC channels are silly.
As noted by other commenters, the point I was trying to get across is that the way we implement lockless channels is suboptimal and could be made faster from a theoretical standpoint.
In my benchmarks[1], the average processing time for an element is 250ns with 4 producers and 4 consumers contending heavily. That's terrible! Even if your numbers are correct, 100ns is a bit faster than two round trips to RAM while 33ns is about three round trips to L3 and ~100x slower than spamming a core with add operations. That's slow.
Thanks for sharing, I had not! It sounds like "processor sharing" would be the expected mode of operation for lockless queues. But see my comment to the parent, this is not how they work.
In principle there are infinitely many possible service disciplines, since a discipline is just an arbitrary function that selects the next element from the waiting set. Service disciplines can also include randomness. I only know the basics of queueing theory, but my introduction to it gave me the impression that almost anything beyond the most basic disciplines has no closed-form solution to most questions. Like many models, you can introduce approximations to get close-enough answers. I personally prefer simulation, but that has its own problems.
This is actually a great analogy because it exemplifies the misconceptions people have about lockless queues.
In the example with multiple counters, in real life each counter could shout out a number and have people approach their respective counters in parallel. But this is not how lockless queues work. Instead, the person at the head of the queue holds a baton, and when multiple numbers are called, everybody waiting in the queue goes up to the counter of the person holding the baton. Once that head-of-the-queue person has made it to the counter, they hand the baton to the person behind them, who then drags everybody along to their counter. And so on.
The article was arguing for a lockless channel implementation akin to your interpretation of a queue with parallel access to the counters.
I made something similar a while back where you can set your own start and end time for the "day": https://alexsaveau.dev/10hrday
The point was to be able to divide the day into nicely sized bites for getting stuff done. I chose 10 minutes in an hour for that reason: you can get a small task done in one "minute."
Added a small clarification:
"The intuition here is that directories are a shared resource for their direct children and must therefore serialize concurrent directory-modifying operations, causing contention. In brief, file creation or deletion cannot occur at the same time within one directory."
The NVMe protocol has extremely deep queues [1] and leaving them empty means leaving performance on the table. I think you'll find it surprisingly difficult to saturate the PCIe bus with just one core: PCIe 7 will support 512GB/s. Assuming a single core can produce 64 bytes (an entire cache line!) per cycle running at 5GHz, you're still only at 320/512=62.5% saturation. This napkin math is a little BS, but my point is that individual cores are quickly going to be outpaced by bandwidth availability.
> and totally unclear how directories represent an "interference" boundary
To add a bit more color here, it depends on how your file system is implemented. I believe Windows stores every file's metadata in a global database, so segmenting operations by directory yields no benefits. On the other hand, Unix FSs tend to store file_name to inode mappings per directory, so creating a new mapping in one directory doesn't interfere with another directory.
> The NVMe protocol has extremely deep queues [1] and leaving them empty means leaving performance on the table. I think you'll find it surprisingly difficult to saturate the PCIe bus with just one core: PCIe 7 will support 512GB/s
is disk IO bottlenecked by NVMe/PCIe limits, or by disk iops limits?
> Unix FSs tend to store file_name to inode mappings per directory, so creating a new mapping in one directory doesn't interfere with another directory.
again, you're handwaving on what "interfere" means
do "inode mappings" represent resources with no shared resource constraints?
is reading one "inode mapping" as fast as you can with one core independent from reading a different "inode mapping" as fast as you can with a separate core?
> is disk IO bottlenecked by NVMe/PCIe limits, or by disk iops limits?
Note that I'm out of my depth here, so this is all speculation.
Until we hit hardware limitations (PCIe 6, if I had to guess), I'm pretty sure those are the same thing. One read/write iop = 4KiB. If your PCIe bandwidth is limited to N GB/s, then there are only so many iops you can physically send to or receive from the SSD, regardless of how many iops the SSD itself could process. So currently we're bottlenecked by PCIe, but I doubt that will continue to be the case.
> again, you're handwaving on what "interfere" means
It depends on how the file system is implemented, but my guess would be that a lock on the inode or block cache entry is acquired.
> do "inode mappings" represent resources with no shared resource constraints?
Those are the contents of the directory. The problem is not reading them, but changing them.
when you interact with a file or directory, the mapping of that logical thing to disk inodes (which are the physical 4kb whatevers you speak of) is managed for you by the fs cache
that cache coalesces and serializes access to disk, it does all of this "locking" you're referring to, and it's very smart
it seems like you're writing code assuming this intermediating layer does not exist?
> The intuition here is that directories are a shared resource for their direct children and must therefore serialize concurrent directory-modifying operations, causing contention.
why do you think this is true?
i've never heard of anything like it
directories are inodes on a file system, they are in no way "shared resources for their direct children", and there is no concept of a "directory-modifying operation" which contends with operations on any file (or directory) which is a "child" (subdir, sub-file) of that directory
your claim is totally bizarre to me, afaik it's nonsensical, but i guess i could be mistaken
A directory is a file like anything else: it contains a map of names to inodes. If you're trying to add or remove mappings (create or delete files), then clearly some synchronization must occur or the contents of the file will contain garbage. In theory you could get away with a very small critical section that says "lock bytes N through M" of the file, but then how do you deal with disk block alignment (two byte ranges that land on the same disk block need to take turns anyway), and how do you deal with I/O errors (the write to the first range fails while the second succeeds, leaving a hole of garbage)?
Also no need to theorize: run the benchmark I linked for yourself. It clearly shows a massive advantage to having each thread work with its own directory.
> A directory is a file like anything else that contains a map of names to inodes. If you're trying to add or remove mappings (create or delete files), then clearly some synchronization must occur or the contents of the file will contain garbage.
this synchronization is handled for you by the fs, specifically the fs cache
inode alignment and errors are managed by this intermediating layer
your benchmarks are not demonstrating what you think they are demonstrating