> I am loading the file from disk. However, my disk has bandwidth higher than 1.2 GB/s, and the file is small enough to end up in cache. Thus we are already limited by the processor. And we are not yet doing any parsing!
I really wish the author would show some proof for claims like this. I/O is a complicated beast, and it's not certain whether the subsystem could go faster with the access patterns the code is using. Plus there's no indication of what else was running on the system at the same time, so perhaps a background job spun up and the disk was I/O bound. We don't know, and it appears that neither does the author.
Not every blog post written in the author’s free time needs to be manuscript-worthy. Informal benchmarks can often be really helpful for solving real-world problems where it’s not worthwhile to squeeze out every last nanosecond of performance.
As I started my comment with: I wish, not demand, not expect, just wish, there was a little more technical meat in an article written by an absolute expert in the high-performance computing field. This is the author's day job. They've written a number of books on high performance and are currently a professor teaching high-performance computing.
That said, I apologize if I came off more critical than I intended.
There’s a lot of overhead as soon as you involve a filesystem rather than a raw block device, even on a dedicated disk, particularly with btrfs. I don’t know if the same is true of macOS and APFS; this isn’t the area I usually work in. However, copy-on-write file systems (which I believe APFS is) are somewhat predisposed to fragmenting files as part of the dedup process; I don’t know if APFS runs that online in some way, so it could have affected the article author’s results.
Standard library implementation details can also have a huge impact; e.g., on a prior project in Rust I observed this as soon as I started fiddling with the read buffer size.
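For anyone who wants to try it themselves, a minimal sketch of that kind of experiment (file name and sizes are made up; this just times reads through `BufReader` at a few capacities, since the caller reads in small 4 KiB chunks, the internal buffer size determines how often a syscall happens):

```rust
use std::fs::File;
use std::io::{BufReader, Read, Write};
use std::time::{Duration, Instant};

// Read a file through a BufReader with the given capacity and time it.
fn read_all(path: &str, buf_size: usize) -> std::io::Result<(usize, Duration)> {
    let file = File::open(path)?;
    let mut reader = BufReader::with_capacity(buf_size, file);
    let mut chunk = [0u8; 4096]; // caller reads small chunks; BufReader batches syscalls
    let mut total = 0;
    let start = Instant::now();
    loop {
        let n = reader.read(&mut chunk)?;
        if n == 0 {
            break;
        }
        total += n;
    }
    Ok((total, start.elapsed()))
}

fn main() -> std::io::Result<()> {
    // generate a throwaway sample file so the sketch is runnable
    let path = std::env::temp_dir().join("bufsize_demo.bin");
    File::create(&path)?.write_all(&vec![0u8; 8 * 1024 * 1024])?;
    // BufReader's default capacity is 8 KiB; each step up cuts syscall count
    for &size in &[8 * 1024, 64 * 1024, 1 << 20] {
        let (bytes, t) = read_all(path.to_str().unwrap(), size)?;
        println!("{:>8} B buffer: {} bytes in {:?}", size, bytes, t);
    }
    Ok(())
}
```

Timings from a toy loop like this are noisy, of course, but the syscall-count effect is usually visible even informally.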
The other issue I see is that their I/O is implicitly synchronous and requires a memory copy. They might see better performance if they can memory-map the file, which could solve both issues. Then, if C# allows it, they can parse the CSV in place; in a language like Rust you can even do this trivially in a zero-copy manner, though I suspect it’s more involved in C#, since it requires setting up strings/parsing that point into the memory-mapped file.
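The zero-copy part is easy to demonstrate even without an actual mmap (which in Rust needs a third-party crate such as memmap2, not shown here): every field below is a `&str` borrowing from the original buffer, so no per-field allocation happens. Whether that buffer comes from a memory mapping or a single big read doesn't change the parsing side. Sample data and the semicolon delimiter are just assumptions for illustration:

```rust
// Naive split; real CSV needs quote/escape handling.
// Each returned &str is a view into `line`, not a copy.
fn parse_line(line: &str) -> Vec<&str> {
    line.split(';').collect()
}

fn main() {
    // with a real mmap, `data` would be the mapped byte range instead
    let data = String::from("Hamburg;12.0\nBulawayo;8.9\n");
    for line in data.lines() {
        let fields = parse_line(line);
        println!("station={} temp={}", fields[0], fields[1]);
    }
}
```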
At that point, the OS should theoretically be able to serve up the cached file for the application to do some logic with, without ever needing to copy the full contents again into separate strings.
C# has an abstraction for memory-mapped files. You can always use raw pointers and directly call the corresponding OS APIs with interop too.
However, for the fastest-performing C# implementations in the 1BRC challenge, the results were inconclusive as to whether memory-mapping is faster than the RandomAccess.Read API (which is basically a thin wrapper over read/pread calls): https://github.com/noahfalk/1brc/?tab=readme-ov-file#file-re...
You can relatively easily do 2 GiB/s reads with RandomAccess/FileStream as long as a sufficiently large buffer size is used. FileStream's default settings already provide quite good performance and use an adaptive buffer size under the hood. Memory-mapping is convenient, but it's not a silver bullet (in this context): page-faulting, then mapping the page and filling it with data by performing the read in kernel space, is not necessarily cheaper than passing a pointer to a buffer to read into.
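The closest Rust-std analogue to the RandomAccess.Read pattern is `read_at` (Unix-only, a wrapper over pread(2)): explicit offsets, no shared cursor, no mmap. A rough sketch with a made-up file name, assuming a ~1 MiB chunk size is enough to amortize syscall overhead on a fast SSD:

```rust
use std::fs::File;
use std::os::unix::fs::FileExt; // read_at is the pread(2) analogue

// Read a whole file in large chunks at explicit offsets.
fn read_whole(path: &str, chunk: usize) -> std::io::Result<Vec<u8>> {
    let file = File::open(path)?;
    let len = file.metadata()?.len() as usize;
    let mut out = vec![0u8; len];
    let mut offset = 0usize;
    while offset < len {
        let end = len.min(offset + chunk);
        // pread-style: the offset is passed per call, so this pattern
        // also parallelizes across threads trivially (no seek races)
        let n = file.read_at(&mut out[offset..end], offset as u64)?;
        if n == 0 {
            break;
        }
        offset += n;
    }
    out.truncate(offset);
    Ok(out)
}

fn main() -> std::io::Result<()> {
    let bytes = read_whole("data.csv", 1 << 20)?; // hypothetical file
    println!("read {} bytes", bytes.len());
    Ok(())
}
```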
The challenges in Rust and C# are going to be very similar in this type of task, since C# can just pin GC-allocated arrays to read into, call into malloc, or 'stackalloc' a temporary buffer inline, and the rest of the implementation will be subject to more or less identical constraints. C# is probably the closest* "high-level" language to Rust in feature set, even if this sounds strange. There's a sibling submission that covers another angle on this: https://news.ycombinator.com/item?id=41963259
* have not looked through Swift 6 changes in detail yet
It's really not that hard, I think you are making this trivial benchmarking task sound complicated.
The link to the source code is there. It uses the BenchmarkRunner class, which handles warmup and multiple runs. I assume the author ensured that the stddev was small enough that the raw numbers are valid. And with an 11 MB file size, it will certainly be cached between runs; even if something else evicts the cache, it will show up as a high stddev, and then presumably the author would re-run it on a quieter system.
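For anyone unfamiliar with what BenchmarkRunner automates, the core idea is just warmup iterations plus repeated measurement; a minimal hand-rolled sketch (the harness shape and run counts are my own, not the library's):

```rust
use std::time::Instant;

// Minimal warmup + repeated-measurement harness: discard warmup runs
// (JIT/cache effects), then report mean and stddev over measured runs.
fn bench<F: FnMut()>(mut f: F, warmup: usize, runs: usize) -> (f64, f64) {
    for _ in 0..warmup {
        f();
    }
    let samples: Vec<f64> = (0..runs)
        .map(|_| {
            let t = Instant::now();
            f();
            t.elapsed().as_secs_f64()
        })
        .collect();
    let mean = samples.iter().sum::<f64>() / runs as f64;
    let var = samples.iter().map(|s| (s - mean).powi(2)).sum::<f64>() / runs as f64;
    (mean, var.sqrt())
}

fn main() {
    let (mean, stddev) = bench(
        || {
            std::hint::black_box((0..10_000u64).sum::<u64>());
        },
        3,
        10,
    );
    // a high stddev relative to the mean signals cache eviction or
    // background noise, exactly the failure mode discussed above
    println!("mean {:.6}s, stddev {:.6}s", mean, stddev);
}
```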