I have basically the opposite problem. I've been looking for a filesystem that maximizes performance (and minimizes actual disk writes) at the cost of reliability. As long as it loses all my data less than once a week, I can live with it.
Don't you want a RAM disk for this? It'll lose all your data (reliably!) when you reboot.
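For reference, the usual way to get that on Linux is a tmpfs mount; the mount point and size below are just placeholders:

```
# RAM-backed scratch space: fast, and gone on unmount or reboot
mkdir -p /mnt/scratch
mount -t tmpfs -o size=16G,noatime tmpfs /mnt/scratch
```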
You could also look at this: https://rwmj.wordpress.com/2020/03/21/new-nbdkit-remote-tmpf... We use it for Koji builds, where we actually don't care about keeping the build tree around (we persist only the built objects and artifacts elsewhere). This plugin is pretty fast for this use case because it ignores FUA requests from the filesystem. Obviously don't use it where you care about your data.
Have you tried allowing ext4 to ignore all safety? data=writeback, barrier=0, bump up dirty_ratio, drop the journal with tune2fs -O ^has_journal, and maybe disable flushes with https://github.com/stewartsmith/libeatmydata
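Roughly like this, with the caveat that the device, mount point, and values are only illustrative and that this deliberately trades away crash safety:

```
# Relax the journal: writeback ordering, no write barriers
mount -o data=writeback,barrier=0,noatime /dev/sdX1 /mnt/scratch

# ...or drop the journal entirely (run against an unmounted, clean filesystem)
tune2fs -O ^has_journal /dev/sdX1

# Let dirty pages sit in RAM instead of being flushed eagerly
sysctl -w vm.dirty_ratio=80

# Strip fsync()/fdatasync()/sync() from one workload via LD_PRELOAD
eatmydata make -j"$(nproc)"
```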
overlayfs has a volatile mount option that has that effect, so stacking a volatile overlayfs with the upper and lower dirs on the same ext4 could provide that behavior even for applications that can't be intercepted with LD_PRELOAD.
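A rough sketch (the paths are placeholders; volatile needs a recent kernel, 5.10+, and the upper dir should be discarded after a crash):

```
# All four directories live on the same ext4 filesystem
mkdir -p /data/{lower,upper,work,merged}
mount -t overlay overlay \
      -o lowerdir=/data/lower,upperdir=/data/upper,workdir=/data/work,volatile \
      /data/merged
```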
You should be able to do this with basically any file system by using the mount options `async` (the default) and `noatime`, disabling journalling, and massively increasing vm.dirty_background_ratio, vm.dirty_ratio, and vm.dirty_expire_centisecs.
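The sysctl half of that, as a sketch with arbitrary values (the intent is just "keep dirty data in RAM much longer"):

```
# /etc/sysctl.d/99-scratch.conf -- example values only
vm.dirty_background_ratio = 50     # start background writeback only at 50% dirty RAM
vm.dirty_ratio = 80                # throttle writers only at 80% dirty RAM
vm.dirty_expire_centisecs = 60000  # treat dirty pages as "old" only after 10 minutes

# apply without rebooting:
#   sysctl --system
```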
It all depends on how much reliability you are willing to give up for performance.
Because I have the best storage performance you'll ever find anywhere, 100% money-back guaranteed: write to /dev/null. It comes with the downside of 0% reliability.
You can write to a disk without a file-system, sequentially, until space ends. Quite fast actually, and reliable, until you reach the end; then reliability drops dramatically.
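Something like the following, where /dev/sdX is a placeholder that gets overwritten destructively, and tracking where each blob starts and ends is left entirely to the application:

```
# Stream data straight onto the raw device, sequentially, no filesystem involved.
dd if=results.bin of=/dev/sdX bs=4M oflag=direct status=progress

# Subsequent blobs go at offsets you track yourself, via seek= (counted in bs-sized blocks)
dd if=more.bin of=/dev/sdX bs=4M seek=2560 oflag=direct status=progress
```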
Trouble is you can't use /dev/null as a filesystem, even for testing.
On a related note, though, I've considered the idea of creating a "minimally POSIX-compliant" filesystem that randomly reorders and delays I/O operations whenever standards permit it to do so, along with any other odd behavior I can find that remains within the letter of published standards (unusual path limitations, support for exactly two hard links per file, sparse files that require holes to be aligned on 4,099-byte boundaries in spite of the filesystem's reported 509-byte block size, etc., all properly reported by applicable APIs).
Yeah, I've had good experience with bypassing the fs layer in the past; especially on an HDD the gains can be insane. But it won't help here, as I still need a more-or-less posixy read/write API.
P.S. I'm fairly certain that /dev/null would lose my data a bit more often than once a week.
"once a week" was maybe a too extreme example. For my case specifically: lost data can be recomputed. Basically a bunch of compiler outputs, indexes and analysis results on the input files, typically an order of magnitude larger than the original files themselves.
Any files that are important would go to a separate, more reliable filesystem (or be uploaded elsewhere).
On top of the other suggestions you've already gotten, raid0 might be worth looking at. It has some good speed vs. reliability tradeoffs (in the direction you want).
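A hedged mdadm sketch, with placeholder device names:

```
# Stripe two disks into one fast array with zero redundancy
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdX /dev/sdY
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/scratch
# If either member fails, everything on the array is gone.
```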
Can confirm. When I can't work off my internal MacBook storage, my working drive is a RAID0 NVMe array over Thunderbolt. Jobs set up in Carbon Copy Cloner make incremental hourly backups to a NAS on site as well as a locally-attached RAID6 HDD array.
If the stripe dies, worst case is I lose up to one hour of work, plus let's say another hour copying assets back to a rebuilt stripe.
There are so many MASSIVE files created in intermediate stages of audiovisual [post] production.