"Over time, the number of bugs does not diminish, but rather remains a constant in a file system’s lifetime."
"We observed that similar mistakes recur, both within a single file system and across different file systems. "
Essentially, a completely different approach to file system development is necessary to produce something more reliable. But there is no market for it, and no right people to do it. And I have doubts such people can even exist, because it requires understanding reliability well enough to have a decent approach, yet not so well that you see every risk and conclude the project is a bad idea in the first place. So kernel file systems are, in a sense, always written by people who should not be writing file systems.
Anyway, relying on the lower layer was never a good idea to begin with.
You have to start out small: get something minimal and basic not just working but robust enough that you understand it, rewriting it as many times as it takes to get it right. Then you add the next feature or enhancement, and then the next, all the while rewriting things when you discover your understanding of some part of the system was flawed.
Filesystems are hard because they're hard to develop incrementally: you need a huge number of different moving parts before you have a real filesystem. And no one wants a toy filesystem; there's huge pressure to get something really sophisticated done quickly, because anything less is going to be uninteresting compared to what we have now.
That's really most of it: if you want to produce high-quality code, you have to resist the urge to go too quickly, and you have to go back and fix your mistakes as soon as you find them, even when it's not very fun.