
> why the hell we need those ultracomplicated can-do-everything filesystems?!

We don't, but some people do. Or think they do, which amounts to the same thing. It's not like I'm losing anything, because I can always use the simpler filesystems myself. I went with ReiserFS for over a decade and never really had to think about my file system, which is what I want, after all. Once you find a good file system, you stop thinking about file systems.

For me, this means two things.

I want my file system to be safe and transactional: either my change gets in or it doesn't, but I don't want to find my file system in some degraded in-between state. Ever. I'm willing to pay for that with CPU time or I/O speed; that's like the first 90% of my requirements.
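As an aside, here's a minimal Python sketch of the write-to-temp-then-rename pattern applications use to get exactly that all-or-nothing behaviour. The helper name atomic_write is made up for illustration; the guarantee it leans on is that POSIX rename() is atomic within a directory and fsync() pushes data to stable storage:

    import os
    import tempfile

    def atomic_write(path, data):
        # Write to a temp file in the same directory, flush it to stable
        # storage, then rename() over the target. rename() is atomic on
        # POSIX, so a crash leaves either the old file or the new one,
        # never a half-written mix.
        dirpath = os.path.dirname(os.path.abspath(path))
        fd, tmp = tempfile.mkstemp(dir=dirpath)
        try:
            with os.fdopen(fd, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            os.rename(tmp, path)
        except BaseException:
            try:
                os.unlink(tmp)
            except FileNotFoundError:
                pass
            raise
        # fsync the directory so the rename itself survives a crash
        dfd = os.open(dirpath, os.O_RDONLY)
        try:
            os.fsync(dfd)
        finally:
            os.close(dfd)

None of that works, of course, if the file system underneath can leave its own metadata half-updated.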

The second criterion is that it's generally lean and doesn't do anything stupid algorithmically. It should support big files, provide relatively fast directory lookups (so that 'find' runs fast), and have some decent way of packing files onto the disk that doesn't fragment the allocations too badly, and ideally do some bookkeeping during idle I/O so that I never really have to run a defragmenter. But these are secondary requirements that aren't worth anything unless the file system keeps my files uncorrupted and accessible first and foremost.



Very reasonable requirements. Same here. I just want it to store files, retrieve files, have decent performance, and never screw up in a way that prevents recovery. I'd hope that would be the baseline. Is that so much to ask in 2015? ;)


As someone who has lost data on _EVERY_ single Linux filesystem listed in this thread, I can say that what I want out of a filesystem is code that hasn't changed for years. Once the "filesystem experts" move on to the latest code base, then I start to feel confident about the stability of the ones they left behind. As others said, what I want first out of a filesystem is "boring". I would much rather be restricted to small volumes/files/slow lookup times/etc. than discover a sector's worth of data missing in the middle of my file because the power was lost at the wrong moment six months ago.


Making volumes smaller and lookups slower doesn't solve the problem; good design and implementation are what it takes. Wirth's work shows that simplifying the interfaces, implementation, and so on can certainly help. Personally, I think the best approach is simple, object-based storage at the lower layer, with the complicated functionality running at a higher layer through an interface to it. Further, for reliability, keep several copies on different disks with regular integrity checks to detect and mitigate issues that build up over time. There are more complex, clustered filesystems that do a lot more than that to protect data; they can be built similarly.
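To make "regular integrity checks" concrete, here's a toy scrub in Python over whole-file replicas. It assumes a majority of the copies are healthy and repairs any copy whose hash disagrees; the scrub() and sha256() names are made up for illustration:

    import hashlib
    import shutil

    def sha256(path):
        # Hash a file in 1 MiB chunks so large files don't blow up memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def scrub(replicas):
        # replicas: paths to copies of the same file on different disks.
        # Majority vote on content hashes, then rewrite any copy that
        # disagrees, using one of the healthy copies as the source.
        digests = {p: sha256(p) for p in replicas}
        values = list(digests.values())
        good = max(set(values), key=values.count)
        source = next(p for p, d in digests.items() if d == good)
        for p, d in digests.items():
            if d != good:
                shutil.copyfile(source, p)

A real system would checksum blocks rather than whole files and log what it repaired, but the shape is the same.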

The trick is making the data-change, problem-detection, and recovery mechanisms simple. Then each feature is a function that leverages them in a way that's easier to analyze, and the features themselves can be implemented so that their own analysis is easier. So on and so forth. Standard practice in rigorous software engineering. Not quite applied to filesystems yet...


What filesystems would you say provide you with high stability confidence?



