
Like many other things, the file system is a layer that, with the passage of time, is proving a bit raw in the abstraction it provides. And thus layers are created above it.

The same thing is happening pretty much everywhere in computer engineering. Our network protocols provide higher-level abstractions (encryption, RPC calls, CRUD, …). Higher-level graphics rendering libraries proliferate. Programming languages provide additional layers and safety guarantees.

> What would it look like to go higher level?

File systems aren't "bad" and don't need to be changed or replaced; we just need to make much more use of the higher abstractions that already exist. Use a proper database when it's appropriate, or some structured object storage system, maybe one integrated into your programming language.
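
To make that concrete: Python, for instance, ships SQLite in its stdlib, so structured app data can live in a single transactional database file instead of an ad-hoc directory of files. A minimal sketch (the file and table names are invented for illustration):

    import sqlite3

    # Structured app data in one transactional SQLite file instead of
    # an ad-hoc file format. All names here are invented examples.
    con = sqlite3.connect("app_data.db")
    con.execute("CREATE TABLE IF NOT EXISTS notes "
                "(id INTEGER PRIMARY KEY, title TEXT, body TEXT)")
    with con:  # commits on success, rolls back on exception
        con.execute("INSERT INTO notes (title, body) VALUES (?, ?)",
                    ("groceries", "milk, eggs"))
    for row in con.execute("SELECT id, title FROM notes"):
        print(row)
    con.close()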

Ultimately, accessing the file system should increasingly become akin to opening a raw block device, a TCP socket without an SSL layer, or drawing individual pixels. Which is to say: there are absolutely good reasons to do so, but it shouldn't be your default. And it should raise a flag in reviews: was this the appropriate layer to pick for the problem at hand?
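
To stay with the socket analogy, defaulting to the higher layer is usually a one-liner. A minimal Python sketch, with example.com standing in as a placeholder host:

    import socket
    import ssl

    # Default to the TLS-wrapped socket; drop to the raw one only
    # when there's a concrete reason to.
    ctx = ssl.create_default_context()  # certificate + hostname checks on
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            print(tls.recv(256))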

(added to clarify:)

All this is just to say: it's more helpful to proliferate existing layers above the file system than to try to change or extend the semantics of the existing FS layer. Leave it alone, put it in the "low-level tools" drawer, and keep other tools on your bench within easy reach.



> File systems aren't "bad" and don't need to be changed or replaced

The complaint is specifically about the UNIX filesystem. And yeah, it's bad. It needs to be replaced, not wrapped in another layer of duct tape.

The layers you describe as examples didn't come from any kind of sound design that evolved to be better. It was bad design, without foresight and without much design at all. Historically, it won because it was first to be ready and the audience was impatient. And it stayed due to network effects.

The consistency guarantees, the ownership model, and the structure of UNIX filesystem objects are conceptually bad. Whatever you build on top of that will be a bad solution, because your foundation will be bad. The reason these things aren't replaced is tradition and backwards compatibility. People in the filesystem business have known for decades that the conceptual model of what they're doing isn't good, but the fundamental change required to do better is just too big and too incompatible with the application layer to ever happen.


> All this is just to say: it's more helpful to proliferate existing layers above the file system than to try to change or extend the semantics of the existing FS layer. Leave it alone, put it in the "low-level tools" drawer, and keep other tools on your bench within easy reach.

Yes! But it's easier said than done when one of these things is in the stdlib and the other isn't.


> one of these things is in the stdlib and the other isn't

Oh, it's much worse than that. One of these things is what the user sitting in front of their computer has a nice integrated UI to view and search… if a photo editing application starts storing my photos in a database that I can't easily copy a photo out of, I'll be rather annoyed. And each application having its own UI for this isn't the solution either, really.

[EDIT: there was some stuff here about shell extensions & co. It was completely beside the point. The problem is that the file system has become, and unquestionably is, the common level of interchange for a lot of things.] …didn't Plan 9 have a very interesting persistence concept that did away with the entire notion of "saving" something — very similar to editing a document in a web app nowadays, except locally?

Either way, I don't know jack shit about where this is going or where it should go. I'm a networking person; all I can tell you for sure is to use a good RPC or data distribution library instead of opening a TCP socket ;).
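
(Even the dated xmlrpc modules in the Python stdlib beat a hand-rolled wire protocol. A minimal sketch of that habit; the port number and the add() function are invented for illustration:)

    from xmlrpc.client import ServerProxy
    from xmlrpc.server import SimpleXMLRPCServer

    # An RPC layer instead of a hand-rolled TCP protocol.
    def serve():
        srv = SimpleXMLRPCServer(("localhost", 8000))
        srv.register_function(lambda a, b: a + b, "add")
        srv.serve_forever()

    # ...and from another process:
    def call():
        proxy = ServerProxy("http://localhost:8000")
        print(proxy.add(2, 3))  # -> 5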


> didn't Plan 9 have a very interesting persistence concept that did away with the entire notion of "saving" something — very similar to editing a document in a web app nowadays, except locally?

Though I find that to be an absolute mismatch when I'm opening a document for reference purposes only, meaning any edits I make are either accidental or only meant to be temporary (say, opening a DWG drawing of a plan and adding some auxiliary lines to take measurements). Automatically saving a safety copy to guard against program crashes is fine, but automatically overwriting the master file with my changes definitely isn't the right thing in that case…


> …didn't Plan 9 have a very interesting persistence concept ...

I think that was Oberon, which influenced the Plan 9 Acme editor.


It was; you can see it in an Oberon emulator.

https://schierlm.github.io/OberonEmulator/


Disagreed; the UNIX FS model is about as bad as you can get.

It "works" only because 99.99% of all programs don't try to poke into other files and directories where their fingers don't belong. That's the only reason things are not in a complete chaos.

We need DB-like features in FS-es not just yesterday, but 10 years ago.

Some of the most successful projects I've seen and participated in made heavy use of SQLite, which solves the FS deficiencies quite well. Though it does require a buy-in that's rarely there for most teams.
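
To make the SQLite point concrete: a crash-safe update on a plain FS is the write/fsync/rename dance that every application has to reimplement itself, while SQLite gives the same guarantee as a plain transaction. A minimal sketch (file and key names invented):

    import os
    import sqlite3
    import tempfile

    # Plain FS: crash-safe update via write + fsync + atomic rename.
    # (A fully durable version would also fsync the containing directory.)
    def atomic_write(path, data):
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
        try:
            os.write(fd, data)
            os.fsync(fd)
        finally:
            os.close(fd)
        os.replace(tmp, path)  # atomic rename on POSIX

    # SQLite: one transaction, durability handled by the library.
    con = sqlite3.connect("settings.db")
    con.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    with con:
        con.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", ("theme", "dark"))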

We don't need more abstractions on top of the existing stuff. We need new ways of interacting with it while hiding it away for good.



