Rethinking the filesystem as global mutable state, the root of all evil (devever.net)
103 points by stargrave 4 days ago | 88 comments





Every piece of hardware on a computer is a piece of global mutable state. There are many good reasons to hide that fact behind an abstraction, but I can't help but think hiding global mutability is better handled at the application level than the OS level, because there are too many cases where the abstraction becomes extremely limiting.

As an example, most people would want to be able to import an image into a word processor regardless of where that image is located (local drive, network drive, floppy disk, etc.). To support that, most end user programs would want to be offered access to the entire filesystem. The moment two applications do this at once, you have all the shared mutable state problems we do now.


It doesn't have to be that way. Consider the following scenario:

1. Every file is owned by an application.

2. A file-store application is the custodian of files meant to be used by multiple applications.

3. An application that needs to edit a file must take ownership of the file for the duration of the editing.

4. An application that needs to read a file must borrow the file.

5. A file can be borrowed by multiple applications, but owned by only one.

6. Applications can provide the ability for shared ownership, but those would be specialized applications capable of handling and merging simultaneous changes.

This is a straw-man solution, and I am sure multiple problems will have to be solved before such a system can become a reality.
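To make the straw man concrete, here is a toy sketch of those ownership/borrowing rules in Python (all names hypothetical; a real system would enforce this in the OS, not a library):

```python
class FileBank:
    """Toy custodian: one owner (writer) per file, any number of borrowers (readers)."""
    def __init__(self):
        self.owner = {}      # path -> the one app allowed to write
        self.borrowers = {}  # path -> set of apps allowed to read

    def take_ownership(self, app, path):
        # Rule 5: a file can be owned by only one application at a time.
        if self.owner.get(path) not in (None, app):
            raise PermissionError(f"{path} already owned by {self.owner[path]}")
        self.owner[path] = app

    def release(self, app, path):
        if self.owner.get(path) == app:
            del self.owner[path]

    def borrow(self, app, path):
        # Rule 5: multiple simultaneous borrowers are fine.
        self.borrowers.setdefault(path, set()).add(app)

bank = FileBank()
bank.take_ownership("word-processor", "report.doc")  # rule 3: own to edit
bank.borrow("previewer", "report.doc")               # rule 4: borrow to read
bank.borrow("indexer", "report.doc")
try:
    bank.take_ownership("editor2", "report.doc")     # second owner is rejected
except PermissionError as e:
    print("denied:", e)
```

The interesting design question is what "release" looks like when the owning application crashes; the sketch ignores that entirely.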

> As an example, most people would want to be able to import an image into a word processor regardless of where that image is located (local drive, network drive, floppy disk, etc.). To support that, most end user programs would want to be offered access to the entire filesystem. The moment two applications do this at once, you have all the shared mutable state problems we do now.

I don't see why the custodian of photos can't be a photo management application. In fact, a filesystem is the lowest common denominator. It is possible to build higher level abstractions like PhotoStore, MusicStore, MovieStore, CodeStore etc. which accommodate and make use of the properties of individual data types to offer an enhanced experience.


Your example would fail to accommodate the most basic application of files: writing logs and reading them at the same time.

Who says logs have to be written to a file — at least as far as the user is concerned? Logs are a series of events — a LogStore will let you read events as they are appended. And even if you do want to store logs in “files”, the solution proposed here lets you borrow them from the owning application.

My question goes more like this: “why the hell do you want to re-approach the idea of a filesystem if the new approach can’t even accommodate something as simple as storing log files?”.

It may be time for something completely different. Just stop calling it a filesystem.


Again, why do you care about files? Logs have nothing to do with files. Files are just the interface the filesystem uses to expose the event stream that is a log, so you can use file-based tools to access them.

I am not calling it a filesystem. Let’s call it a data bank?


It already exists in the form of databases.

First that comes to mind is etcd.


Where do you plan to store the logs then?

On the disk, as usual. Files are, after all, an interface.

That's handled by point 6. You would be able to have shared ownership of the log files, but you wouldn't be able to just write to them willy-nilly. In this case, the specialized application would be trivial because it would just need to ensure that writes are append-only.
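A minimal sketch of that specialized append-only custodian (names invented, just to illustrate point 6):

```python
class AppendOnlyLog:
    """Shared-ownership custodian that only permits appends (point 6)."""
    def __init__(self):
        self._entries = []

    def append(self, app, line):
        self._entries.append((app, line))  # any co-owner may append

    def overwrite(self, app, index, line):
        # The custodian's whole job: refuse every write that isn't an append.
        raise PermissionError("log entries are immutable once written")

    def read(self):
        return list(self._entries)         # borrowers get a snapshot

log = AppendOnlyLog()
log.append("webserver", "GET /index.html 200")
log.append("webserver", "GET /favicon.ico 404")
print(len(log.read()))
```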

> Applications can provide the ability for shared ownership, but those would by specialized applications capable of handling and merging simultaneous changes.


Agreed.

Surely my logging application would "own" the read-write handle, while my tail application merely "borrowed" a handle for read-only.

How do those semantics differ from multi-writers which can all write in transactional blocks?

Point 2 just sounds like reinventing the filesystem, now with more bugs.

No, it is not reinventing the filesystem -- it's more like a file-vault or a file-bank. Anyone looking to operate on the files must either borrow (to read) or take ownership of (to write). The difference is that while a filesystem grants unlimited access to all files, the app in question is meant to be more sandboxed -- handling only the files it is asked to -- e.g. shared documents that can be edited by multiple apps. Each user (on a multi-user system) will have their own file-vault/file-bank and files must be explicitly transferred for shared access.

Finally, as I said, it is a straw-man proposal -- I haven't spent time working out all the kinks.


> No, it is not reinventing the filesystem -- it's more like a file-vault or a file-bank. Anyone looking to operate on the files must either borrow (to read) or take ownership of (to write).

So flock() and file permissions then?

There are improvements that could be made here -- app-level permissions in addition to user-level permissions for example. But it's still fundamentally a filesystem.


All of this can be implemented on top of a file system, of course. What do you think iPadOS does?

We are talking about what the user/application sees and is capable of accessing.


> We are talking about what the user/application sees and is capable of accessing.

But how is that different than a filesystem?

Suppose we add application-based ACLs to file permissions. Then the app does open("/path/to/file", O_RDONLY) as ever. If the app has permission to the file, it gets the new fd. If it doesn't, it gets EACCES as usual. Or the OS displays a dialog asking whether the app should have permanent or one-time access to that file, and then the call doesn't return until the user chooses one.

I don't see a fundamental change here. The application wouldn't necessarily even have to be modified.
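The ACL check described above can be simulated in a few lines (the table and names here are made up; a real implementation would live in the kernel's open() path):

```python
# Hypothetical app-level ACL table, consulted before the real open().
APP_ACL = {"/home/alice/notes.txt": {"editor"}}  # file -> apps allowed to open it

def acl_open(app, path):
    """Stand-in for open(2) with an extra app-identity check bolted on."""
    if app not in APP_ACL.get(path, set()):
        raise PermissionError(f"EACCES: {app} may not open {path}")
    return f"<fd for {path}>"  # stand-in for the real file descriptor

print(acl_open("editor", "/home/alice/notes.txt"))  # permitted app gets its fd
try:
    acl_open("malware", "/home/alice/notes.txt")    # unlisted app gets EACCES
except PermissionError as e:
    print(e)
```

As the comment notes, nothing about the application's side of the call changes; only the kernel's decision procedure does.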


By that logic, there is nothing wrong with having a flat memory address space. With access control, the OS can ensure that one application never accesses another’s memory area. Why do we need isolated virtual address spaces so that an application always believes it is the only application running?

> Why do we need isolated virtual address spaces so that application always believes it is the only application running?

Historically mostly because of swap, so the OS can move a page from memory to disk and then back to a different physical memory location without modifying the application's pointers. On large systems with 32-bit applications it was advantageous because the system may have had more memory than 32-bit pointers can address and then each application can have its own address space. ASLR nowadays.

But filesystems already have the equivalent abstraction. If you run out of space on /dev/sda you can add /dev/sdb, copy /home to it and then mount /dev/sdb1 /home and the application that reads /home/alice/file is blissfully unaware that anything has changed. Heck, half the time you're not even reading from the physical drive, the data is cached in memory and you're really reading it out of the page cache.


That's interesting. Some time ago I had a similar idea, but with the following differences:

3. Ownership cannot be transferred. A file remains in the same application throughout its lifecycle.

4. An application can give another application read-only access to a namespace containing one or more files.

6. An application can give another application read/write access to a namespace containing one or more files through a transaction mechanism.

Do you see any advantages or disadvantages of both approaches from your point of view?


The idea behind ownership transfer is to ensure that an application never sees another application’s namespace. Ownership transfer doesn’t really involve copying files as it does on iPadOS — it is still held at the same location on disk. The original application can just no longer see it as it no longer remains in its namespace. At the end of the operation, the second application may choose to retain or relinquish the file as needed.

I have photos on my computer, even though I don't use a photo management application. I have compressed files. I have files taken from other machines. I remove applications. I create my own file formats. Applications themselves are files.

Who owns all the unassigned stuff? The file system. Give it a different name and you change nothing.


You don't use a photo management application because the file-system exposes photos as files. It is like saying -- in Java parlance -- I am perfectly fine using Object for everything -- I don't need any other data type. In the hypothetical system I proposed, the photo store would be exposed via an application like Explorer/Finder -- just tailored for viewing photos.

Compressed files are containers for files -- they are nothing on their own.

> Applications themselves are files.

Sure -- they go into ApplicationStore -- which is tailored for, say, quick application discovery and launch.

> Who owns all the unassigned stuff?

What exactly is the unassigned stuff? Files taken from other machines? I don't see why they can't be put into an appropriate store based on type.

> I create my own file formats.

I assume you create your own file formats because you develop your own applications? In that case, your application will be the owner of the file in question.


What if I don't desire a gui application store or desire a different one or switch application stores or move a dir with executable files from one machine to another?

Is the application store the only way to start an app? If I bind a global hotkey to start an app how is that handled?

How about I have an addon for firefox that allows me to bind keys to operations which can include javascript which can itself start applications which can then write to files.

I use this, for example, to open links in mpv with a single line in my config file for the addon, tridactyl:

  bind V hint -W ! mpv

This means: show a hint on links and send the chosen link to mpv, wherein mpv will look at the link and, if it can read it, use youtube-dl to download it to a temp dir and display the video.

Who owns what there?

Do I have to access my photos via a singular photo manager? I can imagine images accessed in 17 different contexts by 30 different apps. Does each of them need permission to access each dir that contains images, or just generic permission to access images?

Does this mean that a singular permission would control both access to the browser's cached image of the ycombinator logo on this page and someone's nude selfies?

I just don't think you can cleanly map apps -> files beyond the trivial cases without making something inflexible that sucks to use.


> What if I don't desire a gui application store or desire a different one or switch application stores or move a dir with executable files from one machine to another?

What does any of this have to do with a GUI? And for transporting, extract the data into a portable bundle, or take the entire store.

> Is the application store the only way to start at app? If I bind a global hotkey to start an app how is that handled?

Again, what does any of this have to do with how the applications are stored? Can you see the files that make up an iOS app? You can still start them, can’t you? A macOS app is a folder called Something.app, the actual binary lies somewhere inside. Do you typically need to poke inside the .app folder?

> How about I have an addon for firefox that allows me to bind keys to operations which can include javascript which can itself start applications which can then write to files.

As far as the computer is concerned, you are merely starting a process. Again, what does that have to do with how applications are stored? They can be triggered however you like, wherever they are stored.

> Do I have to access my photos via a a singular photo manager? I can imagine images accessed in 17 different contexts by 30 different apps does each of them need permission to access each dir which contains images or just generically to access images

This is already being handled on iOS and Android. The Photos app is merely the default interface for the PhotoStore. You can access your photos from any app.

> Does this mean that a singular permission would control both access to the browsers cached image of the ycombinator logo on this page and some ones nude selfies?

Namespaces inside individual stores, or separate stores.


1. At present I can take a text file, even one not on the PATH, put a shebang on it, mark it executable, and execute it, because the underlying medium is a user-visible hierarchical data store that I interact with directly. It's not clear how the app store application would be associated with it or even know about it, or how, say, a terminal would borrow the application that it didn't know about in order to run it.

2. An actual filesystem allows you to run things that aren't in an app store bundle

3. Which app owns the files when an app starts an addon, which starts a process, which runs an app, which accesses a file? Does the last app in the chain own the file? This is challenging because plenty of apps could do things, based on arguments passed in, that involve modifying the filesystem.

4. The way iOS and Android handle two apps accessing the same file is only acceptable if someone has envisioned, to some degree, the way you want this to happen on both ends. A file picker works on any type of file, and an OS that can't have a file picker seems objectively worse, whereas adding an image picker would be an upgrade. I enjoy using calibre to manage how ebooks are stored and beets for music, but neither is bounded by an underlying system designed by others, and neither locks said files into said structure or limits access according to it. It's trivial to call out to calibre, including via the CLI, to get the full path to a book, for example, and do something with it.

It seems fairly clear to me that there are several layers of filesystem access.

Applications that should never have access to your filesystem because they are malicious. This is avoided by installing only from trusted sources; keeping malware off the machine in the first place is the best possible line of defense. This also makes it possible to get a list of applications that ought to be revoked and communicate this to users.

Running code that should ordinarily have no, or very tightly controlled, access to the filesystem, like the JavaScript on this page.

Apps that run as the user, on their behalf, and access the filesystem.

You appear to want to pile a layer on top of the last. This appears to only work for the simpler cases and I'm not even clear what the benefits are supposed to be.


> It's not clear how the app store application would be associated with it or even know about it or how say a terminal would borrow the application that it didn't know about to run it.

Where does the file come from? Suppose you create a file in your terminal application: then it lives inside the terminal and you can use your terminal to run it.

> 2. An actual filesystem allows you to run things that aren't in an app store bundle

Neither a global namespace (file system) nor an appstore are required to execute a program.

> Which app owns the files when an app starts an addon which starts a process which runs an app which accesses a file.

This should be at the discretion of the application developer. This is also the way browsers already work with sandboxed addons.

> This appears to only work for the simpler cases and I'm not even clear what the benefits are supposed to be.

If you don't put everything in the same global namespace, you get more security, cohesion and compatibility. That's why we use VMs, containers, sandboxes, and various user accounts instead of doing everything in a single filesystem using the root user.


I think constraining many apps' access to the file system is a fine idea. I just think that mapping files to applications very quickly becomes farcical, as the abstraction just clearly doesn't match.

It's not even optimally secure. Why, for example, would your image editor need access to all your image files instead of just the ones passed in via a secure system dialog?

In that instance the dialog would be the checkpoint, not some weird file system borrow checker.


Instead of "ownership transfer" you can also use locks. Read and write locks are already supported on many filesystems.
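For example, POSIX advisory locks already provide shared (read) and exclusive (write) modes. A quick demonstration on a Unix system, using two open file descriptions on the same file to stand in for two processes:

```python
import fcntl
import tempfile

denied = False
with tempfile.NamedTemporaryFile() as f:
    with open(f.name, "rb") as f2:       # second open file description
        fcntl.flock(f, fcntl.LOCK_SH)    # reader one takes a shared lock
        fcntl.flock(f2, fcntl.LOCK_SH)   # reader two: shared locks coexist
        try:
            # A writer needs an exclusive lock; the non-blocking attempt
            # fails while another description still holds a shared lock.
            fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except OSError:
            denied = True
        fcntl.flock(f2, fcntl.LOCK_UN)
        fcntl.flock(f, fcntl.LOCK_UN)
print("exclusive lock denied while shared lock held:", denied)
```

Note these locks are advisory: they only constrain processes that choose to check them, which is exactly why they are not a security boundary.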

Well, you can also use global variables safely with locks.

Sounds like rust’s borrow checker but for files.

That is where I got the idea. :-)

How much is that different from how it works on iPadOS?

It is meant to be more powerful and flexible than what iPad OS does, but somewhat similar.

basically the windows document folder

Except, you won’t see files, just photos, movies, documents, etc.

> Every piece of hardware on a computer is a piece of global mutable state.

It's a tenuous analogy. DRAM, disks, caches...effectively the entire memory hierarchy is implemented as message-passing. Flip-flops can be, well, flipped, but those are super low-level. Almost all the rest of it requires you to send messages. It's kind of ironic that all those messages basically orchestrate the illusion of global mutable state.


No: communication is implemented as message-passing (though that is largely irrelevant), but ultimately the end result is changing physical state.

> It's kind of ironic that all those messages basically orchestrate the illusion of global mutable state

Presumably because for certain things it was easier to reason about when things were mutable...

And now we're taking this facade of mutable state and trying to add a veneer over it to make it seem immutable.

It's circular :)


I think this is a really good point. QNX aligned much closer with this architecture as far as I can tell.

Doesn't POSIX already let you lock files? I don't see any reason to make it mandatory though, it not as if locking gives you a security boundary.

Sounds like QNX, which we still haven't managed to get back to.

This was first explained by the capability security community. Plan 9's private namespaces are an approach to capability-secure file systems, with the default being the empty namespace. I'm surprised the article didn't mention Plan 9, actually, since it discusses capabilities.

I just realised it's 2019 and capabilities are still misunderstood, and the ACL-capability-equivalency myth continues to result in poor solutions to security problems.

For anybody who is curious, the general problem here is described in two great papers as "the confused deputy" [1] and "designation without authority" [2].

Roughly put, systems built with ACLs as the primitive mechanism for authorization can never produce practically secure systems.

[1] http://zoo.cs.yale.edu/classes/cs422/2010/bib/hardy88confuse...

[2] http://srl.cs.jhu.edu/pubs/SRL2003-02.pdf


I fear that it's because we don't know how to make globes. [0]

[0] https://corbinsimpson.com/words/globe.html


Reading that felt surprisingly familiar! I pottered about for 2 years trying to build a GUI for a Globe-based world and gave up.

Plan 9 is mentioned in the article that this one is a follow-on to.

Plan 9 was mentioned but dismissed as "bizarre and unwieldy" (FWIW, I have no opinion on this). Based on what I read, the author is probably not aware of modern capability-based security research and would benefit from exploring the space further as he appears to be re-discovering some of the concepts but is missing a clear understanding of the broader problems.

I've thought for years that 'file systems' are an abomination. We use a cobbled-together schema of parent directory, filename, some random date/time stamps and maybe a three-letter extension. Why? Because we inherited that from the DOS days.

Why not a collection of immutable UUID-labelled resources with an arbitrary schema of attributes? Like a relational database or some such.

Overkill? Doesn't every major app have to do this itself, in some container (document/mail folder/image definition/contact list and on and on)?

Designers of operating systems have made no effort to capture this kind of service and provide it as a fundamental OS feature. And OSs are exactly where this belongs - where you can carefully get it right and everybody writing apps can depend on it.
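As a sketch of what that could look like, here is a resource store with UUID labels and arbitrary per-resource attributes on top of sqlite3 (the schema and every name below are invented for illustration):

```python
import sqlite3
import uuid

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE resource  (id TEXT PRIMARY KEY, content BLOB);
    CREATE TABLE attribute (resource_id TEXT, name TEXT, value TEXT);
    CREATE INDEX attr_idx ON attribute (name, value);
""")

def store(content, **attrs):
    rid = str(uuid.uuid4())  # a UUID label, not a path in a hierarchy
    db.execute("INSERT INTO resource VALUES (?, ?)", (rid, content))
    for name, value in attrs.items():
        db.execute("INSERT INTO attribute VALUES (?, ?, ?)", (rid, name, value))
    return rid

# Attributes are free-form: no fixed schema of dir/name/extension.
rid = store(b"...jpeg bytes...", kind="photo", taken="2019-06-01", camera="X100")
found = db.execute("""SELECT resource_id FROM attribute
                      WHERE name='kind' AND value='photo'""").fetchall()
print(found[0][0] == rid)  # query by attribute, no directories involved
```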


I've had very similar ideas, and think this is really critical missing feature that's holding back a lot of progress.

IMHO, what's needed is something like a document database (in that it allows arbitrary schema) but even more importantly, it needs to have some system-level indexing. Right now, applications that deal with large amounts of small pieces of data each roll their own, e.g., embedded database, because (1) most filesystems aren't capable of adequate performance with very large numbers of small objects and (2) you can't maintain an app-independent index anyway, so... These applications then can't interact with each other without some app-specific API, which is often costly to develop against (at least, compared with a common filesystem API) and with only the capabilities that the app developer sees fit to give you. Which they often have very little incentive to do--their real incentives are usually to keep things proprietary and customers captive.

The absence of these two capabilities makes "the unix way" where "everything is a file" an impossible data model for these types of applications.

IMHO, the right data model is probably some combination of user (UUIDs or strings doesn't matter) and content-hash indexing with versioning and conflict resolution similar to Git (and CouchDB).
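A toy version of that content-hash-plus-versioning model (illustrative only; real Git also tracks trees, commits, and merges):

```python
import hashlib

class ContentStore:
    """Content-addressed blobs plus named, versioned pointers (Git-like)."""
    def __init__(self):
        self.blobs = {}  # sha256 hex -> bytes (immutable once stored)
        self.refs = {}   # name -> list of hashes (the version history)

    def put(self, name, data):
        h = hashlib.sha256(data).hexdigest()
        self.blobs[h] = data                # identical content dedupes itself
        self.refs.setdefault(name, []).append(h)
        return h

    def get(self, name, version=-1):
        # Default to the newest version; any older one stays addressable.
        return self.blobs[self.refs[name][version]]

s = ContentStore()
s.put("notes", b"draft one")
s.put("notes", b"draft two")
print(s.get("notes").decode(), "/", s.get("notes", 0).decode())
```

Conflict resolution (the CouchDB part) would hang off the refs list, e.g. by allowing a ref to point at multiple divergent heads; the sketch leaves that out.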


> Designers of operating systems have made no effort to capture this kind of service and provide it as a fundamental OS feature.

Actually they've made several. See BeOS and WinFS, for starters, though the idea is even older than that.


> Doesn't every major app have to do this itself, in some container (document/mail folder/image definition/contact list and on and on)?

It does, but it's also something that can be provided by a library as easily as the OS. So e.g. Firefox uses sqlite, but it can use a portable library for that, and then it isn't different on Windows than Linux or Mac.

And then the application gets to choose something with an appropriate level of complexity. Sometimes all you need is Maildir or xattr(7) or JSON, sometimes you need a full on SQL database. One size fits all rarely does in practice.


Except, the results are opaque. I can't operate on the data without using their tools, which might not be what I want.

Imagine if the OS let you browse all the app data, see all the relations (that you were authorized to see), and write tools to operate over it. That would be a most Open system, compared to everything we have which is essentially walled off to us.


It's not really opaque as much as it is application-specific. But then how do you fix that? If you give applications the ability to tag things then one mail program uses a tag called "unread mail" and another uses a tag called "message read" with the opposite value and another uses a tag called "mail flags" with a bitmask where one of the bits is whether the message has been read or not etc. Smells like Windows registry.

Either you somehow enforce a high degree of uniformity, which implies a pretty serious lack of flexibility, or everybody gets to make their own decisions and then everybody makes different ones. And the second one seems better as long as the individualized thing they're doing is sufficiently well documented.


The alternative, right now, is nobody can do anything at all like this. There's not even the opportunity to create conventions in apps.

Opening up application data to OS tools, opens up a whole new world of opportunities (and hazards) for app developers.


IBM's i Series, the modern heir to OS/400, doesn't have a "filesystem" in the traditional sense (but tons of tools to emulate and approximate it). Everything ends up in DB2.

I've spent time crawling around the SCSI/SAS/storage of systems like that, and wrote dirty hacks for end-user clients to cope with incompatibilities. E.g. multiple versions with the same filename are totally fine: filenames needn't be, and aren't, unique in such systems; the name is just another column in the DB. How do you tell a Windows FTP client which of the 20 different "report.txt" files it should download? (FileZilla actually has 'VMS' options for this; it's more common than you think.)

If all you ever deal with is relatively simple, well-defined, CRUD-type applications on entirely abstract systems in some remote infrastructure, it's easy to think that's how all development should look. It doesn't and it shouldn't.


By attributes. Which one did you want? The latest? The one created by app X? The one you printed yesterday?

The fact is, you can do these queries if you allow flexible attributes. With a fixed file system (almost all file systems) you can't ask. In practice, you've already overwritten the one you want, or one app conflicted with another, or you just can't do what you want (have two versions of an app installed at the same time).


Interesting, and should definitely be read with the companion https://www.devever.net/~hl/objectworld

Given the same title, I would write a rather different article. So many of our traditional filesystem problems center around concurrency: what kind of read-after-write or durability semantics are guaranteed. A lot of effort goes into this, which is necessary for databases to work on top of a general-purpose filesystem on top of a disintermediated storage system that may be a disk or SSD with a variety of block sizes. But for most operations it's unnecessary overhead.

Many games ship with a "packfile", which is a pseudo-filesystem that appears as a giant blob to the OS. Usually it's faster to seek individual small items out of the blob than if they were separate files.
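The packfile idea is simple to sketch: concatenate the blobs and keep an index of offsets, so reading an item is one slice/seek instead of a per-file open(). A minimal illustration (names and data invented):

```python
import io

def write_packfile(items):
    """Concatenate named blobs; the index maps name -> (offset, length)."""
    buf, index = io.BytesIO(), {}
    for name, data in items.items():
        index[name] = (buf.tell(), len(data))
        buf.write(data)
    return buf.getvalue(), index

def read_item(blob, index, name):
    off, length = index[name]  # one lookup and one seek, no open() syscall
    return blob[off:off + length]

blob, index = write_packfile({"menu.png": b"\x89PNG...", "level1.map": b"MAP1"})
print(read_item(blob, index, "level1.map"))
```

In a real engine the index itself is stored in the blob and the reads are mmap'd seeks rather than slices, but the layout is the same.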

Further to that is the security problem; we've moved from "apps are all trusted, but you need to watch the files closely between multiple users" to "there's only one user, but you can't trust the apps".

Note how the cloud has avoided both of these by moving storage to three different areas with different semantics. Apps can speak to "blob storage" (S3) which has a transactionality/security granularity of one blob. Or they can speak to a database (which has intelligence of its own), or separate raw block storage if neither of those suits.

What if we moved from "everything is a file" to "everything is a URL"? Possibly adding a system-default packfile mechanism. So an app would be allowed access to everything that it ships with, but nothing outside except what it could request as URLs with various sorts of security mechanism.


> What if we moved from "everything is a file" to "everything is a URL"?

Oh, Hello RDF.[0] (Granted, in RDF everything is a URI but that's a minor detail here.)

At one point in my life I spent two years trying to work with RDF based data store modeling. Shoehorning everything into the subject-predicate-object worldview is probably beautiful from a purely mathematical point of view, but utter insanity in the real world.

Now, s-p-o model does open up a world of very interesting graph-theory approaches, especially if you're trying to build an inference engine. But at least in my experience it's not a realistic way to build general applications.
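For anyone who hasn't worked with it, the s-p-o model reduces everything to triples; a toy illustration (all identifiers invented):

```python
# A tiny subject-predicate-object store: every fact is one triple.
triples = [
    ("photo:42", "rdf:type",           "schema:Photograph"),
    ("photo:42", "schema:dateCreated", "2019-06-01"),
    ("photo:42", "schema:creator",     "person:alice"),
]

def query(s=None, p=None, o=None):
    """Match triples against a pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(query(p="schema:creator"))  # who made what
```

Elegant for inference, but as the comment says, flattening every application's data model into this shape gets painful fast.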

And if you thought XML as a data interface format was bad enough...

0: https://en.wikipedia.org/wiki/Resource_Description_Framework


> What if we moved from "everything is a file" to "everything is a URL"?

Do you mean "everything not on my machine is a URL"? We're getting somewhat close to that already.

Or do you mean "everything on my machine is a URL"? Then I don't see how it's different from "everything is a file with a full path", except that someone else is now responsible for the security mechanism.

"Everything is a URL" would give you a unified view of on-machine and off-machine resources...


> Many games ship with a "packfile", which is a pseudo-filesystem that appears as a giant blob to the OS.

Games get to do this because they don't care about interoperability. The packfile is intended to be a closed environment accessible only to the game code itself, with no need for anything else to care about it.

This could be interpreted as a particular case of "can't trust the apps" - a game has no need or desire to trust anything else.


Interestingly enough, that seems close to how Alan Kay talks about objects behaving... only they know how to open and read their own data, and sharing would be done via the object itself through its ‘interface’. I guess in this case the “object” is a whole game, but interesting nonetheless...

> What if we moved from "everything is a file" to "everything is a URL"?

Not sure what you're accomplishing exactly. Something would have to serve said URL, and you are moving your security problems into it. Mostly you can accomplish many of the same things with file security, with some difficulty.


As anyone who has put together a chroot knows, it isn't as simple as just preventing a process from accessing the filesystem. Most programs need to access a filesystem, just so they can load libc and the various other libraries it might need.

Do they now? Usually the linker, known on Linux as ld.so, does all the loading (even when you use dlopen).

Lazy loading can thunk through the kernel so that an ld.so service loads things into the process; preferably, strict (eager) loading should be used.


Indeed. But ld.so runs in the context of the process that is starting up, so if it doesn't have access to anything, it can't.

It doesn't actually have to, but you'd need a fun bit of stub code in place of unloaded function pointers etc.

Not a useful distinction, since it needs to be visible inside the chroot either way. Not only that but libc via getpwnam() and friends needs to access PAM and thence all the libpam modules.

That also depends. I have a chroot set up here to run sftp-server for incoming ssh connections, and it has 55 files in it, 40 of which appear to be libraries - none of them having anything to do with PAM.

Author here. This is the case with *nix but doesn't need to be.

As far as I'm aware, it's quite common in microkernel-based OSes to implement even ELF loading, dynamic linking, etc. inside system libraries loaded inside the process creating the new process. So the parent process basically constructs the child process, which means you could easily design an OS such that the parent needs access to the library files, but the child doesn't.


I think the idea that the program can load files as code is insane in itself. There shouldn't be such a mechanism in an ideal world. The operating system would decide what you can actually load and load it for you.

How would you prevent that? If programs can load files into memory and execute data loaded into memory as code, they can load files as code. The former is necessary for obvious reasons, the latter for JIT.

It's also pointless. If a program can cause damage by loading harmful code it can also cause damage directly.

Restricting what programs can do is a great way to prevent experimentation and hinder progress.


You can prevent that by using the NX or XD bit. It's a CPU feature, and I believe support was added over 15 years ago in most popular OSs. Here's the commit for Linux https://git.kernel.org/pub/scm/linux/kernel/git/history/hist...

> It's also pointless. If a program can cause damage by loading harmful code it can also cause damage directly.

It is not pointless, but it is also not perfect. That's why we have defense in depth. Where instead of having one perfect moat to protect the castle, you also have alligators and witches that turn people into frogs. :P


The OS can prevent it, but can it do so without making JIT impossible?

Well, if you can’t make pages executable, you can “just in time” it by interpreting, after writing it to an optimized interpreter format, I suppose (but it will be much slower)... as an example, see WKWebView vs UIWebView (from iOS).
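As a toy illustration of that fallback (all names here are invented): rather than emitting machine code into a writable+executable page, you "compile" to a compact instruction list and run it through an interpreter loop. No executable memory is required, at the cost of speed.

```python
# Sketch of a "no executable pages" JIT fallback: compile an RPN
# expression to a compact (opcode, operand) list, then interpret it.
def compile_expr(tokens):
    """'Compile' RPN tokens to an internal instruction format."""
    prog = []
    for t in tokens:
        if t in ("+", "*"):
            prog.append(("op", t))
        else:
            prog.append(("push", int(t)))
    return prog

def interpret(prog):
    """Dispatch loop standing in for real machine-code execution."""
    stack = []
    for op, arg in prog:
        if op == "push":
            stack.append(arg)
        elif op == "op":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if arg == "+" else a * b)
    return stack.pop()

print(interpret(compile_expr(["2", "3", "+", "4", "*"])))  # (2+3)*4 = 20
```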

You can do that but slow JIT kind of misses the point.

>I think the idea that the program can load files as code is insane in itself. There shouldn't be such a mechanism in an ideal world.

What do you mean by loading files "as code"? Do you mean setting the execute bit on memory pages? You need the appropriate permissions to do that, and a locked-down system-wide policy can prevent programs from doing it as well.


One thing missing from this rethinking, as far as I can tell, is names.

You can't move pointers between different processes in general, because each process has its own "memory address namespace". But an absolute path generated by one program does make sense to another program.
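To make the contrast concrete, here's a small Python sketch (the file contents are made up): an absolute path minted by one process can be dereferenced by a completely separate process, which is exactly what a raw pointer cannot do.

```python
import os
import subprocess
import sys
import tempfile

# An absolute path is a name that stays meaningful across process
# boundaries, unlike a pointer, which only makes sense inside one
# address space.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("hello from the parent")

# A completely separate process can dereference the same name.
out = subprocess.run(
    [sys.executable, "-c",
     "import sys; print(open(sys.argv[1]).read())", path],
    capture_output=True, text=True,
).stdout.strip()

print(out)  # the child read the file through the shared name
os.unlink(path)
```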


That's true. I mainly cover this point - filesystems as a means of connecting different applications together - in another article: https://www.devever.net/~hl/nexuses

This is a fascinating idea, and totally obvious in retrospect :-)

I've noticed features that work towards cutting off access to the global filesystem. Containers are mentioned in the article, but there are other things like systemd giving easy access to per-service private temp files, blocking access to /home, whitelisting the paths that are allowed at all, etc.
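For reference, a sketch of what that looks like in a unit file. The directives are real systemd options, but the service name and paths are invented for illustration:

```ini
# Hypothetical service showing systemd's filesystem-narrowing knobs.
[Service]
ExecStart=/usr/bin/example-daemon
PrivateTmp=true                  ; service gets its own private /tmp and /var/tmp
ProtectHome=true                 ; /home, /root and /run/user appear empty
ProtectSystem=strict             ; the rest of the filesystem is read-only...
ReadWritePaths=/var/lib/example  ; ...except explicitly whitelisted paths
```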

The problem really is that this global mutable state has existed for 50+ years, so lots and lots of things now rely on it: dynamic loading of libraries (which could be objects separate from files), configuration (which doesn't need to be stored in files), UNIX sockets that are addressed by file name, etc.
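The UNIX-socket case illustrates just how deeply the filesystem acts as a global namespace: the rendezvous point is literally a directory entry. A minimal Python sketch (socket path invented):

```python
import os
import socket
import tempfile

# A UNIX domain socket rendezvous happens through the filesystem
# namespace: the server binds to a path, and any process that can
# name that path (and has permission) can connect to it.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(path)          # the socket now exists as a filesystem entry
server.listen(1)

client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(path)       # connected purely by file name
conn, _ = server.accept()

client.sendall(b"ping")
msg = conn.recv(4)
print(os.path.exists(path), msg)  # True b'ping'

for s in (client, conn, server):
    s.close()
os.unlink(path)
```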

And not only have many programs started to rely on files, but many have relied on them being mutable. Webservers can be told to re-read their config from a running process, things like that.

So, is there any way forward for UNIX-based systems that don't break the world?

Edit: another interesting observation is that iOS has already done away with the global file system, at least from the perspective of the apps.


WebAssembly is a good step towards eliminating the globally shared FS. Dynamically linked libraries are explicitly provided with only an index that links to the library location. https://webassembly.org/docs/dynamic-linking/

WASI https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webas... will provide access to files handed to the program by the runtime, rather than the other way around, as in conventional operating systems where the program issues a syscall to ask permission for a file.


This reminded me of this [1] talk from 32c3, which covers the same issues and presents an overview of some of the approaches people have come up with. Somewhat BSD-centric, but I found it interesting back then.

[1]: https://media.ccc.de/v/32c3-7231-cloudabi


One thing that has always astounded me is the fact that the following script can be executed without any special permission (apart from execution, read and write):

    rm -rf ~
A key to a more modern filesystem would be a much more granular permission system than "rwx" for a certain group of users.

What permissions do you think that command should require? You could add a separate "delete" capability, but the fact that you have permission to write to the file already means that you can delete or overwrite its contents, which is effectively the same from the user's point of view. (Granted, the fact that you can delete a read-only file if you have write permission on the directory is a bit odd, but that situation doesn't come up often.)
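That parenthetical is easy to demonstrate: whether you can unlink a file is decided by write permission on the containing directory, not on the file itself. A quick Python check (run as a normal user; paths are temporary):

```python
import os
import stat
import tempfile

# Deletion is governed by write permission on the *directory*,
# not by write permission on the file being deleted.
d = tempfile.mkdtemp()
path = os.path.join(d, "readonly.txt")
with open(path, "w") as f:
    f.write("data")
os.chmod(path, stat.S_IRUSR)   # file itself is now read-only (r--------)

os.remove(path)                # succeeds anyway: the directory is writable
print(os.path.exists(path))    # False
os.rmdir(d)
```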

I don't think more granular permission can solve this problem. What does solve it, however, is read-only copy-on-write snapshots. No matter how files are renamed, deleted, or modified, you can always recover the original version from the snapshot.


This is a very important idea, but it will take a long time to become practical for typical system design environments.

Best get started.


Isn’t this sort of what sandboxing iOS apps already feels like?

> root of all evil

I see what you did there.



