Does Plan 9 meet your requirements for a "true multi-tenant OS"? If not, I'd be interested in hearing your ideas about how such a system might be implemented and how it would look to end users.



I haven't looked deeply enough into Plan 9, but from what I have seen it's a step in the right direction.

A major weakness of all existing OSes in this area is the network subsystem. The outdated "privileged ports" restriction means that most services must run as, or at least be started by, root, making the OS effectively single-tenant by default. Simply removing that legacy cruft would improve the situation a lot. There's also no way to assign IPs or network interfaces to users. Network interfaces should have uid/gid and permissions like files, and a user should have a default interface. (Sharing would of course be possible.) Firewall rules (e.g. iptables) should also be per-user as well as global.
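To make the privileged-ports point concrete, here's a minimal Go sketch (the port numbers are arbitrary): on a conventional Unix system the first listen fails with "permission denied" for a normal user unless the process is root or has been granted something like CAP_NET_BIND_SERVICE, while the second succeeds for anyone.

    package main

    // Minimal sketch of the "privileged ports" restriction: an unprivileged
    // process cannot bind a port below 1024, which is why ordinary services
    // end up being run or started by root. Port numbers are arbitrary.
    import (
        "fmt"
        "net"
    )

    func main() {
        if _, err := net.Listen("tcp", ":80"); err != nil {
            fmt.Println("privileged port:", err) // expected to fail for non-root
        }
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            fmt.Println("unprivileged port:", err)
            return
        }
        defer ln.Close()
        fmt.Println("bound as a normal user:", ln.Addr())
    }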

Another major weakness is libraries. Shared libraries/DLLs should be resolved either per-user or, better yet, via cryptographically secure content-addressable lookup of binary objects. This would allow the OS to cache symbols globally but in a secure way. The whole binary/DLL paradigm is outdated and needs a modern rethink.
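As a rough illustration of what content-addressable lookup could mean (the cache directory and example library path below are hypothetical), the "name" of a binary object becomes the hash of its contents, so a global cache can be shared across users without a mutable search path to hijack:

    package main

    // Rough sketch of content-addressable lookup for binary objects: the
    // identity of a library is the hash of its contents rather than a file
    // name. The cache directory and example library path are hypothetical.
    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    func contentAddress(path string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return "", err
        }
        return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
        lib := "/usr/lib/x86_64-linux-gnu/libc.so.6" // any file will do
        addr, err := contentAddress(lib)
        if err != nil {
            fmt.Println(err)
            return
        }
        // A loader would resolve the object by this key in a shared,
        // immutable cache rather than by name and search path.
        fmt.Println("lookup key:", filepath.Join("/cache/objects", addr))
    }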

Software installation is broken. There should be no such thing, except perhaps in the case of hardware or system drivers. The idea of "installing" software beyond just unpacking an archive needs to die, period. Apple's .app bundle system is pretty close to the right thing.
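For what it's worth, "installation as nothing more than unpacking" fits in a few lines; the archive name and destination below are hypothetical, and a real bundle format would need more care (permissions, symlinks, path sanitization):

    package main

    // Sketch of install-as-unpack: an app arrives as one archive and is
    // extracted into a per-user directory, bundle-style, touching no
    // system-wide state. Archive name and destination are hypothetical.
    import (
        "archive/tar"
        "compress/gzip"
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    func unpack(archive, destDir string) error {
        f, err := os.Open(archive)
        if err != nil {
            return err
        }
        defer f.Close()
        gz, err := gzip.NewReader(f)
        if err != nil {
            return err
        }
        tr := tar.NewReader(gz)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                return nil
            }
            if err != nil {
                return err
            }
            target := filepath.Join(destDir, filepath.Clean(hdr.Name))
            switch hdr.Typeflag {
            case tar.TypeDir:
                if err := os.MkdirAll(target, 0o755); err != nil {
                    return err
                }
            case tar.TypeReg:
                if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
                    return err
                }
                out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
                if err != nil {
                    return err
                }
                if _, err := io.Copy(out, tr); err != nil {
                    out.Close()
                    return err
                }
                out.Close()
            }
        }
    }

    func main() {
        home, _ := os.UserHomeDir()
        if err := unpack("Example.app.tar.gz", filepath.Join(home, "Applications")); err != nil {
            fmt.Println(err)
        }
    }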

I'm not knowledgeable enough about Plan 9 to know whether it addresses these issues.

Containerization sort of accomplishes those things, but in a stupid ham-fisted way that involves a ton of duplication and resource wastage. Virtualization is even more wasteful and ham-fisted.


To the best of my knowledge, Plan 9 eschews dynamic linking entirely and does everything statically.

As for package management... the entire model of Plan 9 is such that most conventional things about Unix package management that we take for granted are simply irrelevant. You don't typically deal with things at the package level, but at the file server level, which is inherently versioned, archivable, and introspectable through simple tools. You can do things like swap in a library from the file system cache just to see whether a program runs against it, then swap it back, all trivially.

There have been some bolted-on approaches to package management more recently [1] [2]. Conceptually, they're no more advanced than Slackware's shell script-based pkgtools, largely because they don't need to be.

A Plan 9-ish approach to package management would probably involve something like mounting a networked file system to a local share and then maintaining a replica on it. Actual management could then be done through simple mkfiles. A layer on top of that which precompiles and simply union mounts the contents of an archive is certainly possible, too. It's still pretty similar to ports, though, but with fewer headaches.

Ironically, your ideal form of package management is pretty similar to Slax's use of union mounting compressed file system archives, or even Slackware's tarballs that hold directory contents which are unpacked to install. The Linux community doesn't seem to want that. Dynamic linking complicates things.

[1] http://man2.aiju.de/1/pkg
[2] http://www.9atom.org/magic/man2html/1/contrib



