Anyways, here's the awful code: https://github.com/adamf/limbotris
In the classes I took with Axel, he easily spent 20% of the time talking about Inferno. I never had my code in front of Dennis Ritchie, but from my perspective the most valuable part of a CS education at RIT was hearing that my code was terrible, just enough times to make me want to make it better later on, in my professional career, when it started to matter.
Plan 9, despite its crazy UI (the shell and editor have been ported to Linux, so some people must have liked it), brought a lot of new ideas that eventually found their way into other projects, such as Linux containers and Go (for obvious reasons).
One really good idea that died with Plan 9 was transparent distribution of CPUs across the network. You could - the same way you mount a network drive today - mount a remote CPU and run your local software on it.
MOSIX is the only system from that era that I know is still around. It had a fork by the name of OpenMosix for some time, but according to Wikipedia ( https://en.wikipedia.org/wiki/MOSIX#openMosix ): "On July 15, 2007, Bar decided to end the openMosix project effective March 1, 2008, claiming that 'the increasing power and availability of low cost multi-core processors is rapidly making single-system image (SSI) clustering less of a factor in computing'."
(I admire the downvote, but please realize this is not a question of one's opinion!)
Even SMP requires careful control if you want to get anything close to the actual performance of the underlying hardware, and the topology is once again very explicit.
Kerrighed was an SSI system that actually was commercialized, apparently unsuccessfully. Current (I think) proprietary systems in that sort of space are ScaleMP and Bproc (from Penguin Computing?). Dolphin had an SCI-based hardware solution for gluing together distributed systems, at least until recently. The Plan 9-ish Xcpu service was described as building on work with Bproc, but explicitly wasn't SSI.
Linux lacks a lot of core abstraction properties that would make containers elegant to implement under something like the Plan9 model, at least.
Cool project inspired partially by Linux containers: https://doc.9gridchan.org/guides/spawngrid
No, it's worse than that: It has too many of them, leading to a mess of special cases that you have to deal with. What happens when you have a socket in your file system, and you export it over NFS?
Lacking abstraction properties is fixable -- you can add them. But removing them, especially if they're widely used, is incredibly hard.
Why wouldn't a socket exported over NFS just work seamlessly?
NFS has no protocol-level special cases for forwarding operations on local sockets, handling setsockopt(), or the various socket ioctls -- which, mind you, are often machine-specific, where the data sent in an ioctl is ABI-dependent. I'm not even sure how you'd do that sort of thing, since NFS is a stateless protocol.
And then you would need to repeat the exercise for these special cases for all of the other special types of node, like /dev. What does it even mean to mmap the video memory of a video card on a remote machine?
And then you'd need to fix the assumptions of all the software that assumes local semantics ("the connection doesn't drop, and poll() is always accurate").
On top of that, you'd need to run on a treadmill to add support for new ioctls.
Do you really want to implement the server side of a protocol that can handle all of the complexity of all you can do on all file systems, with all ioctls, across all node types? How many implementations providing resources via this protocol do you think would exist?
> On top of that, you'd need to run on a treadmill to add support for new ioctls.
Absolutely, it'd be a lot of work. So it's a better idea to not implement many of these things and instead simply return an error.
It means you have issues around synchronization and performance, if you use it as anything other than a private read only mapping.
And some things are just impossible, like a shared memory ringbuffer. Which is exactly what you do with the memory you mmap from a video card: submit commands to the command ringbuffer.
> So it's a better idea to not implement many of these things and instead simply return an error.
And now you need to start writing multiple code paths in user code, testing which calls work and which don't, one of which will be broken due to lack of testing. And when you guess wrong at the needed operations, software often goes off the rails instead of failing gracefully. Failure modes like blindly retrying forever, or assuming the wrong state of the system and destroying data.
Too many complicated abstractions break the ability to do interesting things with a system. It's death by a thousand edge cases.
On plan 9, you have 9p.
That, and process creation/control/namespace management, are the only ways to do anything with the system. There are few edge cases. Implementing a complete, correct server is a matter of hours, not weeks.
Technically just as possible, only very slow... Performance is abstracted out by the VFS. You need to stay sane through other measures, like having your software configured right, etc.
> And now you need to start writing multiple code paths in user code
I don't think the number of paths is increased. Any software should handle calls that fail - if only by bailing out. That's acceptable for any operation that just can't complete due to failed assumptions - whether it's about file permissions or that the resource must be "performant" / not on an NFS share, etc.
Now what is the point? How's that different or better? They actually are much more into sharing resources over the network... which means fewer possible assumptions about availability/reliability/performance. I doubt they can make the shared ringbuffer work better.
How would you go about implementing the CPU cache coherency that allows you to do the cross machine compare and swap?
> I don't think the number of paths is increased. Any software should handle calls that fail - if only by bailing out.
If the software works fine without making a call, then you can just skip the extra work in the first place. Delete the call, and the checks around if the call fails. And if the call is important somehow, you need to find some workaround, or some alternative implementation, which is by definition never going to be very well tested.
> Now what is the point? How's that different or better? They actually are much more into sharing resources over the network... which means less possible assumptions about availability/reliability/performance. I doubt they can make the shared ringbuffer work better.
The tools to make a shared ringbuffer that depends on cache coherent operations simply aren't there -- it's not something you can write with those tools.
And that's the point: the tools needed simply don't work across the network. Instead of trying to patch broken abstractions, adding millions of lines of complexity to support things that aren't going to work anyways (and if they do work, they'd work poorly), pick a set of abstractions that work well everywhere, and skip the feature tests and guesswork.
Pick primitives that work everywhere, implement them uniformly, and stop special-casing broken or inappropriate tools.
And then, it's a day of work to implement a 9p server, and everything works with it. So I can serve git as a file based API, DNS as a file API, fonts as a file based API, doom resources as a file API, or even json hierarchies as a file API, and not worry about whether my tools will run into an edge case. I can export any resource this way, and not need special handling anywhere.
Plan 9 doesn't have VNC; it has 'mount' and 'bind', which shuffle around which `/dev/draw` your programs write to, and which `/dev/mouse` and `/dev/kbd` they read from.
Plan 9 doesn't have NAT; it has 'mount' and 'bind', which shuffle around which machine's network stack your programs use.
Plan 9 doesn't have SSH proxying that applications need to know about: It has sshnet, which is a file server that provides a network stack that looks just like any other network stack.
From parsimony comes flexibility. You're not dragging around a manacle of complexity.
build it into the protocol!
And so on...
> The tools to make a shared ringbuffer that depends on cache coherent operations simply aren't there -- it's not something you can write with those tools. And that's the point: The tools needed simply don't work across the network.
Ok. In theory, we just need to build access to the tool in the network protocol and have the network server execute the magic on the remote machine.
Of course, one needs a way to map e.g. a CAS operation to a network request. I don't think today's CPUs let us do that.
> Delete the call, and the checks around if the call fails.
FILE *f = fopen(filepath, "rb");
if (f == NULL)
    fatal("Failed to open file %s!\n", filepath);
> Primitives that work everywhere, implement them uniformly, and stop special casing broken or inappropriate tools.
I've never seriously looked at 9p, but the page you linked strongly suggests to me that it's more abstraction, if anything (your initial statement was that that's bad), and more vague as a consequence. More like HTTP, and I don't think of HTTP as a sort of universal solution - it's rather a sort of bandaid to glue things together with minimal introspection (HTTP verbs, status codes...). And the fact that it tries to be universal also means that it doesn't match some problems very well, and people will basically just sidestep HTTP there (I'm not a web person, but I've heard of major services that always return HTTP 200 and just use HTTP as a transport for their custom RPC mechanism or whatever).
> Plan 9 doesn't have VNC ... NAT ... SSH
Great. I get it. 9p is a basic transport method that gives some introspection for free if you can model your problem domain as an object hierarchy. But it's far from a free solution for any problem. It might save you some parsing in some cases, but it doesn't compress your VNC stream for example. Nor define the primitives of any problem domain that it just can't know about.
You don't have access to the interprocessor cache snooping in software. This is CPU-interconnect-internal shit, and you actually need access to the local memory bus for correctness. mmap in its full glory is only really worth having if you can share pages from the buffer cache.
And even if you did, you'd turn a ten-nanosecond operation into a ten-millisecond one, counting the network packets you'd send (a factor of a million in overhead). And without the assumption that all the peers are reliable and never fail, the abstraction still breaks. If you assume all your peers in a distributed system are reliable, you're wrong. Damned either way.
> I've never seriously looked at 9p, but the page you linked strongly suggests to me that it's more abstraction if anything
No, it's a single abstraction, instead of dozens that step on each other's toes.
> Great. I get it. 9p is a basic transport method that gives some introspection for free
What introspection? It's just namespaces and transparent passthrough. Unless you're talking about file names.
And also we'd need to extend all the cache coherency stuff over the network.
> And if you assume all your peers are reliable in a distributed system, you're wrong. Damned either way.
Technically you have reliability issues with all the components inside a single system just as well. They are just more reliable. But I'm sure I have seen hard disks fail, etc.
Ok, let me think about that abstraction stuff. Thanks.
And if you're actually using the knowledge that some files are local, you get bugs and assumptions creep in, and now your software stops working in the case where you're running on someone else's network.
It's about transparently providing the same interface to everyone, and making that interface simple enough that implementing it is easy enough that it's actually done, and the interface actually gets used.
Then, if you want to interpose, remap, analyze, manipulate, redirect, or sandbox it, you can do that without much trouble. The special cases are rare and can be reasoned about.
Reasoning about your system in full frees you to do a huge amount.
You revoke the permissions of the people who abuse those resources, and you don’t expose them to the outside world at large completely unprotected.
Last 9front commit was… 3 days ago: http://code.9front.org/hg/plan9front/
A manifestation of Cat-V.org Derangement Syndrome I wager.
"fully" distributed resources; memory, cpu, storage, what have you...
"LegoOS is a disseminated, distributed operating system designed for hardware resource disaggregation. It is an open-source project built by researchers from Purdue University. LegoOS splits traditional operating system functionalities into loosely-coupled monitors and runs them directly on disaggregated hardware devices."
That, in addition to the fixes it had (truly everything is a file) and the innovative features delivered (e.g. namespaces).
Can we revive them? Inferno's development seems to have slowed down to a halt.
edit: That said, I happily use some of Plan9port every day. There never was a better Acme editor (or, rather, Acme2k) on any other operating system. Amazing concept!
Just ask aux/icanhasvmx :)
In order to do these things 'cleanly' (with a truly unified view of the entire "system"), you need some sort of process migration infrastructure that even Plan9 and Inferno don't really give you, AFAICT. Projects like OpenMosix and OpenSSI attempted to provide this, but I'm not sure to what extent they were successful, given that they've all been abandoned and bitrotted. Today, one might try to do the same sort of thing while also taking advantage of the new kernel-based namespacing that Linux provides, which might make some things somewhat easier.
So it seems to work. But I'm no expert and I've never done it, so I might well be missing something.
You can have file sync with syncthing, or similar, again across platforms.
IMHO, having read the spec, I think the VM had a great design for the nineties but was not future-proof enough. It's like it was designed with some specific CPUs of the time in mind, and that biased the design. Also, I find it too tied to the language used in the canonical implementation (obviously C).
In contrast, I find the JVM (contemporary with Dis) more generic and easier to implement in languages other than C.
BTW, a few years ago I wrote an (incomplete) Dis VM in C# that I hope to finish^H^H^H^H^H^H rewrite some day...
Purgatorio 64-bit is a ways away, but is a goal.
It'll happen eventually, but the more eyes the better.
Standard communication protocol: a standard protocol, called Styx, is used to access all resources, both local and remote
This looks similar to Qubes Air roadmap: https://www.qubes-os.org/news/2018/01/22/qubes-air/
In Lisp everything is a symbol; in Plan 9 everything is a file, and all namespaces are mutable. These concepts are combined in the Interim OS experiment. The idea that everything is a file is very literal and very serious compared to, say, Unix.
It's worth noting that 9p, while a filesystem protocol, is closer in concept to a REST API than to something like, say, ext4.
In Plan9, the kernel could be thought of as a 9p connection router/multiplexer and all system resources and all servers must express themselves as 9p filesystems. The tooling 9p provides allows you to take trees of named objects (filesystems) and rearrange them however you please and place child processes under rules for modifying their own trees and how they can break scope, if at all.
Forking is more dynamic: rfork lets you give a child a specific subset of your process's namespace. So you could make a child that can only see specific files, executables, or resources, and make it so the child can't mount, unmount, etc.
 Interim OS: https://github.com/mntmn/interim
Some cool manuals:
Some Inferno related stuff:
- Limbo by Example: https://github.com/henesy/limbobyexample
- The Inferno Shell: http://debu.gs/entries/inferno-part-1-shell
- Try Inferno in the browser: http://tryinferno.rekka.io/
- Cat-v Inferno resources: http://doc.cat-v.org/inferno
- Experiments in Inferno/Limbo: https://github.com/caerwynj/inferno-lab/
- Inferno Programming with Limbo: https://web.archive.org/web/20160304092801/http://www.gemuse... (also on cat-v)
- Developing Limbo modules in C: https://powerman.name/doc/Inferno/c_module_en
- Simple FS example in Limbo: https://github.com/henesy/simplefs-limbo
- Inferno client for Redis: https://github.com/pete/iredis
- Inferno for Nintendo DS: https://bitbucket.org/mjl/inferno-ds/src/default/
- Inferno as an Android app: https://github.com/bhgv/Inferno-OS_Android
- Inferno replacing Android (hellaphone): https://bitbucket.org/floren/inferno/wiki/Home
- Porting Inferno to Raspberry Pi: https://github.com/yshurik/inferno-rpi
Also, according to my (admittedly rather basic) tests, the Dis VM performs far better than the JVM.
Dis VM instances are super light… you wouldn't really be needing to worry much about running a large number of them.
Point aside in many cases you can sandbox applications in crafted namespaces using spawn and flags like NODEVS.
The ev3dev project is partially inspired by everything is a file and I've used it in the past for educational purposes by making the business of controlling/using motors/sensors as trivial as read/write.
And if you get into the architectural aspects of it, and actually dig into how it's implemented deep down, you will see a lot of code that is essentially derived from Plan 9's and Inferno's code in many ways.