Inferno Operating System (wikipedia.org)
188 points by giancarlostoro on July 20, 2019 | 97 comments



Back when I was in university we had a small course on Limbo/Inferno because of a connection my university (RIT) had to Bell Labs. I took the course in... 1998? and wrote a version of Tetris in Limbo. The code was awful, but it did find a bug in the Tk implementation. This led me to my proudest/most embarrassing programming moment: apparently, my code ended up in front of Dennis Ritchie, who thought the code was terrible. Whatever, Dennis Ritchie looked at my code!

Anyways, here's the awful code: https://github.com/adamf/limbotris


I went to RIT from 2003 to 2010 for CS and took Axel Schreiner's first Go course (which, AIUI, was the second Go course offered anywhere at a university level).

In the classes I took with Axel, he definitely spent at least 20% of the time talking about Inferno. I never had my code in front of Dennis Ritchie, but I will throw in, from my perspective, that the most valuable part of a CS education from RIT was hearing that my code was terrible just enough times to make me want to make it better later on, in my professional career, when it started to matter.


I find it impressive you still have code you wrote for a university class in '98!


My dad has a box of punchcards for code he wrote in his university class, well before ‘98. I think all of my own 90s era code was on floppy disks that may or may not still be readable.


I have a printout of a Pascal Othello/Reversi game I wrote in 1985 for a class, as well as some C and DCL stuff from 1986-1987. The oldest I still have online is some more C stuff from work in 1988.


I still have code I wrote for the C64 in 1988 lol.


That's a great story! We have all written embarrassing code at some point in our careers. xD


Is that your Tetris in the screenshot?


So, I think so but can’t prove it. In 1998 I didn’t really understand open source licensing so who knows where that code went.


I'm a little weirded out because I can't view any of the files or download them; either it's a bug you found on GitHub (ha!) or it's a permissions thing? I came here for code, and got bugs... hah


In reality, this was an attempt to turn Plan 9 into a commercial product.

Plan 9, despite its crazy UI (the shell and editor have been ported to Linux, so some people must have liked it), brought a lot of new ideas which eventually found their way into other projects such as Linux containers and Go (for obvious reasons).

One really good idea that died with Plan 9 was transparent distribution of CPUs across the network. You could mount a remote CPU, the same way you mount a network drive today, and run your local software on it.
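
It looked roughly like this from the shell (server name hypothetical):

    # attach a remote machine's resources into your local namespace...
    import tcp!cpusrv /proc /n/remoteproc   # its processes now appear as files
    # ...or just run your shell on its CPU
    cpu -h tcp!cpusrv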


Transparently gluing boxes together over a low-bandwidth fabric died as an active research area right around the time Plan 9 was seeing its first development. By the late '80s, shared-bus SMP had demonstrated its practicality and quickly became the predominant architecture. Today we don't spawn processes on remote CPUs because the whole act of scheduling on multiple CPUs is entirely transparent to us; that architecture competed with, and displaced, the approach found in Plan 9.

MOSIX is the only system from that era that I know is still around. It had a fork by the name of OpenMosix for some time, but according to Wikipedia ( https://en.wikipedia.org/wiki/MOSIX#openMosix ): "On July 15, 2007, Bar decided to end the openMosix project effective March 1, 2008, claiming that the increasing power and availability of low cost multi-core processors is rapidly making single-system image (SSI) clustering less of a factor in computing".

(I admire the downvote, but please realize this is not a question of one's opinion!)


Plan 9 is not a single-system-image OS. And we spawn processes on remote CPUs more than at any point in history, often using tools like Kubernetes.
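
Scheduling a process onto some remote CPU is a one-liner these days (image name illustrative):

    kubectl run hello --image=alpine --restart=Never -- echo 'hi from a remote CPU'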


Although no downvote from me: there continued to be plenty of research and dollars in grid computing that did stuff like that on top of "distributed shared memory". Then there was all the research in HPC clusters that tried to create a "single system image", running stuff across machines like it was one machine. The MOSIX quote doesn't change the fact that various researchers kept attempting this and making prototypes.


I think the keyword here is transparency -- later designs (even stuff like MPI) explicitly expose the topology of the available hardware. SMP is the closest thing we've ever gotten to true transparency, and then only for 80% of cases, and for those only because the compute nodes have very similar locality and e.g. memory bandwidth.

Even SMP requires careful control if you want to get anything close to the actual performance of the underlying hardware, and the topology is once again very explicit.


You'd better specify what "SMP" means. By definition, Symmetric Multi-Processing doesn't have locality concerns, but that's basically dead; Shared Memory Programming does, indeed, typically require attention to topology and thread binding -- frequently ignored, of course. I don't know the MPI standard well, but I didn't think there was anything requiring topology to be exposed, and it won't be in the absence of something like netloc, which is still experimental.


"Distributed, shared memory" isn't what I saw of "grid computing". That mostly seemed to be driven by people who didn't understand distributed computing -- with some exceptions like Inferno -- particularly with Globus. People slapped "grid" on anything they could get away with, though. However, distributed shared memory (or the illusion of it) for compute systems does date from the 90s, at least in Global Arrays, and is going somewhat strong in various PGAS systems, including GA.

Kerrighed was an SSI system that actually was commercialized, apparently unsuccessfully. Current (I think) proprietary software systems in that sort of space are ScaleMP and Bproc (from Penguin Computing?). Dolphin had an SCI-based hardware solution for gluing together distributed systems, at least until recently. The Plan 9-ish Xcpu service was described as building on work with Bproc, but explicitly wasn't SSI.


QNX can do that as well. You can, from the command line, run a program on computer A referencing a file from computer B and pipe the output to a program on computer C and send that output to a file on computer D.
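
If memory serves, it looked something like this in QNX 4 (node numbers made up):

    # read a file on node 2, sort it on node 3, write the result on node 4
    on -n 1 cat //2/home/data.txt | on -n 3 sort > //4/tmp/sorted.txt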


I never knew that about QNX - Is it possible to also monitor the connected processes across all machines?


I suspect you can, although it's been years since I last used QNX.


Linux containers feel like a very weak imitation of what they could be under an environment like Plan9 imo.

Linux lacks a lot of core abstraction properties that would make containers elegant to implement under something like the Plan9 model, at least.

Cool project inspired partially by Linux containers: https://doc.9gridchan.org/guides/spawngrid


> Linux lacks a lot of core abstraction properties

No, it's worse than that: It has too many of them, leading to a mess of special cases that you have to deal with. What happens when you have a socket in your file system, and you export it over NFS?
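
You can see the problem with two machines and netcat (paths hypothetical): the socket's inode shows up on the client, but the object it names lives in the server's kernel:

    # on the NFS server: listen on a unix socket inside the export
    nc -lU /export/shared/demo.sock &
    # on an NFS client that mounts the export:
    nc -U /mnt/shared/demo.sock    # fails: there is nothing behind the file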

Lacking abstraction properties is fixable -- you can add them. But removing them, especially if they're widely used, is incredibly hard.


Making good abstractions is hard. On Unix I sometimes wish I could unwrap the stream abstraction, but nevertheless I think it is one of the few abstractions that have really stood the test of time.

Why wouldn't a socket exported over NFS just work seamlessly?


Because it's a kernel data structure thing that exists in the file system. The remote machine doesn't have access to it.


That's true for regular files as well.


No, the remote machine has access to file structures via NFS, which is complicated enough.

NFS doesn't have protocol-level special cases for forwarding operations for the local sockets, handling setsockopt(), various socket ioctls -- which, mind you, are often machine specific, where the data sent in ioctl is ABI dependent. I'm not even sure how you'd do that sort of thing, since NFS is a stateless protocol.

And then you would need to repeat the exercise for these special cases for all of the other special types of node, like /dev. What does it even mean to mmap the video memory of a video card on a remote machine?

And then you'd need to fix the assumptions of all the software that assumes local semantics ("the connection doesn't drop, and poll() is always accurate").

On top of that, you'd need to run on a treadmill to add support for new ioctls.

Do you really want to implement the server side of a protocol that can handle all of the complexity of all you can do on all file systems, with all ioctls, across all node types? How many implementations providing resources via this protocol do you think would exist?


What does it mean to mmap a file on an NFS server? Isn't it a connection drop when a local process dies, too? What happens when a disk is suddenly removed?

> On top of that, you'd need to run on a treadmill to add support for new ioctls.

Absolutely, it'd be a lot of work. So it's a better idea to not implement many of these things and instead simply return an error.


> What does it mean to mmap a file on an NFS server?

It means you have issues around synchronization and performance, if you use it as anything other than a private read only mapping.

And some things are just impossible, like a shared memory ringbuffer. Which is exactly what you do with the memory you mmap from a video card: submit commands to the command ringbuffer.

> So it's a better idea to not implement many of these things and instead simply return an error.

And now you need to start writing multiple code paths in user code, testing which calls work and which don't, one of which will be broken due to lack of testing. And when you guess wrong at the needed operations, software often goes off the rails instead of failing gracefully. Failure modes like blindly retrying forever, or assuming the wrong state of the system and destroying data.

Too many complicated abstractions break the ability to do interesting things with a system. It's death by a thousand edge cases.

On plan 9, you have 9p.

https://9p.io/magic/man2html/5/0intro

That, and process creation/control/namespace management, are the only ways to do anything with the system. There are few edge cases. Implementing a complete, correct server is a matter of hours, not weeks.
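
For reference, that's essentially the entire vocabulary a server has to understand (paraphrasing the 0intro(5) page above):

    version        -- negotiate protocol version and maximum message size
    auth, attach   -- authenticate and connect to a file tree
    walk           -- move around the hierarchy
    open, create, read, write, clunk, remove
    stat, wstat    -- get and set file metadata
    flush          -- abort an outstanding request
    error          -- the reply to any request that fails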


> like a [remote] shared memory ringbuffer.

Technically just as possible, only very slow... Performance is abstracted out by the VFS. You need to stay sane through other measures, like having your software configured right, etc.

> And now you need to start writing multiple code paths in user code

I don't think the number of paths is increased. Any software should handle calls that fail - if only by bailing out. That's acceptable for any operation that just can't complete due to failed assumptions - whether it's about file permissions or that the resource must be "performant" / not on an NFS share, etc.

> 9p.

Now what is the point? How's that different or better? They actually are much more into sharing resources over the network... which means fewer possible assumptions about availability/reliability/performance. I doubt they can make the shared ringbuffer work better.


> Technically just as possible, only very slow... Performance is abstracted out by the VFS.

How would you go about implementing the CPU cache coherency that allows you to do the cross machine compare and swap?

> I don't think the number of paths is increased. Any software should handle calls that fail - if only by bailing out.

If the software works fine without making a call, then you can just skip the extra work in the first place. Delete the call, and the checks around if the call fails. And if the call is important somehow, you need to find some workaround, or some alternative implementation, which is by definition never going to be very well tested.

> Now what is the point? How's that different or better? They actually are much more into sharing resources over the network... which means fewer possible assumptions about availability/reliability/performance. I doubt they can make the shared ringbuffer work better.

The tools to make a shared ringbuffer that depends on cache coherent operations simply aren't there -- it's not something you can write with those tools.

And that's the point: The tools needed simply don't work across the network. Instead of trying to patch broken abstractions, adding millions of lines of complexity to support things that aren't going to work anyway (and if they do work, they'd work poorly), pick a set of abstractions that work well everywhere, and skip the feature tests and guesswork.

Pick primitives that work everywhere, implement them uniformly, and stop special-casing broken or inappropriate tools.

And then, it's a day of work to implement a 9p server, and everything works with it. So I can serve git as a file based API, DNS as a file API, fonts as a file based API, doom resources as a file API, or even json hierarchies as a file API, and not worry about whether my tools will run into an edge case. I can export any resource this way, and not need special handling anywhere.

Plan 9 doesn't have VNC; it has 'mount' and 'bind', which shuffle around which `/dev/draw` your programs write to, and which `/dev/mouse` and `/dev/kbd` they read from.

Plan 9 doesn't have NAT; it has 'mount' and 'bind', which shuffle around which machine's network stack your programs use.

Plan 9 doesn't have SSH proxying that applications need to know about: It has sshnet, which is a file server that provides a network stack that looks just like any other network stack.

From parsimony comes flexibility. You're not dragging around a manacle of complexity.
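
Concretely, "remote display" is just a few binds; this is roughly what cpu(1) arranges on the remote end, where /mnt/term is the terminal's namespace (a sketch):

    bind /mnt/term/dev/draw  /dev/draw     # draw on the terminal's screen
    bind /mnt/term/dev/mouse /dev/mouse    # read the terminal's mouse
    bind /mnt/term/dev/cons  /dev/cons     # and its console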


> How would you go about implementing the CPU cache coherency that allows you to do the cross machine compare and swap?

Build it into the protocol!

And so on...

> The tools to make a shared ringbuffer that depends on cache coherent operations simply aren't there -- it's not something you can write with those tools. And that's the point: The tools needed simply don't work across the network.

Ok. In theory, we just need to build access to the tool in the network protocol and have the network server execute the magic on the remote machine.

Of course, one needs a way to map e.g. a CAS operation to a network request. I don't think today's CPUs let us do that.

> Delete the call, and the checks around if the call fails.

    FILE *f = fopen(filepath, "rb");
    if (f == NULL)
        fatal("Failed to open file %s!\n", filepath);
There. I wouldn't remove a line, and I've magically handled whatever error condition it was, regardless of whether I've thought about network transparency issues or not.

> Pick primitives that work everywhere, implement them uniformly, and stop special-casing broken or inappropriate tools.

I've never seriously looked at 9p, but the page you linked strongly suggests to me that it's more abstraction, if anything (your initial statement was that that's bad), and more vague as a consequence. More like HTTP, and I don't think of HTTP as a sort of universal solution - it's rather a sort of bandaid to glue things together with minimal introspection (HTTP verbs, status codes...). And the fact that it tries to be universal also means that it doesn't match some problems very well, and people will basically just sidestep HTTP there (I'm not a web person, but I've heard of major services that always return HTTP 200 and use HTTP just as a transport for their custom RPC mechanism or whatever).

> Plan 9 doesn't have VNC ... NAT ... SSH

Great. I get it. 9p is a basic transport method that gives some introspection for free if you can model your problem domain as an object hierarchy. But it's far from a free solution for any problem. It might save you some parsing in some cases, but it doesn't compress your VNC stream for example. Nor define the primitives of any problem domain that it just can't know about.


> Build it into the protocol!

You don't have access to the interprocess cache snooping in software. This is CPU interconnect internal shit, and you actually need access to the local memory bus for correctness. mmap in its full glory is only really worth having if you can share pages from the buffer cache.

And even if you did, and you turn a ten nanosecond operation into a ten millisecond operation, counting the network packets you send (a factor of a million overhead), without the assumption that all the peers are reliable and never fail, the abstraction still breaks. And if you assume all your peers are reliable in a distributed system, you're wrong. Damned either way.

> I've never seriously looked at 9p, but the page you linked strongly suggests to me that it's more abstraction if anything

No, it's a single abstraction, instead of dozens that step on each other's toes.

> Great. I get it. 9p is a basic transport method that gives some introspection for free

What introspection? It's just namespaces and transparent passthrough. Unless you're talking about file names.


Yes, I realized the CPU issue and already updated my comment. Technically we would need a way to catch the CAS operation and convert it into a network request - like, for example, segfaults can be handled and converted into a disk load.

And also we'd need to extend all the cache coherency stuff over the network.

> And if you assume all your peers are reliable in a distributed system, you're wrong. Damned either way.

Technically you have reliability issues with all the components inside a single system as well. They are just more reliable. But I'm sure I have seen hard disks failing, etc.

--

Ok, let me think about that abstraction stuff. Thanks.


I'd also argue that if you need to turn a cache snoop into a network round trip (or several?), your abstraction is just as broken as if it returned the wrong value; it's unusably slow :)


Yes, that's why I'm still having trouble understanding the fuss about the VFS abstraction as used in 9p etc. ;-) I've always been glad to know when I was not on an NFS or sshfs mount (mostly for reasons that you can't design away, i.e. network reliability issues). So why bother abstracting away that knowledge even more?


When you come at it from the perspective that remote resources are the norm, and you assemble a computer from resources scattered across the network, local access becomes the weird special case. Generally, you're running a file server and terminal as separate nodes, with an auth server somewhere else.

And if you're actually using the knowledge that some files are local, assumptions creep in and you get bugs, and now your software stops working in the case where you're running on someone else's network.

It's about transparently providing the same interface to everyone, and making that interface simple enough that implementing it is easy enough that it's actually done, and the interface actually gets used.

Then, if you want to interpose, remap, analyze, manipulate, redirect, or sandbox it, you can do that without much trouble. The special cases are rare and can be reasoned about.

Reasoning about your system in full frees you to do a huge amount.


To be fair that is a general UNIX issue.


Linux containers feel like weak imitations of jails and zones found in FreeBSD and Solaris/illumos, for that matter.


Except the user interface for them on FreeBSD, compared to e.g. Docker, is atrocious.


Yeah zones especially, I agree.


Why not jails? They were pretty much the first to provide this in Unix land. I guess the advantage of Docker / containers was / is the nice CLI for developers, as well as Hub. We never had a ‘jailshub’ in FreeBSD.


Somehow I think Tru64 and HP-UX vaults had it first.


Although I'm not sure how that "mounting" of CPUs would avoid people hogging resources. I use an HPC system where users submit jobs, and there's a whole complicated queuing system going on to make sure that greedy users don't monopolize the system.


You could make the same argument about mounting disks.

You revoke the permissions of the people who abuse those resources, and you don’t expose them to the outside world at large completely unprotected.


"Queuing" is distinct from the resource management component. If you don't have something like cpusets, processes on shared nodes can indeed hog CPU. There are worse issues with shared resources, like flattening the parallel filesystem, which is one of the things which is at least difficult to manage.


This is an interesting concept because I wasn't sure what 'distributed operating system' meant.


Except that Plan 9 has not died.


Yeah idk why people have this notion.

Last 9front commit was… 3 days ago: http://code.9front.org/hg/plan9front/


s/o to the wip 9front ports tree, too: http://code.9front.org/hg/ports


Amazing work you do there, by the way!


Yeah, we're still here.


> "the shell and editor have been ported to Linux so some people must have liked it"

A manifestation of Cat-V.org Derangement Syndrome I wager.


legoos.io

"fully" distributed resources; memory, cpu, storage, what have you...

"LegoOS is a disseminated, distributed operating system designed for hardware resource disaggregation. It is an open-source project built by researchers from Purdue University. LegoOS splits traditional operating system functionalities into loosely-coupled monitors and run them directly on disggregated hardware devices."


There are times when I wish Plan 9 and/or Inferno had taken off and won some respectable market share. Having a laptop, smartphone, tablet, Synology storage at home, servers on DO, and a remnant external HDD from a prior era becomes frustrating, because my files end up being everywhere and nowhere. There were times when I wanted to be able to start working on some code on my laptop, pause, then continue work on my tablet later on. At those moments it would have been wonderful to have Plan 9's ability to mount remote filesystems, sync across the network, and have a consistent view however I access the data. Yes, SSHFS is available, and the same protocol from Plan 9 is available in Linux nowadays, but setting them up isn't as simple as it was in Plan 9, where it was native to the OS.
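
For example, Linux can mount a 9P export natively through v9fs, but you have to know it exists and wire it up yourself (address and options illustrative):

    mount -t 9p -o trans=tcp,port=564 192.168.1.10 /mnt/9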

That, in addition to the fixes it had (truly everything is a file) and the innovative features delivered (e.g. namespaces).

Can we revive them? Inferno's development seems to have slowed down to a halt.


Plan 9 and some of its distributions (namely 9front and 9pi, the latter being Plan 9 for the Raspberry Pi) are still developed. No revival needed. You'll just have to download, install, and use them. And they come with VNC, if you absolutely need incompatible software like a sane web browser.

edit: That said, I happily use some of Plan9port every day. There never was a better Acme editor (or, rather, Acme2k) on any other operating system. Amazing concept!


Yeah Richard Miller just put out tentative support for the Raspberry Pi 4 for his pi port.

https://9p.io/sources/contrib/miller/9/bcm/


No need for vnc on 9front to access a browser nowadays. You can run the browser inside vmx.


The manpage ("bugs") suggests that I shouldn't do that. But thank you a lot - I had, indeed, entirely missed that!


Naaah it just works fine


As long as your processor supports vmx ofc

Just ask aux/icanhasvmx :)

Manuals:

- http://man.cat-v.org/9front/1/vmx

- http://man.cat-v.org/9front/3/vmx


> There were times when I wanted to be able to start working on some code on my laptop, pause, then continue work on my tablet later on

In order to do these things 'cleanly' (with a truly unified view of the entire "system"), you need some sort of process migration infrastructure that even Plan9 and Inferno don't really give you, AFAICT. Projects like OpenMosix and OpenSSI attempted to provide this, but I'm not sure to what extent they were successful, given that they've all been abandoned and bitrotted. Today, one might try to do the same sort of thing while also taking advantage of the new kernel-based namespacing that Linux provides, which might make some things somewhat easier.


Inferno and Plan 9 sorta had this. Octopus and Plan B both allowed picking up where you left off on some other machine.


CRIU should allow you to migrate a running process from one system to another, particularly in combination with a container (Docker or LXC/D).
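
The basic flow is something like this (PID and paths hypothetical; both hosts need compatible kernels and the same filesystem view):

    criu dump -t $PID -D /tmp/ckpt --shell-job      # checkpoint the process tree
    rsync -a /tmp/ckpt/ otherhost:/tmp/ckpt/        # copy the image files over
    ssh otherhost criu restore -D /tmp/ckpt --shell-job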


Doesn't this work for almost nothing in practice? Like nothing that has an open file, no GUI apps, etc.


Here is a writeup of using CRIU on openvpn: https://www.redhat.com/en/blog/using-criu-upgrade-vpn-server...

So it seems to work. But I'm no expert and I've never done it, so I might well be missing something.


Fairly sure files, network connections, ... are not a problem, unless I'm confusing it with a different migration solution. GUI apps probably only work if accessed through some network connection.


Whoa, not going that far. No processes are moving from one computer to another, just proper file syncing and maybe some centralized authentication.


You can have centralized / common auth using LDAP, even across platform.

You can have file sync with syncthing, or similar, again across platforms.


I have some sort of love/hate relationship with this project, regarding the Dis VM.

IMHO, having read the spec, I think the VM had a great design for the nineties but was not future-proof enough. It's like it was designed with some specific CPUs of the time in mind, and that biased the design. Also, I find it too tied to the language used in the canonical implementation (obviously C).

In contrast, I find the JVM (contemporary with Dis) more generic and easier to implement in languages other than C.

BTW, a few years ago I wrote an (incomplete) Dis VM in C# [0] that I hope to finish^H^H^H^H^H^H rewrite some day...

[0] https://bitbucket.org/luismedel/sixthcircle/src/default/



And we won't stop posting until Glenda and her labs bread hoard have crushed all others!


There's nothing wrong with posting an interesting story after a year or two. Especially when the topic is as interesting as this one! The current thread is the best so far.


Just a note for anyone who is looking to try this out: there are currently some fairly hard-baked and thorny 32-bit dependencies in the Dis VM, so your system will need to be multilib to run it. There's currently a 64-bit porting effort going on in the form of Purgatorio, but that's still some time away from being complete (I recommend using it anyway, because it's the most up-to-date version of the Inferno tree).


Source: http://code.9front.org/hg/purgatorio/

Purgatorio 64-bit is a ways away, but is a goal.

It'll happen eventually, but the more eyes the better.


- Namespaces: a program's view of the network is a single, coherent namespace that appears as a hierarchical file system but may represent physically separated (locally or remotely) resources

- Standard communication protocol: a standard protocol, called Styx, is used to access all resources, both local and remote

This looks similar to the Qubes Air roadmap: https://www.qubes-os.org/news/2018/01/22/qubes-air/


The concept here is fairly profound, and you could make an analogy, in terms of power, to something like Lisp.

In Lisp everything is a symbol; in Plan 9 everything is a file and all namespaces are mutable. The two concepts are combined in the Interim OS experiment[1]. The idea of everything being a file is very literal and very serious compared to, say, Unix.

It's worth noting that 9p, while a filesystem protocol, is closer in concept to a REST API than to something like, say, ext4.

In Plan 9, the kernel could be thought of as a 9p connection router/multiplexer, and all system resources and all servers must express themselves as 9p filesystems. The tooling 9p provides allows you to take trees of named objects (filesystems) and rearrange them however you please, and to place child processes under rules for modifying their own trees and how they can break scope, if at all.

Forking is more dynamic too, as provided by rfork[2], where you can give a child a specific subset of your process's namespace. So you could make a child who can only see specific files, executables, or resources, and make it so that the child can't mount, unmount, etc.
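
A sketch of what that looks like from rc (paths illustrative):

    @{
        rfork n                       # give the child a private copy of the namespace
        bind /usr/glenda/jail /tmp    # rearrange what it sees
        rfork m                       # then forbid any further mounts/binds
        some/program
    }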

[1] Interim OS: https://github.com/mntmn/interim

[2] http://man.cat-v.org/9front/2/fork

Some cool manuals:

- http://man.cat-v.org/9front/1/bind

- http://man.cat-v.org/9front/1/ns

- http://man.cat-v.org/9front/3/proc

- http://man.cat-v.org/9front/5/intro


Except that you cannot run a Qubes CPU server without a giant overhead.


As a happy Qubes user, the overhead in CPU is barely noticeable for Xen virtualization. One needs a lot of RAM though.


Nice to see Inferno posted!

Some Inferno related stuff:

- Limbo by Example: https://github.com/henesy/limbobyexample

- The Inferno Shell: http://debu.gs/entries/inferno-part-1-shell

- Try Inferno in the browser: http://tryinferno.rekka.io/

- Cat-v Inferno resources: http://doc.cat-v.org/inferno

- Experiments in Inferno/Limbo: https://github.com/caerwynj/inferno-lab/

- Inferno Programming with Limbo: https://web.archive.org/web/20160304092801/http://www.gemuse... (also on cat-v)

- Developing Limbo modules in C: https://powerman.name/doc/Inferno/c_module_en

- Simple FS example in Limbo: https://github.com/henesy/simplefs-limbo

- Inferno client for Redis: https://github.com/pete/iredis

- Inferno for Nintendo DS: https://bitbucket.org/mjl/inferno-ds/src/default/

- Inferno as an Android app: https://github.com/bhgv/Inferno-OS_Android

- Inferno replacing Android (hellaphone): https://bitbucket.org/floren/inferno/wiki/Home

- Porting Inferno to Raspberry Pi: https://github.com/yshurik/inferno-rpi


It's a really early draft, but here's the base for an awesome-inferno repository: https://github.com/henesy/awesome-inferno


This OS doesn't require an MMU or protected memory. I wonder if a processor architecture that lacks those features could yield much higher performance due to less complexity, reduced memory latency, or a smaller cache-miss penalty. Would it be enough to mitigate the performance penalty of running code in the Dis VM?


You could do that (see Singularity OS [0] and Midori [1]), but recently it was discovered that it wouldn't work on modern out-of-order CPUs due to Spectre [2].

[0] https://research.cs.wisc.edu/areas/os/Seminar/schedules/pape...

[1] http://joeduffyblog.com/2015/11/03/blogging-about-midori/

[2] https://arxiv.org/abs/1902.05178


You'd have to compensate for the lack of memory protection by running all unprivileged code in a VM, which would very likely introduce its own class of complexities and performance penalties.


People always defend Java as if these penalties don’t exist somehow.


The question was more about how big the penalty is compared to running with an MMU.

Also, according to my (admittedly rather basic) tests, the Dis VM performs far better than the JVM.


Inferno already runs all unprivileged code in a VM. When I installed it on a Raspberry Pi, for example, I could disable or enable its MMU, and it didn't make a difference at all.


Inferno instances are really meant to be compartments for individual programs more than a desktop OS in and of itself.

Dis VM instances are super light… you don't really need to worry much about running a large number of them.

That point aside, in many cases you can sandbox applications in crafted namespaces using spawn and flags like NODEVS.



Would Plan9 fit in with applications like robotics? I feel like such an operating system would do well with the need to interoperate between a bunch of different types of sensors and hardware, and the need for sensor fusion. I’m not sure if Linux and device drivers are the right foundation for that kind of stuff.


This exists and to a pleasant effect!

http://doc.cat-v.org/inferno/4th_edition/styx-on-a-brick/

The ev3dev project is partially inspired by everything is a file and I've used it in the past for educational purposes by making the business of controlling/using motors/sensors as trivial as read/write.

https://www.ev3dev.org/
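
For example, driving a motor there is just writing to files (attribute names from memory; check the ev3dev docs):

    cd /sys/class/tacho-motor/motor0
    echo 300 > speed_sp             # set the target speed
    echo run-forever > command      # start the motor
    echo stop > command             # stop it again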


Yes, until someone asks "Can I use $POPULAR_MEGAFRAMEWORK".


This project had my favorite naming theme in all of computing.


Does it have the seven shells and a lake of boiling blood instead of the kernel?


With full compatibility with the latest version of your Seas of Ire


Inferno is cool because it takes a lot of the great ideas Ken Thompson and Rob Pike came up with for Plan 9 to address unfixable architectural flaws inherent in Unix's overall design, and basically makes a VM-based, portable OS/userland that is every bit what Java could have been.

And then, of course, Rob Pike, Russ Cox, and Ken took what they learned from their background at Lucent, paired it with Robert Griesemer's background in developing innovative VMs for JavaScript and Java, and essentially put everything they had learned into developing the Go programming language.

If you get into the architectural aspects of it, actually dig into how it's implemented deep down, you will see a lot of code that is essentially derived from Plan 9's and Inferno's code in many ways.


I remember reading about this in a Danish PC mag, ‘PC Pro’ (or ‘Alt om Net’), in ca. 1998. From memory there was also a framework called ‘Hades’, as per the mythological river. Or am I wrong here?


I actually tried the Android port once. It was... interesting.



