And where is Plan 9 now? If I want to get involved, should I look into 9front? Inferno? Or this guy's GitHub mirror, since the Plan 9 website is down?
I understand these are some smart guys, but the cat-v website makes a ton of divisive statements and "leaves it as an exercise to the reader" to figure out why they hold these views. The trolling isn't awful, but in general, I'm having a really tough time fighting my way into the Plan 9 circle.
How have you dealt with this? Aside from my initial (more historical) question, any suggestions for getting started? Is it even worth getting familiar with Plan 9 or rc or acme or sam or the plethora of ported tools? Or should I just wait until Russ Cox and Rob Pike come up with a production-ready OS?
No disrespect meant, just have a lot of questions.
 - https://github.com/0intro/plan9
If you are not married to your current editor of choice, give Acme a try -- it's quite different, but I've been using it as my primary editor for about 5 years (I write mostly C, Bash, Python, Go, and Puppet; I'm hesitant to use it for more editor-dependent languages like Java and C#), and I've been very happy with it. It's available for Linux, BSD, and OS X as part of plan9port. For me, the secret to being productive in Acme was learning to write my own plumbing rules, which let you turn plain text into hyperlinks based on pattern matching.
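As a taste of what that looks like, here's a hypothetical rule for $HOME/lib/plumbing; the ticket pattern and URL are made up, so adjust them for your own tracker:

    # hypothetical rule: right-clicking text like PROJ-1234
    # in any window opens the matching ticket in a web browser
    type is text
    data matches 'PROJ-[0-9]+'
    plumb start web https://jira.example.com/browse/$0

With something like that loaded, a bare ticket ID anywhere in Acme becomes a clickable link.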
Plan 9 does not have to be a purely historical endeavor; if you manage to write a 9P file server, you can actually mount it on other OSes, either through the native 9P support in the Linux kernel or through a wrapper program like 9pfuse. For example, I recently saw a 9P file server that turned Jira into a file tree of tickets that you could edit with a text editor.
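To make that concrete: assuming a 9P server listening on TCP port 564 on a host called fileserver (both placeholders), either route looks roughly like this on Linux:

    # via the kernel's native 9P client (v9fs)
    mount -t 9p -o trans=tcp,port=564 fileserver /mnt/9

    # or, unprivileged, via plan9port's 9pfuse wrapper
    9pfuse 'tcp!fileserver!564' /mnt/9

After that, the synthetic file tree is just files: ls, grep, and your editor all work on it.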
People wonder why I cannot do things fast in their GUI world, and then I show them a bash script that replaces the ten-clicks-per-page ritual of filing a ticket, and then they do not understand, but they respect my annoyance at the "user-friendly" way.
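For the curious, here's a minimal sketch of the kind of script I mean, against Jira's REST API; the host, project key, and credential variables are all placeholders:

    #!/bin/sh
    # file a ticket from the command line instead of clicking through a GUI
    # usage: file-ticket 'summary text'
    # JIRA_USER/JIRA_TOKEN and the host are placeholders for your own setup
    summary="$1"
    curl -s -u "$JIRA_USER:$JIRA_TOKEN" \
        -H 'Content-Type: application/json' \
        -X POST 'https://jira.example.com/rest/api/2/issue' \
        -d '{"fields": {"project": {"key": "PROJ"},
                        "summary": "'"$summary"'",
                        "issuetype": {"name": "Bug"}}}'

One shell script, and the whole click-through ritual collapses into a single command.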
The file server was somewhat specialised too: the Plan 9 one used an optical WORM jukebox to provide its long-term storage. If you didn't have one of those you could simulate it with disk storage, but there's a cost trade-off there.
Without this investment in hardware it was like trying to understand how NFS or web infrastructure works with only a single machine to work with.
In some ways it's similar to the obstacles that hobbyists face investigating the Hadoop ecosystem today: the hardware required to build a realistic installation on which to experiment is quite costly. With Hadoop you can use virtual machines and/or cloud hosting to try things out. When Plan 9 came out you didn't have that option so you needed to assemble physical hardware yourself.
... which can just be run on the same machine.
> Without this investment in hardware it was like trying to understand how NFS or web infrastructure works with only a single machine to work with.
Network transparency is an important feature, but not the only one. That's like saying running X makes no sense because your X server and client are on the same machine.
No, I'm saying that it's hard to properly understand the advantages Plan 9 (or NFS or X or Hadoop) brings if all you have is one machine to run it on.
That's… interestingly ahistorical. No one was forced back from Plan 9 to Ubuntu or OpenBSD, because when Plan 9 "lost" neither existed. Plan 9's window of opportunity was the early '90s, when Linux and the web weren't firmly established.
Unfortunately, Plan 9 wasn't released under a proper license until 2002. By that point Linux had already won the war (and had held it for half a decade), the problems Plan 9 solved had been solved in other ways (perhaps less elegantly) by the rest of the world, Plan 9 lagged behind in a number of important areas, and Bell Labs was being downsized significantly.
I didn't see that happen.
And since no one else had done so before, bafflingly.
Also keep in mind that Plan 9 has evolved significantly since the publication date: 8 1/2 and the WORM file server have been superseded by rio and Venti, the 9P protocol is a bit more extensive, ndb and /net plus the auth subsystem have become more prominent since then, etc.
Windows for Workgroups was released the same year, while we were struggling to set up IPX networks to play Doom between two machines at home...
But it all stopped. Research budgets dried up, the computer got bigger than any one OS team could handle, and some OS out of Finland came in and scooped up all the mindshare. Since then, we've been stumbling in the dark, using scant shards of the tools they made, like Python, with overall systems not much better than what they had then. Lispers talk of the AI Winter, but the ongoing OS Winter is far, far worse in terms of the absolute stagnation of an entire force of society. We really do deserve better than what we have now.
Earlier this week there was a thread on HN about Taos, another very interesting OS from that time.
Today, with multicore, heterogeneous cores, a huge range of communication types, IoT and the physical world becoming connected from the tiniest of pieces up to the largest structures, there should be ample areas where OS research could find interesting challenges and solve problems.
The most interesting thing I've seen in a few years is the library-OS work from MSR. But coming from Oberon, the most impressive feat was doing it to Windows and getting it to work. Unfortunately, it doesn't seem to be becoming part of Windows.
To be fair, though, the amount of virtualization tech developed over the last decade could be perceived as OS-related.
One professor I met once claimed that Mach, and microkernels generally, were the bane of OS research: suddenly there was an OS easy enough to work on that a PhD student could implement and test a feature and get a degree.
The revival in systems work has already started, and there is a lot going on; see, e.g., some of the talks in . Projects like CHERI show you can do research on a real production OS (FreeBSD), not a toy one, while seL4 shows you can even do correctness proofs on a small OS. Unikernels and other library-OS projects, along with performance-oriented projects, are putting what used to be system code into userspace, where it can be iterated on faster, and are moving to high-level and scripting languages. Containers and microservices are causing a huge rethink of monolithic architectures too.
And so on and so forth. Much stuff being done in directions that might actually achieve something. There's still hope for those of us wanting something other than monolithic garbage whose uptimes still can't beat a VMS cluster from the '80s and whose security is a measure of how convenient it is to hackers.
CheriBSD is more of a hardware research platform. We've had capability-based hardware and similar protection schemes for decades, but they never caught on.
Containers are an absolute disappointment the way they were popularized with Docker, and I think it's naive to assume that "microservices" are anything but a buzzword. The microservice architecture itself is exceptionally old and basically a reapplication of OO principles to high-level software components.
Unikernels and libOS are kind of interesting, though.
I agree on containers and microservices. IT rarely learns the good lessons of the past but often repeats its failures. The first good containers were Burroughs' apps, which were compiled against a good API with checks at the interface level (compile-time and runtime) and full reuse of code in memory to avoid duplication. Hardware-enforced, optional isolation and duplication if you wanted a brick wall. That has both an efficiency and a security argument. This new shit might be an improvement on a typical Windows or UNIX/Linux deployment, but it seems to be improving in the wrong direction.
Best example is still them standardizing on complicated HTTP- and XML-based middleware instead of simpler formats (eg s-expressions) with a grammar I can auto-generate w/ validation checks on TCP/UDP-based middleware which I can also auto-generate w/ validation checks. Designing robust, efficient solutions with mainstream stuff is like walking through a friggin' minefield of problems.... where it can even be done! Starting to think they're all closet masochists...
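To make the comparison concrete, here's the same hypothetical request in both notations (the message shape is made up for illustration):

    <!-- the XML-middleware version -->
    <createTicket>
      <project>PROJ</project>
      <summary>Printer on fire</summary>
      <priority>high</priority>
    </createTicket>

    ; the s-expression version: one small grammar, trivially
    ; parsed and validated by a generated recursive-descent reader
    (create-ticket
      (project "PROJ")
      (summary "Printer on fire")
      (priority high))

Same information, but the second one's grammar fits on an index card.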
I encouraged Microsoft Research to continue investing in tooling and verification tech that represents The Right Thing approach to systems. That pieces of it drift to their commercial products is even better. :)
The unikernels work going on.
Android's Java userland.
Apple's hybrid approach with containers.
This is one of the topics on which I actually agree with Rob Pike: UNIX is past its due date. There is so much to explore, especially since POSIX just perpetuates C's insecure model.
I haven't looked into Amoeba too deeply, I must admit, but other than its support for network transparency and single-system imaging across heterogeneous machines, I don't think it's very Plan 9-ish at all.
Sprite homepage for HN readers following along:
Edit to add: The retrospective is definitely worth reading. I think most administrators, to this day, have it harder in some ways than people running Sprite did, thanks to its designers' clever choices in networking, storage, and single-system-image.