I up-voted this in "new" in the hopes of an interesting debate from people smarter than me about why Plan 9 didn't succeed, or what it would take for a new OS to break the status quo. But 2 hours in, 0 comments. Is there really nothing left to say about this?
Okay, here's a question:
what would the world look like today if Android had been based on Plan 9 instead of Linux?
My understanding of Plan 9 is that the primary benefits it brought are in its ability to make a large collection of computers look like one unified system. So rather than asking what would it look like if Android was based on Plan 9, I'd think of it as: what would it look like if my laptop, TV, and smartphone all ran Plan 9.
I think then you'd have a scenario in which, for example, you can pick up your phone and tell your TV to start streaming some file you have on your laptop. You can probably do this now using some collection of complicated programs that you have to install, but with Plan 9 it would be easy, since everything would appear to be just one part of a larger computing system.
Or another example: you're working at your office with a desktop computer running Plan 9. Then you leave the office for the night, but want to finish something at home, so you open up your laptop and have access to all your work files as if they are stored locally on your laptop. You just edit them in place and the copy at work is automatically updated as you go.
So that's how things would be different, I think. Not revolutionary, but definitely an improvement on what we have now. (Note: I'm not a Plan 9 expert and have never actually used it; however, I do find its ideas interesting and have read a bit about it, so I think the above is fairly accurate.)
> Then you leave the office for the night, but want to finish something at home, so you open up your laptop and have access to all your work files as if they are stored locally on your laptop. You just edit them in place and the copy at work is automatically updated as you go.
I do this all the time, Gnome file manager (Nautilus) mounts my work computer over sftp with three mouse clicks (Connect to Server / select server / Connect).
Granted, I can only work on the remote files in Nautilus, and with GUI programs that I launch by right-clicking on the files in Nautilus. If I want a command line, I need a few other, but also simple, tricks.
Plan 9 goes far beyond that. Everything can be shared automatically, because everything is a file. Here are some scenarios that should be possible:
- You can access files remotely automatically from any application (akin to what sshfs does)
- You can access non-files remotely automatically: a printer is just a fd you write to
- A GPU from a distant cluster is just a resource you can use as if it were local (I'm speaking of implementation, not performance).
- A mail service can be accessed on a mail server, which is itself not connected to the wild internet but goes through a firewall... itself accessible through the same file protocol. Oh and the mail you want to send sits in your outbox on your phone, so you just grab it from your computer.
- You can play the music that lives on your machine on the speakers anywhere in your house by communicating with the remote fd
- One of the dreams of Rob Pike (a co-creator) was that you wouldn't need a multi-core machine in your pocket for your mobile life: you'd only need a good connection, so you could access all your resources on any server connected to the network. Your "smartphone" would then just be a terminal to pilot all your resources.
All of this, accessible with the same interface, built into the OS. It may have been a ludicrous dream, especially when you look at how much data would have to shuttle between machines, but the prospect of such networking still sounds awesome.
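To make a couple of those scenarios concrete, here's a rough sketch of what they look like from a Plan 9 shell. The machine names and device paths here are made up for illustration; this is the flavor of it, not a recipe:

    # pull another machine's device tree into my namespace
    import livingroom /dev /n/livingroom/dev
    # the remote speakers are now just a file; play local music on them
    cat song.pcm > /n/livingroom/dev/audio
    # a printer on another box is likewise just a file you write to
    cp resume.ps /n/printserver/dev/lp

The point is that there's no special casing: the same file operations and the same 9P protocol cover all of it.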
IO devices were also mounted/manipulable within this filesystem. So you could mount your home keyboard/mouse/monitor onto your remote workstation (maybe an HPC box in some datacenter) and continue working where you left off. Java tried something similar with Jini, but without the powerful abstraction of everything being a file descriptor in some hierarchical mapping.
At this point I'm not sure it's worth it. We've got far more covered by just using a working Linux and taping specific protocols together than by trying to make Plan 9 work everywhere. (If we're talking IoT, we're talking exotic hardware, so there's a much better chance that Linux needs less work to be adapted, if the work hasn't been done already.)
On top of that, we already have some platforms to use on Linux: the Erlang VM already has a notion of a "global cluster of things" where the actual location of a resource doesn't affect how things are executed (other than performance). The Plan 9 utilities have also been partly ported to Linux; maybe we could try to run services that speak 9p/9p2000/styx on top of Linux? That would be more useful.
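For what it's worth, the Linux kernel has shipped a 9P client (v9fs) for a long time, so something along these lines should already work; the server name and option values here are placeholders:

    # mount a 9P file server exported over TCP at the standard port
    mount -t 9p -o trans=tcp,port=564,version=9p2000.u fileserver /mnt/9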
It is absolutely not a ludicrous dream. I've been dreaming it for years. All my machines run Linux. As long as I'm NOT on a phone or tablet, I sign in, ssh, restore my tmux/screen session, and pull X via xpra.
Why my phone is unable to be a terminal to all my other machines is beyond me.
I should have 1 hard drive and 1 "computer", and 1 desktop. Every other machine should be a window to this desktop.
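For reference, the workflow I mean is roughly this (host name made up):

    ssh work-desktop
    tmux attach            # or: screen -r
    # and, from the local machine, pull the remote X session across
    xpra attach ssh:work-desktop:100

It works, but it's exactly the kind of hand-assembled plumbing Plan 9 made unnecessary.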
I mean no disrespect, but sampo's comment illustrates the problem Plan 9 and any other advance has. "I can already do that, but..." beats unification and conceptual elegance so easily. You cannot just build a better, simpler foundation; you absolutely must demonstrate something new or dress it up with "chrome". We don't care about the better mousetrap. Being able to put together a clunkier solution that does what your new concept does is almost a point of pride. We all do it, and I wonder how it affects our advances.
* Create an NFS mount to a directory holding thousands of images. Create another NFS mount with directories and subdirectories holding many files of any type.
* Open Nautilus
* Go to Edit -> Preferences. Click on the Preview tab
* Set Show Thumbnails to "Local Files Only" and "Count Number of Items" to "Local Files Only"
Then click on the NFS mounted directory in Nautilus.
You will not have any fun.
This bug has been sitting in the Nautilus bug tracker for over three years.
A fix would not be rocket science; just rewording what it says in the Preview tab would do. Three years have gone by and they can't even reword it. Which is fine, but it doesn't say much for the file manager and its interoperability across systems.
You would probably be better served by writing a two-line bash script that calls sshfs and mounts it through FUSE.
I don't see the point in going through the GIO monstrosity for this.
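Something like this, presumably (user, host and paths are placeholders):

    #!/bin/sh
    # mount the remote home directory over ssh via FUSE
    mkdir -p "$HOME/work"
    sshfs user@work-desktop:/home/user "$HOME/work" -o reconnect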
That all sounds neat, but how does it work on less-than-ideal networks? The non-trivial jitter and latency would mess up programs that assume things are local. Even things like audio buffering need to be network-aware: if local, low latency can be achieved; if remote, then buffer to avoid skips.
I'm a bit suspicious of things that make network access transparent.
I think the thing people tend to miss when looking at Plan 9 is that, while the infrastructure it provided doesn't explicitly solve all these last-mile problems people have with network transparency, it does solve several major problems you run into before you can even reason about those. Having a uniform network protocol, a reasonable multi-node security model, and a workable method of uniformly addressing remote resources are the horse; what you do with it is the cart.
Certainly horrible if we were to switch to it now, but if it had caught on, maybe we would have put more money into improving networks instead of improving CPUs.
OTOH, Plan 9 was designed as a company-global "machine": you have a room filled with CPU clusters and storage clusters, and your own simple, stupid terminal to manage your resources. But your terminal is only a view; ultimately the data lives in the storage cluster and is processed in the CPU cluster.
As you can see, CPU and storage were expected to live on a high-speed, low-latency link because most of the data would fly between those two. Your terminal was expected to be anywhere else, possibly on the other end of a slow ethernet cable, and would coordinate the transmission of data from there. But most of the transmission would be between CPUs and storage. It wasn't supposed to be a truly p2p decentralized model.
> Or another example: you're working at your office with a desktop computer running Plan 9. Then you leave the office for the night, but want to finish something at home, so you open up your laptop and have access to all your work files as if they are stored locally on your laptop. You just edit them in place and the copy at work is automatically updated as you go.
No more a nightmare than someone bringing in a flash drive and copying the file to it before they left. You'd only have access to things you already had access to.
Right, but there are still access control settings in Plan 9 that would limit access if the administrator chose to use them; there is just more flexibility in what can be done by default.
I wouldn't be so sure. Allowing employees to move arbitrary hardware running arbitrary software into their VPN is a security risk many companies do not want to take.
That is why it took pressure to get iPhones into enterprise networks.
I just got into a mess where the company VPN uses a Windows/Mac-specific client, with no Linux version and no other way to access the VPN. So knowing how to set up VPN access 'on my own' is currently of no use to me.
It was mostly an example. And it's relatively easy to avoid the IP issues: just don't let employees' personal computers be part of the corporate Plan 9 system.
I think one of the main reasons Plan 9 didn't succeed was simply that it wasn't free software from the beginning. Plan 9 was released around the same time as Linux, but it was treated as a secret and then marketed directly. That way it simply missed the opportunity that Linux seized because Linux was free software. Only in 2002 was Plan 9 released under a free software license.
My understanding is that Plan 9 is the original intent of Unix materialized. That is, the fundamental concept that everything is a file.
Plan 9 managed to prove that everything can be a file.
I think the difference would have been that some things would be easier to achieve, if Android had been built around Plan9. But Plan9 itself doesn't have the maturity that Linux has, so bringing it to maturity would have taken time.
When I tried to run Plan 9 in a VM I gave up because I couldn't figure out how to open a shell in the window manager and type ls. I also couldn't figure out how to use the text editor (why can't editors be simple enough so you can just type?) If Plan 9 booted in text/terminal mode I'd use it.
Does anyone know if you can boot Plan 9 in text mode?
However well-designed the OS is internally, it also needs to be approachable externally.
Your points seem pretty obvious. In order to get people to switch, at least some of the basic functionality people use every day should be the same.
Without similar basic functionality, people will use it briefly, get frustrated, and then never use your OS again. They're also going to tell all their friends and your new OS is dumpster bound before it even gets out of beta.
It was not obvious to Rob Pike (who if not the "leader" of Plan 9 was at least the developer with the highest status).
Pike seemed to lack an understanding of how to entice users whose motivations and interests are different from his own to go through the trouble of actually learning and acclimating to his software. A general lack of marketing savvy, perhaps.
Here is another example of that:
In a post to the Plan 9 mailing list (9fans) written in the 1990s, Pike seemed genuinely confused as to why some browser maker (Opera, maybe?) did not respond positively to his invitation to port their browser to Plan 9.
This was at a time when on a really good day Plan 9 had fewer than 200 users (most of whom were researchers at Bell Labs and maybe coworkers of those researchers). "I never even received the courtesy of a reply," is how Pike ended the post.
I did not mean to focus so long on one personality. The important point is that there are a lot of things one has to pay attention to if one expects a particular piece of software to gain widespread adoption. In particular, it is not enough to show that you are a very impressive person surrounded by other impressive people with some very innovative idea.
You're leaving out many posts from earlier times where Pike explicitly tells people they're probably never going to get access to Plan 9. It was a research operating system, and even the name was selected to make it unmarketable.
In a nutshell, nobody ever cared how well it was received outside of Bell Labs.
It's an odd failing of incredibly smart people to assume that their needs are the same as everyone else's. It seems to be very common (maybe dumb people make the same mistake as well, you just don't notice it as much).
The ideas from Plan 9 are very compelling, and parts of them, like namespacing, have been adopted; that's why we have Docker now, to some extent. But for operating systems, building from scratch is hard; there is a lot of code. It is getting easier, though. See it as the prototype...
Plan 9-style namespacing -- as something that users can do directly, without privilege -- has definitely not been adopted outside Plan 9. What we have is a pale, pale shadow of the capabilities it exposed.
Unfortunately, true user control over the namespace is incompatible with the Unix security model (thanks to setuid being the only means of privilege escalation), and the namespace composition that lets you, for example, just stack bin directories onto /bin instead of using a PATH requires that more than one file with the same name can exist in the same view, which is (probably?) incompatible with POSIX.
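For anyone who hasn't seen it, this is roughly what that looks like in a Plan 9 profile; any user can do it in their own private namespace, no privilege and no $PATH involved:

    # union the user's own bin directories onto /bin, after what's already there
    bind -a $home/bin/rc /bin
    bind -a $home/bin/$objtype /bin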
As far as why Plan 9 didn't break the status quo, I'll have to agree with what Eric Raymond said in 2003[0]: Plan 9 was not "compelling enough," whereas Unix was "good enough."
In my own opinion, a sufficient number of people, projects, and services were already using Unix-based systems, and they had constructed their projects and services around this Unix model. The marginal benefits of Plan 9 were not great enough to offer sufficient incentive to deviate from the Unix model.
ESR is wrong on this point. Not completely wrong; I can imagine a world where things went differently and his assessment would have been correct, but it's not the world we live in.
Plan 9 "failed" because it spent almost 2 decades between "almost impossible to license" and "almost unmaintained".
AT&T was more concerned with preventing anyone but them from making money off of Plan 9 than they were concerned with actually making money off of Plan 9.
Lucent only got it long after the ship had already sailed; they don't seem to know what to do with it. It's commendable (and surprising) that they did not just gun it down.
Plan 9 is a terrible choice for a phone. It's meant more for networks of devices that work together than for standalone devices that download shit. I don't use Plan 9 when I'm away from my file server and CPU servers because it loses a lot of the flexibility and a lot of what I love about it.
I think that if you replace Linux with Plan 9 the world would look largely the same. It would probably work better together though.
>It's meant more for networks of devices that work together instead of standalone devices that download shit.
Why should my phone be just a standalone device that downloads shit? I want a phone that can access the screen on my TV and send it videos that sit on my file server, or check 9p-enabled sensors and display their status, or integrate with all sorts of crap around my house or at work.
Rio, acme, and rc would be an awful match for phone hardware, but the plan 9 kernel and 9p protocol would be very nice.
Phones and the apps that run on them have adapted to run reasonably with intermittently available, slow, pricey networks and relatively low amounts of power. Does Plan 9 have a smart way to deal with these network and power constraints? It seems unlikely since it wasn't designed for it.
To me that's a bit like arguing that Linux/Windows/OS X should have driven broadband take-up.
What actually happened was that a meta-OS - the web - drove broadband take-up, and broadband providers scrambled to improve the technology to meet demand.
The demand for super-bandwidth mobile services isn't there in anything like the same way. And there's no equivalent meta-OS for mobile.
But... at some point we're going to be moving to non-local storage and non-local processing, and Linux isn't really ideal for that.
My guess is that will happen when computing finally starts moving past concepts that were developed in the late 1960s. AI may well be a driver of a non-local distributed computing which isn't based on the cycles-as-utility or cycles-as-private-resource models we're stuck with now.
Is it too unrealistic to consider the possibility that the web could evolve into a single connected intelligent application that automatically load-balances and distributes cycles and storage across all connected devices?
The intelligent way to reduce power consumption for a radio is to not leave it on all the time. Instead you batch up requests and do them all at once. For example, a phone OS has a special push-notification system so that status updates from different apps are delivered to the phone in the same radio cycle.
This is fundamentally different from a world where you assume always-on connectivity. It's more like the old pre-Internet days where email and Usenet were stored and forwarded.
If you have a compute task to accomplish on a phone, there's a tradeoff between doing an RPC and computing it locally, in terms of which uses less power and has less latency. As phone CPUs get faster it becomes more feasible to do compute-only tasks locally. This also increases availability.
So I'd argue that the trend is more towards offline computing and data synchronization protocols, not always-connected computing.
It's an incredibly slow wire protocol if there's any latency involved. Copying a large file from Bell Labs' servers to my California-based system via 9P took an order of magnitude longer than using HTTP between the same systems. Unfortunately that's kind of baked into the design of 9P: roughly speaking, each read is a synchronous request/response, so throughput ends up bounded by the round-trip time.
aan(1) can handle the network bit. I use that to mount my fs on my laptop over my phone so if the net drops it doesn't bother me as much. 9front is pretty average on battery. I get the same usage times as FreeBSD.
If the conniptions over changes like kdbus and systemd are any guide, one problem is that there are too many people invested in "that's the way we've always done it."
The fact the Gods of Unix moved on doesn't seem to bother people.
I don't see how kdbus and systemd are relevant here. The design philosophy of the latter is just about a polar opposite to that of Plan 9.
On the other hand, major Unices being more Plan 9-esque would actually bring them closer to the Unix philosophy that opponents of the aforementioned technologies frequently cite.
It doesn't really seem "dead" (though close to it), it's interesting to me that even 6 years after this post Plan 9 was still being ported.
To be honest, a fair amount of the good features have been ported from Plan 9 into Linux. I don't bother using Plan 9 personally because it doesn't have a lot of the support I need, and many Linux distros support what I need.
However, if there were more general support I would definitely use it; it's pretty slick.
I only encountered Plan 9 once, and then only the header inclusion scheme it promotes for its C code. Which, IIRC, goes like 'header files must not include other header files, hence every source file has to include the headers for all declarations used in that source file and in the headers it includes'. Which means that for any header file you want to include in your source, you have to go figure out which other headers are needed to pull in all the declarations it depends on. For every single source file. I really failed to see how any possible benefits of that would outweigh the cons.
I wouldn't be surprised if Plan 9 builds are an order of magnitude faster per LoC; deeply nested includes and cyclic includes waste a huge amount of time.
Avoiding cyclic dependencies might also be a good thing in terms of overall software design.
Absence of a built-in simple database model, I think.
The unix way is to use a flat file, which turned out in the long term to be inadequate. Part of the underlying reason for systemd on Linux is that people want to configure their system with a robust database, not shell scripts or ad-hoc text file formats which must be edited by hand, or by custom database drivers (like Berkeley DB). Proplist might be another partial solution to this requirement on systems which lack an active data model. BeOS had a strong database-like file system which may address the same thing.
Architecturally I very much like the look of plan9. From a user interface perspective it's the pits. The UI seems to be anchored to the work-flow of the initial developers.
The last problem is a hurdle that all good operating systems with fewer than a hundred million installs face: driver support.
Plan9 with a Wayland style compositor supporting hardware acceleration could be a base for some cool new directions in UI. A Raspberry PI running Plan9 with a spiffy accelerated compositor and plan9ish file-framebuffer-windows would be enough to convince me to dive in and have a play around.
I keep a Raspberry Pi on my desktop with plan9front because slow as it is, it's still the fastest way to get a VNC/ssh client going, but I don't think Wayland would be much of an improvement (although anything is better than Rio at this point).
The problem here is that most of the plan9 community is too set in their ways to counter the dogmas around, say, Acme and three-button mouse usage (just to use an example that is UI-related but not strictly about the environment) or has moved on.
Mind you, plan9front is still being maintained (and fixed, including EFI and 64bit stuff), but people doing UX and graphics seem to be in short supply in that neck of the woods.
Have you ever gotten in-between a bunch of siblings fighting? They're all mad at each other until an outsider comes in and then suddenly there's a unified front against the new guy.
I blame the ultimate demise of commercial unix on infighting between HP, Sun, Cray, Digital, Tandem and everyone else who had a better idea of what UNIX should look like: i.e. how their UNIX was perfect in every way and safe to lock into because it addressed all your needs.
I remember impassioned arguments about whether HP-UX or Solaris was the better server to host Sybase or Oracle because the developers of said products developed on one first and ported to another, or so it was said.
When I tried to show Linux to my peers in the early 90's it was "Meh, proprietary hardware is better than PC hardware, why would we use that?" Eventually as we all know commercial Linux won the day over pretty much everything else.
I also tried to show people Plan 9 and was again shot down by people who lacked vision, but this time they were right. The only thing it seems most UNIX family members agree on is that Plan 9 is the red-headed cousin every UNIX vendor hates.
For me it had echoes of Apollo's open namespace concept on their OS. But as others have stated a) it didn't have any compelling reasons to adopt it over standard UNIX for an average business, b) it had a bunch of UI quirks that only a mother could love.
I still think we have things to learn from Plan 9 if you take the open namespace concept and wrap it in a container. My company (a Fortune 50) is getting rid of laptops, privileged remote access (no root over VPN) and even desktops (most everyone's hosted on virtual desktops now.)
Why not give me access to a desktop wrapped in an encrypted container? I boot their OS, it establishes contact with a server that verifies my boot-disk is uncorrupted and then downloads whatever I need to work inside the container, but once I'm done it's all destroyed until next time?
I can operate inside my employer's namespace but once the access is gone there are no local traces? shrug
Anyway back to Plan 9, it was good. It wasn't great enough to make anyone switch.
Plan9's successful implementation of the "everything is a file" abstraction has yet to be seen in popular unices... I wish it weren't so, it's a beautiful and simplified abstraction compared to what we commonly have today.
Also, its namespacing, and a few other opinionated ideas...
QNX is doing fine, though. Not as well as it could be, but it's really doing more than OK. I wish RIM had the foresight to open-source it.