Plan 9: why no compelling successor to Unix has got off the ground (faqs.org)
51 points by ionfish on Jan 24, 2009 | 38 comments

It's a chapter from Eric Raymond's The Art of Unix Programming. His take is that Plan 9 wasn't enough of an advance over Unix to make a compelling switch. I think that, except for times when the market is in a state of flux, a new entrant with a modest improvement doesn't stand a chance. I remember reading a while back that a newcomer won't blow away an entrenched incumbent unless there is a five-to-one price/performance advantage. That was the world of atoms, but something like it probably holds in software. Linux won because free trumps $500 per CPU, and it was really the same thing as Unix.

Plan 9 seems to my untrained eyes like a more than modest improvement, but I suspect even immodest improvements will generally not be sufficient to oust an entrenched solution that is, as Raymond says, "just good enough."

Operating systems are a means to an end. What application can you build on Plan 9 that you can't readily build on Linux, or Win32?

Choosing one's operating system is not merely about possibility, it's about productivity.

If a system is simpler, and has fewer and more robust APIs that work for more attached systems, then in theory at least it will be less work to program for, the programs will require less maintenance, and tasks will be accomplished with less friction.

The question, of course, is whether this improvement is sufficient to warrant the many headaches of changing operating systems; in general, it isn't, which is why Plan 9 failed.

Sure. So, what's the application that can be built more easily, with less friction, on Plan 9? I'll make it easy; what's the application that's easier to build on Plan 9 than on Win32?

From the article:

> There is no ftp(1) command under Plan 9. Instead there is an ftpfs fileserver, and each FTP connection looks like a file system mount.

So an application that needs access to files on another system can make one call to ftpfs, then go back to using normal system calls to read and write files. With a conventional ftp client, you would instead have to write the application against the FTP client's API.

Presumably this advantage wasn't enough to get Plan 9 lots of users.

I don't even think it's an advantage to application developers. Some of the ideas behind application-filesystems have been discredited (for some of the same reasons as RPC was discredited), but more than that, it's just not that much more convenient for the application developer.

In every programming environment I can think of, you're only ever a library call away from FTP anyways.

I think it is an advantage to application developers. A lot of good ideas die because the programmer has to write 900 lines of cruft wrapper code to experiment with a 100-line cool idea, and the programmer never makes it through the 900 lines.

If you can mount web pages as a file system, then you can write a web search indexer in a few tiny lines of code. In fact, you can focus simply on the index-and-search functionality, and ignore whether it is a web search engine or a desktop search engine.

Alternatively, you can use curl or some other library, and you can still get the job done. But you end up thinking a lot more about interface plumbing to write applications, and computers end up carrying huge amounts of duplicate code -- curl, the code in various browsers, numerous different implementations of client-side HTTP all over the place, many of them incomplete or buggy.

If Plan 9 was in fact "simpler, and [had] fewer and more robust APIs", then all systems programs would have benefited.

There's a difference between "an operating system as a platform for developing applications" and "an operating system as a system-level REPL".

As a platform for developing apps: not sure about plan9.

As a hypothetically richer system-level REPL: having a larger subset of "everything" accessible through a filesystem-like interface allowed Plan9 to offer at least the possibility of more possibilities here.

FUSE and its contributed filesystems are pretty close in spirit, I think: e.g., the grabfs filesystem for MacFUSE (which gives you access to per-window images via a file system interface).

It's not about what you can or cannot build. It's about how you build it. For example, if you build a distributed system on Unix or Win32, the MPI API is almost the only way to go, which turns your code and your entire application into a complete mess. Plan 9, by contrast, takes a very different approach, one that makes your code neat, lean and robust, and lets you dynamically add and remove nodes without rewriting or restarting your app.

I believe some Linus Torvalds will eventually come into the Plan 9 world and revive the system and its ideas. From my point of view, Plan 9 is an ideal infrastructural solution for heavily loaded distributed web systems (AKA cloud networking) and other Web 2.0 stuff.

PS. pls don't mention Mosix.

Operating systems are a means to an end other than building an application. Libraries are a means to the end of building an application. Operating systems are a means to a different end: running multiple applications together.

The way that operating systems do this has changed over the years. At first, it was just by letting one run while the other is waiting on the tape drive, or the user. Later, it was by letting one read the files of the other, or by permitting you to readily use hundreds of kilobytes of memory on a 16-bit machine by connecting processes through FIFOs, or by allowing objects in one process to call methods on objects in another.

There were a number of kinds of interactions between applications that seem like they should be more straightforward to build in Plan 9 than in Linux. I don't have any experience with Plan 9, so I could be wrong about these:

Untrusted processes in Plan 9 can do the equivalent of chroot, and because things like network access go through the filesystem, chrooting them can restrict those things too.

Something like VNC, providing a shared window to run graphical applications in, should be nearly trivial to build in 8½, and wouldn't lose accelerated graphics --- it should even be easier than in the VT-100 world where screen runs. Virtual desktop software should be easier, and work better, in 8½ than in X (at least prior to Compiz). Spreading such a large virtual display across more than one physical display, as with x2x (but with the ability to move windows between them), is the same problem as the shared display.

A networked chat application should be able to use 9P (within the SSI), so the client could be a simple shell script, while the server merely has to provide a 9P service accessible over the network.

And then there are the examples that are actually in the Plan 9 papers: exporting your local process namespace to a compute server so that your compute-intensive process there can access all of your local resources transparently; 8½ itself.

And of course anything that deals with text on Plan 9 can assume it's in UTF-8, rather than guess. (To the extent that's not true, it's because it's trying to interact with programs beyond the boundaries of the Plan 9 system. You can't expect Plan 9 to make much of a difference for how applications not on Plan 9 interact.)

There are any number of features that are implemented currently on Linux, but badly, that a cleaner architecture like Plan9's would support better. x2x is one example; the "virtual filesystems" in GNOME and KDE, which permit you to access network servers and digital cameras and CD audio (but only in some applications) are another. International text is a third; I don't know about you, but I'm constantly running into character-set incompatibilities. (What do you mean, Python doesn't know what encoding my source code is in? And why isn't naïve a valid variable name?) Attempts like Janus and Plash and CPUShare (and Chrome! and Native Client!) to run native code in an environment without full access to your account are, I think, a fourth.

There's an Inferno browser plugin: http://www.vitanuova.com/inferno/pidoc/index.html . It's similar in spirit to Native Client and Java.

The other disadvantage is that the other OSes (Linux, at least) can steal the good ideas of their competitors. Take /proc, for example. Plan 9 may have a nicer interface, but it cannot compete at the feature level.

The Linux and BSD /proc is a pale imitation of plan9's, though. The most interesting ideas in plan9 are inherently incompatible with Unix because they're direct consequences of a deep overhaul of its security and file/namespace systems. (Much deeper than, say, adding sudo.) The free Unices' /proc filesystems attach to the global filesystem namespace, but on plan9 each process can have its own, and consequently what you can do with them is very different.

(FWIW, I detest the plan9 GUI, and am more interested in microkernel-based approaches such as in Minix 3, but getting into that in any real way is so far down my spare-time-fun list that I can only consider myself a curious onlooker.)

" Linux won because free trumps $500 per cpu, and it was really the same thing as Unix."

Linux won the hearts of many in the mid-to-late '90s because it was VASTLY better than the mainstream OS of the time. That Linux was the first thing many tried, before the other possibilities (Minix, BSD, Plan 9, etc.), was mostly because of the name.

Yes. The name. Linux sounds cooler than any of the other alternative OSes, and therefore it won people over.

By 1994, when I first used Linux, it was vastly better than any other Unix for interactive use. The basic Unix software was much higher quality than other Unixes' (see the original "fuzz" paper for some measurements of this), much more featureful (look at the old joke about GNU Hello, the "hello, world" program with 150 options), and much more usable (e.g. --version and --help, long options in general, tab-completion and WYSIWYG command-line editing in bash; compare to csh !-2:s/vresion/version/).

Linux defaults in 1994: fvwm, rxvt, color (in ls and rxvt), Emacs, Seyon, less. Unix defaults in 1994: twm (or mwm if you were really unlucky), xterm (huge memory hog), black and white, vi (not Vim!), cu, and a more where you couldn't scroll backwards. Also remember term? You could get, approximately, an internet connection through a dialup shell account on Linux, by the simple expedient of recompiling all of your networking software to use term.

When I started using IRIX in 1994, ls to a terminal defaulted to single-column output; find . -follow -print on a directory with symlinks up the hierarchy would infinite-loop; find defaulted to starting from no directories and not doing anything with the files it found; etc. It had a much prettier GUI and 3-D acceleration, though.

Remember also that at the time Microsoft OSes shipped with no TCP/IP support.

Actually, Linux won over BSD because BSD's legal status was in question when AT&T sued BSDi.

Also, Linux was appealing to hackers: even today it's harder to contribute to BSD, which has a more centralized development methodology than Linux.

Not only that. The BSD license does nothing to prevent a competitor from grabbing your contributions and making a proprietary product with them.

Linux is a better target for companies like Canonical or Red Hat because of the GPL.

BTW, this is why Linux is a better target for companies like IBM, HP, SGI, Oracle and so on: because when they contribute something they can expect to get all other contributions in return. There is no such guarantee in BSD.

And yes. The AT&T lawsuit did not improve BSD's prospects a bit.

I used Plan9 for a while, I had a project going with some other people that involved a lot of distributed access, and Plan9 was pitched as solving a lot of our problems from the start, that we would have to hack around if we used anything else. (I describe the project down below, for the curious.)

The title, "Plan 9: why no compelling successor to Unix has got off the ground", presumes that Plan 9 is better than Unix. Plan 9 sucks, and is a waste of your time.

The thesis of this article is that Plan 9 was better than Linux, but that Linux was good enough that no one had a big enough reason to go to the trouble of switching; this fails to take into account that Plan 9 was dead before Linux ever got out the door.

One reason for that was the license. For a while it was under some Bell Labs license that no one was sure was "open source" or not. As Lucent took over the labs, and then had troubles itself, there was a lot of uncertainty about it. At some point the license was changed or clarified or something, and the new license could possibly be interpreted as allowing you to re-license under the GPL; there were rumours and hints that Rob Pike or some other iconic figure had told someone the goal of the license was to allow that, but that it was obfuscated to get it past corporate lawyers. That didn't inspire confidence.

At one point in my life, I am embarrassed to say, my goal was to spend the next year sitting down and reading every single source file in Plan 9, putting a GPL or LGPL license on it, and then redistributing "Plan9-GPL". I was convinced the license allowed it, and I figured that if I did it, stood as the obvious target for a lawsuit from Lucent, and was not sued, then Plan 9 would take off. One good thing about Plan 9 is that the total number of lines of code is low; one person could do that. However, like other powerful computer systems, sometimes a lot is packed into few lines.

The user interface is worse than TWM. Its main editor basically requires that you constantly have one hand on the mouse and hunt and peck keys with the other hand, as if it was designed by one of those old computer-illiterate people who can't type and clutch the mouse in a death grip the entire time they are in front of the computer. Certain people in the Plan9 "community" talk about rio like it is something better than Windows 2.0 or Commodore 64 GEM, and make a lot of jwz-like snarky comments about X, but it isn't.

I suspect the reason for this is that no one actually uses Plan9 on a daily basis. The headers of emails on the 9fans mailing list indicate that everyone except for a few holdouts uses Outlook. I think they edit their files in windows, and copy them to Plan9 via ftp or samba.

I wanted to document how to set up a "node" for our project in painstaking detail. As a result, I repeatedly wiped a disk using dd if=/dev/zero from Tom's root/boot floppy, and re-installed again. A few weeks into my experimenting, they updated Plan9 and it lost the ability to write the MBR, so I had to fdisk from Linux each time. This remained a bug for nearly two years, causing me to realize that in all probability, no one in the Plan9 developer community was installing any new computers.

That was a shock to me, as I had invested a lot of money in hardware that would run Plan9 (specific video cards and wireless cards, mostly) and a HUGE amount of time. When I saw some poor dude post on the 9fans mailing list 18 months later or whatever wondering why Plan 9 wouldn't boot after a clean disk install, I knew it was dead and that all that time was wasted.

I believe that most instances of Plan9 are currently run in VMWare on Windows desktops. If you are doing operating system research, and operating systems are the interface between application code and hardware, and you need two layers of closed source between you and the hardware just to get started, you have failed.

It may have once been a framework for some cutting edge experiments in operating systems, and maybe still is. The C compiler it has is fast and nice, at least compared to gcc of the 2002 era or so. If I had some reason to write a C compiler, I would dig up the 9c code and take a look at it, and possibly start there.

I liked a lot of Plan9's ideas, but the whole experience made me appreciate Linux a lot more. I don't think Linux needs to take the best ideas from Plan9; Plan9 needs to steal the mediocre ideas from Linux -- command completion, ports of emacs or vi or nano, the basic stuff that lets you get to working on the cool stuff.

The people involved with Plan9 seem very interested in the interface between application code and the operating system, and less interested in the operating system itself -- that's why they run Plan9 in Xen on a Linux computer. However, for people supposedly interested in that interface, they don't seem to write much code that uses it.


P.S.: The project was to make a giant tit-for-tat resource-trading system. We were starting with wireless access, so that if a person put up a wireless access point, you could use it only if you had a wireless access point at your house that was also open to the first person. Next we were going to do offsite storage, so you would make a few gigs available to the system and get a quota of a few back, but your stuff would be distributed and replicated across many other people's drives, and thus resistant to disasters; and then a CPU-trading thing similar to folding@home-type systems; and so on. There was no business plan; our goal was to change the world, not get rich (the participants came out of Austin's cypherpunks group). We had a lot of excitement and hype going, and it died due to interpersonal fighting (if you knew the cypherpunks from that time period, you know what I'm talking about).

Plan 9 admittedly sucks in many ways. It has limited hardware support, a weird user interface, and no active community. But it also has redeeming qualities that I miss every day I have to program on Linux.

In Plan 9 every process has a namespace associated with it that represents all the resources it has access to. For example, /dev/mouse represents the mouse in Plan 9. But typically it isn't provided by the kernel but by the window manager, rio. So when you open rio it opens the mouse and then provides a new /dev/mouse for each process launched within it. Every process has a different /dev/mouse that is only active when the mouse is in that process's window (if it has one).

To draw to the screen there is one function: draw(). You pass it two rasters and some coordinates and it draws one on top of the other. That's it. Everything works across networks and architectures.

The thread library is wonderful. Threads in Plan 9 are non-preemptive and run in a round-robin fashion. They communicate using CSP channels. CSP can greatly simplify the design of many programs.

A few of the programs I wrote would access the mouse and keyboard in separate threads. You could use the regular open() and read() calls from each thread, and there was no need to worry about locks. When I wanted something to run on both CPUs I forked a process, which is very cheap.
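That structure translates directly into Go, whose channels descend from the same CSP lineage as Plan 9's thread library. A minimal sketch, with the mouse and keyboard event sources simulated rather than read from real devices:

```go
package main

import "fmt"

// Each input device gets its own goroutine that reads events and sends
// them down a channel; the main loop selects over the channels. No
// locks anywhere -- communication replaces shared state.
func mouse(ch chan<- string) {
	for i := 0; i < 3; i++ {
		ch <- fmt.Sprintf("mouse %d", i) // simulated mouse events
	}
	close(ch)
}

func keyboard(ch chan<- string) {
	for _, k := range "abc" {
		ch <- fmt.Sprintf("key %c", k) // simulated key events
	}
	close(ch)
}

// run collects events from both devices until both channels close.
func run() []string {
	mc, kc := make(chan string), make(chan string)
	go mouse(mc)
	go keyboard(kc)
	var events []string
	for mc != nil || kc != nil {
		select {
		case e, ok := <-mc:
			if !ok {
				mc = nil // stop selecting on a closed channel
				continue
			}
			events = append(events, e)
		case e, ok := <-kc:
			if !ok {
				kc = nil
				continue
			}
			events = append(events, e)
		}
	}
	return events
}

func main() {
	fmt.Println(len(run())) // prints 6
}
```

On Plan 9 the goroutine bodies would simply be read() loops on /dev/mouse and /dev/cons; the channel structure is identical.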

The text editor, acme, is different and probably sucked for you. But that is because you misunderstood what it was. In Linux and Windows, programs are islands: each island is expected to provide everything the program does in one executable. Acme is not an island. It does not have a built-in spell checker or word counter. What it does have is a namespace that lets you access its resources.

Plan 9 is a system which was deliberately made by very smart people. It has serious issues which make it not often a realistic choice, but it has a clarity and consistency of design that Linux will never know.

Thanks for your insights. I agree that from the point of view of a developer writing complex applications simply, Plan 9 is on the right path. What I read of the cleaner operating system interface is what attracted me to Plan 9 at the start.

I think a lot of the "new" things in programming are also attempts to fill that same need, of a cleaner, simpler interface to the underlying computer. All the "frameworks" out there, Android even, are attempting to clear that tangle of OS interface that has grown up underneath the applications.

However, to have any of the nice design and brain-work come to real usefulness, Plan 9 has to actually address the grungy parts of the problem. The hardware support must be there -- not for all hardware out there, but for the items that have huge market share and are cheap. The user interface needs to be improved, of course, and I think the community is actually pretty active and helpful for its size -- if the other issues were attacked, that would take care of itself.

Thinking over how Plan 9 failed, I am reminded of rms's assertion that there is a GNU operating system, and that there was so much work to do that GNU ended up writing everything except the kernel, which is why GNU is most often used with the Linux kernel from elsewhere. Plan 9 is kind of what an operating system that has just a kernel looks like. They need to copy large parts of GNU as quickly as possible.

I realized what acme was, but I never even attempted any spell-checking or fancy features. I may not have tried hard enough, but I had problems simply placing the cursor where I expected, and simple stuff like that. I never gave it hours of uninterrupted learning and experimentation, which is what I did when I decided to become as good at vi as I am with emacs; given that the whole Plan 9 system seemed dead, it didn't seem worth it.

If I had infinite money and were dictator of the world and etc, here is what I would do to make Plan 9 happen:

-- Make sure that the cheapest Dell laptop had complete hardware support. Note that this is something that Linux often fails at, and even Windows has problems in this area.

-- Re-license it under the GPL (or maybe BSD) license. If that is not allowed under the current license, it is conceivable that a small number of smart people could rewrite it in a year, using the Lucent code as a template, and possibly improving certain areas as they went. Even if they tossed the hard features, once a self-hosting, working, usable system was available under the GPL, a larger community could re-address the problem areas.

> It has serious issues which make it not often a realistic choice, but it has a clarity and consistency of design that linux will never know.

Fail. Reality is crufty.

Ha, I agree with you, really. Reality is crufty and getting things done involves compromise.

Still, I think there is value in studying other ways of doing things. For example, my studies of Plan 9 introduced me to CSP and Erlang-style concurrency, which are a great way to structure some types of programs.

And don't forget that some problems can only be solved by introducing simplicity at some level. For example, Google achieved high levels of reliability and scalability by using a simpler computational model: MapReduce.

It looks like your assessment of Plan 9 is the same as Rob Pike's: the effort (and impact) of making an OS actually work (drivers, usable GUI, etc.) totally overshadows the effort (and impact) of implementing better abstractions.


That sounds pretty much like why PHP won: the effort of making a programming language work (i.e., the standard library) overshadows the effort of implementing better abstractions.

This deserves a post in its own right. Would you mind posting it?

I'd forgotten that text doesn't show up with a link submission. Here's what I posted originally: Not new, obviously, but an interesting story for those of us who don't know it, and some generally applicable lessons. "In 2003 it looks like Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor. Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough."

My long-held theory is it failed for lack of a blurb. When you visited the site, it was hard to figure out what Plan 9 was exactly. Digging into the documentation produced a link to a gzipped .ps file, and after installing Ghostscript, then Ghostview, then some fonts, then uncompressing the file and viewing it, one could read an academic paper describing in arcane language some problems of interest to file systems theoreticians. This state of affairs continued for years. Not surprising at all that it failed to get much traction.

A simple question: Rather than waiting for Linux to slowly absorb Plan 9's ideas, would there be some way to take all of Linux's drivers, scheduling, and other such excellent-but-non-OS-specific code, and code a new Plan 9 on top of that?

What's happening with HURD anyway, on that note?

For anyone who wants to play with it, there appears to be a live CD:


There are issues with the live CD. I couldn't get it to boot in modern VMware (it uses version 4 or so). I recall having problems booting it on a physical system too.

Not to dissuade people, but try to research which VMs/hardware it supports /before/ trying it out.

FTP mounting as a filesystem was the big thing? Wowzers.

It was FUSE, but better, and 15 years earlier.

For instance, FUSE filesystems still can't support nonblocking IO.

I think taking the standard read write calls on arbitrary underlying implementations to the logical limit was the big thing.

Unix is nice because you can read off a file or a socket or a mouse or a disk. This lets you do that over ... anything, I guess.
