Ask HN: Why isn't Plan9 popular?
96 points by justlearning on Sept 27, 2010 | 80 comments
Hello, I don't have an elaborate question. I 'discovered' Plan 9 yesterday. It seems like Plan 9 was meant to be the successor to Unix. So why don't we see it in the mainstream?

What intrigued me is that in the last 8 years, with all the Linux/Unix around, I never came across any references to it in books, blogs, or articles.

link: http://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs



"Plan 9 failed simply because it fell short of being a compelling enough improvement on Unix to displace its ancestor. Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough."

- Eric S. Raymond

It's worth noting that Plan 9/Inferno are cited pretty regularly in computer science papers, and a number of Plan 9's ideas have been absorbed by Linux, such as representing 'everything' with the filesystem. Last I checked, the 2.6.x kernels also support the 9P protocol.

"...for the last 8 years" Yeah, well, Plan 9 hasn't had an official release since 2002.


Yeah, I had never heard of Plan 9 until I started doing graduate work. Then one in ten papers somehow referenced Plan 9, and almost every single operating systems paper references it somehow.


How true. IE6 used to have a large market share (and may still have a considerable one) only because, for all of its shortcomings, it gets the work done.


Plan9 doesn't have releases, it has continual development.


A lot of Plan 9 innovations have appeared in Linux. One outstanding issue is union (aka overlay) filesystems, and these are currently being addressed.

That said, Plan 9 seems to see continued use in some niches. For example, Plan 9 has been ported to Blue Gene.

(What's kinda cute is that the Plan 9 kernel (including TCP and such) has fewer lines of code than the Ruby parser, 8000 vs 10000, IIRC.)


I'd add that union/overlay filesystems have been in FreeBSD since at least 6.1 and became stable ca. 2007. I use this stuff quite often.


(In my opinion) Here's a bunch of reasons why Plan9 isn't popular, and then a reason or two why something like it likely will be before too long:

The first obvious answer (also mentioned in other comments) is because "Systems Software Research is Irrelevant". See Rob Pike's paper by that name: http://herpolhode.com/rob/utah2000.pdf

To the reasons Pike offers there, I'll add:

1. The companies that owned the software had poor strategies for encouraging widespread adoption, if that was even ever their goal.

2. The first dot-com boom ca 2000 greatly expanded the IT / programmer workforce and (I claim) significantly dumbed it down. The advantages of a new operating system lack economic impact until a huge number of programmers are trained to use those advantages. We have huge sunk costs invested in maintaining a huge supply of mostly weakly skilled programmers and admins, a sort unlikely to adapt to and adopt a new OS.

3. In a related way, the sectors that could in principle drive something like Plan 9 adoption are heavily invested, by now, in massive amounts of bloatware that, dysfunctional as it is, is both critical and non-portable.

4. Modern hardware is fast enough that fairly high-level programming languages and environments tend to dominate. These often include a "least common denominator" view of OS capabilities so that programs port easily among Windows, Linux, Unix, MacOS, etc. This hurts demand for OS features beyond that least common denominator.

-----------------

Why it might get better:

Notice that none of the reasons listed above apply to a market niche like "the OS for Google's clusters" or, say, massive clusters providing an SQL-based RDBMS.

On clusters like that:

1) You don't need a mass audience of buyers for a new OS. One or a few big customers will be plenty.

2) You don't care about hordes of cheap, weakly skilled programmers. Paying experts is peanuts compared to your hardware, power, and real estate costs.

3) You're not tied to bloatware. You need only run a few things very, very well.

4) You don't need to do "least common denominator" programming and, in fact, any new OS feature that can save you some $s per server-hour is potentially a huge win.

My betting money is that Pike et al. will produce YANOS (yet another new OS), quite possibly mostly written in the Go programming language, really well suited for huge compute clusters.


I'd put more money on enough people finally paying enough attention to DragonFlyBSD for it to really gain traction.

Check out the blurb on their main page (http://www.dragonflybsd.org/):

The DragonFly project's ultimate goal is to provide native clustering support in the kernel. This involves the creation of a sophisticated cache management framework for filesystem namespaces, file spaces, and VM spaces, which allows heavily interactive programs to run across multiple machines with cache coherency fully guaranteed in all respects. This also involves being able to chop up resources, including the cpu by way of a controlled VM context, for safe assignment to unsecured third-party clusters over the internet (though the security of such clusters itself might be in doubt, the first and most important thing is for systems donating resources to not be made vulnerable through their donation).

If anyone truly "gets" cloud computing, it's these guys. And they have the advantage of not being quite as big of a jump from existing systems as Plan 9 is.


> which allows heavily interactive programs to run across multiple machines with cache coherency fully guaranteed in all respects.

Cache coherency gets increasingly expensive with scale. At some point, the costs exceed the benefits.

The location of that point depends on lots of things, but if you're on the wrong side of it....

Also, there are many kinds of coherency.


with cache coherency fully guaranteed in all respects

It's not at all clear that this is desirable. I'd much rather have thin (one or two socket) systems with a reliable network and write for distributed memory. Even four-socket systems have quite unpredictable performance characteristics.


I don't know, they got some coverage with HAMMER a couple of years ago, but where did it lead them?


My betting money is that Pike et al. will produce YANOS (yet another new OS), quite possibly mostly written in the Go programming language, really well suited for huge compute clusters.

Can't wait to see what kernel programming in Go will be like.


Very C-like, I'd wager.


IMO Plan 9 didn't gain critical mass because Unix is still "good enough". It's also very difficult to keep current with hardware drivers for a niche OS.

However, there are some things that came from Plan 9 that are very much mainstream, the most notable being UTF-8 which was invented by Ken Thompson and first implemented on Plan 9. Linux's clone(2) system call is obviously inspired by Plan 9's rfork.

And one could argue that the Go language is a spin-off of Plan 9 as it's substantially implemented by Plan 9 refugees to Google (Ken Thompson, Rob Pike, Russ Cox) and is explicitly descended from other languages that came out of Bell Labs (Newsqueak, Alef, Limbo). Of course whether Go will be a success is an open question, but I think its chances are as good as any other equivalent language.


Speaking of Plan 9 and Go, I have to ask the same question: why isn't Go more popular?


It has only been out since November 2009, and it wasn't considered ready for production then. Maybe it is now, but it definitely hasn't had much time to establish a large user base.


Go is in a "public beta" status, i.e. what you see right now is its development, not its refinement. It basically hasn't reached a 1.0 release version yet. Participants of the mailing list are very early adopters who write some libraries for it. It's still likely enough that important parts of the language spec will change in non-compatible ways, and especially the standard library still isn't too stable (e.g. the http module is a bit orphaned right now).

But the progress is quite impressive, and it might hit a niche for lots of people (those Zed Shaw tends to call "longbeards"). Unix systems programming is still dominated by C, as there aren't that many valid alternatives. Java is systems programming for the Java system, C++ is C++. Ocaml is probably one step too far for bit pushers. And then there's the pedigree of the developers.

I think within a year the language should have matured enough. If someone manages to come up with a "killer application" for it (c.f. RoR/Ruby or node.js/server-side Javascript), it might take off.

It might also end up like Modula-3.


There is also the Rust programming language, being developed by Graydon Hoare (the Mozilla TraceMonkey compiler dude), Brendan Eich (the JavaScript creator) and Dave Herman (of PLT Scheme, now Racket): http://lambda-the-ultimate.org/node/4009

From the language faq : http://wiki.github.com/graydon/rust/language-faq

   "Have you seen this Google language, Go? How does Rust compare?

   Yes.

   Rust development was several years underway before Go launched; no direct inspiration.
   Though Pike's previous languages in the Go family (Newsqueak, Alef, Limbo) were
   influential, Go adopted semantics (safety and memory model) that are quite unsatisfactory:
     Shared mutable state.
     Global GC.
     Null pointers.
     No RAII or destructors.
     No type-parametric user code.
   There are a number of other fine coroutine / actor languages in development presently.
   It's an area of focus across the PL community."


Go is picking up traction with Real Companies doing Real Things. I can say that it's now being taken seriously within Google, and is being used for some production services there.

I've also been in correspondence with one group at another company who have written close to 50k lines of Go code for their storage infrastructure. They're very happy with it. (Just rooting for a better garbage collector; it's coming.)

Despite its familiar appearance, Go represents a different approach to programming in general. One that I find personally refreshing, and it seems to be catching on.


What about D?


That's a good question. D has been around for quite some time, has some well-known names (Bright, Alexandrescu), but has yet to break big. Compared to it, Go, a much more recent development, tends to do quite well (just my estimates, PL mindshare is a bit hard to measure).

I think it's mostly because of some internal and positioning issues. I haven't been following it recently, but I remember some problems with different standard libraries, D 1 vs D 2, and three compilers that each had issues. Maybe that's resolved, but it certainly didn't make things look too good.

Also, it seems to be more a successor to C++, which makes some people a bit distrustful. Yes, it's much cleaner, but also quite a bit more complex than C, and not a total programming environment like Java or C#. So it mainly gets its users from the C++ camp, which isn't the mightiest army nowadays (and whose Windows battalions aren't supported very well by D).

Go is in the news a lot because of its Google and Unix (Pike, Thompson) heritage, because it's a bit simpler (more C crossed with Python than ++C++), and because it has some prominent concurrency support, which is close to the current hype (every emerging language has to prove its mettle there).

By the way, don't read this as a value judgment. It's just a personal assessment why I think that Go is currently a bit more popular and will continue to be so.


Although D has been around for longer, it's worth pointing out that the Go amd64 (aka x64) compiler is the most mature compiler in the Go compiler suite (ARM, x86, amd64), but AFAIK the D compiler is only available for 32-bit x86.


Well... Not so much. I'm playing with it, but apparently it doesn't yet fully run on Windows. I think that'll work for my project, but that would prevent most people from using Go.


I think this is a hard question to answer without a solid definition of popular.

That said, at our startup we use Go for all server side work and I wrote Go production code while still working for Google. In my little world it's very popular, the only other language we use is Javascript.


Interesting, how was Go used at Google? How do they deal with incompatible changes?


that's interesting - did you evaluate something like Node.js or Clojure/Aleph for the same purpose ?


If anyone wants to have a go with Go you can now play with it in your browser: http://golang.org/


I can't speak for general popularity, but the reason I haven't been able to try it as a C & C++ replacement is that I tend to use C or C++ where I have to. Weird hardware, integrating with some existing piece of software, etc. Using Go for that kind of thing is hard and definitely more work than using C/C++. I guess it's the same reason I don't use D.


Because it's still not really on Windows. Which means all the programmers stuck in Windows shops out there aren't doing anything with it, the game writing community isn't doing anything, and even people who write desktop apps are still looking at it a bit funny. Critical mass is slow to build as a result.


No support for apt-get installation?


Because it's promoted by a company with institutional ADD?

/snark


Not substantially performant for use in systems programming.

Not substantially powerful, and not enough libraries, for use in web dev (yet; this could change).

Global GC (inappropriate for the niche it's ostensibly targeting).

Overly opinionated about concurrency. (C++ and Java are the gold-standard for library-centric concurrency methodologies)

Go sits in an uncanny valley.


> C++ and Java are the gold-standard for library-centric concurrency methodologies

The last time I checked, concurrency isn't possible with standard C++. And multi-threaded programming in Java has been called many things, but "gold standard" isn't one of them.


You can use Java's concurrency features to implement better concurrency features (as opposed to the language just giving you better concurrency features from the outset), which is presumably what was meant by "less opinionated" or "library-centric". You can have asynchronous or synchronous message passing or STM on mutable references to immutable data or really anything built on top of Java, but an opinionated language like Go just picks something (synchronous message passing in Go's case) and makes you use it.[1]

[1] You can implement asynchronous message passing on top of synchronous message passing, but not the other way around. Presumably the "least opinionated" language would simply give you the concurrency primitive with which you can implement the greatest number of other concurrency primitives, which presumably[2] is locks.

[2] I could be wrong about this.


I think you're the only one to get my point so far.

A lot of the best libraries and concurrency approaches I've implemented in Clojure so far have been imports from Java. (NIO2 being one example)


>The last time I checked concurrency isn't possible with standard C++

If you don't know, then I can't tell you. People have been doing threading, re-entrant code, libevent/libev in C and C++ for decades. (cf. DragonflyBSD)

I didn't say it was trivial, just that it's an industry standard for when you need to get things done (TM).


It appears you are being intentionally obtuse, so I repeat: neither the current C nor the current C++ language standard provides any support whatsoever for concurrent programming. Therefore you cannot do concurrent programming in either of those languages without resorting to operating-system-specific or 3rd-party libraries.

DragonflyBSD, a project that was only started in 2003, has obviously not been around for "decades" so hardly serves to advance your claim.


>Neither the current C and C++ language standards provide any support whatsoever for concurrent programming.

They're not supposed to. They're systems programming languages. When you're writing a kernel or embedded software, having the language mandate some kind of concurrency model at the standards level is nonsensical.

I wasn't linking decades to DragonflyBSD, I was saying that the techniques and methods have been known and used for decades.

The original point, restated here for clarity is this.

Go was supposed to be a new-age systems programming language. Its current design (mandated concurrency model and global GC) preclude that from ever happening without fundamental changes.

Rust has a better chance of becoming the next systems programming language.

I don't know what your niche or corner of the realm of software is, but I get the feeling you're commenting on a field you have no substantial experience in.

It's evident to anybody with any real experience or wisdom concerning software that having choices like concurrency made for you at the language-level narrow the scope of usefulness for that particular language and pretty much precludes it from being a top-grade choice for embedded/systems development.

Example:

Were it not for the glut of hardware performance and memory capacity improvements in smartphones, Dalvik/Java would've been impractical at best on Android. It's still a vastly inferior experience compared to the iPhone, which is Objective-C with optional drop-down to C/C++.

And don't mention the NDK for Android, it's a bad joke.

Bootstrapping a runtime for a language that depends on a particular concurrency model irrespective of the problem at hand is stupid beyond my faculty for describing with words.

tl;dr good luck using Go for hard real-time systems.

Why do you think C++ achieved popularity so quickly? It was pliable to the demands of the problem being solved just like C at the expense of complexity.


Fair enough, if you define a "systems programming language" as one in which it's appropriate to write a real-time operating system kernel, Go is very likely not the language for you given its built-in garbage collection. Its suitability as a language for OS kernels has been discussed in this thread, with comments from the Go authors: http://groups.google.com/group/golang-nuts/browse_thread/thr...

However, if you define a systems programming language more broadly to include statically typed languages that compile to machine code with good support for a theoretically and practically sound concurrency model based on CSP, Go might very well be something worth investigating.

And I wouldn't say that C++ achieved popularity "quickly" inasmuch as, according to Stroustrup, he started working on it in 1979 (http://www2.research.att.com/~bs/bs_faq.html#invention), but it wasn't in widespread use until the early/mid-1990s and wasn't standardized until 1998. By that timetable I'd say that Go has a few years yet before it can fairly be judged a success or a failure.


>By that timetable I'd say that Go has a few years yet before it can fairly be judged a success or a failure.

I don't care, that's not what I'm talking about.

I was talking about the use-case uncanny valley it sits in.

If you have a point unrelated to anything I said, use your blog. I've had enough grief from hackerne.ws over the past week that I have zero patience for someone using me as some kind of pontification aid.

I know what's involved in making a kernel, and what's typical for systems programming and embedded development today. I don't give a good goddamn what the devs of Go have to say about it.

Use case inappropriate. That's the last time I'm repeating myself.


It's hard to make any judgement of a language that's barely a year old, especially about performance and libraries. All industry standard languages are at least a decade old.

> C++ and Java are the gold-standard for library-centric concurrency methodologies.

It's a standard, but it is not by any means the gold standard. Just about anything would be a better concurrency model. Also see "Threads Cannot be Implemented as a Library": http://www.hpl.hp.com/techreports/2004/HPL-2004-209.html


>It's hard to make any judgement of a language that's barely a year old, especially about performance and libraries. All industry standard languages are at least a decade old.

It's a product of their design decisions and has nothing at all to do with implementation or time.

Look at Rust if you want to see a realistic up-and-coming systems programming language.

>It's a standard, but it is not by any means the gold standard. Just about anything would be a better concurrency model.

Right, well get back to me when you write that multi-threaded web server in Python.


They open sourced it too late for it to get much traction, I think.

Also, does it solve any particular problem in such a way that it can't be ignored? Doesn't seem like it to me.


There were the problems with the license for some years.

http://www.gnu.org/philosophy/plan-nine.html

Once it became "free" it had too little functionality.

There's too little of the software and it's not so easy to port the existing Unix/Linux software:

http://plan9.bell-labs.com/wiki/plan9/porting_alien_software...

Moreover, very, very little of the hardware is supported.


Yeah, it wasn't open source. Edit: wikipedia says it was only open sourced in 2002.

Incidentally, the 'crossing the chasm' approach would be to find some niche where a new OS could succeed - perhaps something where the network effects were less pernicious.


I think you're right. Plan9's first "public" (to universities only) release was 1992, when Linux was just getting off the ground and the FreeBSD & NetBSD projects did not yet exist. People interested in Linux or 386BSD at the time would have probably tried Plan9 if they could have.


It's worth looking at Rob Pike's (one of the Plan 9 inventors) paper, "Systems Software Research is Irrelevant": http://herpolhode.com/rob/utah2000.pdf

My own view is that Linux is Good Enough for most people. And that "good enough" beats "better", at least for most values of "better". This is an admission that "path dependence" matters for the operating system market, and that network effects can cause the appearance of market failure.


FWIW, if you're interested in Plan 9 it's also worth checking out Plan 9 from User Space (http://swtch.com/plan9port/): "Plan 9 from User Space (aka plan9port) is a port of many Plan 9 programs from their native Plan 9 environment to Unix-like operating systems."


I use Plan 9 extensively and intensively, and I believe there are multiple reasons why it is comparatively little-known and little-used. I think the most important reason is the historical conflict over software patents between Rob Pike and Richard Stallman, and the cultural rift that it represented. Plan 9 is now under a pretty decent permissive license, but it still doesn't really function in the manner of a conventional open source operating system project. By the time Plan 9 was under a decent license, the mindshare of FOSS developers and the corporate ecosystem were already completely committed to other projects.

Plan 9 is still relevant and interesting and useful for some tasks. It may still be ahead of its time. Plan 9 has had data deduplication for about a decade, years before network data dedupe storage appliances became an important product. The Linux kernel is still struggling to figure out a decent way to implement union directories in filesystems, but unions have been working elegantly in Plan 9 since its early days.


My theory is this:

- too weird - Plan 9 is very difficult to get used to as a user experience. Novel, fascinating, powerful, yes! But also minimal, impenetrable, and very very different from the UI you're used to as a Unix hacker.

- nonexistent marketing

- too hard to get into initially; I wish they had gone to a public source repository model earlier, although in all fairness the project predates a lot of this.

But, it was still an enormous success:

- UTF8 came from Plan 9

- Linux's clone syscall

- /proc

- numerous other minor things

The main thing I wish I saw more of from Plan 9 is structured regular expressions. I'd love an awk based around them, or an ssam that didn't basically run sam on the file. The second thing I wish I saw more is Plan 9's per-process filesystem namespaces. Chroot and jail are crude hacks by comparison.


> - UTF8 came from Plan 9

Not really, Ken Thompson certainly contributed to it, but they didn't invent it.


No, UTF-8 was invented by Ken Thompson. Source, by Rob Pike: http://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt

Key quote: "Looking around at some UTF-8 background, I see the same incorrect story being repeated over and over. The incorrect version is: 1. IBM designed UTF-8. 2. Plan 9 implemented it. That's not true. UTF-8 was designed [by Ken Thompson], in front of my eyes, on a placemat in a New Jersey diner one night in September or so 1992."


1. It's not marketed -- professionally or at a grassroots level.

2. It's a research OS not necessarily meant for every day use.


I totally agree. From the one-off-not-quite-compatible license to no marketing from Lucent to no real niche to lets try Limbo, Plan 9 never got a firm base with marketing.


Howdy. I installed Plan 9 several years ago (close to 10, if I remember correctly). I thought it was interesting, but it had a few problems:

1. It was too hard to install. I figured it out, but it was difficult enough that I would not recommend it to many others unless they were fairly advanced.

2. The supported hardware list was too small. Mostly, this was a problem with supported NICs. On a system like Plan 9, a machine without a good NIC is pretty useless.

3. The community of users was too small. I did not find any other local users. I found a small community of online users, but they only seemed interested in using Plan 9 to break into systems, and that did not interest me.

I should probably try it again, since many of these things could have improved. I have also used Inferno a bit on a prototype AT&T phone, and I would like to program for that a bit. Linux ended up having most of what I wanted Plan 9 for, and that is probably why I have not tried it for so long.


I earlier posted reasons why (IMO) Plan9 "failed" but will likely (in a new form) succeed on big compute clusters.

The other possible place for a big breakthrough is mobile. For example, if all of your "apps" are in Java and those apps hardly ever touch traditional unix system calls, it is easy to knock a Linux or Windows OS out from the bottom of the stack.


Same reason the Hurd never materialized. Linux was good enough that nobody cared to put in the massive effort required to make the slightly better solution industry-strength. It's something like the death by a thousand cuts, only in this case, the thousand cuts are unsupported hardware and silly programs that don't have ports or package maintainers, but that people still want anyway. BSD gets away with being the unpopular but probably better alternative to Linux by a massive effort to port all linux software to it, and a great linux compatibility layer for things that aren't ported.


Hurd was never fully functional. It never worked well enough to have an opportunity to be rejected for being weird.


I think it is significant that both Unix itself and Linux were category killers. Unix succeeded because people kept going "Wow, there's nothing like this!" and writing in to get it, copying it and passing it on to other people. When Linux took off, there was no Free-ly available Unix-like OS (at that time). Linux quickly gathered a critical mass of users who saw it as something like the answer to their prayers. I guess the strengths of Plan 9 were not such as to create a new category for itself, and its flaws made it an unsuccessful competitor against existing Unix-like systems.


I've worked with Plan9 and Inferno. I think they both failed for two reasons:

1. lack of device drivers for commonly found hardware at the time.

2. Ugly-looking graphics and desktop, and difficult to use as well (relative to what was widely used at the time).

IBM's use of Plan 9 on Blue Gene exploits the server-side benefits, so they need only a small number of custom device drivers, and the GUI is not used at all.


It's a similar situation to why other featureful alternative OSs weren't more popular, like BeOS. Imagine how different today's world would be if Apple had bought Be instead of NeXT (which came with Steve Jobs). Often it's some large corporation's decisions that dictate what technology gets adopted and what fails.


Be did a couple of things that prevented BeOS from going anywhere.

One was basing the machine on PPC processors initially, because it limited their audience. I thought it was a cool idea, but wasn't willing to buy outdated hardware to get the OS. Mac users at the time could have simply acquired the OS and installed it, but the number of mac users willing to do that wouldn't ever be large.

One was not supporting IDE when they ported to x86. When they finally went after a large market, they made sure that hardly anyone in that market COULD use it. I had to do a lot of finagling with my computer in order to install BeOS on it.

AFAIK Apple actually DID try to buy Be before going after NeXT once they realized that Rhapsody simply wasn't going to happen. I've heard a number of stories regarding this, and I don't know which to believe; I was informed that the Be CEO rejected Apple's offer. If someone has better knowledge of that part of history than I and can confirm or correct me, that would be appreciated :)


Be definitely wasted a lot of time in strategic churn, porting from Hobbit to PowerPC, BeBox to Mac, PowerPC to x86, PCs to thin clients, etc. For the brief time that BeOS ran on Macs, the likely goal was not to get Mac users to switch to BeOS but to convince Apple to buy Be. I heard that Apple offered $75M for Be and JLG refused. (Some people assume that Apple offered the same amount for Be and NeXT, but that's pretty unlikely considering the difference in maturity.)

BTW, where you wrote Rhapsody you meant Copland. NeXTSTEP was renamed to Rhapsody (and then OS X) after Apple bought it.


Oops. I lost track of product names frequently, especially since there are so many ;)

You're probably also right about the reason for starting out with a PPC based system. I'd heard about a similarly sized offer for Be, as well as JLG's refusal.

There weren't a lot of realtime OS's out there, and still aren't -- BeOS could have been the next OS/2, only without IBM's marketing behind it. (Can anyone say, "Friendly fire?")


It's because Unix has become very popular and it is good enough for mainstream use despite its flaws. Plan 9 brings certain design improvements but they don't actually justify a need for replacement.

It's actually not that surprising, since good design on its own is never a sufficient criterion for something to become mainstream.


I use the wmii window manager, which is based on some Plan 9 principles. Sure, you wouldn't bring it home to grandma, but for fast, efficient use of screen space with a keyboard, I think it's hard to beat.


Glenda is super cute, but did you try to use Plan 9? I gave it a brief try last year and, well... GNU/Linux is mainstream, FreeBSD is a really good derivative of Unix too, OpenBSD is great when you want a secure OS, and NetBSD would probably run on my toaster... But Plan 9? It has a cute mascot.

Oh, and I really like its name, and the "Plan 9 from User Space" thing, especially after having seen the fabulous movie! But I think of it only as a cool thing to know about; I'd never seriously think of using it in place of GNU/Linux or a *BSD. (Same for OpenSolaris, in fact.)


One killer setup for Plan9 would be to package it up as a VM and distribute it. Maybe Amazon could pre-create some Plan9 EC2 images.


Are you running it?


Because worse is better.


Or because good enough is good enough.


Or Avoiding success at all costs...;)


I've been a Plan9 user/dev since 2000. Have been to two of the IWP9 conferences - sadly I won't be going to Seattle for the 5th unless I score some big cash in the next few weeks - http://iwp9.quanstro.net/

* Licensing - Plan9 wasn't open when open was the in thing.

* No web browser - just as the web became a big thing, no web browser is a deathtrap.

* The gap is too great a leap - until the recent LinuxEMU almost NONE of your fave apps were available - unless you really like cmd line tools (I do :)

* Hard to justify in your organisation - when you're one of only 50 people in the world that know how to use an OS, making it part of your infrastructure is a huge risk.

That said, to suggest it is a failure is erroneous. We have at least one successful company using Plan9 in their hardware - Coraid. They have the commitment, income and a few of those 50 people to make it work for them. IBM also use it on Blue Gene and other super computers, as do Sandia National Labs, LANL and others around the world. It was also used at the Sydney Olympics to control the stadium lighting and Lucent use it in their cell phone masts in Real Time mode.

Come and chat to people in irc://irc.freenode.org/#plan9

Oh - there's also a fantastic port of many of the tools into Linux / BSD. http://swtch.com/plan9port I use them every day - Venti is particularly useful
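If you want to try the userland without installing the OS, plan9port is easy to build from source. A minimal sketch, assuming a C toolchain is installed (the repository location has moved over the years; the GitHub mirror is one current option, and $HOME/plan9 is just the conventional install prefix):

```shell
# Fetch and build Plan 9 from User Space (plan9port)
git clone https://github.com/9fans/plan9port "$HOME/plan9"
cd "$HOME/plan9"
./INSTALL

# Put the tools on your PATH; the '9' wrapper picks the
# plan9port version of a command over the Unix tool of the same name.
export PLAN9="$HOME/plan9"
export PATH="$PATH:$PLAN9/bin"

9 grep pattern somefile   # Plan 9 grep, not GNU grep
9 man 8 venti             # read the venti manual that ships with the port
```

The `9` wrapper matters because many plan9port tools (grep, sed, troff, and so on) share names with their Unix counterparts but behave differently.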


Hey! I'll be at IWP9 5e this year if things go well. Fun to see another Plan 9 user on here. I'm an intern with Sandia Labs, working on my master's degree at the moment, so I've got this work I've been cooking up all ready for a Work In Progress report at IWP9.

To expand a bit, I think Plan 9 has potential in the world of scientific computing. There are a few things scientists expect on their supercomputers, and Plan 9 either has them or can have them without unreasonable effort. They include: MPI, Python (already available), Linux compatibility layer (exists), and FORTRAN 77 (this can happen if we reaallllly need it). Check out the HARE project, where Plan 9 was ported to the Blue Gene systems as part of the ongoing Fast-OS project, which aims to find a better OS for supercomputing. (Basically, Linux is too huge, whereas custom kernels are just too stripped down and customized. Plan 9 is small but general)


Ron's talks last year were fascinating. It's great to hear the truth behind the veil of the HPC world. That they run Linux but then use OS bypass because the Linux kernel is too slow was a revelation; they don't shout that part too loudly when gushing about Linux. The curried syscall system was enlightening.


Thank you, that was kinda my baby--Ron had the actual idea of course, but I did the implementation. It was really great how easy it was to do.


What are you able to use Plan9 for on a daily basis, out of curiosity?


My normal work - programming. Linux / BSD have 9p servers, so one can mount them in Plan 9. The environment is so much nicer both in looks and operation.
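For the curious, here is roughly what that looks like from the Plan 9 side. This is a hedged sketch: it assumes a host named `unixhost` already running a 9P server (such as u9fs) on the standard port, 564; the names are illustrative.

```shell
# Dial the remote 9P server and post the connection in /srv,
# then mount it into this process's namespace.
srv tcp!unixhost!564 unixhost
mount -c /srv/unixhost /n/unixhost

# The remote tree now behaves like any local directory.
ls /n/unixhost
```

Because namespaces are per-process in Plan 9, the mount is visible only to the shell that did it and its children, which is part of what makes the environment pleasant to work in.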


From the 9fans mailing list a while back, someone asked what the plan9 people at Google use:

http://9fans.net/archive/2010/02/344

and

http://9fans.net/archive/2010/02/366


I used Plan 9 for a while, and I really like it; but when I was looking for a network filesystem, I tried a 9p remote mount between two Linux servers over the internet. I was disappointed to find that it was much slower and generally performed worse than sshfs, perhaps due to a lack of caching and readahead. Perhaps there is some way to use 9fs more effectively with a caching / readahead layer; I guess it would be simple enough to implement such an intermediate server in Plan 9.
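For what it's worth, the Linux v9fs client does expose mount options aimed at exactly this problem: a client-side cache mode and a larger message size. A hedged example of a tuned mount follows; the hostname, mount point, and msize value are illustrative, and `cache=loose` trades cache coherence for speed, so it only suits trees that aren't being modified concurrently by other clients.

```shell
# Mount a remote 9P export on Linux with client-side caching
# and a larger I/O unit than the conservative default.
# Drop version=9p2000.L if the server only speaks plain 9p2000.
sudo mount -t 9p \
    -o trans=tcp,port=564,version=9p2000.L,cache=loose,msize=65536 \
    fileserver.example.com /mnt/9p
```

Whether this closes the gap with sshfs depends heavily on link latency, since 9P is a fairly chatty, synchronous protocol without readahead.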

