
 So, why Plan 9? - pmarin
http://www.mail-archive.com/9fans@9fans.net/msg17067.html
======
mhd
While I agree that the everything-is-a-file approach that Plan 9 takes is
pretty great, and it would be great if it found greater use in Linux or the
BSDs (beyond /proc etc.), one has to take those comments about the size and
quality of the code with a grain of salt.

Once again, the last 20 percent of performance takes 80 percent of the effort,
and quite often about as many lines of code. And Linux/BSD had to be that
highly optimized, or they would have failed in the server sector and probably
wouldn't support as many architectures and odd pieces of hardware.

Having said that, I guess that a lot of people would be happy with the level
of hardware support and speed Plan 9 offers, as hardware tends to converge
(RIP MIPS, Sparc, 68k, PA-RISC, S3, Tseng, Token Ring…) and Moore's Law takes
care of the rest.

~~~
supersillyus
What you're saying makes sense to me, but do you have a citation for Plan9
being notably slower than Linux on equivalent hardware? Seems like it'd be
trivially true unless there is a fantastic architectural advantage to plan9
somewhere, but I'd love to see some numbers.

~~~
mhd
It's been a while since I lurked on 9fans (after all the late-adopting
minimalist-tiling-window-manager fanboys started coming in in droves), so I
can't give you any links to recent comparisons.

First of all, the comparison would be pretty unscientific given all the
different variables. Say you want to test common server performance: you'd
probably run something like ApacheBench against a web server. Now comes the
problem: apart from the kernel issues (memory management, ethernet driver,
scheduling), you'll have different web servers, compiled with different C
compilers.

The GCC port for Plan 9 is ancient, and I'm not aware that e.g. Apache is
running on it.

Yes, you can simply test page loads/second with whatever technology is
available on each platform, but then you have to accept that _everything_
differs at once (so not just Plan 9 vs. Linux, but also gcc vs. 8c, apache vs.
pegasus, etc.), well…

------
jacquesm
Plan 9 is actually a great little OS, it is where linux _could_ be if it
wanted to badly enough.

Instead we're stuck with early 70's technology. Which is good enough, but it
could be so much better if not for a bunch of NIH and ego.

No disrespect to the kernel devs, they're doing a great job. But longer term I
wish there was some more real innovation instead of just a slightly better
(and free) mousetrap.

For another interesting take on OSes, look at QNX.

~~~
derefr
Perhaps a group could make a Linux distro with the goal of forking, and
Plan9ifying, the source of every package they ship. No one would have to use
the resulting distro (other than to demo the neat way that the Plan9ified
packages integrate so easily with one another) but the original
authors/maintainers of the upstream packages could adopt the Plan9ification
patches a la carte, gradually increasing Plan9ification across the board. The
patches of one package would always be guaranteed to work with the patches
from any other, as the resulting combination has to hang together in the demo
distro—and, since this would only be an _eventual_ goal of the group, they
would also have to make sure that all their patches didn't _expect_ any
Plan9ification on any other package's part.

~~~
jacquesm
That's not a trivial exercise. The philosophy behind plan 9 is radically
different from some of the ways things are done in Linux, and it would take a
great effort to get this even close to production grade.

Plan 9 is structurally very different from Unix under the hood; it is in many
ways a better Unix, but backwards compatibility was not what they had in mind
when they designed it.

Unix is said to have 'everything is a file' as its mantra; Plan 9 shows what
'everything is a file' really means.
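A rough sketch of what that discipline buys, using Linux's /proc (which the
rest of this thread notes descends from Plan 9) as a stand-in: kernel state is
read through ordinary file operations, no special syscall API needed. On Plan 9
the same idea extends to networking, graphics, and window management.

```python
import os

# Recover our own PID purely through the filesystem interface.
# /proc/self/status holds "Key:\tvalue" lines describing this process.
with open("/proc/self/status") as f:
    status = dict(line.split(":", 1) for line in f if ":" in line)

pid_from_fs = int(status["Pid"])

# Same fact, obtained two ways: once via the file tree, once via a syscall.
assert pid_from_fs == os.getpid()
```

On Plan 9 even opening a TCP connection works this way (writing a `connect`
message to a ctl file under /net), which is what makes the mantra more than a
slogan there.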

~~~
davidw
> The philosophy behind plan 9 is radically different from some of the ways
> things are done in Linux, and it would take a great effort to get this even
> close to production grade.

So it's not just "NIH and ego", it's the fact that people are loath to abandon
working software for benefits that probably seem somewhat abstract?

The "crossing the chasm" approach is to find a niche where your product _can_
win and then go from there. Taking on Linux head-on is just not going to be a
winning proposition.

~~~
jacquesm
It was NIH and ego that stopped the adoption of some of the more advanced
knowledge available at the time the relevant portions of linux were
developed.

There was no 'working software' to abandon at the time, there is now.

~~~
davidw
Plan 9 came much later than Linux, if I'm not mistaken. Things like
microkernels were around then, sure, but is Plan 9 interesting because it's a
microkernel, or because of how you interact with it and how it's architected
(everything is really a file)? Edit: it appears that Plan 9 existed internally
at Bell Labs at about the same time Linux was being developed, but was only
released to the public, under a commercial license, in 1995. It was finally
open-sourced only a few years back.

Also, there certainly was plenty of existing Unix software out there when
Linux (and BSD) came out. Not nearly as much as now, but nothing to laugh at
either.

~~~
jacquesm
Don't forget minix and the Tanenbaum / Linus exchanges.

That's a pretty well documented era, and guess what, Tanenbaum was right.

But Linus was riding high on the momentum that he'd generated and Tanenbaum
lost the popularity contest. But that didn't make him wrong, and over time
those few disadvantages that have been trotted out as the reasons why he was
wrong have all been put to rest.

That's part hindsight, but Tanenbaum had a huge amount of experience and was
already ahead of 'conventional' unix. Some people think he wasn't ahead
enough, but it was certainly a step forward from where linux is, even today.

~~~
acqq
> Tanenbaum was right

Citation needed. Where do you get that? Performance-wise the biggest problem
of OS X is still that part that is based on the "many servers are the OS"
idea.

That's the reason why this exists:

<http://www.ertos.nicta.com.au/software/darbat/>

NT also wanted to move in that direction, but its critical parts ended up not
being "many servers."

Also, there's a reason why L4 was developed (note: post-Tanenbaum): the "real"
microkernels were very problematic performance-wise, and it's actually harder
to build them (example: GNU Hurd).

<http://en.wikipedia.org/wiki/L4_microkernel_family>

<http://en.wikipedia.org/wiki/GNU_Hurd>

~~~
eru
On the other hand, you can view the current fad of virtualized everything and
hypervisors as a kind of exokernel.

~~~
acqq
> you can view the current fad of virtualized everything and hypervisors as a
> kind of exokernel.

No I can't, as long as the underlying OS is still a monolithic one, selected
because it's the fastest and the most convenient to maintain.

~~~
eru
Not all hypervisors have an underlying OS. I should have expressed myself more
clearly.

I was talking about bare metal hypervisors, not hosted hypervisors. (See
<http://en.wikipedia.org/wiki/Hypervisor#Classification>)

We have some experiments with very stripped-down domains (i.e. virtual
machines) that make full use of paravirtualization. They are quite close to
being processes in an exokernel OS. And they boot up really fast, like
processes should.

~~~
acqq
Thanks, my wording was indeed clumsy, but it doesn't change the fact: the
hypervisor simply isn't doing what the classical OS kernel does. It's not
replacing anything; it's just a layer with some specific new functionality. So
_when you're changing the meaning of the terms being discussed_, of course it
can appear that you're winning the argument, whereas you're just performing
tricks. No matter from which direction you observe these systems, the overall
functionality possible with monolithic kernels still hasn't been substituted
with something better.

~~~
eru
> the hypervisor is simply not doing the stuff that the classical OS kernel
> does

And that makes the comparison with exokernels apt. Exokernels are not supposed
to do what normal kernels do.

(Though if you run a normal kernel on top of a hypervisor or exokernel, in a
sense you haven't reached the true potential of the system and your critique
is more than valid.)

------
cturner
On the topic of running old OSs in virtualisation: I purchased "VirtualAcorn"
and took delivery yesterday. It's RISC OS in a box. I'm running it on top of
OS X.

My motivation is that I have a friend who composes and is still using a
twenty-year-old Acorn running the original release of Sibelius on floppy
disks. He bought it directly from the Finns' apartment door back in the day.
We thought it might be a good idea to get his life's work off this collection
of ancient hardware and floppies :)

It has a command-line, but I haven't worked it out yet.

~~~
barrkel
I remember programming on an Acorn A3000 or similar, nearly 20 years back. I used
to program on display models in computer shops, as my family couldn't afford
to buy one. I seem to recall hitting a function key to get a command prompt at
the bottom of the screen, and further dialog with the command line caused the
rest of the GUI to scroll off screen with each new line printed.

I programmed in its dialect of BASIC, which was - I recall - like a Commodore
64 on steroids. I was impressed, in a way which I wasn't by QBasic on PCs,
because I had yet to learn the benefits of structured programming, and missed
the line numbers on the PC.

Thinking back, I think it auto-numbered lines as you hit return, like this:

        10 PRINT "Hello"<press Return here>
        20 |

I remember that in particular being a major improvement over the C64; I had
seen so little, and had very few resources to learn from.

------
onan_barbarian
Some of the ideas in Plan 9 are utterly fantastic.

/proc comes from Plan 9, and I remember being awed by the Plan 9 filesystem,
which used memory and hard disks to cache a giant WORM jukebox (in the Bell
Labs version, anyhow). I forget what the path was but you could just
transparently go and have a look at the whole file system on any given day and
wander around a read-only version of the fs for that day.

I'm sure there's a lot out there that looks like this now, but this was in
1992/1993.

The compiler was also cool as hell but lacked any of the aggressive
optimizations that even gcc has now. It was very much oriented towards
incredibly fast compile times rather than squeezing out the last x% of
performance through optimization. I believe that the go compiler is a
descendant of this one.

On the negative side, it was a very insular culture with an attitude that
'things we don't need to do aren't worth doing'. Avoiding shared libraries is
fine, I guess, as is having a minimalist GUI approach. But telling people that
shared libraries suck and shouldn't be used and using the fact that _you've_
got a tiny static library that does all the GUI stuff _you_ need as proof
seems a bit rich, when the GUI that you produced doesn't have any of the
conventional features of a GUI library. When so many things are being changed
at once it's pretty easy to conflate two unrelated things into a 'win' (e.g.
'haha, we don't have a lame shared library system on our machine, and by the
way, Motif sucks anyhow').

It's fine up to a point, but it was starting to ring very hollow when you had
to go to another machine to run a web browser that looked even faintly like an
actual web browser.

This may all have changed; it's been a long time since I had anything to do
with P9. But I think the 90s are the point where Plan 9 decisively 'missed the
boat' and the insularity and general attitude was part of the problem. If
'ape' (the ANSI/POSIX Environment) had been better maintained and taken seriously,
Plan 9 might have been considerably more popular and its alternate (frequently
superior) ways of doing things might have taken off.

------
acqq
Note that even pros must use some tricks to run any of it on modern hardware,
so it appears it's still mostly an experimental research platform?

<http://www.mail-archive.com/9fans@9fans.net/msg17086.html>

"I am typing this in acme in 9vx running on FreeBSD, using the rio port in P9P
for my window manager. Because I'm away from my home network, I'm running 9vx
with the root on my local machine. When I'm at home, I use 9vx booting
with its root taken from a real Plan 9 file server. I also run it on qemu
fairly often."

------
kia
From one of the replies in the thread:

"You could have pointed out that the entire source tree is smaller than the
gcc manual."

------
wazoox
Don't forget to try Plan 9 from User Space (plan9port), the Plan 9 utilities
ported to POSIX systems.

------
sz
_getting things right the first time around is much more of a practical matter
than you may at first realize._

------
jpr
Because the OpenBSD community isn't arrogant enough for you.

