
Rob Pike: Reflections on Window Systems (2008) [video] - pmarin
http://epresence.kmdi.utoronto.ca/1/watch/630.aspx
======
justin66
"To be honest, looking back on it, Unix today is worse than the systems that
Unix itself was created to get away from."

~~~
Dewie
Can anyone here corroborate that there are better alternatives to operating
systems than the Unix-likes? Maybe anyone that agrees with Rob Pike's view on
this in particular? All I seem to hear is that Unix-y OSs are great, but I
don't hear much about any potential alternatives. So I'm curious about them.

~~~
wolfgke
What Rob Pike is referring to is of course Plan 9 and its derivatives (e.g.
Inferno). Whether you want to consider them Unix-likes or not is your choice
(many HNers will say it's more Unix than Unix).

If you're looking for desktop alternatives to Unix, OSes to look at are BeOS
(an open-source clone is being developed under the name Haiku) and the Windows
NT kernel (don't hurt me: the Windows NT kernel is IMHO far more elegant than,
say, Linux or Mach. What's crappy is the WinAPI in particular; please look
deep below the surface: there Windows gets nice).

The reason you won't find many desktop/server alternatives to Unix can be
read here:
[http://herpolhode.com/rob/utah2000.pdf](http://herpolhode.com/rob/utah2000.pdf)
Hence lots of OS research focuses on embedded systems or virtualization
instead of the desktop or server.

Accepting this, have a look in particular at the L4 kernel family (a family of
very small and fast microkernels). If you like highly secure designs, you'll
probably like seL4, the first formally verified kernel. QNX is also worth a
look (QNX is Unix, granted; nevertheless a very elegant kernel design).

~~~
vezzy-fnord
I'm not sure what huge technical advantage the BeOS model offers over
contemporary Unix-likes in this day and age, other than its file system
support for extended attributes, which are a controversial topic, though
still used by the likes of XFS.

I'd also like some clarification on what you think is good about the NT
kernel. It seems far too entangled with other cruft that forms the Windows
stack.

That said, I'd also like to mention the Hurd. At this point, it's really a
fragile ad-hoc reimplementation of 9P file servers on top of a modded Mach,
but such a model was still quite daring for general-purpose computing back
during the Hurd's original window of opportunity (1989-1995, or so). It
probably would have advanced the state of modern OSes at least a bit, if it
weren't for managerial incompetence.

It's still a Unix at heart, though, despite many important extensions.

~~~
wolfgke
> I'm not sure what huge technical advantage the BeOS model offers over
> contemporary Unix-likes in this day and age, other than its file system
> support for extended attributes, which are a controversial topic and
> nonetheless still used by things like XFS.

BeOS was heavily optimized for multimedia, which I think is an interesting
property. Also, the GUI library used multithreading from the beginning. That
surely isn't interesting for servers, but it is for desktop computers.

> I'd also like some clarification on what you think is good about the NT
> kernel. It seems far too entangled with other cruft that forms the Windows
> stack.

For lack of time, I will give only one example: how to do fast asynchronous
I/O (you know, asynchronous I/O: the hot thing that node.js is about ;-) ;-)
). Under FreeBSD/Linux, asynchronous I/O is really just synchronous
non-blocking I/O (FreeBSD needed to implement kqueue to even allow this in a
fast way; the Linux developers implemented epoll, which is incompatible with
kqueue :-( ). Under Windows NT it is an easy problem that has been solved
(from the beginning?). See

[http://sssslide.com/speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores](http://sssslide.com/speakerdeck.com/trent/pyparallel-how-we-removed-the-gil-and-exploited-all-cores)

for details.

~~~
vetinari
> BeOS was very optimized for multimedia, which is an interesting property I
> think. Also the GUI library used multithreading from beginning. This surely
> isn't interesting for servers, but for desktop computers.

BeOS _claimed_ to be optimized for multimedia, but that does not mean it was.
I remember that I was able to get fluid DivX (3.11) playback on a PII-300
running Windows and Linux, but not on BeOS.

What BeOS _did_ have was a DirectShow-like media architecture, using nodes
and pipelines. But at the time, it was not an effective architecture.

(And yes, the BeOS engineers never managed to support VESA GTF in their
display drivers, meaning that the picture on my monitor was always shifted
relative to other OSes.)

~~~
pjmlp
DivX?

Back when BeOS was still on sale, the latest Windows versions were Windows
2000 and Windows 98, with XP around the corner.

I don't remember ever using DivX on those systems.

~~~
vetinari
Yes, DivX.

The original hacked codec appeared in 1998. It got a boost in popularity when
the movie The Matrix came out (1999).

This happened in the Windows 98 timeframe. Windows 2000 was in early beta, not
yet on sale, and XP was unheard of. The current Linux releases were Red Hat 5,
5.1 and 6; BeOS 4 and 4.5.

~~~
pjmlp
Actually, I think I was still only using Real back then, but I can't really
remember.

I would need to go dig into my Zip disk collection, a few thousand kilometers
away from my current location.

------
f2f
i loved plan9. i was into it before the 3rd edition was released and i built a
small career out of it. it was and still is the most enjoyable system to work
with.

i'm happy it's getting its due in retrospect and is now considered a cool
thing, but i can't help but remember how, just after the first open release,
everybody and their uncle kept complaining about trivia: "how can I become
root?" "it doesn't do x11?" "its license isn't gnu-open?"

rob is probably right: licensing killed plan9, but so did everybody who
couldn't see the forest for the trees. the linux juggernaut was too popular
back then.

~~~
mercurial
I'm not sure licensing was the only reason. A pitch that goes "we work like X,
but better" is unconvincing. Especially when it ends with "in an incompatible
way."

~~~
gcb4
their motto, if i recall, was never that. they had two demos about being
Unixier... one involved tar-ing a process on one machine, sending it to
another one (or mounting that machine's cpu over the network, i don't
remember), untar-ing it, and the process continued from when it was first
packaged, ui state and all.

~~~
smorrow
That was Plan B. I've never used it, or read the papers or the manual, but
everything was centralised on a single box. The reason the tar thing works at
all is that all it contains is pointers to your centralised state. (In case
anyone is trying to wrap their head around the above comment.)

The closest Labs equivalent would be Protium, which is for writing programs
like the one he mentioned, with the sam/samterm split but with
reattachability. Like all the most interesting application-level software for
Plan 9 that actually does something for you, it wasn't released.

------
bishop_mandible
The Blit video he mentions:
[https://www.youtube.com/watch?v=emh22gT5e9k](https://www.youtube.com/watch?v=emh22gT5e9k)

A similarly interesting talk by him on CSP-inspired languages (Occam, Erlang,
Squeak, Newsqueak, Limbo, Alef, Go):
[https://www.youtube.com/watch?v=3DtUzH3zoFo](https://www.youtube.com/watch?v=3DtUzH3zoFo)

Slides (not shown in the video):
[http://go-lang.cat-v.org/talks/slides/emerging-languages-camp-2010.pdf](http://go-lang.cat-v.org/talks/slides/emerging-languages-camp-2010.pdf)

Interesting as well: From Parallel to Concurrent (on Sawzall and Go):
[http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent](http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2014/From-Parallel-to-Concurrent)

------
mwcampbell
In the middle of the talk, Pike said that the idea of a workstation in every
office was one of the stupidest things he'd ever heard, but then decided not
to digress and explain why. Has he discussed that at length in any other paper
or recorded talk?

~~~
smorrow
They're more expensive, first of all, and each one has to be administered
individually, especially if it has a local disk. If the kernel is kept
locally, then you have to go around to each machine every time you compile a
new one. Same thing for applications. Same thing for, let's say, /usr/dict or
something stupid.

Plus it's another mechanical component which can fail.

If the user directories are local instead of on a central box, then you

1) need to be at a specific one to get your stuff, and you

2) can't just cd into someone else's /usr/*/src and collaborate.

The Labs word for timesharing was "communal" computing, if that helps.

For a school or company, in which the computers are provided by the
organisation itself, the Plan 9 way is very obviously better than insular
systems.

There's more, but it's another whole tangent.

Since you asked, the main Plan 9 paper ("Plan 9 from Bell Labs", 1995) does
say

"the early focus on having private machines made it difficult for networks of
machines to serve as seamlessly as the old monolithic timesharing systems.
Timesharing centralized the management and amortization of costs and
resources; personal computing fractured, democratized, and ultimately
amplified administrative problems."

but even more to the point would be: with a terminal, you just plug it in and
turn it on.

~~~
cwyers
"For a school or company, in which the computers are provided by the
organisation itself, the Plan 9 way is very obviously better than insular
systems."

Only if you can guarantee 100% (or close to) access to high-speed network
connections at all time. The value of a computer with meaningful local storage
and execution is that it continues to work when the network is gone or isn't
performing well. And it takes a lot less bandwidth to send data to a user's
computer and let local applications worry about dealing with the data than it
is to send a full screen to the user, if you're talking about GUI. You can
also smooth out latency issues a lot easier.

So I don't think it's obviously better in those cases. But it's very obviously
worse OUTSIDE of those institutions. Here's Pike's description of his dream
setup[1]:

"I want no local storage anywhere near me other than maybe caches. No disks,
no state, my world entirely in the network. Storage needs to be backed up and
maintained, which should be someone else's problem, one I'm happy to pay to
have them solve. Also, storage on one machine means that machine is different
from another machine. At Bell Labs we worked in the Unix Room, which had a
bunch of machines we called 'terminals'. Latterly these were mostly PCs, but
the key point is that we didn't use their disks for anything except caching.
The terminal was a computer but we didn't compute on it; computing was done in
the computer center. The terminal, even though it had a nice color screen and
mouse and network and all that, was just a portal to the real computers in the
back. When I left work and went home, I could pick up where I left off, pretty
much. My dream setup would drop the "pretty much" qualification from that."

That's fine, so long as all you want to do on your home computer is do more
work. But that's such an incredibly narrow vision of computing. The truly
personal computer enabled a lot of things that weren't possible under the old
dumb terminal model Pike pines for, and has made computers accessible to many
more people. We'll continue to get some of the benefits of this as the "cloud"
continues to be integrated into things, but I don't think we're going back to
the "communal" computing model. And if we do, it will be because someone
looks at the benefits of the personal computer model and finds a way to
provide them in the communal model, rather than sitting around pining for how
it was in the old days, before the peasants ruined everything.

1) [http://rob.pike.usesthis.com/](http://rob.pike.usesthis.com/), thanks to
tjgq for providing the link below

~~~
smorrow
> dumb terminal

> send a full screen to the user

I think you're confusing Plan 9 with thin clients. Even the Blit in the video
was a smart terminal ("you can run programs on it"). The way Plan 9 works is

> send data to a user's computer and let local applications worry about
> dealing with the data.

It was apparently different in the '90s ("terminal was a computer but we
didn't compute on it"), but nowadays in Plan 9 you do nearly everything on the
terminal, for different reasons: the WM, the browsers, and the editors run on
the terminal for responsiveness; image and music decoders run on the terminal
because they need higher bandwidth to the screen/speakers than to the fs
(jpegs, mp3s, etc. are compressed); plumber and factotum run on your terminal
so that you get exactly one instance of each per session, and they die as
soon as you end your session.

The central boxes are only really used for FS/backup, auth, cron, maybe mail
servers or whatever.

So nearly everything runs on the terminal, but it's a Plan 9 terminal so it's
still just a matter of plug it in, turn it on, set up netbooting (once).

> it's very obviously worse OUTSIDE of those institutions.

What I was going to say before I decided that it was too much of a tangent
was: Plan 9 NOT being a proper distributed OS could be a plus: it's possible
to use recover(4) or lapfs(4) and take the laptop outside the network. No one
actually does that, and lapfs isn't even a real Plan 9 program in C, but it's
still made feasible by the fact that your editor and so on is not on the far
side of a connection and isn't going to go anywhere, unlike on e.g. an Amoeba
terminal, which really is just a window manager: no editors locally, no
browsers or viewers.

I dunno about using "backup" to describe the Plan 9 file server, but that's
even more unrelated to my original comment.

------
bishop_mandible
Who's the guy at the end saying "let's forget where I work and where you
work", asking the "hard" question?

~~~
theoh
Bill Buxton. He works for Microsoft now but used to work for Alias|Wavefront
(also in Toronto as I understand it.)

~~~
vanderZwan
I don't know how famous he is in programming circles, but in HCI and
Interaction Design he is well known and respected for his fantastic research
on two-handed interfaces.

~~~
theoh
I encountered him in a talk at Siggraph 2001. He is a charismatic guy and used
a lot of creative examples of interface design. In particular he showed
[http://en.m.wikipedia.org/wiki/Ammassalik_wooden_maps](http://en.m.wikipedia.org/wiki/Ammassalik_wooden_maps)

It is sad that Maya is now an Autodesk product: Autodesk is such a square,
white bread, corporate outfit. The interface to their flagship AutoCAD is
ludicrously clunky and antique, while something like Revit (another
acquisition) is clever but really dumbed-down and limited in scope. Oh well.

------
qewrffewqwfqew
For anyone else wanting to watch this on Android, or just have an mp4 and
slides in pdf: the archive is linked near the bottom of
[http://genius.cat-v.org/rob-pike/](http://genius.cat-v.org/rob-pike/)

~~~
qewrffewqwfqew
and more talks from DGPis40 here (Pike's link to the event is broken):
[http://hciweb.cs.toronto.edu/DGPis40/webcasts.html](http://hciweb.cs.toronto.edu/DGPis40/webcasts.html)

------
qnaal
but I guess we're stuck with it until we come up with a better way to install
emacs

------
zvrba
An interesting bit of trivia about
[http://en.wikipedia.org/wiki/Plan_9_from_Outer_Space](http://en.wikipedia.org/wiki/Plan_9_from_Outer_Space):
The film's title was the inspiration for the name of Bell Labs' successor to
the Unix operating system.

------
robert_tweed
Is there a mirror of this somewhere that doesn't require Flash?

