
Notes from a 1984 trip to Xerox PARC - shawndumas
https://commandcenter.blogspot.com/2019/01/notes-from-1984-trip-to-xerox-parc.html?m=1
======
gumby
I don't really understand this memo. I also worked at PARC when Pike was there
though I didn't interact with him or even know he was there. If he hadn't said
PARC and Dolphin up front I would have thought he was talking about a
completely different place.

There were _barely_ enough Dorados to go around; I worked mostly at night so
I could be sure to get one, but as far as I remember everybody had a console
on their desk (keyboard/mouse/bitmapped display). Perhaps, as he was a
visitor, he didn't have a desk of his own?

Dorados were ECL machines (so obviously quite expensive), part of a family of
D-machines: the Dolphin was an earlier, less powerful machine, and the
Dandelion workstations (which didn't need to be in a machine room) were later
marketed as the Star. Typically you booted up an entire environment (including
environment-specific microcode -- I wrote some of that): Smalltalk,
Interlisp-D or Cedar/Mesa. Each language environment was a complete OS written
in that language.

It's possible that he had never previously had access to dedicated single
user machines like that; by the time I went to PARC I was used to having CADRs
at MIT (again, in machine rooms with the consoles in offices) so the concept
of these machines was unremarkable in itself.

But what he says about files and FTPing stuff around makes no sense. The
environment by 1984 was heavily networked, with the email system (like a
modern IMAP environment) entirely client/server, living abstractly as a network
service. Files lived on file servers too, and opening a remote file was, well,
how you did things. Since I didn't have a dedicated machine, once I connected
to one I simply booted up and had my whole environment available. I never used
FTP at all. The same goes for what he said about not saving state unless he
was using an Alto (or running the big D machines in Alto mode -- then they did
have a local disk): the other environments all inherently saved state, like a modern
iOS or Mac app. Likewise the three environments I mentioned all handled
integrated printing as would be familiar to anyone today -- and by 1984
(actually by the late 70s) there were laser printers (Dovers and even color
Penguin printers) that were lickety-split; you rarely waited.

In fact his statement "In fact, the whole world outside your machine, Ethernet
notwithstanding, is a forbidding world." is cute but inexplicable: not only
were various resources available on the net in what we'd today call anycast
(see what I wrote above about Grapevine, the mail system), but by 1984 the
ARPAnet had switched over to TCP, so I routinely and transparently used
resources at MIT from my Interlisp environment, similarly to how you might open a URL
in a text editor today.

I agree with what he said about not using text editors in a traditional mode
(Interlisp's preferred mode was a structure editor) and indeed it would have
been nice to have had 'grep' in Smalltalk!

It reads as if he was unfamiliar with exploratory programming environments
(like Smalltalk, Interlisp, and the LispMs). He's clearly trying to map the
debug-exit-edit-compile-restart model onto a different paradigm, that of an
integrated interpreted-compiled environment. I had the same experience in
reverse when I started using Unix: it was weird to me that an error caused the
process to exit and dump core; I was used to an error in _any_ application
stopping and entering the debugger with all file, process, network, etc.
descriptors still live and inspectable/pokeable, and being able to fix the bug in
situ and continue. That was the 1970s programming environment I'd learned on.
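
To give a flavor of that in latter-day Smalltalk terms (a sketch, not period
code; the real thing was fixing the method in the live debugger and
proceeding, with everything still running):

    "The error doesn't kill anything: the handler runs with the whole
    image still live, supplies a substitute value, and evaluation goes on."
    | result |
    result := [100 / 0]
        on: ZeroDivide
        do: [:ex | ex return: 0].
    Transcript show: result printString; cr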

~~~
kragen
He _did_ have access to dedicated single-user machines like that; in fact he
built some of them himself with Bart Locanthi a couple of years before. But he
had no idea what to do with them, so he programmed them to act as multiwindow
terminals for a remote Unix system, running a window system over a serial
line. I know that sounds like a joke, but it seems like it was kind of cool —
I'd love to play with a Blit (oh hmm, they've released the firmware and an
emulator in 9front and 8th Edition Research Unix) — but it also sounds like a
real waste of a 68000 to me.

I think the thing he was saying about FTP was when the only copy of a file he
wanted was on somebody else's local disk. It sounds like he was doing more
stuff with Dorados pretending to be Altos than was normal at the time, and,
yes, trying to get them to act like batch-mode programming environments.

(Didn't Cedar's FS support having local files too? Obviously I've never used
Cedar, just read papers about it.)

~~~
gumby
> Didn't Cedar's FS support having local files too? Obviously I've never used
> Cedar, just read papers about it.

It may have, but I only farted around in it because my real job was in
Interlisp (plus it was strongly typed IIRC, which I was allergic to in those
days). Yeah, all the D machines booted into an Alto mode (the Alto itself was
a kind of Nova).

Even the KL-10 clone (MAXC) had an Alto as its front-end processor (real DEC
ones used a PDP-11!)

------
365nice
I too recall early encounters with Smalltalk (albeit 1989), and also recall
that feeling of missing grep - and it was only years later that I realised I
had missed a hugely important point... possibly the most important point.
Smalltalk is so elegant, malleable and simple that it was only 5 lines of code
to implement a more interactive grep... d’oh. Why didn’t I think of it at the
time? Too cocky?
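
Something along these lines, roughly (a latter-day Squeak/Pharo-flavored
sketch from memory, not my original five-liner; message names vary by
dialect):

    "A crude 'grep' over every method source in the image."
    | pattern hits |
    pattern := 'raisedTo'.
    hits := OrderedCollection new.
    Smalltalk allClasses do: [:cls |
        cls selectors do: [:sel |
            ((cls sourceCodeAt: sel) asString indexOfSubCollection: pattern) > 0
                ifTrue: [hits add: cls name , '>>' , sel]]].
    hits inspect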

Code completion was trivial to implement even back then, particularly as code
is an object like anything else. Sometimes you miss the obvious and hold
yourself back.
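
A first cut at completion, for instance, is just a query over the interned
symbol table (again a modern-dialect sketch, not what I wrote back then):

    "Every selector in the system is an interned Symbol, so completing
    a prefix is one select: away."
    | prefix |
    prefix := 'raisedTo'.
    (Symbol allSymbols select: [:s | s beginsWith: prefix]) asSortedCollection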

Sadly the industry hasn’t innovated much since PARC; those much maligned
Smalltalk images are now called Docker ;)

------
DonHopkins
Wow, this is really deep:

>One philosophical point is that each window is of a type: a browser, a
debugger, a workspace, a bit editor. Each window is created by a special
action that determines its type. This is in contrast to our style of windows,
which creates a general space in which anything may be run. With our windows,
it is easy to get to the outside world; the notion of Unix appears only when a
Unix thing happens in the window. By contrast, Smalltalk windows are always
related to Smalltalk. This is hard to explain (it is, as I said,
philosophical), but a trite explanation might be that you look outwards
through our windows but inwards through theirs.

~~~
pavlov
I understand what he's talking about with Smalltalk windows, but it's unclear
to me what these "outwards-looking" Unix windows would have been in 1984.

Is he talking about X and how a remote client app can freely render in a
display server window using the (rather awful) X primitives?

~~~
kragen
No, Bob Scheifler's message, "I've spent the last couple weeks writing a
window system for the VS100. I stole a fair amount of code from W, surrounded
it with an asynchronous rather than a synchronous interface, and called it X,"
was 1984-06-19, and I don't think it escaped MIT until 1985; Pike surely
wasn't talking about X two months before. Danny's inference that Pike was
talking about the Blit is probably correct — the Blit (from 1983) does operate
the way Pike described, and X normally operates the way Smalltalk did, with
each window owned by a client application that gives it idiosyncratic
behavior.

~~~
DonHopkins
HyperLook (which ran on NeWS, and was like a networked version of Hypercard
based on PostScript) was kind of like looking outwards through windows, in the
way Rob Pike described the Blit (although it was architecturally quite
different).

I think his point was that window frames should be like dynamically typed
polymorphic collections possibly containing multiple differently typed
components that can change over time (like JavaScript arrays), not statically
typed single-element containers that are permanently bound to the same
component type which can't ever change (like C variables).

With X11, the client comes first, and the window manager simply slaps a
generic window frame around it, subject to some pre-defined client-specified
customizations like ICCCM properties to control the window dressing, but not
much more, and not under the control of the user.

With HyperLook, the "stack/background/card" or window frame came first, and
you (the user at runtime, not just the developer at design time) could compose
any other components and applications together in your own stacks, by copying
and pasting them out of other stacks or warehouses of pre-configured
components.

[https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-goodnews-99f411e58ce4](https://medium.com/@donhopkins/hyperlook-nee-hypernews-nee-goodnews-99f411e58ce4)

For example, I implemented SimCity for HyperLook as a client that ran in a
separate process from the window server, sent messages over the local
network, and drew into shared memory bitmaps. SimCity had its own custom frames
("screw-up windows") that looked more "mayoral" than normal windows.

[https://cdn-images-1.medium.com/max/2560/0*ZM8s95LNxemc5Enz.gif](https://cdn-images-1.medium.com/max/2560/0*ZM8s95LNxemc5Enz.gif)

There were multiple map and editor views that you could copy and paste into
other stacks while they were running, or you could copy widgets like buttons
or drawing editors into the SimCity stack while it was running. So you could
copy and paste the "RCI" gauge into your graphics editor window to keep an eye
on it while you worked, or paste a clock into your SimCity window to see when
it was time to stop playing and get some sleep. And you could even hit the
"props" key on the clock to bring up a PostScript graphics editor that let you
totally customize how it looked!

[https://cdn-images-1.medium.com/max/600/0*oHtC0F5qK83ADw1H.gif](https://cdn-images-1.medium.com/max/600/0*oHtC0F5qK83ADw1H.gif)

It had "warehouse" stacks containing collections of pre-configured component
prototypes (including widgets, applets, and window management controls like
close buttons, resize corners, navigation buttons, clocks, graphics editors,
etc) that you could copy and paste into your own stacks, and which would
automatically be listed in the "New Object" menu you get in edit mode, to
create them in place without opening up the warehouse:

[https://cdn-images-1.medium.com/max/600/0*sHClGU8ALljuRQKb.gif](https://cdn-images-1.medium.com/max/600/0*sHClGU8ALljuRQKb.gif)

[https://cdn-images-1.medium.com/max/600/0*QwIQ_GLxQl1v968F.gif](https://cdn-images-1.medium.com/max/600/0*QwIQ_GLxQl1v968F.gif)

[https://cdn-images-1.medium.com/max/600/0*aWbuo6k_eJuZnUmV.gif](https://cdn-images-1.medium.com/max/600/0*aWbuo6k_eJuZnUmV.gif)

[https://cdn-images-1.medium.com/max/600/0*zya4vNBP3libpNSA.gif](https://cdn-images-1.medium.com/max/600/0*zya4vNBP3libpNSA.gif)

[https://cdn-images-1.medium.com/max/600/1*G0LWky2iejYm4IGBsUyfeA.png](https://cdn-images-1.medium.com/max/600/1*G0LWky2iejYm4IGBsUyfeA.png)

Normal X11 window managers put the cart before the donkey. The window frames
are generic, stupid and unscriptable, and can't even be customized by the
user, or saved and restored later. The client comes first, with just one
client per generic frame. You can't move or copy a widget or panel from one
frame to another, to use them together in the same window. That's terribly
limited and primitive.

We implemented a more-or-less traditional (but better than normal) ICCCM X11
window manager in PostScript for X11/NeWS, which had tabbed windows, pie
menus, rooms, scrolling virtual desktop, etc, uniformly for all NeWS apps and
X11 clients. But the next step (not NeXT Step) was to use HyperLook to break
out of the limited one-client-per-frame paradigm.

[http://www.art.net/~hopkins/Don/unix-haters/x-windows/i39l.html](http://www.art.net/~hopkins/Don/unix-haters/x-windows/i39l.html)

(It's interesting to note that David Rosenthal, the author of the ICCCM
specification, was also one of the architects of NeWS, along with James
Gosling.)

Our plan was to use HyperLook to implement a totally different kind of
integrated scriptable X11 (and NeWS) window manager, so you could compose
multiple X11 clients into the same frame along with HyperLook widgets and
scripts. You could integrate multiple X11 clients and NeWS components into
fully customizable, seamless, task-oriented user interfaces, instead of
switching between windows and copying and pasting between a bunch of separate
monolithic clients that don't know about each other. But unfortunately Sun
canceled NeWS before we could do that.

Here's some more I've written about all this, and how window managers should
be implemented today in JavaScript:

[https://news.ycombinator.com/item?id=18837730](https://news.ycombinator.com/item?id=18837730)

[https://news.ycombinator.com/item?id=18865242](https://news.ycombinator.com/item?id=18865242)

[https://news.ycombinator.com/item?id=5861229](https://news.ycombinator.com/item?id=5861229)

[https://news.ycombinator.com/item?id=16839825](https://news.ycombinator.com/item?id=16839825)

[https://news.ycombinator.com/item?id=18314265](https://news.ycombinator.com/item?id=18314265)

[https://news.ycombinator.com/item?id=8546507](https://news.ycombinator.com/item?id=8546507)

>Who Should Manage the Windows, X11 or NeWS?

>This is a discussion of ICCCM Window Management for X11/NeWS. One of the
horrible problems of X11/NeWS was window management. The X people wanted to
wrap NeWS windows up in X frames (that is, OLWM). The NeWS people wanted to do
it the other way around, and prototyped an ICCCM window manager in NeWS
(mostly object oriented PostScript, and a tiny bit of C), that wrapped X
windows up in NeWS window frames.

>Why wrap X windows in NeWS frames? Because NeWS is much better at window
management than X. On the surface, it was easy to implement lots of cool
features. But deeper, NeWS is capable of synchronizing input events much more
reliably than X11, so it can manage the input focus perfectly, where
asynchronous X11 window managers fall flat on their face by definition.

>Our next step (if you'll pardon the allusion) was to use HyperNeWS (renamed
HyperLook, a graphical user interface system like HyperCard with PostScript)
to implement a totally customizable X window manager!

~~~
kragen
> I think his point was that window frames should be like dynamically typed
> polymorphic collections possibly containing multiple differently typed
> components that can change over time (like JavaScript arrays), not
> statically typed single-element containers that are permanently bound to the
> same component type which can't ever change (like C variables).

Well, dynamically typed, yes, and changing over time, but I don't think he was
talking about sticking your editor _and_ your command line in the same window
_at the same time_. Rather, you could run a program in an mpx window on the
Blit, and it would draw in that window until it was done, and then you would
get your shell prompt back in that window.

I know this isn't really related to the overall question of end-user
customizability and composability that you're focusing on here, but I dig the
olwm-style pinned dropdown menus in your last screenshot there. It's a shame
neither Open Look nor olwm nor olvwm ever did antialiased text — I know it was
important for the window manager not to eat the entire palette of your cgfour
or NCD, but when all your chrome text is black on slate grey, a couple of
greyscale levels would have gone a long way.

Your other links are timely, since I'm trying to figure out how to do the
BubbleOS windowing system Wercam this week, maybe using Unix-domain sockets
and shared-memory bitmaps, or rather pixmaps; the thing that sounds most
inspiring to me is Genode's nitpicker, but supporting the interposition-based
design of the nitpicker window manager would require either an extra copy of
all the pixels (to put them in the right place in the framed window) or a draw
command that involves a whole sequence of rectangles instead of just one. I'm
kind of thinking that by default windows will be embedded in my shell, like
graphics in Jupyter, and then I'll see how things go from there.
[https://gitlab.com/kragen/bubbleos/tree/master/wercam](https://gitlab.com/kragen/bubbleos/tree/master/wercam)

------
cco
>For example,

    2 raisedToInteger: 3

is perfectly readable, but chatty at best.

For the life of me I will never understand this opinion. If you're trying to
eke out the last bits of memory, fine, verbosity is a valid concern, but at
least today code should be written to be read by humans; the more easily it
can be read and understood, the better.

~~~
kragen
You can calculate the (population) standard deviation of a set of data as the
square root of the difference between the mean of the squares of the data and
the square of the mean of the data. Is that easier to understand than
√(Σ(xᵢ²/N) - (Σxᵢ/N)²)?

If you're one of the two and thirteen one-hundredths percent of the population
that said "yes" to that, can you explain how to derive that formula (which
gives a one-pass algorithm) from the fundamental definition √(Σ((xᵢ-x̄)²)/N)?
Or, rather, that the standard deviation is the square root of the mean of the
squared differences between the data and their mean? Without using any formulas,
remember, because the position you're defending is that sequences of words are
more easily read and understood by humans than mathematical formulas are. But
convincingly.

The benefit of notation as a tool of thought is not that it saves memory in
the computer; it's that it enables you to think thoughts that you could not
have thought with an inferior notation.
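
To spell out "one-pass": you fold the sum and the sum of squares in a single
traversal, so you never need to see the data twice. A hedged Smalltalk sketch
(numerically naive; the two-pass form is better conditioned):

    | data n sum sumSq mean |
    data := #(2 4 4 4 5 5 7 9).
    n := data size.
    sum := 0.
    sumSq := 0.
    data do: [:x |
        sum := sum + x.
        sumSq := sumSq + (x * x)].
    mean := sum / n.
    ((sumSq / n) - (mean * mean)) sqrt    "2.0 for this data"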

~~~
cco
That is a good point; I would refine my argument based on your response.

√(Σ(xᵢ²/N) - (Σxᵢ/N)²) is unhelpful on its own, and as you point out a more
"human readable" version of this equation would be rather convoluted. The
middle ground here would be to use more specific syntax like yours, while also
ensuring that each variable is clearly defined and this logic is robustly
documented to explain both the why of this function and how this specific
function carries out that logic.

~~~
kragen
The justification for why that formula is correct (equivalent to the
fundamental definition) is even more incomprehensible than the formula itself
if written in English, but fairly clear, almost trivial, using algebraic
notation. That is, if you want to "explain both the why of this function and
how this specific function carries out that logic," using algebraic notation
is far, far better than just using a natural language.
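
For what it's worth, the whole "why" is about three lines in the notation
already used upthread (my rendering of the standard expansion, so treat it as
a sketch):

    σ² = Σ(xᵢ - x̄)²/N
       = Σ(xᵢ² - 2x̄xᵢ + x̄²)/N
       = Σxᵢ²/N - 2x̄(Σxᵢ/N) + x̄²
       = Σxᵢ²/N - 2x̄² + x̄²       (since Σxᵢ/N = x̄)
       = Σxᵢ²/N - (Σxᵢ/N)²

Take square roots of both sides and you have the one-pass formula.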

------
sagichmal
This is a very nice read.

------
yesenadam
Fascinating article.

meta: There's a <style> tag with 2400 lines (47k+ characters) - a 'Blogger
Template Style' - occupying most of the page source. Guess I shouldn't be
surprised at the length, but I was. ...What's the record for that sort of
thing?

~~~
yesenadam
...and it's bad to ask this why?

