
Ergonomics of the Symbolics Lisp Machine (2012) - icc97
http://lispm.de/symbolics-lisp-machine-ergonomics
======
gumby
Since the article hammers on about Cyc -- I architected Cyc at MCC, rebuilding
the project from the bottom up, developing a layered architecture (that others
then took in directions I hadn't imagined) which really emphasizes the
reusability and component architecture that was possible with the lispm (I
used the predecessors of them as well when I was at MIT and at other research
labs). I agree with the 100K LoC in a year, to within an order of magnitude at
least. It was definitely the most productive I have been in my life, but then
again it was a research project, not an end-user package.

I had two machines in my office at MCC, one with a color monitor (in addition
to the stock B&W), both with maxed-out RAM (8 MW; each word I believe was 40
bits: 36 bits of machine word + 4 of ECC?). I can't imagine how much it cost
-- probably enough to buy a small house in Austin.

I never used the "complete" key or several of those far away ones as they were
too distant from the home row. Tab completed as it had in Emacs for years (I
started using Emacs in 1977 and still use it as my development environment). I
still use the same "dynamic window" typing approach described in the article,
but with a C++ app passing JSON data to a web app. You'll notice the article
references Gene Ciccarelli -- Emacs began as his TECO init file around 1976 or
so.

(I also used D-machines at PARC before moving to Cyc -- a very different
experience).

------
YouAreGreat
> The whole point was to make an individual or a small group very productive
> and let them manage high complexity.

And I think it was very impressive.

In contrast, nowadays we seem to get the most interest and discussion here for
projects that aim to make programming more accessible (beginner-friendly,
child-friendly, end-user programmable) and/or more suitable to large-team and
crowd programming (type systems, golang, git, ...)

Do we still try -- do we even _want_ -- to design for the individual
programmer's productivity and mastery of high complexity?

~~~
fao_
Mm. I've been wondering recently why programmers don't have more specialized
tools (which incidentally reminds me of this lecture: [Bret Victor -- Seeing
Spaces](https://www.youtube.com/watch?v=klTjiXjqHrQ)).

It's interesting to think that high-performance traders have _an entire market
dedicated to_ specialized computer interfaces, terminals, etc. all focused on
improving their productivity and ability to reason about the stock exchange.

[https://news.ycombinator.com/item?id=11732258](https://news.ycombinator.com/item?id=11732258)

As the top poster says about the Bloomberg terminals:

_"1. You have to have a Bloomberg keyboard to work in finance. It's not even
a question. It's a COGS for anybody who wants to work in finance or
trading."_

~~~
yourapostasy
At the end of the day, the reason such a market hasn’t grown around
programmers the way it has around financial instrument traders is because
programmers are still in the “expense” column while traders are in the
“income” column or at worst, the “cost of sales” column. Interestingly enough,
many fintech companies tend to avoid this classification of their programmers.
The closer programmers get to the incoming money spigot, the less they are
treated as “expense” column line items.

------
cpr
The title immediately called to mind the incredible keyboards, which were
still one step down from the original TK MIT AI Lab Microswitch boards.

Finer, faster, more satisfying keyboards have never been made...

------
foobarian
"The NYT article talks about computer-supported work in general. I will
explain how the Symbolics Lisp environment made the software developer more
productive. The Symbolics Lisp Machine was a high-tech solution. There are
also useful lower-tech solutions. Many Lisp programmers like the relative
'simplicity' of just an Emacs window with a 'slave' Lisp process (say, SLIME +
Emacs). Some of these developers are using just a terminal and Emacs. It gives
them an integrated environment that is both simple and effective. The
Symbolics Lisp Machine environment is more complex. It has been designed to
support the development of novel and complex software, especially AI software.
The whole point was to make an individual or a small group very productive and
let them manage high complexity. Artificial Intelligence software often was
very complex (Cyc is an example for that)."

Somehow whenever Lisp is the topic we end up with writing like the above,
hand-waving about some mysterious properties of Lisp that will bring us to
programmer Nirvana.

It seems modern workstations have features similar to the described system,
albeit with far more bloat, which certainly makes the system impressive for
its time. I wish the article had put more weight on the modern context and
toned down the propaganda a bit.

~~~
gumby
It's not just Lisp, it's the lispm. This was true of Interlisp-D as well (the
PARC Lisp environment that ran on the Dandelion (AKA Star) as well as other D
machines). It had two factors:

1 - Lisp is a dynamic language -- even compiled code can be modified and
superseded. This makes rapid prototyping really easy; it makes exploratory
programming really easy, and it makes it easy to incrementally provide core
improvements.
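
A rough sketch of that incremental-redefinition idea, with Python standing in
for Lisp (the function names here are illustrative, not from any real system):

```python
# Redefining a function in a live image: code already "deployed" against the
# old definition automatically picks up the new one, because the call site
# looks the name up at call time rather than binding it at compile time.

def area(r):
    return 3.14 * r * r          # first, rough version

callers = [lambda: area(2.0)]    # existing code that calls area by name

# Later, we redefine area without restarting anything; the old call site
# now runs the improved body on its next invocation.
def area(r):
    from math import pi
    return pi * r * r

print(callers[0]())              # prints the result of the *new* definition
```

On a lispm this worked even for compiled code, and for code belonging to the
running system itself, which is what made the rapid-prototyping loop so tight.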

2 - As the entire system is in Lisp ("it's Lisp all the way down") there are
no barriers (technical nor conceptual) between user code and system code. When
I was developing Cyc I made the frame datastructures (Units and Slots in the
parlance) first-class objects that behaved _exactly as if they were part of
the base system distribution_. They looked like a fundamental datastructure
just like conses, arrays, integers etc., which means people could write code
using them in ways I hadn't anticipated. Coupled with the introspective nature
of Lisp you could easily write patches or new functionality that took
advantage of what was on the system. And the debugger was always running, so
instead of a core dump you could poke around and see what happened with all
the dynamic data intact. (In the D-Machine case, on the project I worked on at
PARC we actually added some instructions (wrote microcode) to the machine for
this purpose.)
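
To make the "first-class, indistinguishable from built-ins" point concrete,
here is a hypothetical sketch in Python; the `Frame` class and its slot names
are made up for illustration and are not Cyc's actual API:

```python
# A user-defined frame type that plugs into the language's built-in protocols,
# so client code uses it exactly like a native mapping type.

class Frame:
    def __init__(self, name, **slots):
        self.name = name
        self._slots = dict(slots)

    def __getitem__(self, slot):          # frames index like dicts
        return self._slots[slot]

    def __setitem__(self, slot, value):   # and assign like dicts
        self._slots[slot] = value

    def __contains__(self, slot):         # `in` works too
        return slot in self._slots

    def __repr__(self):                   # prints like a native object
        return f"#<Frame {self.name} {self._slots}>"

dog = Frame("Dog", legs=4)
dog["sound"] = "bark"                     # used exactly like a built-in dict
print("sound" in dog, dog["legs"])
```

The lispm version of this went further: user types really were on equal
footing with conses and arrays, down to the debugger and inspector.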

Now, let's compare this (good and bad) with the state of the art today:

We definitely have dynamic languages heavily inflected by this work: C++
classes can be indistinguishable from built-in ones, and the entire STL can be
written in C++. Clojure, js, etc are mainstream dynamic languages.

But we still have mismatches at interfaces: calling into the kernel is
expensive; you pay a cost in performance and functionality when interfacing
between languages (e.g. a lot of boxing/unboxing when you interface C++ to
Python); and the build and development tools are typically disjoint from the
program under development, which means the code and tools can't introspect
about their relationship with each other. On the other hand, a Pascal or C
program on an MIT-style Lispm (Symbolics/LMI/TI) had the same calling
conventions and datatypes as a Lisp program and could intercall freely and be
debugged cleanly.
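
One small, checkable illustration of the boxing cost (CPython specifics;
the exact sizes vary by build, and 8 bytes for a machine word is an
assumption about a 64-bit system):

```python
# In CPython every integer is a boxed heap object, so handing a raw C int
# across the C/Python boundary means allocating a full PyObject -- header,
# refcount, type pointer -- not just copying a machine word.
import sys

machine_word = 8                  # bytes on a typical 64-bit system (assumed)
boxed = sys.getsizeof(12345)      # size of the same value as a Python object

print(boxed, ">", machine_word)   # the boxed form is several words larger
```

On a lispm there was one object representation shared by every language on
the machine, so this per-boundary conversion simply didn't arise.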

And people are less interested in that these days: the dominant mode of
programming is to assemble large numbers of black boxes and hope that they
work. That gives me the heebie-jeebies, but perhaps that's just snobbery on my
part: tons of useful systems are being built that way, and by a much larger
set of people than used to be considered programmers.

There were downsides of course, many of which are the reason these kinds of
architectures no longer exist. The biggest one is the two-edged sword of the
simplicity and complexity of Lisp: it's a very simple language with many sharp
tools in the toolbox; a new user could _very_ rapidly come up to speed, but
all too often that new user ended up braiding a thick rope and then hanging
themselves with it. It's a systems programming language, and its very
simplicity makes it too complex for most people for application development.
The same issue affects C++ today: it's an immensely powerful and expressive
systems programming language but also a lot to get your hands around. Compare
that to Go, which explicitly tries to head in the opposite direction.

Another "downside" is exploratory programming: I still use that approach, but
in the interest of full disclosure, a good friend of mine denigrates it as
"programming by successive approximation". I can see his point.

And of course the lack of barriers would make applications quite vulnerable in
today's security environment.

It's still by far the most productive environment I've ever used, but I'm not
sure it has a place in today's world.

