
Design of Lisp-Based Processors Or, LAMBDA: The Ultimate Opcode (1979) [pdf] - bootload
http://dspace.mit.edu/bitstream/handle/1721.1/5731/AIM-514.pdf
======
gavinpc
Here is Alan Kay from a talk that was just on HN today.

[https://www.youtube.com/watch?v=ubaX1Smg6pY&t=8m9s](https://www.youtube.com/watch?v=ubaX1Smg6pY&t=8m9s)

    
    
        The amount of complication can be hundreds of times more than the
        complexity, maybe thousands of times more.  This is why appealing to
        personal computing is, I think, a good ploy in a talk like this because
        surely we don't think there's 120 million lines of code—of *content* in
        Microsoft's Windows — surely not — or in Microsoft Office.  It's just
        incomprehensible.
        
        And just speaking from the perspective of Xerox Parc where we had to do this
        the first time with a much smaller group — and, it's true there's more stuff
        today — but back then, we were able to do the operating system, the
        programming language, the application, and the user interface in about ten
        thousand lines of code.
        
        Now, it's true that we were able to build our own computers.  That makes a
        huge difference, because we didn't have to do the kind of optimization that
        people do today because we've got things back-asswards today.  We let Intel
        make processors that may or may not be good for anything, and then the
        programmer's job is to make Intel look good by making code that will
        actually somehow run on it.  And if you think about that, it couldn't be
        stupider.  It's completely backwards.  What you really want to do is to
        define your software system *first* — define it in the way that makes it the
        most runnable, most comprehensible — and then you want to be able to build
        whatever hardware is needed, and build it in a timely fashion to run that
        software.
    
        And of course that's possible today with FPGA's; it was possible in the 70's
        at Xerox Parc with microcode.  The problem in between is, when we were doing
        this stuff at Parc, we went to Intel and Motorola, pleading with them to
        put forms of microcode into the chips to allow customization and function
        for the different kinds of languages that were going to have to run on the
        chips, and they said, What do you mean?  What are you talking about?
        Because it never occurred to them.  It still hasn't.
    

_EDIT_ added last para.

------
DonHopkins
I believe this is about the Lisp Microprocessor that Guy Steele created in
Lynn Conway's groundbreaking 1978 MIT VLSI System Design Course:

[http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/MIT78.html](http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/MIT78.html)

My friend David Levitt is crouching down in this class photo so his big 1978
hair doesn't block Guy Steele's face:

The class photo is in two parts, left and right:

[http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Class2s.jp...](http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Class2s.jpg)
[http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Class3s.jp...](http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Class3s.jpg)

Here are hi-res images of the two halves of the chip the class made:

[http://ai.eecs.umich.edu/people/conway/VLSI/InstGuide/MIT78c...](http://ai.eecs.umich.edu/people/conway/VLSI/InstGuide/MIT78chip%20photo-1%20L.jpg)

[http://ai.eecs.umich.edu/people/conway/VLSI/InstGuide/MIT78c...](http://ai.eecs.umich.edu/people/conway/VLSI/InstGuide/MIT78chip%20photo-2%20L.jpg)

The Great Quux's Lisp Microprocessor is the big one on the left of the second
image, and you can see his name "(C) 1978 GUY L STEELE JR" if you zoom in.
David's project is in the lower right corner of the first image, and you can
see his name "LEVITT" if you zoom way in.

Here is a photo of a chalkboard with status of the various projects:
[http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Status%20E...](http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Status%20Em.jpg)

The final sanity check before maskmaking: A wall-sized overall check plot made
at Xerox PARC from Arpanet-transmitted design files, showing the student
design projects merged into multiproject chip set.

[http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Checkplot%...](http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Checkplot%20s.jpg)

One of the wafers just off the HP fab line containing the MIT'78 VLSI design
projects: Wafers were then diced into chips, and the chips packaged and wire
bonded to specific projects, which were then tested back at M.I.T.

[http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Wafer%20s....](http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/Wafer%20s.jpg)

Design of a LISP-based microprocessor
[http://dl.acm.org/citation.cfm?id=359031](http://dl.acm.org/citation.cfm?id=359031)
ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-514.pdf

Page 22 has a map of the processor layout:
[http://i.imgur.com/zwaJMQC.jpg](http://i.imgur.com/zwaJMQC.jpg)

We present a design for a class of computers whose “instruction sets” are
based on LISP. LISP, like traditional stored-program machine languages and
unlike most high-level languages, conceptually stores programs and data in the
same way and explicitly allows programs to be manipulated as data, and so is a
suitable basis for a stored-program computer architecture. LISP differs from
traditional machine languages in that the program/data storage is conceptually
an unordered set of linked record structures of various sizes, rather than an
ordered, indexable vector of integers or bit fields of fixed size. An
instruction set can be designed for programs expressed as trees of record
structures. A processor can interpret these program trees in a recursive
fashion and provide automatic storage management for the record structures. We
discuss a small-scale prototype VLSI microprocessor which has been designed
and fabricated, containing a sufficiently complete instruction interpreter to
execute small programs and a rudimentary storage allocator.
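The recursive interpretation of program trees that the abstract describes can be sketched in software. Here's a toy Python model of my own (not the paper's actual machine or its microcode): programs are linked Cons records, data and code share one representation, and evaluation walks the records recursively.

```python
# Toy model (mine, not the paper's hardware): LISP programs stored as
# linked Cons records and interpreted recursively.
class Cons:
    """One linked record; programs and data share this representation."""
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

def lst(*items):
    """Build a proper list of Cons records from Python values."""
    node = None
    for item in reversed(items):
        node = Cons(item, node)
    return node

def evaluate(expr, env):
    """Recursively interpret a program tree of Cons records."""
    if isinstance(expr, str):        # a symbol: look it up
        return env[expr]
    if not isinstance(expr, Cons):   # a number: self-evaluating
        return expr
    op = expr.car
    if op == 'quote':                # programs manipulated as data
        return expr.cdr.car
    if op == 'if':
        chosen = (expr.cdr.cdr.car if evaluate(expr.cdr.car, env)
                  else expr.cdr.cdr.cdr.car)
        return evaluate(chosen, env)
    if op == '+':
        return evaluate(expr.cdr.car, env) + evaluate(expr.cdr.cdr.car, env)
    raise ValueError("unknown operator: %r" % op)

# (+ 1 (if flag 10 20))
program = lst('+', 1, lst('if', 'flag', 10, 20))
```

The point of the sketch is the shape, not the feature set: the "instruction stream" is an unordered heap of linked records rather than an indexable vector of integers, and the interpreter is a tree walk.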

~~~
gsg
Hah! That big one has SIMPLE written on it, but is ten times the size of the
others!

Thanks for the history bomb.

~~~
DonHopkins
Here's a map of the projects on that chip, and a list of the people who made
them and what they did:

[http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/MPC78map.g...](http://ai.eecs.umich.edu/people/conway/VLSI/MIT78/MPC78map.gif)

1\. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: Charge flow transistors
(moisture sensors) integrated into digital subsystem for testing.

2\. Andy Boughton, J. Dean Brock, Randy Bryant, Clement Leung: Serial data
manipulator subsystem for searching and sorting data base operations.

3\. Jim Cherry: Graphics memory subsystem for mirroring/rotating image data.

4\. Mike Coln: Switched capacitor, serial quantizing D/A converter.

5\. Steve Frank: Writeable PLA project, based on the 3-transistor ram cell.

6\. Jim Frankel: Data path portion of a bit-slice microprocessor.

7\. Nelson Goldikener, Scott Westbrook: Electrical test patterns for chip set.

8\. Tak Hiratsuka: Subsystem for data base operations.

9\. Siu Ho Lam: Autocorrelator subsystem.

10\. Dave Levitt: Synchronously timed FIFO.

11\. Craig Olson: Bus interface for 7-segment display data.

12\. Dave Otten: Bus interfaceable real time clock/calendar.

13\. Ernesto Perea: 4-Bit slice microprogram sequencer.

14\. Gerald Roylance: LRU virtual memory paging subsystem.

15\. Dave Shaver: Multi-function smart memory.

16\. Alan Snyder: Associative memory.

17\. Guy Steele: LISP microprocessor (LISP expression evaluator and associated
memory manager; operates directly on LISP expressions stored in memory).

18\. Richard Stern: Finite impulse response digital filter.

19\. Runchan Yang: Armstrong type bubble sorting memory.

The following projects were completed but not quite in time for inclusion in
the project set:

20\. Sandra Azoury, N. Lynn Bowen, Jorge Rubenstein: In addition to project 1
above, this team completed a CRT controller project.

21\. Martin Fraeman: Programmable interval clock.

22\. Bob Baldwin: LCS net nametable project.

23\. Moshe Bain: Programmable word generator.

24\. Rae McLellan: Chaos net address matcher.

25\. Robert Reynolds: Digital Subsystem to be used with project 4.

Also, Jim Clark (SGI, Netscape) was one of Lynn Conway's students, and she
taught him how to make his first prototype "Geometry Engine"!

[http://ai.eecs.umich.edu/people/conway/VLSI/MPCAdv/MPCAdv.ht...](http://ai.eecs.umich.edu/people/conway/VLSI/MPCAdv/MPCAdv.html)

Just 29 days after the design deadline time at the end of the courses,
packaged custom wire-bonded chips were shipped back to all the MPC79
designers. Many of these worked as planned, and the overall activity was a
great success. I'll now project photos of several interesting MPC79 projects.
First is one of the multiproject chips produced by students and faculty
researchers at Stanford University (Fig. 5). Among these is the first
prototype of the "Geometry Engine", a high performance computer graphics
image-generation system, designed by Jim Clark. That project has since evolved
into a very interesting architectural exploration and development project.[9]

Figure 5. Photo of MPC79 Die-Type BK (containing projects from Stanford
University):

[http://ai.eecs.umich.edu/people/conway/VLSI/MPCAdv/SU-BK1.jp...](http://ai.eecs.umich.edu/people/conway/VLSI/MPCAdv/SU-BK1.jpg)

[...]

The text itself passed through drafts, became a manuscript, went on to become
a published text. Design environments evolved from primitive CIF editors and
CIF plotting software on to include all sorts of advanced symbolic layout
generators and analysis aids. Some new architectural paradigms have begun to
similarly evolve. An example is the series of designs produced by the OM
project here at Caltech. At MIT there has been the work on evolving the LISP
microprocessors [3,10]. At Stanford, Jim Clark's prototype geometry engine,
done as a project for MPC79, has gone on to become the basis of a very
powerful graphics processing system architecture [9], involving a later
iteration of his prototype plus new work by Marc Hannah on an image memory
processor [20].

[...]

For example, the early circuit extractor work done by Clark Baker [16] at MIT
became very widely known because Clark made access to the program available to
a number of people in the network community. From Clark's viewpoint, this
further tested the program and validated the concepts involved. But Clark's
use of the network made many, many people aware of what the concept was about.
The extractor proved so useful that knowledge about it propagated very rapidly
through the community. (Another factor may have been the clever and often
bizarre error-messages that Clark's program generated when it found an error
in a user's design!)

9\. J. Clark, "A VLSI Geometry Processor for Graphics", Computer, Vol. 13, No.
7, July, 1980.

[...]

The above is all from Lynn Conway's fascinating web site, which includes her
great book "VLSI Reminiscence" available for free:

[http://ai.eecs.umich.edu/people/conway/](http://ai.eecs.umich.edu/people/conway/)

These photos look very beautiful to me, and it's interesting to scroll around
the hi-res image of the Quux's Lisp Microprocessor while looking at the map
from page 22 that I linked to above. There really isn't that much to it, so
even though it's the biggest one, it isn't all that complicated; I'd say the
"SIMPLE" graffiti is not totally inappropriate. (It's microcoded, and you can
actually see the rough but semi-regular "texture" of the code!)

This paper has lots more beautiful Vintage VLSI Porn, if you're into that kind
of stuff like I am:

[http://ai.eecs.umich.edu/people/conway/VLSI/MPC79/Photos/PDF...](http://ai.eecs.umich.edu/people/conway/VLSI/MPC79/Photos/PDFs/MPC79ChipPhotos.pdf)

A full color hires image of the chip including James Clark's Geometry Engine
is on page 23, model "MPC79BK", upside down in the upper right corner,
"Geometry Engine (C) 1979 James Clark", with a close-up "centerfold spread" on
page 27.

Is the "document chip" on page 20, model "MPC79AH", a hardware implementation
of Literate Programming?

If somebody catches you looking at page 27, you can quickly flip to page 20,
and tell them that you only look at Vintage VLSI Porn Magazines for the
articles!

There is quite literally a Playboy Bunny logo on page 21, model "MPC79B1", so
who knows what else you might find in there by zooming in and scrolling around
stuff like the "infamous buffalo chip"?

[http://ai.eecs.umich.edu/people/conway/VLSI/VLSIarchive.html](http://ai.eecs.umich.edu/people/conway/VLSI/VLSIarchive.html)

[http://ai.eecs.umich.edu/people/conway/VLSI/VLSI.archive.spr...](http://ai.eecs.umich.edu/people/conway/VLSI/VLSI.archive.spreadsheet.htm)

------
SrslyJosh
Server seems a bit slow. Here's a Coral cache link:

[http://dspace.mit.edu.nyud.net/bitstream/handle/1721.1/5731/...](http://dspace.mit.edu.nyud.net/bitstream/handle/1721.1/5731/AIM-514.pdf)

(Pretend you're on slashdot 10 years ago.)

------
juliangamble
Lisp All the Way Down!
[https://twitter.com/thelittlelisper/status/55098754701526220...](https://twitter.com/thelittlelisper/status/550987547015262209)

~~~
sp332
[https://xkcd.com/224/](https://xkcd.com/224/)

------
typedef_struct
Thanks for this--now I know where [http://lambda-the-ultimate.org/](http://lambda-the-ultimate.org/) got its name

~~~
hga
Indeed. This was one of the last in a series of papers by Steele, or by Steele and Sussman together:
[http://library.readscheme.org/page1.html](http://library.readscheme.org/page1.html)

There's quite a bit of enlightenment to be found in them.

------
melloclello
Imagine where we'd be today if this had taken off instead of x86...

~~~
hga
There are serious resource issues that made that unlikely:

About the time Sussman and company were getting back their first silicon,
Intel was shipping the 8086 and 8088, which could, for example, address a
whopping megabyte of memory.

Sussman and company did one or two generations of silicon, but from what I've
heard the microcode of one generation or the other was flawed and it never
really worked. They simply didn't have the resources to sufficiently simulate
it before committing to silicon (I'm told there was a bit of arrogance
involved as well, which I believe).

Intel won and continues to win in part because of success reinforcing success,
keeping their eyes on the ball, and _massive_ investments. Process (fab
lines), design, simulation, support for hardware developers (said to be a
critical factor in many 808x design wins over the 68000), etc. etc. etc.

I've thought of paths that might have allowed custom Lisp hardware to survive,
but they're in part 20/20 hindsight (people _really_ should have believed
Moore's Law, and made a minimum gates RISC chip ASAP), in part require a
quality of management that none of these MIT-based efforts ever had, and
almost certainly in part wishful thinking (non-recurring engineering (NRE)
costs were a killer, especially combined with the need to make a profit on
small unit volumes, which kept the ecosystem small).

Send me back in time with a zillion dollar budget and who knows? Short of
that, as agumonkey says, what's done is done. Me, I want to see tagged
architectures return (to enforce dynamic typing and make math fast by tagging
each word in memory; also has potential for security, and of course helping
GC).
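A tagged architecture of the sort described above keeps a few type bits in every memory word. Here's a rough Python sketch (my own illustration, with made-up 3-bit tag assignments, not any particular machine's layout) of why tagging makes dynamically typed math fast and helps GC:

```python
# Illustrative only: a software model of hardware word tagging.
# The tag width and assignments below are made up; real tagged
# machines varied in both.
TAG_BITS = 3
TAG_MASK = (1 << TAG_BITS) - 1
TAG_FIXNUM = 0b000   # small integer; tag deliberately zero
TAG_POINTER = 0b001  # heap pointer, traceable by the GC

def tag_of(word):
    return word & TAG_MASK

def box_fixnum(n):
    """Store a small integer in a word: value shifted left, tag in low bits."""
    return (n << TAG_BITS) | TAG_FIXNUM

def unbox_fixnum(word):
    assert tag_of(word) == TAG_FIXNUM  # the check hardware would do in parallel
    return word >> TAG_BITS

def fixnum_add(a, b):
    """With a zero fixnum tag, addition needs no unboxing at all:
    (x << 3) + (y << 3) == (x + y) << 3."""
    assert tag_of(a) == TAG_FIXNUM and tag_of(b) == TAG_FIXNUM
    return a + b
```

Because the fixnum tag is zero, tagged words can be added directly, with the type check done in parallel and trapping only on a mismatch; and the same tag bits tell a garbage collector which words are pointers without any auxiliary type maps.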

~~~
jacquesm
I think part of the reason general-purpose CPUs win over specialized ones
every time is that hardware that does one thing well is invariably worse at
everything else, and we're in a 'polyglot' environment these days. So if you
specialize a CPU for language 'X', it will almost automatically be worse at
all the others.

So the mass market will almost always favor general-purpose CPUs, though there
may be niches where you'll find more exotic engines suited to a particular
kind of computation, maybe even high-level languages in hardware (GreenArrays'
Forth offerings, for instance).

~~~
blt
Sadly can't remember where, but I once read an argument that today's CPUs are
'C Machines'. i.e. C code branches a lot, so we have branch predictors. C code
uses contiguous memory, so we have caches. I have no idea if it's a legit
argument, but I thought it was interesting.

~~~
sklogic
Yes, of course modern CPUs are designed with C in mind, and C-based benchmarks
are always the first used to assess any new architectural change. So calling
them 'C machines' is legitimate.

The worst bit of being a 'C machine' is that untyped integers and pointers are
interchangeable, instead of being forced to occupy different register files.
A related consequence is the lack of type tags, which could have been useful
for many high-level language features: GC, and smarter caches that pre-fetch
pointers from structures (most importantly for Lisp, pre-fetching cons cells,
depending on the way they've been accessed).

