
Design of Lisp-based Processors (1979) - Cieplak
http://dspace.mit.edu/handle/1721.1/5731
======
wfn
There's a person devoted to basically (as I take it) re-designing the Lisp
Machine (<http://en.wikipedia.org/wiki/Lisp_machine>) from the bottom up on
modern hardware (in the "let's design (say) a microcontroller using modern
hardware assemblage/etc. tools/knowledge" sense) - the blog is at
<http://www.loper-os.org> ; from the blog:

\- posts tagged under 'Hardware': <http://www.loper-os.org/?cat=7>

\- posts tagged under 'LoperOS': <http://www.loper-os.org/?cat=11>

\- the 'About' page (an interesting read, though I'm not sure it's the most
concise/on-topic exposé of the project; it does present the author's frame of
mind / angle of approach, I suppose): <http://www.loper-os.org/?p=8>

~~~
erikj
His project seems to be dead by now, no updates in a long time.

~~~
wfn
That might be (though if you browse under 'Hardware' or 'LoperOS', you'll find
relatively recent posts; his updates are generally scarce throughout in any
case) - I'd say this is one of those really long-term (indeed, vaporware as of
now) projects that might not make it, but hey. In any case, I'm quite sure
the author himself sees this as a really long-term affair and treats it as a
free-time-after-work kind of thing. [0] I mean, for what he's set out to
accomplish, it's a _huge_ undertaking. Perhaps over-ambitious overkill, but
I for one try to follow his (frustratingly infrequent) updates because (1)
it's a really interesting idea and (2) I like his perspective and I think some
of the people here might share a subset of his views (it's not difficult to
spot them, as he rants relatively often;).

[0] <http://www.loper-os.org/?p=562>

------
Symmetry
This reminds me of the Reduceron[1], a processor designed for Haskell. You can
do a lot of optimizations in hardware if you have certain guarantees about the
software running on it.

[1]<http://www.cs.york.ac.uk/fp/reduceron/>

~~~
taeric
Similarly, you can do a lot of optimizations in hardware if you can make the
assumption that it is used correctly.

I hate to sound like I'm arguing against this. I believe it is entirely
plausible that more progress is made where more work is done. I don't have
the knowledge to say whether more progress is possible on either, nor do I
want that to limit the ideas pursued.

------
rtpg
Very interesting; I was thinking about how to implement Lisp hardware-wise
just this week. Will read in detail.

The idea that this single document gives you the capability of fully
understanding a computing system is insane. If you're patient enough I imagine
you could even try building it.

~~~
Cieplak
<http://www.aviduratas.de/lisp/lispmfpga/index.html>

<http://opencores.org/project,igor>

------
S4M
Actually, I was reading the interview of Ken Thompson in _Coders at Work_. He
said about Lisp machines:

All along I said, “You’re crazy.” The PDP-11’s a great Lisp machine. The
PDP-10’s a great Lisp machine. There’s no need to build a Lisp machine that’s
not faster. There was just no reason to ever build a Lisp machine. It was kind
of stupid.

I don't know much about the topic, but I thought Lisp machines were about
enabling a programmer to code in a language that's high-level enough to do
powerful things, but at the same time can access all the low-level components
of the machine. Can somebody explain what I am missing?

~~~
ScottBurson
Permit me to disagree with the distinguished Mr. Thompson. There wasn't quite
"no reason".

First off, the notion of a personal workstation was just getting started back
then. It was entirely reasonable for the MIT AI lab to want to build some
workstations for its researchers, who had previously been sharing a PDP-10.
There wasn't, in 1979, any off-the-shelf machine with virtual memory and
bitmapped graphics that you could just buy. The original PDP-11 had a
completely inadequate 16-bit address space. The first VAX model, the 11/780,
was too expensive to be a single-user workstation. The 11/750, released in
October 1980, was cheaper, but I think still too expensive for the purpose
(though a lot of them were sold as timesharing systems, of course).

In any case, workstations started to be big business in the 1980s, and through
the 1990s. Apollo, Silicon Graphics (SGI), and of course Sun Microsystems all
enjoyed substantial success. The fact that DEC didn't own this market speaks
clearly to the unsuitability of the 11/750 for this purpose.

Also, the extreme standardization of CPU architectures that we now observe --
with x86 and ARM being practically the only significant players left -- hadn't
occurred yet at that time. It was much more common then than now for someone
building a computer to design their own CPU and instruction set.

None of that has to do with Lisp specifically, but it does put some context
around the AI Lab's decision to design and build their own workstation. If
they wanted workstations in 1980, they really had no choice.

And the tagged architecture did have some interesting and useful properties.
One of them was incremental garbage collection, made possible by hardware and
microcode support. We still don't have a true equivalent on conventional
architectures, though machines are so fast nowadays that GC pauses are rarely
onerous for interactive applications.

Another consequence was a remarkable level of system robustness. It soon
became routine for Lisp machines to stay up for weeks on end, despite the fact
that they ran entirely _in a single address space_ -- like running in a
single process on a conventional machine -- and were being used for software,
even OS software, development. The tagging essentially made it impossible for
an incorrect piece of code to scribble over regions of memory it wasn't
supposed to have access to.

Obviously Lisp Machines didn't take over the world, but it wasn't really until
the introduction of the Sun 4/110 in 1987 (IIRC) that there was something
overwhelmingly superior available.

If Thompson had said simply that it was clear from the trends in VLSI that
there would eventually be a conventional machine that was so much faster and
cheaper than it would be possible to make Lisp Machines -- simply because
conventional machines would be sold in much greater volume -- that Lisp
Machines would be unviable, I would be forced to agree with him. But that had
not yet happened in 1979.

EDITED to add:

One more point. The first CPU that was cheap enough to use in a workstation
and powerful enough that you would have wanted to run Lisp on it was the
68020; and that didn't come out until 1984.

~~~
Symmetry
When Moore's law eventually runs out of steam it might be once again practical
to have that sort of specialization.

~~~
ScottBurson
I think it's actually a matter of having better CAD tools. When we get to the
point that an entire billion-transistor CPU can be laid out automatically
given only the VHDL for the instruction set (or something -- I'm not enough of
a chip designer to know exactly what's involved), then it will be much easier
to experiment with new architectures.

~~~
gatherknwldg
The barrier to entry is low, now, and getting lower.

Prototyping your custom instruction sets on FPGAs and then commissioning a run
to stamp them to ASICs isn't prohibitively expensive, or hard.

In part, it's lack of imagination that has led us so far down the complicated,
twisty path into x86 hell.

Just because your chip _can_ do it doesn't mean it's good at it.

~~~
noahl
This is very interesting. I always thought that making an ASIC was
prohibitively expensive except for the largest companies. How much does it
really cost?

I would really enjoy playing with a Lisp chip. It might not be good for
performance computing, but it would be great for writing GUIs. The paper
suggests having a chip with a Lisp part for control and an APL part for array
processing - I think the modern equivalent would be a typed-dispatching part
for control and some CUDA or OpenCL cores for speed.

~~~
gatherknwldg
> I always thought that making an ASIC was prohibitively expensive except for
> the largest companies. How much does it really cost?

Full custom is still quite expensive.

But you can go the route I'm talking about (prototype on an FPGA, then get in
on one of the standard runs at a chip fab via MOSIS or CMP or a similar
service) for ~10,000 USD for a handful of chips.

~~~
defrost
I'm sensing some kind of universal price point for bleeding edge fabrication.

Adjusting for time, etc., that's pretty much what it cost in 1991 to have a
handful of custom boards and firmware built around the TI DSP chips of the day
in order to build a dedicated multichannel parallel seismic signal processing
array for marine survey work.

------
cm3
Why don't we have tagged memory in today's architectures? Is nobody getting
inspiration from the LISP machine or Burroughs
(<https://en.wikipedia.org/wiki/Burroughs_large_systems>) architectures? Is it
because of the failed Intel iAPX 432?

~~~
opinali
Languages that need tagged memory can implement it in software, and on
current architectures they will still run as fast as on a CPU that had that
trick implemented in hardware. More likely faster, because the software solution
enables much greater flexibility (e.g. look at Javascript VMs, they have
various ways to represent values, and that's a single language). Also because
compilers can often optimize out _all_ value-representation overheads (by
unboxing etc.). Stuff implemented in the CPU decoder (or even in microcode)
can never afford to make that level of optimization.
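A toy sketch of that software trick, low-bit tagging of machine words, roughly what many Lisp and JavaScript runtimes do (all names here are made up for illustration, not taken from any real VM):

```python
# A toy model of low-bit pointer tagging: the low bit of a word says
# whether it is a small integer ("fixnum") or an aligned heap pointer.

FIXNUM_TAG = 1  # low bit set -> fixnum; low bit clear -> pointer

def box_fixnum(n):
    # Shift left one bit and set the tag; costs one bit of range.
    return (n << 1) | FIXNUM_TAG

def is_fixnum(word):
    return word & FIXNUM_TAG == FIXNUM_TAG

def unbox_fixnum(word):
    assert is_fixnum(word)
    return word >> 1

def tagged_add(a, b):
    # Untag, add, retag -- a couple of machine instructions in a real
    # VM, with no hardware tag support needed.
    return box_fixnum(unbox_fixnum(a) + unbox_fixnum(b))

print(unbox_fixnum(tagged_add(box_fixnum(20), box_fixnum(22))))  # 42
```

Because heap objects are word-aligned, real pointers always have a clear low bit, so the two cases never collide.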

------
endlessvoid94
This is fantastic. I've always heard about lisp machines, but the subject
always felt foggy and intimidating. This is an excellent read.

Can you imagine a paper like this written about a language like Ruby or Scala?

------
xradionut
A colleague of mine worked in the fab, (South building on the Richardson
campus), that developed TI's Lisp chip. Years later we found a complete
Explorer system including documentation in a local surplus warehouse, bought
it and sold it to a collector the next week. Yet another indirect benefit of
the whole Lisp system era...

------
notb
Very good introduction to Lisp, as well.

------
c3d
Regarding other kinds of evaluation, like evaluating graphs, this is a bit the
idea behind XLR (<http://xlr.sf.net>). In that case, evaluation works by
rewriting parse trees.

Example: a factorial is defined as

        0! -> 1
        N! -> N*(N-1)!

An if-then-else statement as:

        if true then X else Y -> X
        if false then X else Y -> Y

The core operator is the rewrite -> which means: rewrite the parse tree on the
left into the parse tree on the right. There are of course rules about binding
and such, and the type system is a bit of a challenge. But the idea may be
interesting to whoever is still hanging around this thread :-)
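A minimal sketch of that rewriting idea in ordinary code (this is not XLR's implementation, just an illustration using nested tuples as parse trees and the two factorial rules above):

```python
# Evaluate by repeatedly rewriting parse trees until no rule applies.
# Trees: plain ints are leaves, ('!', n) is factorial, ('*', a, b) is
# multiplication.

def rewrite(tree):
    if isinstance(tree, int):
        return tree  # nothing left to rewrite
    op = tree[0]
    if op == '!':
        n = rewrite(tree[1])
        if n == 0:
            return 1  # 0! -> 1
        return rewrite(('*', n, ('!', n - 1)))  # N! -> N*(N-1)!
    if op == '*':
        return rewrite(tree[1]) * rewrite(tree[2])
    raise ValueError("no rule for %r" % (tree,))

print(rewrite(('!', 5)))  # 120
```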

------
pimeys
This is a joy to read. I'd love to get some tips for other good academic
papers like this.

------
dschiptsov
What's the point in bringing this here? To make a few people even more sad?

~~~
neeee
It's a very interesting paper.

~~~
dschiptsov
It is 30 years old, and those who could appreciate it know where to find AIMs.

~~~
S4M
Well, some people (the youngest ones) here may never have heard of LISP
machines, but checking the article might make them interested in them.

~~~
pjmlp
Yes.

Leaving this here so that one can cry over how advanced IDEs were in those
days and what we have lost.

Kalman Reti, the Last Symbolics Developer, Speaks of Lisp Machines:
<http://www.loper-os.org/?p=932>

Additionally

<http://www.sts.tu-harburg.de/~r.f.moeller/symbolics-info/symbolics.html>

<http://www.loper-os.org/?cat=10>

