
The CONS microprocessor (1974) [pdf] - kristianp
https://dspace.mit.edu/bitstream/handle/1721.1/41115/AI_WP_080.pdf
======
magoghm
It's interesting to see that, although what we now call a "microprocessor" had
already been invented, this paper uses the word to describe a CPU implemented
in microcode, not a very small processor that fits on a single chip.

Here is a picture of the author of that paper working on the CADR machine (the
successor to the CONS machine):
[http://www.computerhistory.org/chess/stl-431614f64ea3e/](http://www.computerhistory.org/chess/stl-431614f64ea3e/)

~~~
prestonbriggs
Here's a link to the CADR paper:
[http://www.unlambda.com/lispm/memo528.html](http://www.unlambda.com/lispm/memo528.html)
I like these; they describe a microcoded machine in more detail than I can
usually find.

------
zackmorris
A little background here - Lisp is an ideal language for genetic programming
because its programs are themselves S-expressions, which can be represented as
trees (warning: PDF):

[http://www.cis.umassd.edu/~ivalova/Spring08/cis412/Ectures/G...](http://www.cis.umassd.edu/~ivalova/Spring08/cis412/Ectures/GP.pdf)

This tree can be serialized into a binary string, which serves as the DNA of
the genetic algorithm.

So evolution is easy to simulate by randomly mutating nodes in the tree. Other
operations like crossover (exchanging branches) and sexual reproduction are
also straightforward.
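A minimal sketch of those operations in Python, with nested tuples standing in for S-expressions (the function set, terminal set, and mutation/crossover probabilities here are illustrative choices, not taken from the linked notes):

```python
import random

# S-expressions as nested tuples: ('+', ('*', 'x', 'x'), 1)  ~  (+ (* x x) 1)
FUNCS = {'+': 2, '*': 2}   # illustrative function set, mapped to arity
TERMS = ['x', 0, 1]        # illustrative terminal set

def random_tree(depth=2):
    """Grow a random expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    op = random.choice(list(FUNCS))
    return (op,) + tuple(random_tree(depth - 1) for _ in range(FUNCS[op]))

def subtree(t):
    """Pick a random subtree (possibly the whole tree)."""
    if isinstance(t, tuple) and random.random() < 0.5:
        return subtree(random.choice(t[1:]))
    return t

def mutate(tree, rate=0.2):
    """Point mutation: replace a random node with a freshly grown subtree."""
    if random.random() < rate:
        return random_tree()
    if isinstance(tree, tuple):
        op, *args = tree
        return (op,) + tuple(mutate(a, rate) for a in args)
    return tree

def crossover(a, b):
    """Exchange branches: graft a random subtree of b somewhere into a."""
    if isinstance(a, tuple) and random.random() < 0.5:
        op, *args = a
        i = random.randrange(len(args))
        args[i] = crossover(args[i], b)
        return (op,) + tuple(args)
    return subtree(b)

def evaluate(tree, x):
    """Interpret the S-expression at a given value of x."""
    if tree == 'x':
        return x
    if not isinstance(tree, tuple):
        return tree
    op, *args = tree
    vals = [evaluate(a, x) for a in args]
    return vals[0] + vals[1] if op == '+' else vals[0] * vals[1]
```

A real GP system would add fitness evaluation and selection on top, but mutation and crossover really are just this: tree surgery.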

Personally, I think that genetic algorithms are more approachable than neural
nets and would like to see them emphasized more in machine learning
(especially with modern video cards). When you think about it, both approaches
are searching for local maxima/minima in an effectively infinite search space,
looking for the function that maps inputs to outputs. Sure, genetic algorithms
generally take more computing power, but that becomes less of a problem with
each passing year. Given enough time, evolution always wins!

------
prestonbriggs
Not much about Lisp here. It describes the design of a microcode-based machine
that could be used for all sorts of things. You could use it to implement a
PDP-11, an x86, or whatever (though it's biased towards 32-bit machines).

Google "microcode" and you'll get a start. Hennessy and Patterson will carry
you further.
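To make the idea concrete, here's a toy sketch of a microcoded interpreter (my own illustration, not the CONS design): each macro-opcode indexes into a control store of micro-operation sequences that shuffle data between registers and memory. Swapping the control store is what lets the same datapath emulate a different instruction set.

```python
# Toy microcoded machine: the "hardware" is an accumulator (ACC), a memory
# data register (MDR), and memory. Each macro-instruction expands into a
# sequence of micro-ops read from the control store. Illustrative only.

def uop_fetch(st, arg): st['mdr'] = st['mem'][arg]   # memory -> MDR
def uop_load(st, arg):  st['acc'] = st['mdr']        # MDR -> ACC
def uop_add(st, arg):   st['acc'] += st['mdr']       # ACC += MDR
def uop_store(st, arg): st['mem'][arg] = st['acc']   # ACC -> memory

# Control store: macro-opcode -> micro-routine. Replacing this table
# retargets the machine to a different instruction set.
MICROCODE = {
    'LOAD':  [uop_fetch, uop_load],
    'ADD':   [uop_fetch, uop_add],
    'STORE': [uop_store],
}

def run(program, mem):
    st = {'acc': 0, 'mdr': 0, 'mem': mem}
    for opcode, arg in program:           # macro-instruction fetch/decode
        for uop in MICROCODE[opcode]:     # step through the micro-routine
            uop(st, arg)
    return st

# mem[2] = mem[0] + mem[1]
state = run([('LOAD', 0), ('ADD', 1), ('STORE', 2)], {0: 2, 1: 3})
```

The CONS paper is essentially the hardware version of this loop, with a wide microinstruction word controlling the datapath directly.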

------
tombert
I will admit that this stuff is way over my head, but it's things like this
that make me wonder why the Lisp machines/processors never really caught
on...can anyone here shed some light on this for me?

~~~
tachyonbeam
Wikipedia has a little snippet on this:
[https://en.wikipedia.org/wiki/Lisp_machine#End_of_the_Lisp_m...](https://en.wikipedia.org/wiki/Lisp_machine#End_of_the_Lisp_machines)

Basically, Lisp machines were expensive workstation computers, and PCs killed
all of the workstation manufacturers. Sun SPARCstations and Silicon Graphics
machines bit the dust as well. Nobody could justify the cost anymore.

~~~
jackhack
>lisp machines were expensive workstation computers

In most cases, yes, but the language also found its way into some very
unexpected places. For example, a subset of Common Lisp, "L", was at the core
of Prof. Rod Brooks' Subsumption Architecture which enabled the amazing mobile
robots built in his lab. He ran Lisp on 16 MHz 68332 embedded processors.
[https://www.researchgate.net/publication/2949173_L_--_A_Comm...](https://www.researchgate.net/publication/2949173_L_--_A_Common_Lisp_for_Embedded_Systems)

