
Yes, this article is very old, and it really only scratches the surface of Hillis' genius, framing it with a business-failure angle.

Here's a great video describing the architecture of the CM-5


Note how similar the programming concepts are to CUDA (at an abstract level). Hillis also published his MIT thesis in the 1980s as a book: The Connection Machine


An incredibly well-written and fascinating read, just as relevant today for programming a GPU as it was for programming the ancient beast of a CM-2. It's about algorithms, graphs, map/reduce, and other techniques of parallelism pioneered at Thinking Machines.

For example, Guy Blelloch worked at TM and pioneered prefix scans on these machines, now a common technique on GPUs.
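For readers unfamiliar with the technique, here is a rough sketch (in Python, simulated sequentially, not actual TM or GPU code) of the data-parallel scan described in Hillis and Steele's "Data Parallel Algorithms" paper: each pass is one step that all virtual processors would perform simultaneously, and the whole scan takes O(log n) such steps.

```python
def hillis_steele_scan(xs):
    """Inclusive prefix-sum in ceil(log2 n) parallel steps."""
    xs = list(xs)
    d = 1
    while d < len(xs):
        # In one parallel step, processor i (for i >= d) reads
        # xs[i - d] and adds it to its own value.
        xs = [xs[i] + xs[i - d] if i >= d else xs[i]
              for i in range(len(xs))]
        d *= 2
    return xs

print(hillis_steele_scan([1, 2, 3, 4, 5]))  # [1, 3, 6, 10, 15]
```

Blelloch's later work-efficient scan reduces the total operation count, but the doubling-stride version above is the one most directly associated with the Connection Machine papers.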



There's also been a lot of buzz lately on HN about APL; many of Hillis' *Lisp ideas come from parallelizing array-processing primitives ("xectors" and "xappings"), ideas originating in APL, as he acknowledged in the paper describing the language:


What's old is new... again.

One should note that the CM-1/2 (which is essentially an FPGA turned inside out, which you can reconfigure for every program step) has a radically different architecture from the CM-5 (which is essentially the same as modern many-CPU distributed-memory supercomputers).

Also of note is that the *Lisp described by Hillis' paper (xectors and xappings, with a more or less hidden mapping to hardware) is completely different from the *Lisp that was actually sold by TMC, which handled embedding of the problem geometry into hardware but otherwise was Paris assembler (i.e. what you send through the phenomenally thick cable from the frontend to the CM to make stuff happen) bolted onto Common Lisp. IIRC the commercial *Lisp was somehow open-sourced, and you can run it (in emulation mode) on top of SBCL.

You're right; he talks in the video I linked above about how different the CM-1/2 architecture is from the CM-5's, but also about how the ideas of "data parallelism" on "virtual processors" map onto both designs.
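The "virtual processor" idea can be sketched roughly like this: when the problem has more elements than physical processors, each physical processor loops over its share of virtual processors. This is a hedged illustration in Python (the names n_physical and vp_ratio are mine, not CM terminology for any actual API):

```python
def data_parallel_map(f, data, n_physical=4):
    """Apply f to every element, as n_physical processors each
    simulating a slice of virtual processors."""
    data = list(data)
    vp_ratio = -(-len(data) // n_physical)  # virtual procs per physical proc (ceil)
    result = [None] * len(data)
    for p in range(n_physical):        # physical processors (parallel on real HW)
        for v in range(vp_ratio):      # each loops over its virtual processors
            i = p * vp_ratio + v
            if i < len(data):
                result[i] = f(data[i])
    return result

print(data_parallel_map(lambda x: x * x, range(10)))  # [0, 1, 4, ..., 81]
```

The same abstraction shows up in CUDA, where a grid of logical threads is scheduled onto however many physical cores the device has.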

Thanks for the info. I have seen variants in old PDFs around that have the !! parallelism construct instead of using the algebraic forms of alpha, beta, and dot. I find the latter form, as described in the book The Connection Machine, to be very elegant.

I took Parallel Algorithms with Blelloch and it was mind-blowing. We used NESL and not *lisp, though.

I took it too, something like 20 years ago, and am still surprised at how often I reuse the concepts from the course. Guy rules and rocks those silk shirts.

What is a modern day version of NESL?

*lisp was created mainly by Steve Omohundro https://en.m.wikipedia.org/wiki/Steve_Omohundro

Digging a bit more, I believe you're referring to this variant:


The *Lisp in the book The Connection Machine used a different syntax; there the operators α, β, and · were used to algebraically map and reduce Lisp functions over parallel data structures, as described in this paper by Hillis and Steele:
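A rough Python analogue of the idea (an assumption-laden simplification: Connection Machine Lisp xappings are index-to-value mappings, modeled here as plain dicts) is that α lifts a function to act elementwise across xappings, a parallel map, while β combines all values of a xapping with a binary function, a reduction:

```python
from functools import reduce

def alpha(f):
    """alpha f: lift f to act elementwise on xappings (dicts here)."""
    def lifted(*xappings):
        # Apply f only at indices present in every argument xapping.
        keys = set(xappings[0]).intersection(*xappings[1:])
        return {k: f(*(x[k] for x in xappings)) for k in keys}
    return lifted

def beta(f):
    """beta f: reduce the values of a xapping with binary f."""
    return lambda xapping: reduce(f, xapping.values())

xs = {0: 1, 1: 2, 2: 3}
ys = {0: 10, 1: 20, 2: 30}
sums = alpha(lambda a, b: a + b)(xs, ys)   # elementwise: {0: 11, 1: 22, 2: 33}
total = beta(lambda a, b: a + b)(sums)     # reduction: 66
print(sums, total)
```

This is only a sketch of the algebra; on the actual machine the elementwise application and the combining tree both ran across physical processors.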


Unfortunately it doesn't seem like this language exists anymore.

There were a bunch of Lisp variants for the Connection Machine, but the commercial offering was StarLisp.


For more: http://www.softwarepreservation.org/projects/LISP/parallel#C...

Other variants were Connection Machine Lisp and Paralation Lisp.
