Here's a great video describing the architecture of the CM-5
Note how similar the programming concepts are to CUDA (at an abstract level). Hillis also published his MIT thesis as a book in the 80s: The Connection Machine.
An incredibly well-written and fascinating read, just as relevant today for programming a GPU as it was for programming the ancient beast of a CM-2. It's about algorithms, graphs, map/reduce, and other techniques of parallelism pioneered at Thinking Machines.
For example, Guy Blelloch worked at TM, and pioneered prefix scans on these machines, now common techniques used on GPUs.
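A prefix scan turns any associative operation into a running accumulation. As a rough illustration (function names are my own), here is a sequential reference in Python next to the step-doubling, data-parallel formulation often credited to Hillis and Steele, which conceptually updates every element at once, the way the CM's processors would:

```python
from operator import add

def inclusive_scan(xs, op=add):
    """Sequential reference: out[i] = xs[0] op xs[1] op ... op xs[i]."""
    out, acc = [], None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out

def hillis_steele_scan(xs, op=add):
    """Data-parallel formulation: log2(n) steps; each step combines every
    element with the one `step` positions to its left, all at once."""
    xs = list(xs)
    step = 1
    while step < len(xs):
        xs = [xs[i] if i < step else op(xs[i - step], xs[i])
              for i in range(len(xs))]
        step *= 2
    return xs

# Both agree: inclusive_scan([3, 1, 4, 1, 5]) == [3, 4, 8, 9, 14]
```

The parallel version does more total work than the sequential one (O(n log n) vs O(n) operations) but finishes in O(log n) steps given enough processors, which is exactly the trade the CM, and today's GPUs, are built for.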
There's also been a lot of buzz lately on HN about APL; much of Hillis' *Lisp ideas come from parallelizing array-processing primitives ("xectors" and "xappings"), ideas that originated in APL, as he acknowledged in the paper describing the language:
What's old is new... again.
Also of note is that the *Lisp described by Hillis' paper (xectors and xappings, with a more or less hidden mapping to hardware) is completely different from the *Lisp that was actually sold by TMC, which handled embedding the problem geometry into the hardware but otherwise was Paris assembler (i.e. what you send through the phenomenally thick cable from the frontend to the CM to make stuff happen) bolted onto Common Lisp. IIRC the commercial *Lisp somehow got open-sourced, and you can run it (in emulation mode) on top of SBCL.
Thanks for the info, I have seen variants in old pdfs around that have the !! parallelism construct instead of using the algebraic forms of alpha, beta, and dot. I find the latter form as described in the book The Connection Machine to be very elegant.
What is a modern day version of NESL?
The *Lisp in the book The Connection Machine used a different syntax; there the operators α, β, and · were used to algebraically map and reduce Lisp functions over parallel data structures, as described in this paper by Hillis and Steele:
Unfortunately it doesn't seem like this language exists anymore.
For more: http://www.softwarepreservation.org/projects/LISP/parallel#C...
Other variants were Connection Machine Lisp and Paralation Lisp.
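As a loose Python analogy (not the actual *Lisp semantics, where α distributes over whole expressions), a xapping can be modeled as a plain dict, α as an elementwise map over shared keys, and β as a reduction over a xapping's values; all names here are my own:

```python
from functools import reduce
from operator import add

def alpha(fn, *xappings):
    """α (sketch): apply fn elementwise across xappings (dicts),
    pairing up values that share a key."""
    keys = set(xappings[0])
    for x in xappings[1:]:
        keys &= set(x)
    return {k: fn(*(x[k] for x in xappings)) for k in keys}

def beta(fn, xapping):
    """β (sketch): combine all values of a xapping with a binary fn."""
    return reduce(fn, xapping.values())
```

So `alpha(add, {"a": 1, "b": 2}, {"a": 10, "b": 20})` yields `{"a": 11, "b": 22}`, and `beta(add, ...)` on that result collapses it to 33; in the original language both operations happen across all processors at once rather than in a Python loop.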
The second-biggest discussion was in 2009, complete with "seriously, how many times does this article has to be posted in HN?"
I’m half joking, but half not. Nvidia, Cray, etc need to put some blinken lights on these drab racks. Something with AI needs lights, and it looks sexy to Joe Public.
Like in classic sci-fi, a sentient AI machine would have columns of blinken lights, tended to by women with clipboards, lab coats, and high heels.
By the way, many DEC Alpha boards had a large number of LEDs near the CPU (probably driven directly by the CPU) which showed the state of PALcode (and thus blinked in an entertaining way even while the system was up and running).
If your computer lives in a datacenter, far from view, there is little purpose to them. If, however, it's a smaller unit that lives on a desk or in an office, being able to tell its state at a glance is useful. Good visualization is an art.
I want to hear the maintenance calls for that one.
On an unrelated note, I bought "The Pattern on the Stone" by W. Daniel Hillis last year. He gives a lot of examples using Tic-Tac-Toe.
They invited me in for an interview. At the end of the day, they weren't done. So we did another day of interviews. And another.
They interviewed me, phone and in-person, for over 20 hours.
Then we got to discussing what sort of compensation I would require... and then they decided that they didn't want to hire me.
I have a friend there who is pretty happy, but they can't tell me what they do except in the broadest terms.
I'm seriously thinking about building a cluster of ARM-based thingies, using LEDs controlled from each node to show usage of cores, NEON lanes (patching the Ne10 library), and so on. There are some octa-core big.LITTLE (I forget the new name) boards that would make the carrier boards simpler (only 4 per carrier needed, considering CPUs alone). The boards themselves would be simple, having only LEDs connected to the GPIO pins and power fed to the nodes, which would be wired together using ethernet.
Another approach, way cooler but waaaaay dumber (because it'd be a shitload of work for me), would be to design a board around an ethernet switch and a bunch of Octavo SiPs (or, maybe, some Pi-like CPU SoC with PoP RAM on top, provided it has ethernet on board to reduce chip count). Having everything on a single board would avoid PHY transceivers and reduce board complexity, but it still would be a ton of work for someone who hasn't designed a PCB since the dawn of the SMD era. Also, the Octavo parts are single-core, and we'd need 32 of them per board to light up 32 LEDs in a meaningful way. I'd rather restart my hardware-engineer career with something less megalomaniac.
The final, laziest approach would be to get a cluster board and 7 SOPINE modules from the fine people at Pine64 and wire their GPIO lines to a couple of LED matrix modules. With 28 cores per board, the use of 28 of the 32 lights would be simple to figure out, but we'd need something for the other 4 LEDs (2 could be from the on-board ethernet upstream port, but 2 still remain). Also, since the SOPINEs stand perpendicular to the cluster board, spacing would be very tight.
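Whichever hardware wins, the per-core usage driving those LEDs can be computed the same way on any Linux node by diffing two /proc/stat snapshots. A minimal sketch (function names are my own; the jiffy-column layout is the one documented in proc(5)):

```python
def parse_proc_stat(text):
    """Extract per-core (total, idle) jiffy counters from /proc/stat text.
    Skips the aggregate "cpu" line; keeps "cpu0", "cpu1", ..."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
            fields = [int(v) for v in parts[1:]]
            idle = fields[3] + fields[4]  # idle + iowait columns
            stats[parts[0]] = (sum(fields), idle)
    return stats

def busy_fractions(before, after):
    """Busy fraction (0.0..1.0) per core between two snapshots."""
    out = {}
    for cpu in before:
        total = after[cpu][0] - before[cpu][0]
        idle = after[cpu][1] - before[cpu][1]
        out[cpu] = 0.0 if total <= 0 else 1.0 - idle / total
    return out
```

In practice you'd read `/proc/stat` twice with a short sleep in between, then threshold each core's fraction onto a GPIO-driven LED (e.g. via a userspace GPIO library); that last step is board-specific and left out here.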
We were arguing about what the name of the company should be when Richard walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss, what's my assignment?" The assembled group of not-quite-graduated MIT students was astounded.
After a hurried private discussion ("I don't know, you hired him..."), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.
"That sounds like a bunch of baloney," he said. "Give me something real to do."
So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router.
Even more so now, when we can see clearly how far ahead of its time the CMs were.
note: I fixed the headline.
One has to be pragmatic while establishing a company, especially in tech. The pitch shouldn't be something like "inventing a time-travelling machine that will reverse the entropy of the universe while self-replicating like a von Neumann constructor".