The Rise and Fall of Thinking Machines (1995) (inc.com)
103 points by rbanffy on May 1, 2018 | 53 comments



Yes, this article is very old, and it really only scratches the surface of Hillis' genius while putting a business-failure angle on it.

Here's a great video describing the architecture of the CM-5

https://youtu.be/Ua-swPZTeX4

Note how similar the programming concepts are to CUDA (at an abstract level). Hillis also published his MIT thesis as a book in the 80s: The Connection Machine

https://www.amazon.com/Connection-Machine-Press-Artificial-I...

An incredibly well written and fascinating read, just as relevant today for programming a GPU as it was for programming the ancient beast of a CM-2. It's about algorithms, graphs, map/reduce, and other techniques of parallelism pioneered at Thinking Machines.

For example, Guy Blelloch worked at TM and pioneered prefix scans on these machines, now a common technique on GPUs (see the sketch after the links below).

https://www.youtube.com/watch?v=_5sM-4ODXaA

http://uenics.evansville.edu/~mr56/ece757/DataParallelAlgori...
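If you haven't seen how a scan parallelizes, here's a rough single-threaded Python sketch of the log-step doubling formulation Hillis and Steele describe (Blelloch's work-efficient scan is a refinement of the same idea). This is just my toy simulation, not GPU code; the body of the while loop is what every processor would do in lockstep:

    def inclusive_scan(xs, op=lambda a, b: a + b):
        # Each pass, element i combines with the element 'step' positions back;
        # 'step' doubles each pass, so the whole scan takes log2(n) passes.
        n = len(xs)
        vals = list(xs)
        step = 1
        while step < n:
            nxt = vals[:]  # everyone "reads" old values, then "writes" new ones
            for i in range(step, n):
                nxt[i] = op(vals[i - step], vals[i])
            vals = nxt
            step *= 2
        return vals

    print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))
    # -> [3, 4, 11, 11, 15, 16, 22, 25]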

There's also been a lot of buzz lately on HN about APL; many of Hillis' *Lisp ideas come from parallelizing array-processing primitives ("xectors" and "xappings"), ideas that originated in APL, as he acknowledged in the paper describing the language:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108...

What's old is new... again.


One should note that the CM-1/2 (which is essentially an FPGA turned inside out, which you can reconfigure for every program step) has a radically different architecture from the CM-5 (which is essentially the same as modern many-CPU distributed-memory supercomputers).

Also of note is that the *Lisp described in Hillis' paper (xectors and xappings, with a more or less hidden mapping to hardware) is completely different from the *Lisp that was actually sold by TMC, which handled embedding of the problem geometry into the hardware but otherwise was the Paris assembler (i.e. what you send through the phenomenally thick cable from the frontend to the CM to make stuff happen) bolted onto Common Lisp. IIRC the commercial *Lisp was somehow open-sourced and you can run it (in emulation mode) on top of SBCL.


You're right; in the video I linked above he talks about how different the CM-1/2 architecture is from the CM-5, but also about how the ideas of "data parallelism" on "virtual processors" map onto both designs.
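The way I'd paraphrase the "virtual processor" idea (my own toy illustration, not TMC's code): every data element is a virtual processor, and each of the P physical processors simply loops over every P-th element, which is essentially a grid-stride loop in CUDA terms:

    def run_data_parallel(data, elementwise_op, physical_procs=4):
        # N elements = N virtual processors; physical processor p serves
        # virtual processors p, p+P, p+2P, ...
        out = [None] * len(data)
        for p in range(physical_procs):
            for i in range(p, len(data), physical_procs):
                out[i] = elementwise_op(data[i])
        return out

    print(run_data_parallel(list(range(8)), lambda x: x * x))
    # -> [0, 1, 4, 9, 16, 25, 36, 49]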

Thanks for the info. I have seen variants in old PDFs around that use the !! parallelism construct instead of the algebraic forms of alpha, beta, and dot. I find the latter form, as described in the book The Connection Machine, to be very elegant.


I took Parallel Algorithms with Blelloch and it was mind-blowing. We used NESL and not *lisp, though.


I took it too, something like 20 years ago, and am still surprised at how often I reuse the concepts from the course. Guy rules and rocks those silk shirts.

What is a modern-day version of NESL?


*lisp was created mainly by Steve Omohundro https://en.m.wikipedia.org/wiki/Steve_Omohundro


Digging a bit more, I believe you're referring to this variant:

http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/l...

The *Lisp in the book The Connection Machine used a different syntax: there the operators α, β, and · were used to algebraically map and reduce Lisp functions over parallel data structures, as described in this paper by Hillis and Steele:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.108...

Unfortunately it doesn't seem like this language exists anymore.
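If it helps, a very loose modern analogue of the notation (my own toy sketch in Python, not the actual CM Lisp semantics): a xapping is roughly a map from indices to values, α lifts an ordinary function to apply elementwise, and β reduces the values with a binary function:

    def alpha(f):
        # alpha f : apply f elementwise over a xapping (here just a dict)
        return lambda xapping: {k: f(v) for k, v in xapping.items()}

    def beta(f):
        # beta f : combine all the values of a xapping with the binary function f
        def reduce_xapping(xapping):
            it = iter(xapping.values())
            acc = next(it)
            for v in it:
                acc = f(acc, v)
            return acc
        return reduce_xapping

    x = {0: 3, 1: 1, 2: 7}               # a toy "xapping"
    print(alpha(lambda v: v * v)(x))     # {0: 9, 1: 1, 2: 49}
    print(beta(lambda a, b: a + b)(x))   # 11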


There were a bunch of Lisp variants for the Connection Machine, but the commercial offering was StarLisp.

http://people.csail.mit.edu/bradley/cm5docs/nov06/GettingSta...

For more: http://www.softwarepreservation.org/projects/LISP/parallel#C...

Other variants were Connection Machine Lisp and Paralation Lisp.


You might also like "Richard Feynman and The Connection Machine" [1], written in 1989 by Daniel Hillis.

[1] http://longnow.org/essays/richard-feynman-connection-machine...


Biggest previous discussion was in 2014. Some great comments in there:

https://news.ycombinator.com/item?id=7121058

Second biggest was in 2009, complete with "seriously, how many times does this article has to be posted in HN?"

https://news.ycombinator.com/item?id=743170


Real computers have blinken lights. In an age of dull beige boxes, that’s what truly set them apart, at least to a layman.

I’m half joking, but half not. Nvidia, Cray, etc. need to put some blinken lights on these drab racks. Something with AI needs lights, and it looks sexy to Joe Public.

Like in classic sci-fi, a sentient AI machine would have columns of blinken lights, tended to by women with clipboards, lab coats, and high heels.


(Somewhat) funny story: My wife used to work at TMC and said that initially the blinken lights actually indicated the utilization of each CPU, but programmers spent so much time entertaining themselves by writing code to do animations in the blinking-light matrix that customers demanded Thinking Machines do something about it. So they changed the lights to blink randomly.


When TM was at its peak, I heard comments from people in the DoD/intel community that the primary value of having a Thinking Machines computer was recruiting value. At least one company set one up in a very prominent place so that new developer recruits would see it and think the company was cutting edge. They rarely if ever actually developed on it because they found it just wasn't practical to write (or rewrite) code bases to take advantage of the parallel architecture (and DARPA had paid for it, so it didn't actually cost the company that much).


There is part of one in the National Cryptologic Museum, though the lights are rigged for a slow pattern. I'm sure it's just a gutted case.


There's something awe-inspiring about big halls full of identical racks fitted with identical machines, with thick, even bundles of colourful network cables straining against their strips. Each unit -- row, rack, server -- anonymously, but with great power, quietly (underneath the roar of the fans) grinding away on some unknown, ephemeral subpart of a workload. From a distance, calm and regular, serene even; but as you get closer, plenty of blinking lights, on hard drives and NICs, feverishly and with no apparent pattern, giving a small hint of the fierce activity taking place inside the cool metal box.


I've watched blinken lights on my motherboard for about 2 seconds when it performed automatic overclocking. POWER, CPU, DRAM, CPU, DRAM, CPU, DRAM, TPU, BOOT. Quite entertaining.


I've seen some PC motherboards with an integrated "POST card" in the form of a bunch of LEDs or a pair of 7-segment displays. Sadly this seems to have been replaced by a few high-level LEDs and a bunch of meaningless ones, including stuff like backlit PCBs.

By the way, many DEC Alpha boards had a large number of LEDs near the CPU (probably driven directly by the CPU) which showed the state of the PALcode (and thus blinked in an entertaining way even when the system was up and running).


I built an Altair 8800 clone, just so I could play Star Trek (CP/M) and watch the lights blink. Too cool.


Have you not seen a DGX Station?


The purpose of those lights was to give a glimpse of the status of the machine - which cores were idling and which were not. The upper 16 LEDs of each board were doubled. I assume the extra ones were used for diagnostics.

If your computer lives in a datacenter, far from view, there is little purpose to them. If, however, it's a smaller unit that lives on a desk or in an office, being able to quickly tell its state is interesting. Good visualization is an art.


Not in person. I don’t see any blinken lights. Water cooled rigs and case lights don’t count.


Only slightly related, but imagine if you take a huge server rack and insert, hidden behind the front, a glass jar with a silicon brain into which tons of glowing wires run.

I want to hear the maintenance calls for that one.


Hey, this was marked as dead. Looks like you've been shadowbanned. Most of your recent comments look quite ok.


Best blinking light machine I ever saw was the Nanodata QM-1. More than a thousand LEDs. It had a program called 'tsq' which was short for 'Times Square' which would display a scrolling message.


The CM-1 LEDs were inspired by Hillis seeing WarGames and the WOPR.


I have a friend who works on Ab Initio software. Sheryl Handler is still the company CEO. They made him sign a strict NDA.

On an unrelated note, I bought "The Pattern on the Stone" by W. Daniel Hillis last year. He gives a lot of examples using Tic-Tac-Toe.


I have one story to tell about Ab Initio.

They invited me in for an interview. At the end of the day, they weren't done. So we did another day of interviews. And another.

They interviewed me, phone and in-person, for over 20 hours.

Then we got to discussing what sort of compensation I would require... and then they decided that they didn't want to hire me.

I have a friend there who is pretty happy, but they can't tell me what they do except in the broadest terms.


It's interesting to see how some of what Thinking Machines thought would happen in the future has now come to pass, such as scientists renting computing capacity by the hour, e.g. GPU rental on cloud computing.


Wasn't it how everything worked back in the early mainframes and timesharing days?


And has always worked with compute clusters too.


What with all the people drooling over the LEDs, how about a PCI board that shows the top bits of the address bus?


You can't see the CPU address bus from a PCI(e) card, sadly.


Ah yes, of course, it's more like a bunch of serial links. Ok, how about a vacant RAM slot then?


Well... Each board had 32 LEDs (the top 16 were doubled, I don't know for what), each cube had 16 boards, 8 on one side, a couple in the middle without LEDs and doing communications, IIRC, and 8 more on the other side. Not sure it had LEDs on the back cubes.

I'm seriously thinking about building a cluster of ARM-based thingies and using LEDs controlled from each node to show usage of cores, NEON lanes (patching the Ne10 library) and so on. There are some octa-core big.LITTLE (I forget the new name) boards that would make the carrier boards simpler (only 4 per carrier needed, considering CPUs alone). The boards themselves would be simple, having only LEDs connected to the GPIO pins and power being fed to the nodes, which would be wired together using ethernet.

Another approach, way cooler but waaaaay dumber (because it'd be a shitload of work for me), would be to design a board around an ethernet switch and a bunch of Octavo SiPs (or, maybe, some Pi-like CPU SoC with PoP RAM on top, provided it has ethernet on board to reduce chip count). Having everything on a single board would avoid PHY transceivers and reduce board complexity, but it still would be a ton of work for someone who hasn't designed a PCB since the dawn of the SMD era. Also, the Octavo parts are single-core and we'd need 32 of them per board to light up 32 LEDs in a meaningful way. I'd rather restart my hardware-engineer career with something less megalomaniac.

The final, laziest approach would be to get a cluster board and 7 SOPINE modules from the fine people at Pine64 and wire their GPIO lines to a couple of LED matrix modules. With 28 cores per board, the use of 28 of the 32 lights would be simple to figure out, but we'd need something for the other 4 LEDs (2 could be from the on-board ethernet upstream port, but 2 still remain). Also, since the SOPINEs stand perpendicular to the cluster board, spacing would be very tight.
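The software side of the LED idea is the easy part. Roughly (assuming Linux nodes; the actual GPIO call is left as a stub since it depends entirely on which board and library end up being used):

    import time

    def cpu_times():
        # Per-core (total, idle) jiffies from /proc/stat
        times = {}
        with open("/proc/stat") as f:
            for line in f:
                parts = line.split()
                if parts[0].startswith("cpu") and parts[0] != "cpu":
                    fields = list(map(int, parts[1:]))
                    idle = fields[3] + fields[4]   # idle + iowait
                    times[parts[0]] = (sum(fields), idle)
        return times

    def set_led(core, on):
        pass  # stub: wire this to whatever GPIO pin drives that core's LED

    while True:
        before = cpu_times()
        time.sleep(0.1)
        after = cpu_times()
        for core in sorted(before):
            total = after[core][0] - before[core][0]
            idle = after[core][1] - before[core][1]
            busy = (total - idle) / total if total else 0.0
            set_led(core, busy > 0.5)   # crude threshold; PWM would look nicer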


That'd be one very impressive lightshow :)


Replace the LEDs with laser diodes if you want something truly impressive.


I don't want to get killed while running benchmarks.


I definitely need to negotiate a bigger home office with my wife...


Not sure if this will help but you can tell her that you have my vote ;)


This was the company where Richard Feynman's job was to paint walls and buy office supplies.


He did more than that, but painting a wall, or other manual labor that doesn't require concentration, gives time for deep thinking. I'm sure he didn't mind it.


Hillis gives a written account of it. He wanted to do stuff that actually needed to get done; not BS.

http://longnow.org/essays/richard-feynman-connection-machine...

""" We were arguing about what the name of the company should be when Richard walked in, saluted, and said, "Richard Feynman reporting for duty. OK, boss, what's my assignment?" The assembled group of not-quite-graduated MIT students was astounded.

After a hurried private discussion ("I don't know, you hired him..."), we informed Richard that his assignment would be to advise on the application of parallel processing to scientific problems.

"That sounds like a bunch of baloney," he said. "Give me something real to do."

So we sent him out to buy some office supplies. While he was gone, we decided that the part of the machine that we were most worried about was the router that delivered messages from one processor to another. We were not sure that our design was going to work. When Richard returned from buying pencils, we gave him the assignment of analyzing the router. """


Who wouldn't pay to work there?

Even more so now, when we can see clearly how ahead of their time the CMs were.


Great writing by Gary A. Taubes, who is now better known for writing about carbohydrates and sugar through books like "Good Calories, Bad Calories".


How is "Good Calories, Bad Calories" as a book? What does it talk about?


I haven't read his book, but I've listened to him talk on podcasts. I think he makes some excellent criticisms of the horrible state of dietary science. He can be a bit dogmatic himself and sometimes makes unhelpful statements, but he also has some great points. I'd probably credit him with the chain of events that resulted in me deciding to go low carb, which has turned out to be a great decision so far for me.


Reading this on mobile I got really confused: this was published in 1995.


Yes. In those days spirits were brave, the stakes were high, and supercomputers looked super. ;-)

note: I fixed the headline.


Yeah, I read this with a weird sense of deja vu.


Cool and awesome does not pay. The top companies may be using cool stuff to accomplish things, but the things being done are ultimately pedestrian. Google sells ads, Amazon runs an online marketplace, Apple makes personal communicators and Facebook is a place for chatting with friends. None of them are doing anything like "searching for the origins of the universe."


Cool and awesome pays just fine. Ask NVIDIA. Many of the ideas pioneered in the Connection Machine have been validated by time. Hillis was only a couple of decades or so ahead of his time.


It doesn't pay when it is ahead of its time :)


I actually laughed hard at your last sentence. But you have made a very valid point.

One has to be pragmatic while establishing a company, especially in tech. The pitch shouldn't be something like "Inventing a time-travelling machine that will reverse the entropy of the universe while building self-replicating von Neumann constructors."



