
The Rise and Fall of Thinking Machines (1995) - lispython
http://www.inc.com/magazine/19950915/2622.html
======
JPKab
The guys from Thinking Machines (at least quite a few of them) ended up
starting a very weird enterprise software company called Ab Initio. They're an
odd company that doesn't even get included in Gartner surveys because they
won't release any information about their product. The only reason they get
any business is that they participate in "bake-offs" with big companies
like international banks and blow the competition (IBM, MS, Oracle, etc.) away
with fully scalable proofs of concept.

I work with Ab Initio software sometimes, and it's weird and annoyingly
"enterprisey", but thanks to the parallel-computing background of the guys who
started Ab Initio, the software was capable of scaling up storage and
processing long before anyone came up with Hadoop. Obviously it's extremely
closed-source stuff, with a proprietary language used to configure it.

I hate enterprise software companies, but Ab Initio is certainly pretty cool
compared to the other ones, and the people I meet there who are old Thinking
Machines guys are super brilliant and interesting.

Another interesting point: the author of this article is Gary Taubes, the
metascience writer who later got big into low-carb, high-fat diets and the
science that supports them.

~~~
rst
A whole lot of TMC alumni also wound up at Sun Microsystems. These included
Guy Steele, Greg Papadopoulos (for a time, Sun's CTO), John Rose (a major JVM
developer), Lew Tucker (who was running their cloud computing division for a
while), and a whole lot of hardware engineers who were key in a bunch of
different capacities.

~~~
ScottBurson
Also, Brewster Kahle, founder of the Internet Archive, is a TMC alumnus.

------
look_lookatme
Anyone have a clear video of the CM-2 LEDs in action? I can't seem to find
anything on YouTube. It is a beautiful piece of hardware:

[http://www.mission-base.com/tamiko/cm/CM-1_500w.gif](http://www.mission-base.com/tamiko/cm/CM-1_500w.gif)

While searching I found this awesome post about someone rescuing a panel from
a CM-5:

[http://www.alternet.us.com/?p=1272](http://www.alternet.us.com/?p=1272)

~~~
markc
If you're ever near Maryland head over to the National Cryptologic Museum
where you can see a CM-5 still running. I just saw it a couple days ago.

------
chiph
What were the compute cells like in the Connection Machine? The article uses
the phrase "single-bit processor", but that doesn't sound all that useful -
you'd want something with at least a little memory and a decent instruction
set.

Too small a cell and it runs out of work and spends most of its time waiting
on the network. Too big and you'll run into packaging problems.

~~~
EdwardCoffin
Guy Steele talked briefly about this in an interview with Dr. Dobb's magazine
[1]. Here's the relevant excerpt:

As my leave of absence was ending, I was still involved at Carnegie Mellon and
showed up for a seminar given by Danny Hillis
[[http://www.edge.org/digerati/hillis/](http://www.edge.org/digerati/hillis/)]
about the CM-1 Connection Machine
[[http://mission.base.com/tamiko/cm/](http://mission.base.com/tamiko/cm/)]. He
went on quite enthusiastically about artificial intelligence and neural
networks but thought the CM-1 was less suited for numerical programming.
Whenever you tell me that you can't make a computer do something, it's a red
flag. I believe the Turing hypothesis. The only question is the efficiency of
the calculation. I believe in crunching the numbers, doing the calculations,
and finding out whether an assertion is true or not.

So while Danny finished the lecture, I scribbled calculations on the back of
an envelope. Okay, suppose you did the arithmetic bit serially, like the
PDP-8S? A 32-bit floating-point multiply you have to multiply two 24-bit
significands, that's going to take 600 time steps. For the additions, a
logarithmic number of shifts, each will take 24 time steps...It turned out
that a floating-point add would take about 1/2 a millisecond and a multiply a
millisecond. Sounds terrible, but you've got 64,000 processors.

DDJ: So you could do 64,000 of these calculations in a millisecond.

GS: That's up in the range of 100 million operations a second, equivalent to a
Cray 1, which was the supercomputer of the day. So after the lecture, I went
up to Danny and said, "You said this is a terrible numeric machine, but I can
tell it's approximately the equivalent of a Cray 1." And he replied, "Can we
have dinner tonight?"
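As an aside, Steele's envelope numbers are easy to reproduce. A quick sketch
(the per-operation times are his figures from the interview; I'm assuming all
64K processors stay busy, which is the best case):

```python
# Back-of-envelope reproduction of Steele's CM-1 estimate.
processors = 65_536        # "64,000 processors" rounded in the interview
multiply_time = 1e-3       # seconds per bit-serial FP multiply (his figure)
add_time = 0.5e-3          # seconds per bit-serial FP add (his figure)

mult_rate = processors / multiply_time   # machine-wide multiplies per second
add_rate = processors / add_time         # machine-wide adds per second

print(f"{mult_rate / 1e6:.0f} MFLOPS of multiplies")   # ~66 MFLOPS
print(f"{add_rate / 1e6:.0f} MFLOPS of adds")          # ~131 MFLOPS
```

Which is indeed "up in the range of 100 million operations a second" -- roughly
Cray-1 class, at least on paper.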

[1] [http://www.drdobbs.com/jvm/a-conversation-with-guy-steele-jr/184406029](http://www.drdobbs.com/jvm/a-conversation-with-guy-steele-jr/184406029)

~~~
greenyoda
Being able to _theoretically_ do 100 million operations per second doesn't
make a machine competitive with a Cray 1. You also have to consider stuff like
the memory and I/O bandwidth. If you can only get 1000 floating point numbers
into memory in a second, being able to run at 100 megaflops doesn't help much.
Also, you can't really use all those processors unless you have an algorithm
that's massively parallel. Any time your algorithm hits a piece of code that's
sequential rather than parallel, you've gone from a Cray 1 back to a PDP-8.
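That last point is just Amdahl's law. A quick sketch (my numbers, picking a
CM-scale processor count) shows how brutally even a small sequential fraction
caps the speedup:

```python
# Amdahl's law: overall speedup is limited by the fraction of work
# that stays serial, no matter how many processors you throw at it.
def amdahl_speedup(parallel_fraction, n_processors):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

n = 65_536  # CM-scale processor count
for p in (1.0, 0.99, 0.9):
    print(f"{p:.0%} parallel -> {amdahl_speedup(p, n):,.0f}x speedup")
# 100% parallel gives the full 65,536x, but 99% gives only ~100x
# and 90% only ~10x -- the serial code dominates almost immediately.
```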

~~~
EdwardCoffin
The fundamental assumption they were making, though, was that they would be
using parallel algorithms. They intended to do things like finite-element
analysis where each element would be pinned to a particular processor, and
would only need to communicate with neighbours, which was exactly the kind of
thing this architecture was designed to do.
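That pattern is easy to picture with a toy example (mine, not TMC code): a 1-D
diffusion step where every cell updates in lockstep from its immediate
neighbours -- one cell per processor is exactly the SIMD shape.

```python
# One explicit step of 1-D heat diffusion on a periodic grid: every cell
# reads only its two neighbours, and all cells update simultaneously.
def diffuse_step(u, alpha=0.25):
    n = len(u)
    return [
        u[i] + alpha * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
        for i in range(n)  # on a CM, each i would live on its own processor
    ]

u = [0.0] * 8
u[4] = 1.0                  # a single hot cell
for _ in range(3):
    u = diffuse_step(u)
print(u)                    # the heat spreads symmetrically outward
```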

------
pge
An entertaining article by Hillis about Richard Feynman's involvement in
Thinking Machines: [http://longnow.org/essays/richard-feynman-connection-machine/](http://longnow.org/essays/richard-feynman-connection-machine/)

~~~
elteto
You beat me to the punch; I came here to post exactly this. A very entertaining
story, and I couldn't stop laughing when they sent Feynman to get office
supplies (!!!) because they didn't quite know what to do with him.

------
smoyer
Interesting that nCube [1] was also mentioned in the article. I remember HRB
Systems (here in State College PA - and later part of E-Systems, now Raytheon)
bought nCube computers that were based on the hypercube architecture. I was at
C-COR in 2005 when we purchased nCube from Larry Ellison's holding company and
added their VOD servers to our product family. Now that C-COR has been merged
into ARRIS, the legacy of nCube lives on in that company's VOD/IA servers [2].

[1] [http://en.wikipedia.org/wiki/NCUBE](http://en.wikipedia.org/wiki/NCUBE)

[2]
[http://www.arrisi.com/products/product.asp?id=17](http://www.arrisi.com/products/product.asp?id=17)

------
juliendorra
When I was a teenager, back in 1993, I played with this genetic image
generator by Karl Sims, running on a Connection Machine with 32,768 processors:
[http://www.karlsims.com/genetic-images.html](http://www.karlsims.com/genetic-images.html)
(at the Pompidou Center, a contemporary art museum).

It obviously blew my young mind, with the 16 screens, the ability to select an
image as the seed for the new series, the endless playability and the vertigo
of not being able to ever go back and find any series again.

I don't know if a Cray would have been less or more suited to that kind of
interactive installation. But it surely was an incredible feat for the time.

------
f2f
This is the fat-tree connection diagram for one of the cabinets of a CM-5. It
was posted on a wall at one of the US national labs which had the pleasure of
working with a CM-5 around the time War Games was in theatres. People still
gushed about how nice that machine was to program 10-15 years later. SiCortex
tried to do something similar with their computers; unfortunately they failed,
and with them went the last interesting interconnect architecture...

[http://i.imgur.com/jHtf4QB.jpg](http://i.imgur.com/jHtf4QB.jpg)

------
badsock
I'm conflicted about Hillis. On one hand there's stuff like this, and
especially his involvement with the Long Now foundation (if you haven't
checked out his design for the 10,000 year clock, you really should; clever
solutions to engineering challenges that almost no one has ever thought about
before): [http://longnow.org/clock/](http://longnow.org/clock/)

On the other hand he's part of Intellectual Ventures:
[http://www.intellectualventures.com/inventor-network/senior-inventors/w.-daniel-hillis](http://www.intellectualventures.com/inventor-network/senior-inventors/w.-daniel-hillis)

------
milhous
I always loved their motto:

"We're building a machine that will be proud of us."

------
kken
>Hillis is what good scientists call a very bright guy -- creative,
imaginative, but not quite a genius.

What an odd classification.

~~~
ScottBurson
That's my assessment too.

When I learned about the CM, I thought it was crazy. How could a SIMD machine
work for AI? Intelligence emerges from the interaction and integration of
multiple, somewhat independent "agents" -- not really the right word, but I
can't think of a better one. This is _Marvin Minsky's own theory_, to which I
happen to subscribe, but even if I didn't, or even if you don't, I still can't
understand how Marvin got sucked into this (except that Danny was engaged to
his daughter Margaret).

The key point is how SIMD machines handle conditionals. The conditional
"branch" is emitted, and those processors on which the test fails simply don't
execute the next _n_ instructions. This means that for a complex conditional
that's rarely satisfied, only one or two processors out of the tens of
thousands the machine might be built with would actually be executing. This is
why you wouldn't run a compiler, for example, on a SIMD machine: most of the
time, most of the machine would be idle.
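For anyone who hasn't programmed one of these, here's a sketch of the idea in
NumPy (my illustration, not CM code): every lane sees the same instruction
stream, and a mask decides which lanes actually commit results.

```python
import numpy as np

x = np.arange(8)              # one value per "processor"/lane
mask = x % 4 == 0             # the per-lane conditional test

# Both branch computations happen machine-wide; the mask just selects
# which result each lane keeps, so lanes where the test failed wasted
# the cycles spent on the "then" branch.
result = np.where(mask, x * 100, x + 1)

print(result)                 # only lanes 0 and 4 took the "then" branch
print(mask.mean())            # 0.25 -- three quarters of the lanes idle
```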

Anyway, I wasn't involved in TMC, though I knew people who were. Nobody asked
my opinion, nor would they likely have given it much weight if they had (I'm
sure there were skeptics with much better credentials than mine who also were
not listened to). But I was right. As I once heard someone else comment, the
CM turned out to be useful for approximately the complement of the set of
problems that it was intended for. It wound up being pretty good for number
crunching, at least for a while until better things emerged, but useless for
AI.

~~~
e12e
> How could a SIMD machine work for AI?

I guess it depends on how you define AI, from the excellent article linked
above:

[http://longnow.org/essays/richard-feynman-connection-machine/](http://longnow.org/essays/richard-feynman-connection-machine/)

"Part Two of Feynman's "Let's Get Organized" campaign was that we should begin
a regular seminar series of invited speakers who might have interesting things
to do with our machine. Richard's idea was that we should concentrate on
people with new applications, because they would be less conservative about
what kind of computer they would use. For our first seminar he invited John
Hopfield, a friend of his from CalTech, to give us a talk on his scheme for
building neural networks. In 1983, studying neural networks was about as
fashionable as studying ESP, so some people considered John Hopfield a little
bit crazy. Richard was certain he would fit right in at Thinking Machines
Corporation.

What Hopfield had invented was a way of constructing an [associative memory],
a device for remembering patterns. To use an associative memory, one trains it
on a series of patterns, such as pictures of the letters of the alphabet.
Later, when the memory is shown a new pattern it is able to recall a similar
pattern that it has seen in the past. A new picture of the letter "A" will
"remind" the memory of another "A" that it has seen previously. Hopfield had
figured out how such a memory could be built from devices that were similar to
biological neurons.

Not only did Hopfield's method seem to work, but it seemed to work well on the
Connection Machine. Feynman figured out the details of how to use one
processor to simulate each of Hopfield's neurons, with the strength of the
connections represented as numbers in the processors' memory. Because of the
parallel nature of Hopfield's algorithm, all of the processors could be used
concurrently with 100\% efficiency, so the Connection Machine would be
hundreds of times faster than any conventional computer."
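A toy version of that mapping is only a few lines (my sketch, not Feynman's
actual code): one array slot per neuron, Hebbian weights, and a fully
synchronous update -- exactly the one-neuron-per-processor shape described
above.

```python
import numpy as np

# Store two 6-bit patterns with Hebbian learning: W[i,j] = sum_p p[i]*p[j].
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],   # pattern "A"
    [ 1,  1,  1, -1, -1, -1],   # pattern "B"
])
W = patterns.T @ patterns
np.fill_diagonal(W, 0)          # no self-connections

def recall(probe, steps=5):
    """Synchronous update: every 'neuron' fires at once, as on a SIMD grid."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

noisy = np.array([1, -1, 1, -1, 1, 1])   # pattern "A" with the last bit flipped
print(recall(noisy))                      # recovers pattern "A" exactly
```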

edit: This made me wonder how Chuck Moore's GreenArrays Forth computers might
be used for neural nets...

~~~
ScottBurson
Okay, fair enough, neural nets (often called "connectionist AI") might have
been the one kind of AI that involved enough number crunching that the CM was
a favorable choice for running it.

But if you wanted to do, say, genetic evolution of your neural net topology --
not sure that had been thought of then, but people do it now -- the CM
wouldn't have worked very well.

Anyway, if my rather faint recollection is accurate, the hot application I
heard TMC people talking about the most back then was ray tracing.

------
sokoloff
Thanks for posting. Really enjoyed the historical read...

I wish I'd known the backstory of the antique fire engine that I'd
periodically see driving around the Lechmere area (back when Lechmere was
still "a thing" :) )

------
caycep
How successful are the "neuromorphic chips" (or even GPUs) in doing what TM
tried to do?

------
hga
From what little I saw, Sheryl Handler indeed had no business skills.

My experience was trying to be the 1st or 2nd person to fill a particular job.
I had a great interview process (except for Danny, a friend, initially giving
me a hard time for not being in school ($$$)) ... and then nothing. It took so
long that it was more than a month later, when I'd accepted and started another
job, that they contacted me to proceed.

As I understand it (I knew practically everyone who could do this job), it
took them three tries to fill it.

The worst story, though, which was well sourced, was that they once contacted
an applicant's _current workplace_ to get recommendation info....

In part, I suspect they're another example of "Death by Lethal Reputation" in
hiring:
[http://www.asktheheadhunter.com/halethalrep.htm](http://www.asktheheadhunter.com/halethalrep.htm)

------
skorgu
"Hillis envisioned his machine eventually becoming a sort of public-
intelligence utility into which people would tap their home PCs, thereby
bringing artificial intelligence to the world."

Jesus can you imagine AT&T as an AI? Shudder.

~~~
IvyMike
Isn't that a description of Google?

~~~
greenyoda
Not really, since Google doesn't do anything related to AI (OK, maybe self-
driving cars, but that's not part of their public internet services). IBM's
Watson might turn into a public AI utility, however.

~~~
IvyMike
I once typed "That thing where a boat is replaced one part at a time" and
Google came back with the right answer.

If that's not AI I have no idea what is.

(And if you don't think that Hillis envisioned his AI doing things like
"answers random questions" and "translating between human languages", what
_do_ you think he had in mind?)

~~~
greenyoda
My guess is that Google's search algorithm has no idea what a "boat" is, and
that it just has some clever algorithm that matched your query against some
words in one of its indexed documents. That doesn't sound like it would
qualify as AI.

An AI program would have some kind of knowledge representation[1] and
inference engine that would allow it to reason about boats. If your query was
"that thing where a boat gives birth to a tree", it could tell you your query
didn't make sense, as opposed to just returning nothing relevant.

By the way, when I tried my nonsensical query on Google, the first result was
about a Syrian woman giving birth on a boat. So Google doesn't even understand
the basics of grammar, let alone that a boat is an inanimate object that can't
give birth to anything. That doesn't sound like "intelligence" to me. There
were AI programs in the 1980s that could do better than that.

[1]
[https://en.wikipedia.org/wiki/Artificial_intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence)

~~~
tensor
Given that we don't know how Google search currently works, that's some
serious speculation on your part. We've made quite a lot of progress since the
'80s.

For example, neural word embeddings learn which words are synonyms without any
labeled input (and Google recently released a faster algorithm for doing this,
called word2vec). There are algorithms for parsing sentences, and numerous
databases and ontologies representing manually curated 'concepts'. Google could
well be using some of these within its search.
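The embedding idea is simple to illustrate (with toy 4-d vectors of my own
invention -- real word2vec vectors are learned from corpora and have hundreds
of dimensions): words used in similar contexts end up with a high cosine
similarity.

```python
import numpy as np

embeddings = {  # made-up vectors, purely for illustration
    "boat": np.array([0.9, 0.1, 0.0, 0.2]),
    "ship": np.array([0.8, 0.2, 0.1, 0.1]),
    "tree": np.array([0.1, 0.9, 0.8, 0.0]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, ~0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["boat"], embeddings["ship"]))  # high: near-synonyms
print(cosine(embeddings["boat"], embeddings["tree"]))  # low: unrelated
```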

That it doesn't come back and say "that doesn't make sense" might only be
because the search is designed to always try to find _something_, even if
it's the wrong thing. That's because it's posed as a ranking problem.

In any case, what is intelligence? One of my favourite sayings on this topic
is this (and I forget who to attribute this to, but it's not mine):

"We will never have artificial intelligence. Every so often we come up with a
problem to test for intelligence. When a computer finally beats a human at
this problem, detractors are quick to backtrack and claim 'well, that's not
really intelligence' and proceed to come up with a new test for intelligence."

I suspect that this will likely go on for a very long time. In the meantime,
we've been using intelligent algorithms for decades already.

------
ColinWright
OK, now I get the complaints about
[https://hn.algolia.io](https://hn.algolia.io) that I was reading earlier. At
first glance it's absolutely terrible. I have no idea if it will get better if
I persevere, but I've just tried searching for things that have "rise" and
"thinking" in the title so I could review the discussions from the past
gazillions of times this has been posted.

Nightmare. Absolute nightmare.

I really, _really_ hope they take on board the feedback they've had, because
this is horrible.

I hate to be so negative. Maybe it'll grow on me, although I feel seriously
disinclined to make the effort.

 _Added in edit:_ I can't even work out how to use the search function to find
that discussion. Eurgh.

 _Added in further edit:_ OK, now found it, but I'm reluctant to say anything
more as my impressions are so unremittingly negative. I hope you guys have
thick skins and serious programming ability to make it usable quickly. For
now, there's no way I'll be using it, so the search box is now useless.
Bookmarking HNSearch.

 _Final edit:_ OK, I'm surprised, this has only been submitted on five
previous occasions under this title. I thought it was more.

~~~
redox_
All feedback is read and welcome :) We've already deployed a few UX
improvements and are currently working on improving the usability. We'll
deploy it soon; a preview is available here:
[https://hn.algolia.com/beta](https://hn.algolia.com/beta).

~~~
ihnorton
A couple requests:

1. Sort by date. This is very useful for finding discussions which have
dropped off the front page but might still have interesting updates.

2. Verbatim search (right now "algolia" -> "algorithm").

3. Fix the fact that _this thread_ doesn't show up when I search for
"algolia" with [last 24h] checked.

~~~
redox_
1. Sorting by date and not by points often produces ugly results (especially
over long periods), so we introduced the "last 24h", "past week" and "past
month" options, which filter the result set but keep the default ranking.
Would love to have more feedback about it, because even though it's different
from before, we think it fits quite well.

2. Verbatim search is something we do not provide yet; I've opened a feature
request on our side :)

3. Have you checked [All] or [Comment] instead of the default [Story]?

(I've just updated hn.algolia.com with the latest ranking/design tweaks)

~~~
ihnorton
1. I disagree, but OK. Comment rankings may or may not be very interesting,
and if there are many comments in a 24h period it is totally unhelpful (I want
to find the latest comments with [x] keyword).

2. Cool :)

3. I had checked [All], but the results look much better now (and this
discussion shows up).

~~~
ndessaigne
1. You're right, it doesn't fit your use case. We'll implement a strict
ordering in the next iteration.

