
MIT professor Marvin Minsky wins $540,000 award - wslh
http://www.bostonglobe.com/business/2014/01/15/mit-professor-marvin-minsky-wins-award/aSiCSHIjlGycOGYmeLSZ5L/story.html
======
ScottBurson
I still remember a story I heard Gerry Sussman tell about Marvin, from when
Gerry was a grad student. Gerry was programming a neural net (or something
like it; I forget exactly what) and told Marvin he was planning to use a
random number generator to initialize the weights in the network "so it won't
have any biases". Marvin replied, in his careful, laconic way, "Of course it
will have biases. You just won't know what they are."

This comment has never seemed as stunningly insightful to me as it evidently
did to Gerry; it's probably one of those things that are obvious once
stated, but not before.

Anyway I'm glad to see Marvin winning this prize.

~~~
eli_gottlieb
You know, I just now finally understood what was going on there: _a randomized
map matches no territory_. Enlightenment achieved.

~~~
Houshalter
But it _does_ match a territory. It's just a random one.

~~~
Double_Cast
Let's suppose you're lost in the Amazon. Would you not prefer a
professionally drawn map of the Amazon over a "map" I whimsically scribbled
while blindfolded? After all, my map might not correspond to the Amazon, but
it must correspond to _something_.

~~~
Houshalter
I'm not sure if you are responding to the right comment or if you understood
what I said. The point is that when you initialize weights randomly, you
aren't starting with a blank map and then adding stuff to it. You are
starting with a random map and then trying to correct it. Random bias is
still bias.
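
To make that concrete, here's a minimal sketch (hypothetical NumPy code, not
anything from the thread): a freshly initialized network already computes
some definite function before it sees any data. Change the seed and you get
a different map, but never a blank one.

    import numpy as np

    def init_net(rng, sizes=(2, 8, 1)):
        # Gaussian-random weights: an arbitrary but definite "map"
        return [rng.normal(0.0, 1.0, (m, n)) for m, n in zip(sizes, sizes[1:])]

    def forward(weights, x):
        for w in weights[:-1]:
            x = np.tanh(x @ w)      # hidden layers
        return x @ weights[-1]      # linear output layer

    x = np.array([0.5, -0.3])
    for seed in range(3):
        net = init_net(np.random.default_rng(seed))
        # each untrained net already gives a (biased) answer
        print(seed, forward(net, x))

The "biases" Minsky was pointing at are just the choices baked into the
initialization: the distribution, its scale, the architecture. You chose
them; you just may not know what functions they favor.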

~~~
Double_Cast
My bad. The way I interpreted your comment was

> _But (a random map is useful because) it_ does _match a territory
> (somewhere out in the actual universe), it's just a (location that's hard
> to find)._

I don't know what I was thinking.

------
pdevr
From Wikipedia: "Isaac Asimov described Minsky as one of only two people he
would admit were more intelligent than he was, the other being Carl Sagan."

------
davi
_“I don’t know what they think I do,”_ Minsky said. _“I make up theories of
how the mind works and when I’m lucky enough, I have some students who make
careers out of that.”_

As a guy with a lab, I find the idea of thinking about how the mind works
from time to time, and then having students make careers of that idea,
fairly humbling.

~~~
leoc
It's not exactly the whole story. Even in the '60s and '70s the AI labs faced
big battles to retain their funding, and IIRC Minsky has a reputation as a
pretty rough rider in those struggles.

------
kt9
My favourite Minsky story is the one where he commissioned a grad student for
a summer to solve "the computer vision problem once and for all".

It's a great anecdote to illustrate how many have underestimated the
difficulty of problems in computer vision, robotics, and AI.

~~~
gahahaha
I guess this was before Moravec's paradox was "discovered"
[http://en.wikipedia.org/wiki/Moravec%27s_paradox](http://en.wikipedia.org/wiki/Moravec%27s_paradox)

~~~
SatvikBeri
Minsky was one of those who formulated Moravec's paradox, and the
difficulty of computer vision probably played a big role in that.

------
sib
Random Marvin Minsky story: About 15 years ago I invited Guy Kawasaki to speak
to a student group at MIT and he accepted. I was able to schedule about an
hour in the evening to give him a tour of the MIT Media Lab. He and I were
wandering from area to area when we came upon Minsky, working in a lab. Being
a former CS undergrad from a different school, I was pretty excited, so I
decided to go for it and introduce myself and Guy to this famous person. Guy
said, "Wow - you're the Marvin Minsky?" Without batting an eye, Minsky said,
"Wow - you're the Guy Kawasaki?" And then he proceeded to spend ten minutes
walking us through what he was working on.

------
yodhe
I am sure there are a lot of (under)graduate research programs that could use
that money better than some eminent professor at the end of his days. Not
that the recognition and praise are undeserved; it's just that the money
could have been better utilized imho.

~~~
tgb
Completely agree. If I were trying to inspire research through monetary
rewards, I wouldn't be giving them out to someone in Minsky's position,
regardless of how deserving. I'd look instead to recognize someone up-and-
coming who could use that to further their career and inspire others to follow
in their footsteps. Minsky is already a legend.

------
samg
This article is a few months old: January 15th, 2014.

------
craigching
I see so many references to Marvin Minsky in the texts and books I read. Glad
to see him recognized like this!

------
yc-kjh
Idiocy:

My thermostat has three possible beliefs: 1) It is too warm in here. 2) It is
too cold in here. 3) It is just about right.

Minsky is wrong about AI in the same way that McCarthy was wrong. McCarthy
really "believed" that his thermostat had three possible beliefs, and that his
thermostat really "believed" one of them at any given time.
[http://Books.Google.com/books?id=yNJN-
_jznw4C&pg=PA30&lpg=PA...](http://Books.Google.com/books?id=yNJN-
_jznw4C&pg=PA30&lpg=PA30&dq=Minsky+thermostat&source=bl&ots=rAa-9Bw0Zl&sig=658QHYEb-2ilLDWs-
CM2DUGbAlU&hl=en&sa=X&ei=oXQuU-
jkC4fooATsjoGwDQ&ved=0CEQQ6AEwAw#v=onepage&q=Minsky%20thermostat&f=false)

Searle debunked him, but the message doesn't seem to have gotten out. We are
still wasting money on AI.

We are wasting money on AI because we are following the materialist
hypothesis: that there is nothing in the universe besides matter and energy,
and the interactions between matter and energy. To reject this hypothesis is
unthinkable for many, even for most, because the only alternative hypothesis
would be that some sort of non-material (spiritual?) stuff must exist.

But the evidence is overwhelming. The evidence cannot be denied.

Linguistics, for example, has always been divided into syntax and semantics.
No linguist has ever challenged this taxonomy. Both syntax and semantics are
very real.

Computers are syntactic engines. They do syntax. They can only do syntax. It
matters whether a symbol is present or not, and it matters in what order
symbols are arranged. But the computer does not, and indeed cannot, associate
any meaning (semantics) with any symbol. The only way a computer, being only
a syntactic engine, can appear to do semantics is if a human has first been
clever enough to find a mapping in some natural language between syntax and
semantics [and such a mapping must exist in the first place, for him to find
it], and then clever enough to exploit it. The computer is still doing only
syntax, even while appearing to do semantics.
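
To make the thermostat concrete (a hypothetical sketch of my own, not
McCarthy's or Minsky's): written out as a program, the "three beliefs" are
just three tokens. The program compares numbers and returns a token; the
tokens mean nothing to it.

    # A toy "three-belief" thermostat: pure symbol shuffling. The tokens
    # mean nothing to the program; the human who wired the sensor and
    # chose the setpoint supplied all of the semantics.
    def thermostat_state(reading_c, setpoint_c=20.0, band_c=1.0):
        if reading_c > setpoint_c + band_c:
            return "TOO_WARM"      # "belief" 1
        if reading_c < setpoint_c - band_c:
            return "TOO_COLD"      # "belief" 2
        return "JUST_RIGHT"        # "belief" 3

    print(thermostat_state(23.5))  # -> TOO_WARM

Rename the tokens to "FOO", "BAR", and "BAZ" and the program behaves
identically. Whatever meaning they carry was put there by the human who
chose the labels and wired the sensor.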

Searle showed this also with his Chinese Room analogy. But the "cognitive
scientists" have not been paying attention. Or they are still in denial.

But humans really do semantics. Nobody questions this, or challenges it,
because it is self-evident. You are doing semantics right now, as you read my
comment.

Because humans really do semantics, and computers cannot, humans and computers
must be fundamentally different sorts of creatures. The idea that the human
mind is software, running on the hardware of a human brain, must _necessarily_
be false. (If it were true, then humans couldn't do semantics either, but they
do!)

If this wasn't enough, Nagel (of "What is it like to be a bat?" fame) has
shown that the materialistic hypothesis is almost certainly wrong, in his
recent book "Mind and Cosmos". [http://www.Amazon.com/Mind-Cosmos-Materialist-
Neo-Darwinian-...](http://www.Amazon.com/Mind-Cosmos-Materialist-Neo-
Darwinian-Conception/dp/0199919755)

But the world pays no attention to Nagel either. To do so would be to have a
Kuhnian revolution of epic proportions, and that is not "scientifically
correct".

So the cognitive scientists, the AI researchers, the biologists, and pretty
much everybody in science today, toe the politically correct line. They
celebrate Minsky.

They ought to be bringing up the hard questions. That is what real scientists
do.

It is easier to be an idiot, because that doesn't put your funding in
jeopardy.

I conclude that there are very few "real" scientists. Cue the "no true
Scotsman" jokes. But deal with the issue I've raised. Be intellectually
honest.

~~~
bermanoid
Are you seriously bringing up Searle's Chinese Room as an argument against AI
research? According to Wikipedia, "The Chinese room argument is primarily an
argument in the philosophy of mind, and both major computer scientists and
artificial intelligence researchers consider it irrelevant to their fields."
Right on.

First off, philosophy of mind is 100% irrelevant to modern AI research, which
is more concerned with creating algorithms that _act_ as if at a human level
of intelligence than creating algorithms that recreate human states of mind.
You guys might not get that, but every person working on AI does.

Even given that, it's allowing for a very charitable interpretation of
Searle's "work": most of us consider Searle to be a fucking idiot at best, a
troll in the most likely case. The Chinese Room analogy is tortured, and
pretty much assumes dualism from the start - to me, if a dude in a closet
pushing papers around could fake understanding Chinese as far as an outside
observer is concerned, we'd have solved strong AI, so I don't care whether
Searle thinks we've succeeded or not.

Re: Nagel, I don't know his stuff, but having read
[http://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F](http://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F),
I'm not too interested; there's so much vagueness there that I feel like this
is just more bullshit questioning whether we have achieved "real"
understanding or just a mechanical approximation. And again, I don't care. I
want a program that acts as if it's intelligent, and it needs to pass most
normal people's bar for intelligence, not some dipshit philosopher's bar for
being human.

Philosophers have always been misinterpreting AI research's goals, which is
why nobody in AI has ever paid them any attention, and which is also why
they'll never be relevant to anything. Even if they're right, they're not
asking questions that anyone cares about.

~~~
kylebrown
I like this deconstruction of Searle's Chinese Room, by Scott Aaronson. He
calls Searle's argument a "non-insight":
[http://www.scottaaronson.com/democritus/lec4.html](http://www.scottaaronson.com/democritus/lec4.html)

~~~
demallien
That link was the most interesting thing I've read on HN all day. Thank you!

My favorite quote: "As a historical remark, it's interesting that the
possibility of thinking machines isn't something that occurred to people
gradually, after they'd already been using computers for decades. Instead it
occurred to them immediately, the minute they started talking about computers
themselves. People like Leibniz and Babbage and Lovelace and Turing and von
Neumann understood from the beginning that a computer wouldn't just be another
steam engine or toaster -- that, because of the property of universality
(whether or not they called it that), it's difficult even to talk about
computers without also talking about ourselves."

~~~
kylebrown
Sure thing. I should have mentioned that you don't need to read the whole
thing (Searle's Chinese Room argument is discussed in only one section of the
linked lecture).

You'll probably enjoy his other material. Scott Aaronson is a prolific
expositor; he's been blogging for nearly a decade (since before he was hired
by MIT's CS dept).

