
The Man Who Would Teach Machines to Think - jonbaer
http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
======
cs702
"Gödel, Escher, Bach" is one of my favorite books, and I have a tremendous
amount of respect and admiration for Hofstadter... so I'm really disappointed
and saddened to read that he (quoting from the article) _"hasn't been to an
artificial-intelligence conference in 30 years. 'There's no communication
between me and these people,' he says of his AI peers. 'None. Zero. I don't
want to talk to colleagues that I find very, very intransigent and hard to
convince of anything. You know, I call them colleagues, but they’re almost not
colleagues -- we can't speak to each other.'"_

Hofstadter should be COLLABORATING with all those other researchers who are
working with statistical methods, emulating biology, and/or pursuing other
approaches! He should be looking at approaches like Geoff Hinton's deep belief
networks and brain-inspired systems like Jeff Hawkins's NuPIC, and comparing
and contrasting them with his own theories and findings! The converse is true
too: all those other researchers should be finding ways to collaborate with
Hofstadter. It could very well be that a NEW SYNTHESIS of all these different
approaches will be necessary for us to understand how complex, multi-layered
models consisting of a very large number of 'mindless' components ultimately
produce what we call "intelligence."

All these different approaches to research are -- or at least should be --
complementary.

~~~
ttt_
I agree. While reading the article I couldn't help but, sort of, empathize with
modern AI programs. Watson and I are very similar: Watson can win Jeopardy but
has no understanding of why; I can recognize a handwritten 'a' and I too have
no understanding of why.

When I look at my daughter developing, from baby to infant to child, hasn't
that been one long, constant, intensive training? As she recognizes things, I
give feedback. After a while she starts correlating things, and signals for me
to give feedback. By the time she's an adult, she will have full control of her
intelligence, but still no understanding of it.

Maybe what we are missing is just the algorithm for information storage and
retrieval. If we can master Genetic Algorithms, why not Cellular Databases? Or
Chemical Procedures?
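
That training-by-feedback loop is easy to sketch in code. The following is purely my own toy illustration (a perceptron, nothing from the article): the program learns to classify points correctly from corrective feedback alone and, like Watson, has no "understanding" of why its weights work.

```python
import random

# A toy sketch of the "recognize -> feedback -> correlate" loop
# described above: a perceptron adjusts its weights from corrective
# feedback alone and ends up classifying correctly.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            guess = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = label - guess          # the feedback signal
            w1 += lr * error * x1          # correlate input with feedback
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

random.seed(0)
# Two well-separated clusters: class 0 near the origin, class 1 farther out.
data = [((random.random(), random.random()), 0) for _ in range(50)]
data += [((2 + random.random(), 2 + random.random()), 1) for _ in range(50)]

w1, w2, b = train_perceptron(data)
accuracy = sum(
    (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == label
    for (x1, x2), label in data
) / len(data)
print(accuracy)
```

The learned weights classify the training points essentially perfectly, yet nothing in the program "knows" why the decision boundary is where it is.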

~~~
loup-vaillant
> _Me and Watson are very similar, Watson can win Jeopardy but has no
> understanding why, I can recognize a handwritten 'a' and I too have no
> understanding why._

So, you and Watson are "very similar" just because both systems lack a perfect
understanding of themselves? You don't know that. Your premises look true, but
your conclusion doesn't follow from them (or at all). Actually, you probably
_know_ that no matter how you spin it, you and Watson are very different.

So don't say you aren't, it's misleading. Not only to others, but to yourself
as well. Try to find a meaningful similarity instead.

~~~
ehsanu1
I find myself doing poor pattern recognition at times (e.g., always choosing
the wrong key for a particular door), and realizing just afterward that a
machine learning library could well have made the same mistake I just did. This
isn't a new insight, but it still feels like an epiphany when you realize it as
it happens.

------
stiff
So Good Old Fashioned AI[1] is the new hot underdog AI thing now? I seriously
don't understand the praise of Hofstadter in the article and in the comments
here, or the criticism of mainstream AI research, especially since it is very
hard to find any precise details of what he does and what the outcomes are.

There have been attempts to understand intelligence with intelligence (logic,
symbols, reasoning, etc.) for 30 years, to not much effect; now AI and machine
learning are advancing quite steadily, so why the snark? All evidence suggests
that the way the brain itself learns things is statistical and probabilistic
in nature. There are also new disciplines now, like Probabilistic Graphical
Models, which are free of some of the traditional downsides of purely
statistical methods, in that they can be interpreted and human-understandable
knowledge can be extracted from them. This really seems promising, and to some
extent it is a union of the old and new approaches, despite the claims of a
big division. But it is hard to see much promise in purely symbolic methods
invented merely by some guy somewhere thinking very hard.

I for one am very happy that people seek inspiration in the way the human
brain works; that's what science is. If you just come up with things without
consulting the real world, it's not science, it's philosophy, the one
discipline that has yet to produce a single result.

[1] [http://en.wikipedia.org/wiki/GOFAI](http://en.wikipedia.org/wiki/GOFAI)

~~~
maaku
> it's philosophy, the one discipline that has yet to produce a single result.

I agree that the signal-to-noise ratio in philosophy is very low. (I also
strongly agree with the rest of your comment.) But let's be fair: it was
philosophy that produced

1\. Formal logic

2\. Occam's razor

3\. Scientific method

4\. Church-Turing thesis

~~~
gjm11
1\. I suppose the earliest system of formal logic was the syllogistic, but
that's a long way from what we call formal logic now and it's not at all clear
that it ever did anyone any good. Formal logic of the modern kind has a
history going something like: Leibniz (mathematician), Boole (mathematician),
Frege (both mathematician and philosopher), Peano (mathematician), Russell
(both mathematician and philosopher), etc. (by Russell's time most of the
architecture of modern formal logic is in place) and it looks to me as if --
if we really must engage in these boundary disputes -- it's more down to
mathematicians than to philosophers.

2\. Yeah, William of Ockham was a philosopher. Score one for philosophy.

3\. It looks to me as if almost everything important in the history of the
scientific method is down to scientists rather than philosophers -- though,
since the word "scientist" wasn't coined until the 19th century and
disciplinary boundaries used to be more porous than they are now, they were
often called "natural philosophers" and often did a certain amount of what-we-
now-call-philosophy as well as what-we-now-call-science.

Francis Bacon is the usual chief suspect for introducing something close to
the modern scientific method. He was an experimental scientist as well as a
philosopher (though, it seems, not a particularly good one). Galileo was a
scientist. Newton was a scientist. I suppose you might want to go back to
Aristotle (though I wouldn't) -- but, actually, Aristotle was trying to do
science as well as philosophy.

4\. Since Church and Turing were both trained and employed as mathematicians,
it seems rather strange to credit the Church-Turing thesis to _philosophy_.
(So far as I can tell, all the other important people in its history --
Goedel, Kleene, Post, Rosser, etc. -- were mathematicians too.)

You might consider these topics philosophical by definition. If so, the
conclusion would seem to be that even _philosophy_ is often best done by
scientists and mathematicians, which doesn't speak well for philosophy as a
discipline.

~~~
nooneelse
"Leibniz (mathematician)"...

Leibniz was top notch in many, many things: philosophy, math, linguistics,
law, diplomacy, engineering, psychology, and sociology... lots of things. He
was a particularly bad way to start a list that is supposed to support an
argument that it was mathematicians who developed formal logic, since he
entirely shatters the distinction you are trying to draw on.

~~~
gjm11
> since he shatters entirely the distinction you are trying to draw upon.

I think you have misunderstood my argument, which was _not_ that formal logic
was developed only by mathematicians with no philosophers involved (that would
be nuts) but that saying it was done by philosophers _as opposed to anyone
else_ is quite wrong. And that argument would go through just the same even if
we classified Leibniz exclusively as a philosopher (which would be just as
wrong as classifying him exclusively as a mathematician; my apologies, by the
way, for being sloppy about that).

I have to admit I can't imagine how what I wrote turned (on its way into your
mind) into an attempt to draw a sharp dichotomous distinction between
mathematicians and philosophers, but evidently it did and I'm sorry that I
evidently wasn't clear enough. Yes, people can be both mathematicians and
philosophers, or both scientists and philosophers, or all three; yes, the
boundaries are fuzzy sometimes; it was no part of my intention to imply
otherwise.

------
ssivark
Norvig and co. are like the drunk man searching for his lost key under a
streetlight. It might not be where it lies, but that's the only place where
they think they could find something, or at least make some tangible progress.
Hofstadter doesn't mind taking the long shot... feeling his way about in the
dark, in the hope of inching forward and making progress towards artificial
intelligence.

This comparison between complementary approaches is an apt analogy for most
fields, where the focus shifts every once in a while, when one of the
approaches largely hits a wall and most people switch to the other one. A
while later, the trends will almost inevitably reverse and draw inspiration
from other approaches. The unfortunate thing is that there's no dialogue
between the two camps, which makes it that much harder to port good ideas from
one context to the other.

I could provide examples from physics research, or for that matter, trends in
static-vs-dynamic blogs :P Also, the more "applied" the field, the shorter
these cycles are.

Ref:
[https://en.wikipedia.org/wiki/Streetlight_effect](https://en.wikipedia.org/wiki/Streetlight_effect)

~~~
stiff
Hofstadter is about as much of a science researcher as Goethe was in his day.

------
drcode
Douglas Hofstadter is important because most AI work right now focuses either
on (1) big-data-style statistical analysis or (2) emulating brain anatomy.

DH is the most well known guy of a small, stubborn group of AI developers who
still believe that "human thought" can be reasoned about and can be understood
in isolation, and that we can build intelligence without simply reducing it to
statistics or to brain anatomy.

I applaud his efforts, and find some of the programs he's written both
creative and refreshing.

~~~
joshuaeckroth
Members of what you might call that "stubborn group" have started a yearly
conference: Advances in Cognitive Systems. See the top of their FAQ:
[http://cogsys.org/faq](http://cogsys.org/faq)

~~~
joe_the_user
Nice group, but they are definitely swimming against the current standard
approach.

That said, I think one really interesting article here is "Human-Level
Artificial Intelligence Must Be an Extraordinary Science" by Nicholas L.
Cassimatis.

It brings a lot of challenges into focus.

[http://www.cogsys.org/pdf/paper-1-5.pdf](http://www.cogsys.org/pdf/paper-1-5.pdf)

------
stiff
This is a bad article, especially for a technical audience. It romanticizes
things a lot, as journalists have to in order to keep up readership, but it
doesn't make for a very balanced judgement. This kind of debate goes on and
on; you can read a much more reasonable account here:

[http://norvig.com/chomsky.html](http://norvig.com/chomsky.html)

I find the analogy to Einstein at the end of the article especially funny. I
think it's much more likely that people will look upon current defenders of
"good old fashioned AI" the way they now look upon people who still searched
for the ether after Einstein's discoveries.

~~~
upquark
100% agreed. GEB is a great book, but Hofstadter hasn't been relevant in this
field for a while now (AFAIK). Good old fashioned AI approaches show what a
giant blind spot our minds are for our species, and how little introspection
goes on even within the brightest minds. Modern approaches to AI/ML have
yielded tangible results and moved the field in the right direction;
dismissing those results makes anyone look ridiculous in 2013.

~~~
Daniel_Newby
Pure ML approaches are doomed to end up in a tar pit of AI mysticism. A
statistical learning model can and will conclude that the thunder gods make
the crops grow. People like Douglas Hofstadter and Eliezer Yudkowsky will have
to use classical AI approaches to train our AI children in science and
rationality.
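
The thunder-gods claim can be made concrete. Below is my own toy construction (not anything from the thread): rain drives both thunder and crop growth, and a purely correlational learner duly finds a strong thunder-crops association even though neither causes the other.

```python
import random

# Confounded data: rain causes both thunder and crop growth, so
# thunder and crops correlate despite having no causal link.

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
rain = [random.random() < 0.3 for _ in range(10_000)]
# Thunder and crop growth each depend on rain, not on each other.
thunder = [r and random.random() < 0.8 for r in rain]
crops = [1.0 + (0.5 if r else 0.0) + random.gauss(0, 0.1) for r in rain]

rho = correlation([float(t) for t in thunder], crops)
print(rho)  # strongly positive despite no causal link
```

A learner that stops at the correlation "concludes" thunder makes crops grow; distinguishing the confounder requires something beyond the raw statistics (interventions, or a causal model).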

~~~
upquark
Not sure what AI mysticism is, nor how you arrived at the conclusion that
statistical learning models will come up with thunder gods.

~~~
drdeca
My uninformed guess is that they mean a statistical AI system might come to
conclusions similar to humans', such as concluding that there exist thunder
gods, while a non-statistical type (one with just "reason") would not come to
that conclusion about thunder gods, and as such would not model humans as
accurately.

------
aethertap
I've been enjoying this series from MIT OCW on Gödel, Escher, Bach:

[http://ocw.mit.edu/high-school/humanities-and-social-
science...](http://ocw.mit.edu/high-school/humanities-and-social-
sciences/godel-escher-bach/video-lectures/)

~~~
fsckin
Me too.

------
nathansnyder
Loved this quote: "...the trillion-dollar question: Will the approach
undergirding AI today—an approach that borrows little from the mind, that's
grounded instead in big data and big engineering—get us to where we want to
go? How do you make a search engine that understands if you don't know how you
understand? ...AI has become too much like the man who tries to get to the moon
by climbing a tree: 'One can report steady progress, all the way to the top of
the tree.'"

~~~
jonbaer
Surprised no mention of the Chinese Room in this article ...

~~~
tunesmith
Me too. I wonder what Hofstadter's response to it is.

Edit: Googling, it seems that Hofstadter's response is along the lines of
Haugeland's: that by describing the translator as a man, we are improperly
being asked to identify with him, when in the actual metaphor the man is only
an implementer in a larger system. The larger system actually _does_
understand Chinese. So the claim is that the Chinese Room thought experiment
is actually a fallacy.

------
xerophtye
OK, so here's an attempt to clear up the feud. As I see it, what Hofstadter
wants is an anti-gravity elevator. The "modern" (aka practical) AI approach is
ladders, stairs... and eventually mechanical elevators. Now of course,
progress along the "practical" approach will NEVER lead to an anti-gravity
elevator, as the fundamental principles are completely different. But they get
the job done.

See, that's the point: as incredibly awesome and useful as the anti-gravity
elevator might be, mankind can't wait around for someone to invent it just to
raise stuff or travel in the vertical dimension. And hence all our modern AI
systems (including Google, Siri, robots, warehouse management systems, etc.)
are powered by this approach.

So should we scrap stairs and elevators in pursuit of anti-gravity? Certainly
not; we NEED them right NOW. But does this mean we should stop dreaming about,
and working towards, anti-gravity? HELL NO!! We need that too.

And hence, as much as I LOVE Hofstadter (I have had the same approach to AI
ever since I was a kid), I still have a very PROFOUND respect for modern
approaches, because they help me create some functionally amazing software.

------
duwease
Considering the large and growing body of research that highlights areas where
the brain's output is flawed or plain wrong when compared to the consensus
"optimal solution," I think I'm with the "AI establishment" as it's painted by
this article. It doesn't seem self-evident to me that the inner workings of
the human mind are the only, or even an optimal, implementation of
intelligence for every task.

If anything, the human mind seems to me to be a particular algorithm that is
flexible, but trades that flexibility for capability in certain problem areas.
Using a transportation metaphor, it's like walking versus air travel. Walking
is incredibly flexible when it comes to where you can go, but air travel is by
far the optimal route to get from coast-to-coast, although you are limited to
travelling between airstrips. I feel like focusing on the human brain as the
"true" intelligence is like claiming that walking is the only true
transportation, instead of focusing on optimal routes for each problem.

~~~
hharrison
Maybe the article doesn't get it across, but the anti-establishment crowd
doesn't think that doing things the "human way" is always the best solution.
In fact, they acknowledge that it's often, even usually, not. Hofstadter does
not want to replace (e.g.) GPS AI with human-style intelligence, or even more
ridiculous, replace the AI that flies airplanes with one that gets bored and
does crossword puzzles instead of paying attention. Instead he wants to
understand human-style intelligence, because that opens doors to tackling
really interesting scientific/philosophical problems like personality, self,
consciousness, autonomy, etc.

In short, the camps have different goals. One is trying to solve problems
optimally. The other is trying to understand what intelligence means,
biologically. AI as a sub-discipline of Engineering, vs. AI as a sub-
discipline of Cognitive Science, perhaps.

~~~
csense
> replace the AI that flies airplanes with one that gets bored and does
> crossword puzzles

This would be a great scenario for a sci-fi novel / movie. An AI that
threatens the human race not by achieving sentience and attempting world
domination, but by achieving sentience and playing Sudoku instead of doing its
job...

------
MichaelMoser123
Also of interest:

Hofstadter's lecture about analogy on YouTube:
[http://www.youtube.com/watch?v=n8m7lFQ3njk](http://www.youtube.com/watch?v=n8m7lFQ3njk)

Also some earlier work on the subject

[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.307....](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.307.5740)
[http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.7...](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.766)

I have also written a review of this very interesting book, "Surfaces and
Essences: Analogy as the Fuel and Fire of Thinking":

[http://mosermichael.github.io/cstuff/all/blogg/2013/10/15/po...](http://mosermichael.github.io/cstuff/all/blogg/2013/10/15/post-1.html)

------
sinkasapa
I really enjoyed this talk of his on analogy in human language.

[https://www.youtube.com/watch?v=n8m7lFQ3njk](https://www.youtube.com/watch?v=n8m7lFQ3njk)

------
atlanticus
I think a big part of the problem with AI is that you are trying to map a
digital model onto an analog system. There was a story on HN last year, which
I can't seem to find, about using a genetic algorithm on analog circuits to
evolve optimal pattern matching for certain images. The results were good, but
when they went to build another one it didn't work right, because of
unmeasured EM feedback and subtle differences between the individual circuits;
every circuit would have to run its own evolution, negating most of the
usefulness of the project. Maybe an analog model would be more appropriate.
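
The experiment described sounds like Adrian Thompson's evolved-hardware work from the 1990s, where winning genomes exploited the analog quirks of the one specific chip they were evaluated on. The genetic-algorithm core of such a setup can be sketched as follows; this is my own toy bit-string version with an invented `TARGET` fitness, not the actual circuit experiment.

```python
import random

# Minimal genetic algorithm: evolve a bit-string toward a target
# "circuit configuration" via truncation selection, one-point
# crossover, and point mutation. In the hardware experiment above,
# fitness was measured on one physical circuit, so the evolved genome
# could exploit that circuit's unmeasured analog quirks.

random.seed(1)
TARGET = [random.randint(0, 1) for _ in range(32)]  # stand-in fitness target

def fitness(genome):
    """Number of bits matching the target configuration."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=40, generations=200, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(len(TARGET))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]              # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), len(TARGET))
```

The algorithm optimizes whatever the fitness function actually measures; when that measurement is one physical device, the result need not transfer to a second, nominally identical one.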

------
dnautics
_When you read Fluid Concepts and Creative Analogies: Computer Models of the
Fundamental Mechanisms of Thought, which describes in detail this architecture
and the logic and mechanics of the programs that use it, you wonder whether
maybe Hofstadter got famous for the wrong book._

I cannot recommend "Creative Analogies" enough. I have purchased no fewer than
four copies (two for myself; two for others, including K. Barry Sharpless, who
once made a remark about AI that was reminiscent of some of the ideas in CA)
over the years. It's even better than "Surfaces".

------
ArbitraryLimits
It's interesting to see this article portray Hofstadter as the last of the
dying breed of GOFAI researchers.

When I was in college (and GOFAI was still alive) GOFAI researchers themselves
portrayed him as very much an outsider.

------
jmilloy
I'm not convinced that Hofstadter is pursuing computers that think like
humans, so much as computers that _appear_ to think like humans. He abstracts
certain observable behaviors of the human mind (e.g. analogy), but there's no
guarantee that what a brain can observe about itself is what a brain is
actually doing. Does it make sense to ignore the underlying behavior of human
brains, and instead try to directly emulate a particular abstraction? We can't
let our romantic notions of what brains "do" get in the way.

~~~
hharrison
I agree with your point, but the solution isn't to instead model the
underlying biological nuts and bolts (assuming this is what you mean by
"underlying behavior of human brains"). Even if those approaches approximated
human behavior (which they don't), it wouldn't be all that informative. Just
as creating "artificial" intelligence by growing a brain in a petri dish
wouldn't be informative either. (Well, it wouldn't be informative as to what
the brain's job description is, although we could presumably learn plenty of
other things from the exercise.)

The better solution is to come up with creative experiments (with human
subjects) that can arbitrate between competing implementations. With modern
technologies like virtual reality, you can create almost any counterfactual
situation and use that to decide between theories that make the same
predictions "in the real world".

------
cjbprime
This is the article referenced as pending publication in
[http://www.aeonmagazine.com/living-together/james-somers-
web...](http://www.aeonmagazine.com/living-together/james-somers-web-
developer-money/), which is incidentally my favorite article about startups.

------
9wymanm
_The father of psychology, William James_

I was under the impression that Wilhelm Wundt was the father of psychology.

~~~
hharrison
Well, you might say Wundt was the father of structuralism and James the father
of functionalism -- a deep divide which is still felt today. I suppose this
shows the author's bias, implicit or not.

------
mempko
The saddest bit of all of this is that global warming will derail any future
AI breakthrough.

