
Moravec's paradox - kristiandupont
https://en.wikipedia.org/wiki/Moravec%27s_paradox
======
tgflynn
It's an interesting observation and certainly something one needs to be aware
of in thinking about intelligent systems but I'm not convinced that the notion
that perception is intrinsically harder than logic is quite true. I think the
difficulty of a problem is highly dependent on the representations and models
used. Computers were developed based on logical and mathematical principles so
it makes sense that they are, in some sense, "good" at these kinds of
problems. On the other hand, traditional logic is incredibly inefficient at
dealing with probabilistic and perceptual problems.

It's also not clear that there's really a valid comparison here. In order for
computers to recognize objects we need to program them to learn recognition on
their own (because programming them explicitly to do it would be far too
hard). When we program a computer to solve a logic problem the computer isn't
learning to solve that problem, it's the programmer, not the program that
"knows" how to solve it.

Trying to teach a neural network to play chess is probably much harder than
teaching it to recognize images (at least my very limited experiments suggest
this to be true).

~~~
jerf
"I'm not convinced that the notion that perception is intrinsically harder
than logic is quite true."

But... where's the room for doubt? This isn't an article from 1930 about
theoretical possibilities of what may happen someday when we have a lot of
computational power. This is an _observation_ about how perception has proved
to be much more complicated than more pure reasoning, one quite old, robust,
and well-established. The question is more about _why_ that is true than
_whether_ it is true.

In a further comment you reply about how you could not convince your neural
net to play chess... but again, we are not theorizing that computers _may_ be
good at chess someday. We live in a world in which, if they are not already
simply better than humans, we are only a couple of years away. Certainly better
than all but the absolute very best. Whereas we still get excited when we see
a robot that can walk up or down a normal, rocky hill _at all_.

Also, it baffles me how you think that explaining _why_ logic is easier than
perceptual problems is somehow disproof of that very fact. It doesn't matter
that it's "really" the programmer that knows how to play chess (even if I'd
observe the machine is still doing it better) when we still can hardly make
machines walk at all, regardless of whether the programmer or the computer is
the one "knowing", a concept that in this context comes perilously close to a
tautological assertion that if a computer can do it it must not be true
"knowing". If we were really good at both sensorimotor and logic, but with two
radically different toolkits, that might be an interesting point, but that's
not the world we live in.

~~~
cmccabe
Come up with an AI that can play the game of Go well. Then we'll talk.

~~~
jerf
That's just a snide comment, not a useful one. The difficulty of
symbolic/logic problems can be made unboundedly great by construction. Nobody
ever denied that. The point is that something like crossing a room and picking
up a pencil _seemed_ easy, and to this day is still a significant achievement
for robotics, still requiring fairly controlled circumstances.

It can be difficult to understand this from our present-day perspective where
we have so thoroughly internalized this idea that it has apparently passed
into invisibility for some of us. Go back and read Asimov's robot stories, in
which he has robots walking, talking, socially interacting with humans, even
pondering great ethical conundrums, while in the same fiction it requires
massive resources to attain the raw numerical computational power of a
Commodore 64.

~~~
cmccabe
Just because you discovered a few cool hacks that happened to work well in
very specific problem domains doesn't mean you've "solved" symbolic thought.
The way a chess program works is not the way humans play chess. It does a
brute-force search, rather than learning patterns.

Go is a good example of a game that can't effectively be brute-forced. There
are a lot of other examples out there; that was just the one I happened to
pick.
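
To make the brute-force point concrete, here is a toy but complete minimax for
misere Nim (take 1-3 stones; whoever takes the last stone loses). This is my
own toy game, not how any real engine is organized, but it is the same
exhaustive recursion a chess program builds on:

    # Exhaustive minimax for misere Nim: the same "dumb search" idea a
    # chess engine builds on (engines add alpha-beta pruning, move
    # ordering, and a static evaluation to stop at a fixed depth).
    def best_score(pile, my_turn=True):
        if pile == 0:
            # The previous player took the last stone, so they lost.
            return 1 if my_turn else -1
        scores = [best_score(pile - take, not my_turn)
                  for take in (1, 2, 3) if take <= pile]
        return max(scores) if my_turn else min(scores)

    print(best_score(10))  # 1: a pile of 10 is a win (take one, leave 9)

The reason this fails for Go: the branching factor is roughly 250 moves per
position versus about 35 for chess, and (at least as of this writing) nobody
has a good static evaluation for Go positions, so the tree can neither be
searched whole nor cut off early.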

 _The point is that something like crossing a room and picking up a pencil
seemed easy, and to this day is still a significant achievement for robotics,
still requiring fairly controlled circumstances._

Just because you failed to do one thing doesn't mean you succeeded in doing
another.

------
jpdoctor
Funny thing: I was a peon in Rod Brooks' group right at this transition. The
robot I worked on was a small monster, which would beam data offboard to a
Lisp machine for analysis (machine-vision problems, etc.). Just after I left,
Rod decided to go down the path of having small simple FSMs connected
together, which it turns out displayed some pretty complex behavior. Then
Colin Angle showed up.
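
(For anyone who hasn't run into it: that FSM approach became known as the
subsumption architecture. A loose toy sketch of the layering idea, with
made-up sensor names - each layer is trivial on its own, and the interesting
behavior comes from how they override each other:)

    # Toy subsumption-style control step; sensor names are invented.
    # Each layer is a simple reflex, and higher layers suppress
    # ("subsume") lower ones. No world model, no planner.
    def control_step(sensors):
        command = "wander"                 # layer 0: default behavior
        if sensors["bumper_pressed"]:
            command = "back_up_and_turn"   # layer 1: subsumes layer 0
        if sensors["battery_low"]:
            command = "seek_charger"       # layer 2: subsumes the rest
        return command

    print(control_step({"bumper_pressed": True, "battery_low": False}))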

Boy did I leave at the wrong time. :)

------
saulrh
Roboticist here. This is actually a huge issue in robotics and causes all
sorts of problems for real robots. For example, my current project is to teach
robots how levers work.

~~~
joe_the_user
Well,

Teach as in "program", right?

It seems like the problem is that computers and robots don't learn in the same
fashion as humans, and thus the process of "teaching" them things doesn't
leave them knowing concept X the way humans do - i.e., they don't learn
generalizations in such a way that they are "ready at hand" when the
appropriate situation arises.

~~~
saulrh
That's one of the underlying causes of Moravec's Paradox, yes. Humans have a
very difficult time with precision and regularity, which are exactly what
current computer systems are best at. Computers, by contrast, have a hard time
with generalization, and creativity and physically interacting with the real
world are some of the most general tasks possible.

------
baddox
This seems too obvious to be considered a paradox. The article says that the
discovery ran contrary to traditional assumptions, but I wonder whether this is
true, and if so, why it would be the case. Perhaps I just have the luxury of
hindsight, but it seems like after the advent of electronic computers, it
would quickly become obvious that computers could vastly outperform humans in
things like multiplication, or counting words in a large text document, or
finding a correct path through a digital maze.

Besides, the distinction between "high-level reasoning" and "low-level
sensorimotor skills" seems fairly weak. Checkers already starts to blur the
line: the problem space can be modeled as pattern recognition and tactics
(like how humans model their own gameplay), or it can be modeled as a "dumb
search" through a decision tree (like how a computer algorithm might play).
Then you get to something like chess, which has a prohibitively large decision
tree to do a "dumb search," then face recognition, then natural language
processing, etc.

~~~
shmageggy
It's definitely true (that "the discovery ran contrary to traditional
assumptions"), and they give supporting evidence for this in the article. The
history of AI bears this out quite explicitly.

Also, the decision tree for chess might be intractable to exhaustively search,
but a "dumb search" is exactly how it's done, and it is currently vastly more
powerful than any competing method. And you really can't just jump from chess
to face recognition and NLP -- they are _completely_ different problems. To
wit, chess has perfect information; the entire state of the game is known at
all times, and the representation is clearly defined and compact, whereas none
of this is true for sensory tasks. This means the range of techniques
available to solve each type of problem is completely disparate, and that there
is almost nothing that you can take from a chess program and apply to "lower
level" AI tasks.

------
slacka
This is exactly the kind of flawed behaviorist-centric thinking that has held
AI back for the past 50 years. Until AI researchers get over their obsession
with mathematical constructs that have no foundation in biology, like symbolic
AI and Bayesian networks, we will never have true AI. Once you realize this, you
see that this “paradox” isn't really a paradox at all.

The brain is nothing like the von Neumann architecture of a CPU. It is a
massively parallel prediction machine. High-level thought occurs in the
cerebral cortex, whose base functional unit, the Hierarchical Temporal Memory
unit, is composed of about 100 neurons. Once we have figured out how these
units are wired in the brain, “difficult” problems like pattern recognition
will be trivial, and “trivial” problems like playing checkers will require
many hours of training and many HTM units, just like in a real human brain.
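
(To caricature the prediction-machine idea in a few lines - this is
emphatically not HTM, whose units are far richer, just the barest version of
"learn transitions, then predict successors":)

    from collections import defaultdict, Counter

    # Toy caricature of "the brain as a prediction machine": learn
    # which token tends to follow which, then predict the successor.
    # This is NOT Hawkins' HTM, just the underlying intuition.
    transitions = defaultdict(Counter)

    def observe(prev, nxt):
        transitions[prev][nxt] += 1        # strengthen the transition

    def predict(prev):
        seen = transitions[prev]
        return seen.most_common(1)[0][0] if seen else None

    for a, b in zip("intelligence", "intelligence"[1:]):
        observe(a, b)
    print(predict("i"))  # 'n': the most frequently seen successor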

For anyone interested in this, I highly recommend Jeff Hawkins' book, “On
Intelligence”. <http://books.google.com/books?isbn=0805078533>

~~~
CJefferson
This is insulting to AI researchers in the extreme.

They know they are not making as good progress on "true AI" as they would
like. But many researchers have tried very hard in many directions.

If you are so sure true AI is fairly trivial, why not do it? Even small
amounts of progress would make you very famous.

~~~
martinced
"This is insulting to AI researchers in the extreme."

Well, yes and no. If you want to put it that way, the entire book "On
Intelligence" (which I highly recommend too) is insulting to AI researchers.

The book basically explains why we're on the wrong track.

To me AI is about reasoning and creativity and none of the stuff we currently
have comes anywhere close to that.

------
tathastu
This may also be an artifact of the fact that, in terms of "intelligence", we
have focused mostly on supervised learning as opposed to unsupervised
learning; recent reversals of this trend have proved surprisingly powerful [1].

Consider this: While reading, most humans use a different part of the brain to
register consonants and vowels [2]. No matter how much we like to _think_ we
learn language in an orderly fashion, that is not the case. Our reading and
speaking skills are simply built over time and experience by having neurons
connect as we experience visual words and other people talking; formal
language instruction probably plays a secondary role of attaching labels to
already built neural networks.

[1] [http://www.nytimes.com/2012/11/24/science/scientists-see-
adv...](http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-
deep-learning-a-part-of-artificial-intelligence.html)

[2]
[http://www.nature.com/nature/journal/v403/n6768/full/403428a...](http://www.nature.com/nature/journal/v403/n6768/full/403428a0.html)

------
Confusion
What does it take to have a program come up with the theory of relativity,
given the knowledge that was available in 1904?

I think there are all kinds of reasoning skills that we've never been able to
test, because they depend on perception and motor skills. It seems possible
those would take many more computational resources. I find it hardly
surprising that feeding a program abstractions and allowing it to reason about
those abstractions is simple. It's the interacting with the real world,
correlating abstractions with the real world and coming up with useful new
abstractions, that's hard.

I don't think we'll ever have an AI until something is built that can freely
interact with the world, freely gather data and freely modify itself to
enhance all its abilities. An AI without pressure sensors, one that has never
touched sand, will never understand the universe.

------
joe_the_user
I would say that this is true only as long as this "adult intelligence" pretty
much remains within a single "logical frame"
([http://en.wikipedia.org/wiki/Frame_%28artificial_intelligenc...](http://en.wikipedia.org/wiki/Frame_%28artificial_intelligence%29)).

I would claim that while a single logical frame is easy to simulate on a
digital computer, creating and balancing _multiple_, not-necessarily-consistent
frames is very difficult and requires as much computer or brain activity as
the also-difficult work of raw input processing.

One might argue that neural systems began as very different systems from
digital computers, but the evolution of the large human brain has allowed them
to emulate discrete logic, including a computer's digital logic, while still
balancing the multitudes of environmental constraints that neural systems have
excelled at for millions of years - and has let "us" conceive of, and even
build, computers that perfect this discrete logic. Pretty amazing.

------
thyrsus
Restatement in pictures: <http://abstrusegoose.com/496>

------
c8ion
This doesn't seem very paradoxical to me, and at a layman's level I think one
good parallel between a human's "operation" and a computer's operation with
regard to this question is as follows . . .

Even as a non-programmer (I am in fact a programmer, but suppose I weren't), I
might relate well to an ordinary desktop/laptop running my Excel spreadsheets.
I can create a
spreadsheet, enter data in cells, enter formulae, format the content
beautifully, specify and view charts of the data I'm entering and information
I'm computing. I might be able to respect and appreciate the beauty and
complexity of how the spreadsheet program was implemented in an abstract
sense. I might describe to another person my ideas about how the spreadsheet
program was created, its major features and concerns, and its obvious
complexity. What I'd be missing though, likely, is (a) the complex interface
between what I see and what supports that experience behind the scenes; and
(b) the 50+ years of computing technology under the hood that has evolved to
support my narrow and visible relations with my Excel spreadsheets.

From the user's view, the Excel spreadsheets, Windows Explorer, the Start
button, etc., are the aspects of the computer analogous to a human's thought
processes. They're visible and explainable. The user might have some vague
notion that files are stored on disk, that there's something called a CPU,
that does the computer thing, etc. The user has no clue, though, that the
Excel spreadsheet program itself contains but a very small portion of the
effort to make its visible manifestations happen. There's an entire support
system: file system, CPU, memory, buses all over the place, GPU, video
display, chips, specialized interfaces to I/O and other subsystems, ASICs,
semiconductor physics, electricity, magnetism, etc. The hardware, firmware,
and software for the latter have had 50 years to evolve and mature. To a
normal user these aspects aren't understandable. They understand Excel.

And so for us, we can understand and describe human thought and cognition in
an abstract way. But most thought is below the level we're conscious of, and
supporting that thought is an entire interface with the physical elements of
the body, its nervous system and autonomic functions, and the interface of
these with the brain.

------
tocomment
I'd say programming is a high-level skill that computers can't do.
Counterexample?

~~~
kragen
The hard (for humans) part of programming is already being done by your
compiler, linker, version control system, IDE, interpreters, etc. All that's
left for you is the parts that are hard for computers: deciding what the
program should do, guessing which libraries are worth the risk that they might
be buggy, discovering gaps between what you decided the program should do and
what its users need, figuring out how to make it understandable to users,
getting people to try it, and so on.

------
Retric
It's all in how you think about the problem.

Brains are very good at fuzzy, highly parallel tasks and bad at sequential
ones. Computers suck at those fuzzy parallel tasks, but are rather good at
accurate sequential ones. People are easy to train individually; computers
take a lot of up-front effort, but after that it's easy.

------
ilaksh
Sounds like people posting or contributing to an article like this are aiming
at understanding artificial general intelligence. So just search for that.
Artificial General Intelligence or AGI.

~~~
martinced
It's a bit sad that instead of acknowledging that AI has nothing at all to do
with intelligence, AI researchers decided to coin a new term ("strong
AI").

So you now have two terms: "AI" and "strong AI" (AGI if you wish) that are
totally unrelated.

Had people been honest, "strong AI" would be "AI", and the "AI" we have now
would be called "fake AI" or "pink-unicorn AI" ; )

------
minoru
Sorry for going a little bit off topic here, but certainly there are works of
science fiction based on this paradox — after all, it’s more than thirty years
old! Any suggestions?

------
justhw
Certainly true, but perception and mobility, as basic as they are, are a lot
more complex in nature than intelligence.

------
osetinsky
an interesting example of this (machine listening):

<http://en.wikipedia.org/wiki/Source_separation>
<http://en.wikipedia.org/wiki/Cocktail_party_effect>

------
msandford
It's only difficult because computers are digital devices and all the "hard"
things for computers/robots to do aren't done in hardware. Trying to have a
single CPU do all of the things a robot needs to do is asinine.

Making a robot hand pick up an egg and a cup of coffee and turn a wrench isn't
difficult provided that you build in feedback loops and low-level "firmware"
similar to what the brain provides for us, unconsciously.

But if you did that it would take tens of kilowatts to run a halfway decent
robot. And that's clearly ridiculous! So nobody does it.

