

Your Brain on Metaphors - jawon
http://chronicle.com/article/Your-Brain-on-Metaphors/148495/

======
NotAtWork
> Since computers don’t have bodies, let alone sensations, what are the
> implications of these findings for their becoming conscious—that is,
> achieving strong AI? Lakoff is uncompromising: "It kills it."

This just makes me think he doesn't understand either brains or AI.

I also don't get the insistence on a 'body'. If we weren't planning on having
the AI totally isolated, and intended to, say, talk to it in order to see if
it was an AI, then we were already proposing to give it senses right from the
start.

In fact, I don't think I've seen a single proposal for an AI that didn't give
it at least one external sense and many internal ones. I don't see why we
would think it would have that much trouble building metaphors.

As Lera Boroditsky says:

> If you’re not bound by limitations of memory, if you’re not bound by
> limitations of physical presence, I think you could build a very different
> kind of intelligence system

> I don’t know why we have to replicate our physical limitations in other
> systems.

------
dragonwriter
Here's where things go wildly wrong:

> Since computers don't have bodies, let alone sensations

Computers are not non-physical, they definitely have bodies (the physical
machines which include the circuitry for executing their software and its
necessary support mechanisms); they also can, and often do, have sensory
systems providing inputs regarding the state of the world both external (e.g.,
cameras, microphones) and internal (e.g., temperature sensors) to their
"bodies".

It may be that the "bodies" of current computers are structurally dissimilar
to human bodies in ways which are detrimental to human-style cognition -- it's
certainly true that they aren't built on the same kind of biomechanical design
and it may well be that the web of biomechanical feedback loops in the body is
important to human intelligence and isn't readily simulated in systems using
the technologies used for modern digital computers. But even if that's true,
it doesn't say we can't have AI, it just means our AI may need to be built on
a different set of technologies, e.g., perhaps using biological rather than
silicon substrates. But engineering biological systems is something we _can_
do, and with increasing facility.

The belief that AI is physically impossible -- rather than just a very hard
engineering problem -- is equivalent to the belief that intelligence is,
itself, not a phenomenon governed by the laws of the physical universe, but
magic that intrudes into the physical universe from outside and cannot be
reproduced by physical means.

~~~
mr_luc
I agree with your comment, but it makes me think:

Even if we can have that kind of AI, it's not a given that it will mean any of
the things that the Church Of Singularity types think it will mean.

If it turns out to be something that's due to an irreducible set of
interacting natural processes, it may not be possible at all to modularize it
and use it in a way amenable to the visions of those who imagine a world of
clean AI that can understand and reason about what we mean, learn to program,
outperform humanity, and suddenly ... Skynet! (Or singularity. Or whatever.)

~~~
dragonwriter
> Even if we can have that kind of AI, it's not a given that it will mean any
> of the things that the Church Of Singularity types think it will mean.

Sure, but that's because the features that the Singularity relies on are
_different_ from the kind of strong AI that Lakoff, et al., argue that we
might not be able to have.

The Singularity is more about hybrid intelligence -- artificial systems that
interact with human intelligence to radically transform human society and
interactions. Strong AI -- in the sense of artificial systems that, viewed
independently, are indistinguishable in behavior from human intelligence -- is
neither necessary nor sufficient for the Singularity; it's nearly a complete
irrelevancy. In fact, strong AI _itself_ provides _by definition_ nothing that
having humans in the first place doesn't provide. Its only relevance is that
there is a kind of intuitive argument that a likely side effect of the
research necessary to get to strong AI would be learning a lot more about how
human intelligence works and how external influences can interact with it --
but you don't need to actually _get to_ strong AI to get that effect.

------
fensterbrett
> _If cognition is embodied, that raises problems for artificial intelligence.
> Since computers don’t have bodies, let alone sensations, what are the
> implications of these findings for their becoming conscious—that is,
> achieving strong AI? Lakoff is uncompromising: "It kills it." Of Ray
> Kurzweil’s singularity thesis, he says, "I don’t believe it for a second."
> Computers can run models of neural processes, he says, but absent bodily
> experience, those models will never actually be conscious._

Well, then let's simulate the body as well once we've got the brain right.

~~~
thibauts
We're getting one step further, but they're still missing the point.
Artificial intelligence or consciousness has no reason to be precluded by a
lack of embodiment. Computers can have sensors, and that's all that's needed.
Some life forms have sensors very different from ours, and that most probably
doesn't prevent them from being intelligent, or perhaps even conscious.

What a lack of _human_ embodiment prevents in AI is strong communication with
_humans_. Why? Because, as Lakoff hinted at, they'll lack a grasp of the
linguistic twists and metaphors that tie our language to our everyday
experiences. The problem will be similar to the one we would have trying to
communicate with a bird or an insect: we don't ground our communication in
the same things at all.

~~~
im3w1l
There are humans living _right now_ who have a different set of 'sensors'
from the majority. I'm talking about the blind. The deaf. Those without a
sense of smell. Those who cannot feel pain.

Yet we manage to communicate with them.

------
GrantS
I almost didn't read the article due to the headline but the fMRI studies
(which are really the focus of the article) are fascinating, and there is a
surprisingly deep discussion about the line between idioms and metaphors --
particularly how this affects whether the brain engages the motor cortex when
processing language and how it might vary by individual.

Note that the headline isn't actually implying machines can't be intelligent
but that their internal states won't correspond to human states unless their
cognition is grounded in a similar set of sensations. But this is already the
case with humans from vastly different cultural contexts. The words coming out
of your mouth only mean the same thing to others to the extent that they share
a similar history of experiences when interacting with the world.

So I can certainly believe that an AI whose internal concepts are based on
embodied/simulated experience would seem more relatable than one raised purely
on books, but that's true of humans too so no big surprise and not an
insurmountable barrier as one of the quoted sources in the article suggests.
Non-embodied agents will speak in idioms and embodied agents will speak in
metaphors.

------
Millennium
My personal take is that while humanlike AI may be developed, it won't happen
on computers as we know them today. The fundamental mechanics of computation
and thought appear to be different enough that I suspect an accurate
simulation of humanlike thought may very well be out somewhere in NP. This is
not to say that machines capable of humanlike AI won't be invented; they just
won't be recognizable as computers.

They might not even make computers obsolete. If a humanlike AI's fundamental
model of thought is closer to ours than to a computer's, then the AI might
turn out not to be very much better at math-oriented tasks than we are
(proportionally speaking). It would therefore still need to use computers in
mostly the same ways we do. Both types of machines might be incorporated into
a single unit (AI on the left, computer on the right, for example) to speed up
that process.

~~~
justinpombrio
So far as we know, it isn't possible to solve NP-complete problems quickly
using _any_ means. Even quantum computers, which are fundamentally more
powerful than the computers we have now, are not known to be able to solve NP-
complete problems in polynomial time (i.e., efficiently). So it seems unlikely
that simulating the brain is out in NP; it's probably no worse than BQP (what
quantum computers can do).
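
For reference, the containments that are actually known here (whether NP-
complete problems fall inside BQP is open in both directions, though it's
widely conjectured that they don't):

    P  \subseteq BQP \subseteq PSPACE
    P  \subseteq NP  \subseteq PSPACE
    NP \subseteq BQP ?   % open question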

------
evunveot
A webcomic called _Nine Planets Without Intelligent Life_ had a clever take on
this topic (years ago). Quoting
http://www.bohemiandrive.com/comics/npwil/19.html
(drag to read; the illustrations are highly entertaining):

How and why do robots eat?

The answer to the first part of this question is simple: Robots eat the same
way humans eat.

As to the why, it would be helpful to think of a saying of the late-human AI
programming community.

Building an artificial intelligence that appreciates Mozart is easy ...
building AI that appreciates a theme restaurant is the real challenge.

In other words, base desires are so key to human behavior that if they are not
simulated ... convincing artificial intelligence is impossible.

------
melling
Never is such a long time, and the brain isn't performing magic.

~~~
mmmbeer
You are then assuming a purely materialist mind - that there is no "spiritual"
component, no part of it that can't be understood by physical and/or chemical
science (i.e., that there is no 'soul'). I personally believe in a soul (and
true free will), so I don't think we'll ever have AI.

~~~
justinpombrio
What's a soul? (I'm actually curious what you think.) Does my roommate have a
soul? How can I tell? Do I have a soul? How can I tell?

~~~
arethuza
Also - does a cat have a soul? What about a chimpanzee?

------
bwooceli
<tldr>Train the AI of the future on Amelia Bedelia books</tldr>

I disagree with how they draw a distinction between metaphor and literal
constructs in language. As we are all in our own heads experiencing the world,
language is an interface to pass meaning from one reality to another.

Over time, humanity has used language to arrive at a collective consensus on
the meaning of words that describe shared experiences. At this point, all
language sits on a metaphorical scale where the depth of one's personal
knowledge determines the success of understanding the input. This is coupled
to a positive/negative reinforcement mechanism that builds a history of
interactions, which helps determine what language will convey the intended
meaning in an appropriate context.

It does not seem that these two features, knowledge graph and track record,
are outside the realm of possibility for computation. Given a deep enough
knowledge graph and a means to query outcomes of past experience, it seems
that this feature of "seeming human" would be possible.
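
As a minimal sketch of those two features (all names hypothetical; an
illustration of the idea, not a claim about how such a system would really be
built), a Python toy might pair a graph of concept relations with a score
table of which phrasings have worked before:

    # Toy sketch: a "knowledge graph" of concept relations plus a "track
    # record" of reinforced interactions. All names are hypothetical.
    from collections import defaultdict

    class ToySpeaker:
        def __init__(self):
            self.graph = defaultdict(set)   # concept -> related concepts
            self.scores = defaultdict(int)  # (phrase, context) -> net score

        def relate(self, a, b):
            self.graph[a].add(b)
            self.graph[b].add(a)

        def reinforce(self, phrase, context, success):
            # positive/negative reinforcement from each interaction
            self.scores[(phrase, context)] += 1 if success else -1

        def choose_phrase(self, candidates, context):
            # prefer the phrasing that has conveyed meaning best so far
            return max(candidates, key=lambda p: self.scores[(p, context)])

    s = ToySpeaker()
    s.relate("grasp", "understand")  # a metaphorical link in the graph
    s.reinforce("I grasp it", "casual", success=True)
    s.reinforce("I comprehend it", "casual", success=False)
    print(s.choose_phrase(["I grasp it", "I comprehend it"], "casual"))
    # -> "I grasp it"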

------
Aqueous
I'm not sure I buy the premise that a brain even understands a literal
sentence completely by simulating it, in all cases. I think that is a
_component_ of understanding, and can aid understanding, depending on both the
sentence and our direct experience of the situation the sentence describes.

But I don't think that is the whole story because I don't think that permits
_partial understanding._ What explains our ability to understand (or
partially understand) a sentence describing a novel situation we've never
experienced, involving objects or people we've never seen? As the article
points out, sometimes we are able to understand sentences with no associated
motor activity or visual experience.

A huge example of this comes soon after we're born: as we start to develop, we
begin to understand sentences even though we're not being formally taught a
language, only exposed to it. The simulation theory seems not to explain that
process of 'bootstrapping.'

------
tim333
> Of Ray Kurzweil’s singularity thesis, he says, "I don’t believe it for a
> second." Computers can run models of neural processes, he says, but absent
> bodily experience, those models will never actually be conscious.

Ironically, Kurzweil is big into the stuff the article goes on about, like
bodily sensory input, metaphor, and using fMRI to see what is going on. From
his recent book:

"Inputs from the body (estimated at hundreds of megabits per second),
including that of nerves from the skin, muscles, organs, and other areas,
stream into the upper spinal cord." ... "Key cells called lamina 1 neurons
create a map of the body."

"A key aspect of creativity is the process of finding great metaphors -
symbols that represent something else. The neocortex is a great metaphor
machine, which accounts for why we are a uniquely creative species."

------
jbarrow
Jeff Hawkins is a huge proponent of strong AI, and I strongly encourage anyone
interested in the subject to read his 2004 book, On Intelligence. In it he
makes several cogent points about the future of true AI, a lot of which hit on
points brought up in the article.

He discusses the origin of thought and imagination as simulations, which is in
line with the article. He sees this in a different light, however: not only
are simulations necessary for brains to produce thought, but they are
achievable given the right computational system.

He also argues that embodiment may not (and in his view, likely won't) take a
humanlike form. Rather, the AI, like a human, will be able to plastically
adapt to new senses (say, weather sensors) to understand the world in a way we
can't even fathom.

------
dghf
> Take the sentence "Harry picked up the glass." "If you can’t imagine picking
> up a glass or seeing someone picking up a glass," Lakoff wrote in a paper
> with Vittorio Gallese, a professor of human physiology at the University of
> Parma, in Italy, "then you can’t understand that sentence."

Taken to its logical conclusion, doesn't that imply that someone blind from
birth can't understand visual metaphors or idioms: e.g., "I see what you
mean?"

~~~
tomp
No. To "pick up the glass" means something in this world, and you can imagine
yourself doing it, which is why you "understand" it. "I see what you mean"
means nothing different from "I get it", which even a blind person can
understand. A better example would be, you can't really understand "to have
sex" before you actually experience it (and even then, only for your own sex).

~~~
marktangotango
This little interchange highlights something I find interesting: we don't
really 'get it', so why should machines? I.e., all an AI has to do is fake it.
How much miscommunication and lack of understanding is there in human
interaction? A lot, I think.

So all an AI has to do is get close enough. Talking to such an AI would be
like talking to a dull witted friend who's really good at arithmetic.

~~~
tomp
Au contraire, I think that to make good AI, an AI really has to "get it" -
i.e., it has to have an accurate, functional internal representation of the
world (i.e., "imagination"). Otherwise, the number of concepts it would need
to memorize to be able to "fake it" would be just too huge; for example, to
know that "he went through the door" means that "he's not in the room any
more", and any number of similar, trivial facts.

AFAIK, that's one of the main problems in AI right now. Computer vision is
getting really good, but at this point, it's completely useless - sure, the
computer is able to recognize a bottle on a table, but it doesn't have an
internal representation of the 3D world that would tell it that the glass
bottle can be lifted off the table (i.e., that bottle and table are not one
object), and that it should be handled carefully (as it can break, unlike,
e.g., a ball).

~~~
dghf
> Computer vision is getting really good, but at this point, it's completely
> useless

Driverless car? (OK, that's lidar rather than vision per se, but still surely
depends on constructing a model of the world based on external sensory input.)

~~~
tomp
Yes, but that's a really simple model; it's to true 3D vision what Wolfenstein
3D was to 3D graphics. The only things driverless cars need to do are (1)
recognize obstacles, and (2) recognize some predetermined objects (traffic
lights, ...) and then apply predetermined behavior.
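
In the spirit of that simplification (class names and actions are purely
hypothetical illustrations), the "predetermined behavior" half could be little
more than a lookup table:

    # Toy sketch of "predetermined objects -> predetermined behavior".
    RESPONSES = {
        "red_light": "stop",
        "green_light": "proceed",
        "stop_sign": "stop_then_proceed",
        "obstacle": "brake_and_avoid",  # anything unclassified but in the way
    }

    def react(detected_class: str) -> str:
        # no model of what the object *is* - just a canned response
        return RESPONSES.get(detected_class, "ignore")

    print(react("red_light"))  # -> "stop"
    print(react("bottle"))     # -> "ignore": no concept of graspable objects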

------
stared
I am a big fan of conceptual metaphors. But I don't get this part: "why AIs
may never be humanlike".

We can simulate embodiment and simulate structures for generating analogies.
And, actually, it may be simpler than a Platonic approach in which words have
"an objective meaning".

