Your Brain on Metaphors (chronicle.com)
40 points by jawon on Sept 5, 2014 | 48 comments



> Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: "It kills it."

This just makes me think he understands neither brains nor AI.

I also don't get the insistence on a 'body'. If we weren't planning on keeping the AI totally isolated, and intended to, say, talk to it in order to see if it was an AI, then we were already proposing to give it senses right from the start.

In fact, I don't think I've seen a single proposal for an AI that didn't give it at least one external sense and many internal ones. I don't see why we would think it would have that much trouble building metaphors.

As Lera Boroditsky says:

> If you’re not bound by limitations of memory, if you’re not bound by limitations of physical presence, I think you could build a very different kind of intelligence system

> I don’t know why we have to replicate our physical limitations in other systems.


> If cognition is embodied, that raises problems for artificial intelligence. Since computers don’t have bodies, let alone sensations, what are the implications of these findings for their becoming conscious—that is, achieving strong AI? Lakoff is uncompromising: "It kills it." Of Ray Kurzweil’s singularity thesis, he says, "I don’t believe it for a second." Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Well, then let's simulate the body as well once we've got the brain right.


We're getting one step further, but they're still missing out. Artificial intelligence or consciousness has no reason to be precluded by a lack of embodiment. Computers can have sensors, and that's all that's needed. Some life forms have sensors very different from ours, and that most probably doesn't prevent them from being intelligent or maybe even conscious.

What a lack of human embodiment prevents in AI is strong communication with humans. Why? Because, as Lakoff hinted at, they'll lack a grasp of the linguistic twists and metaphors that tie our language to our everyday experiences. The problem will be similar to the one we would have trying to communicate with a bird or an insect: we don't ground our communication in the same things at all.


There are humans living right now that have a different set of 'sensors' from the majority. I'm talking about the blind. The deaf. Those without a sense of smell. Those who cannot feel pain.

Yet we manage to communicate with them.


The embodiment problem is well discussed in the AGI literature, despite there being no consensus. This does not raise problems for "Strong AI" (AGI); it raises problems for people who do not understand that there are no scope limitations for AGI.

I do find it frustrating that topics in AGI are discussed as though no one has thought about them. This year, for example, there were about 90 attendees at the AGI conference in Quebec; however, numerous AGI topics were also discussed at both AAAI and the CogSci conference. So not only is the small group growing, but more is being discussed in the more mainstream groups.


We're getting closer... "Berkeley Lab researchers hope to accelerate this needle-in-a-haystack hunt with an innovative search engine that simulates the way scientists think." http://www2.lbl.gov/Science-Articles/Archive/sabl/2005/March...


Here's where things go wildly wrong:

> Since computers don't have bodies, let alone sensations

Computers are not non-physical; they definitely have bodies (the physical machines that include the circuitry for executing their software and its necessary support mechanisms). They also can, and often do, have sensory systems providing inputs about the state of the world both external (e.g., cameras, microphones) and internal (e.g., temperature sensors) to their "bodies".

It may be that the "bodies" of current computers are structurally dissimilar to human bodies in ways which are detrimental to human-style cognition -- it's certainly true that they aren't built on the same kind of biomechanical design, and it may well be that the web of biomechanical feedback loops in the body is important to human intelligence and isn't readily simulated in systems using the technologies of modern digital computers. But even if that's true, it doesn't say we can't have AI; it just means our AI may need to be built on a different set of technologies, e.g., perhaps using biological rather than silicon substrates. And engineering biological systems is something we can do, and with increasing facility.

The belief that AI is physically impossible -- rather than just a very hard engineering problem -- is equivalent to the belief that intelligence is, itself, not a phenomenon governed by the laws of the physical universe, but magic that intrudes effects into the physical universe from outside that cannot be reproduced by physical means.


I agree with your comment, but it makes me think:

Even if we can have that kind of AI, it's not a given that it will mean any of the things that the Church Of Singularity types think it will mean.

If it turns out to be something that's due to an irreducible set of interacting natural processes, it may not be at all possible to modularize and use it in a way amenable to the visions of those who imagine a world of clean AI that can understand and reason about what we mean, learn to program, outperform humanity, and suddenly ... Skynet! (Or singularity. Or whatever.)


> Even if we can have that kind of AI, it's not a given that it will mean any of the things that the Church Of Singularity types think it will mean.

Sure, but that's because the features that the Singularity relies on are different from the kind of strong AI that Lakoff, et al., argue that we might not be able to have.

The Singularity is more about hybrid intelligence -- artificial systems that interact with human intelligence to radically transform human society and interactions. Strong AI -- in the sense of artificial systems that, viewed independently, are indistinguishable in behavior from human intelligence -- is neither necessary nor sufficient for the Singularity; it's nearly a complete irrelevancy. In fact, strong AI by definition provides nothing that having humans in the first place doesn't provide. The only relevance it has is an intuitive argument that a likely side effect of the research needed to get to strong AI would be learning a lot more about how human intelligence works and how external influences can interact with it -- but you don't need to actually reach strong AI to get that effect.


I almost didn't read the article due to the headline but the fMRI studies (which are really the focus of the article) are fascinating, and there is a surprisingly deep discussion about the line between idioms and metaphors -- particularly how this affects whether the brain engages the motor cortex when processing language and how it might vary by individual.

Note that the headline isn't actually implying machines can't be intelligent but that their internal states won't correspond to human states unless their cognition is grounded in a similar set of sensations. But this is already the case with humans from vastly different cultural contexts. The words coming out of your mouth only mean the same thing to others to the extent that they share a similar history of experiences when interacting with the world.

So I can certainly believe that an AI whose internal concepts are based on embodied/simulated experience would seem more relatable than one raised purely on books, but that's true of humans too, so it's no big surprise and not the insurmountable barrier that one of the quoted sources in the article suggests. Non-embodied agents will speak in idioms and embodied agents will speak in metaphors.


My personal take is that while humanlike AI may be developed, it won't happen on computers as we know them today. The fundamental mechanics of computation and thought appear to be different enough that I suspect an accurate simulation of humanlike thought may very well be out somewhere in NP. This is not to say that machines capable of humanlike AI won't be invented; they just won't be recognizable as computers.

They might not even make computers obsolete. If a humanlike AI's fundamental model of thought is closer to ours than to a computer's, then the AI might turn out not to be very much better at math-oriented tasks than we are (proportionally speaking). It would therefore still need to use computers in mostly the same ways we do. Both types of machines might be incorporated into a single unit (AI on the left, computer on the right, for example) to speed up that process.


So far as we know, it isn't possible to solve NP-complete problems quickly using any means. Even quantum computers, which are fundamentally more powerful than the computers we have now, are not known to be able to solve NP-complete problems in polynomial time (i.e., efficiently). So it seems unlikely that simulating the brain is out in NP; it's probably no worse than BQP (what quantum computers can do).
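
For reference, a sketch of what's actually known about the relevant classes; none of these inclusions is known to be strict, and the NP-vs-BQP relationship is open in both directions:

    P \subseteq NP \subseteq PSPACE
    P \subseteq BQP \subseteq PSPACE
    % Neither NP \subseteq BQP nor BQP \subseteq NP has been proven.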


I think you meant to say NP-hard, not NP.

Many NP problems are easy, so saying "out somewhere in NP" isn't saying very much.


So, uh, in what sense are you using "NP" here?


A webcomic called Nine Planets Without Intelligent Life had a clever take on this topic (years ago). Quoting http://www.bohemiandrive.com/comics/npwil/19.html (drag to read; the illustrations are highly entertaining):

How and why do robots eat?

The answer to the first part of this question is simple: Robots eat the same way humans eat.

As to the why, it would be helpful to think of a saying of the late-human AI programming community.

Building an artificial intelligence that appreciates Mozart is easy ... building AI that appreciates a theme restaurant is the real challenge.

In other words, base desires are so key to human behavior that if they are not simulated ... convincing artificial intelligence is impossible.


Never is such a long time and the brain isn't performing magic.


Well, consciousness from non-conscious materials is as close to magic as it gets...


Light from non-luminous materials is as close to magic as it gets.. (lamps)

Computations from non-computing materials is as close to magic as it gets.. (computers)

Movement from non-moving materials is as close to magic as it gets.. (engines)


Nope, just consciousness from non-conscious materials.

There are no non-moving materials. Engines move (e.g., pistons), and there's molecular and electron flow. Now, if you had said movement from materials at zero kelvin, actually non-moving materials, that would really be magic. Or light without photons (which ARE luminous).

The only really valid analogy, IMO, is "computations from non-computing materials". But the materials are indeed computing -- computing just means having a state, and being able to keep it or change it under certain conditions.
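
As a toy sketch of that minimal sense of "computing" (all names here are invented for illustration):

    # A state that is kept, or changed under a certain condition.
    class Toggle:
        def __init__(self):
            self.state = 0                   # keep a state

        def step(self, signal):
            if signal == "flip":             # change it under a condition
                self.state = 1 - self.state
            return self.state                # otherwise keep it as-is

    t = Toggle()
    print(t.step("flip"), t.step("noop"), t.step("flip"))  # 1 1 0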


Then 'as close to magic as it gets' is 'not magic', and there isn't really a conflict here.


Having magic in your worldview indicates your lack of understanding, not the existence of magic.


If you don't accept that there is something we haven't yet explained because we haven't found the right paradigm for it, you're stopping the evolution of science at its current level.

At the moment science has a materialistic view, but for all we know, in 100 years we'll discover things that we're unable to imagine now, as we are thinking within science's current principles.

So magic is just undocumented science :P


>At the moment science has a materialistic view,

And for all we know in 100 years we'll be able to causally interact with something nonphysical? Scientific epistemology is based on where and when we can have causal contact, not on assuming that things are "material".


No, it indicates neither.

It actually indicates an area that is not currently explained by science and that some people theorise about in a "non-scientific" or anecdotal way. This has been illustrated many times by herbs used by traditional healers (relying on anecdotal reasoning), which were ignored by science until the technology existed to test them for medicinal purposes.

Traditional forms of human knowledge ("magic") may not be "scientific", but that does not indicate a lack of understanding. The movement of epistemology from magic to science is a spectrum, not a series of hard stops. The world is grey, not black and white.


melling has figured out the secret of how the brain originates consciousness, how it transfers short-term memory into long-term, how much and what percentage of each lobe controls our personality, whether all aspects of cognition are computational or not, and how perception works.

And he deems there is no magic here. Move along now.

/s Seriously, these are all areas where scientific opinion about the brain is changing constantly. To belittle the functions of the brain when there is still so much more research to be done is ignorant.


It doesn't matter whether all aspects of cognition are computational. If brains run on non-computational quantum gravity juju, then we can make machines that run on quantum gravity juju. To make AI impossible, it would take literal magic, a decree from Cthulhu saying that certain physical effects are only allowed inside human brains. And even then I guess we'd find a workaround eventually ;-)


There was no belittling in melling's comment. Even with the little knowledge we have today about how the brain functions, it is perfectly reasonable to hold the materialist position that everything happening in the brain is built on the laws of physics as we understand them today, and can therefore be simulated in principle.

Whether a perfect brain simulation, or a "truly thinking" AI[0] is ever built is therefore not so much a question of whether it is possible in principle, and much more a question of whether the (immense) resources required will ever be put in use to achieve this task. It is therefore a question of economics and political science rather than a question of neuroscience.

[0] Whatever that may mean.


No, I simply understand that "never" is a pretty big number. It's not 10 years, 100 years, 1000 years, or even 10,000 years. Never is way bigger than that. So, how long do you think it will take us to build the equivalent of the human brain?


You are then assuming a purely materialist mind - that there is no "spiritual" component or some part of it that can't be understood by physical and/or chemical science (i.e. that there is no 'soul'). I personally believe in a soul (and true free-will) so I don't think we'll ever have AI.


So basically, you believe in magic.

By the way, doesn't that 'soul' thingy reside in our brains? Or do you believe in the 21 grams 'theory'? http://en.wikipedia.org/wiki/21_Grams#Title

I really don't want to pick on you; I am just really interested in whether people who believe that stuff are consistent in their beliefs, or whether they just mix and match.


There is only a purely material mind. For every god, daemon, spirit, angel, ghost, miracle or pantheon that has ever been put forth for scrutiny, a mundane, non-spiritualistic world presented itself as the cause. A scientist need not resort to fairy tales to explain the workings of nature.


I'm curious, how do you define "true free-will"? Most people define it as "the ability to freely decide what I want", but that just means "the ability for my brain to figure out what the best course of action is, based on my previous experiences", which is just another way of saying "determinism" (even if it includes some non-material planes, e.g. "soul").


What's a soul? (I'm actually curious what you think.) Does my roommate have a soul? How can I tell? Do I have a soul? How can I tell?


Also - does a cat have a soul? What about a chimpanzee?


I share your belief in a soul, one that isn't adequately explained by either contemporary physics or chemistry.

I am more positive about the future, though. Just because we can't explain it right now doesn't mean we won't ever be able to explain it. Also, even if we aren't able to explain it, we may well succeed in AI anyway. By that I mean that we could create a computer that can perform the same mental feats as a human, without knowing whether or not it is a p-zombie.


Does your "spiritual component" figure out metaphors? Because that's the problem we're trying to solve.


<tldr>Train the AI of the future on Amelia Bedelia books</tldr>

I disagree with how they draw a distinction between metaphor and literal constructs in language. As we are all in our own heads experiencing the world, language is an interface to pass meaning from one reality to another.

Over time humanity has used language to arrive at a collective consensus on the meaning of words that describe shared experiences. At this point, all language is on a metaphorical scale where the depth of one's personal knowledge determines how successfully the input is understood. This is coupled to a positive/negative reinforcement mechanism that builds a history of interactions, which helps determine what language will convey the intended meaning in a given context.

It does not seem that these two features, a knowledge graph and a track record, are outside the realm of possibility for computation. Given a deep enough knowledge graph and a means to query the outcomes of past experience, this feature of "seeming human" seems possible.
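
Purely as an illustration that those two features are computable, here is a toy sketch; the graph contents, the scoring, and every name are invented:

    # A small knowledge graph of candidate phrasings plus a track record of
    # which phrasings landed well with a given listener. Everything here is
    # invented for illustration.
    from collections import defaultdict

    knowledge_graph = {
        "argument": ["war", "dance"],        # candidate source domains
        "time": ["money", "river"],
    }

    # listener -> phrasing -> cumulative positive/negative reinforcement
    track_record = defaultdict(lambda: defaultdict(int))

    def reinforce(listener, phrasing, reward):
        track_record[listener][phrasing] += reward

    def choose_phrasing(concept, listener):
        candidates = knowledge_graph.get(concept, [])
        if not candidates:
            return concept                   # fall back to the literal term
        # prefer whatever has worked best with this listener so far
        return max(candidates, key=lambda c: track_record[listener][c])

    reinforce("alice", "war", +1)
    print(choose_phrasing("argument", "alice"))  # -> war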


I'm not sure I buy the premise that a brain understands even a literal sentence completely by simulating it, in all cases. I think that is a component of understanding, and can help, depending on both the sentence and our direct experience of the situation the sentence describes.

But I don't think that is the whole story, because I don't think it permits partial understanding. What explains our ability to understand (or partially understand) a sentence describing a novel situation we've never experienced, involving objects or people we've never seen? As the article points out, we are sometimes able to understand sentences with no associated motor activity or visual experience.

A huge example of this: soon after we're born and start to develop, we begin to understand sentences even though we're not being formally taught a language, only exposed to it. The simulation theory seems not to explain that process of 'bootstrapping.'


>Of Ray Kurzweil’s singularity thesis, he says, "I don’t believe it for a second." Computers can run models of neural processes, he says, but absent bodily experience, those models will never actually be conscious.

Ironically, Kurzweil is big into the stuff the article goes on about, like bodily sensory input, metaphor, and using fMRI to see what is going on. From his recent book:

"Inputs from the body (estimated at hundreds of megabits per second), including that of nerves from the skin, muscles, organs, and other areas, stream into the upper spinal cord." ... "Key cells called lamina 1 neurons create a map of the body."

"A key aspect of creativity is the process of finding great metaphors - symbols that represent something else. The neocortex is a great metaphor machine, which accounts for why we are a uniquely creative species."


Jeff Hawkins is a huge proponent of strong AI, and I strongly encourage anyone interested in the subject to read his 2004 book, On Intelligence. In it he makes several cogent points about the future of true AI, a lot of which hit on points brought up in the article.

He discusses the origin of thought and imagination as simulations, which is in line with the article. He sees this in a different light, however: not only are simulations necessary for brains to produce thought, but they are achievable given the right computational system.

He also argues that embodiment may not (and, in his view, likely won't) take a humanlike form. Rather, the AI, like a human, will be able to plastically adapt to new senses (say, weather sensors) to understand the world in ways we can't even fathom.


> Take the sentence "Harry picked up the glass." "If you can’t imagine picking up a glass or seeing someone picking up a glass," Lakoff wrote in a paper with Vittorio Gallese, a professor of human physiology at the University of Parma, in Italy, "then you can’t understand that sentence."

Taken to its logical conclusion, doesn't that imply that someone blind from birth can't understand visual metaphors or idioms, e.g., "I see what you mean"?


No. To "pick up the glass" means something in this world, and you can imagine yourself doing it, which is why you "understand" it. "I see what you mean" means nothing different than "I get it", which even a blind person can understand. A better example would be, you can't really understand "to have sex" before you actually experience it (and even then, only for your own sex).


This little interchange highlights something I find interesting: we don't really 'get it', so why should machines? I.e., all an AI has to do is fake it. How much miscommunication and lack of understanding is there in human interaction? A lot, I think.

So all an AI has to do is get close enough. Talking to such an AI would be like talking to a dull-witted friend who's really good at arithmetic.


Au contraire, I think that to make good AI, an AI really has to "get it" - i.e., it has to have an accurate, functional internal representation of the world (i.e., "imagination"). Otherwise, the number of concepts it would need to memorize to be able to "fake it" would be just too huge; for example, to know that "he went through the door" means that "he's not in the room any more", and any number of such similar, trivial facts.

AFAIK, that's one of the main problems in AI right now. Computer vision is getting really good, but at this point, it's completely useless - sure, the computer is able to recognize a bottle on a table, but it doesn't have an internal representation of the 3D world that would tell it that the glass bottle can be lifted off the table (i.e., that it's not one object with the table), and that it should be handled carefully (as it can break, as opposed to, e.g., a ball).
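
As a sketch of what such a representation would add on top of a bare vision label (the fields and values are invented for illustration):

    # Attach affordances to a recognized object; a vision label alone carries none.
    from dataclasses import dataclass

    @dataclass
    class WorldObject:
        label: str         # what the vision system recognized
        separable: bool    # can it be lifted off whatever supports it?
        fragile: bool      # does it need careful handling?

    bottle = WorldObject("glass bottle", separable=True, fragile=True)
    ball = WorldObject("rubber ball", separable=True, fragile=False)

    def handling_policy(obj):
        return "grasp gently" if obj.fragile else "grasp normally"

    print(handling_policy(bottle))   # grasp gently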


> for example, to know that "he went through the door" means that "he's not in the room any more"

> sure, the computer is able to recognize a bottle on a table, but it doesn't have an internal representation of the 3D world that would tell it that the glass bottle ... should be handled carefully (as it can break, as opposed to, e.g., a ball)

To be fair, both of these are also true of human beings in the early stages of their development (object permanence; the cry of "oh dear" that every parent of a toddler comes to dread).


> Computer vision is getting really good, but at this point, it's completely useless

Driverless car? (OK, that's lidar rather than vision per se, but it still surely depends on constructing a model of the world from external sensory input.)


Yes, but that's a really simple model; it's to true 3D vision what Wolfenstein 3D was to 3D graphics. The only things driverless cars need to do are (1) recognize obstacles and (2) recognize some predetermined objects (traffic lights, ...) and then apply predetermined behavior.
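
Roughly this kind of loop, in other words; the detectors below are stubbed out and every name is invented for illustration:

    # (1) obstacles anywhere ahead -> brake; (2) a few predetermined objects
    # mapped to predetermined behavior. Nothing resembling true 3D understanding.
    def detect_obstacles(frame):
        return frame.get("obstacles", [])

    def detect_known_objects(frame):
        return frame.get("known", [])

    def drive_step(frame):
        if detect_obstacles(frame):
            return "brake"
        if "red_light" in detect_known_objects(frame):
            return "stop"
        return "proceed"

    print(drive_step({"known": ["red_light"]}))  # stop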


I am a big fan of conceptual metaphors. But I don't get this part: "why AIs may never be humanlike".

We can simulate embodiment and simulate structures for generating analogies. And, actually, that may be simpler than a Platonic approach in which words have "an objective meaning".



