
Artificial Intelligence and What Computers Still Don't Understand - phreeza
http://www.newyorker.com/online/blogs/elements/2013/08/why-cant-my-computer-understand-me.html
======
SCdF
I'm coming around to the idea that getting computers to act like people is
like building planes that flap their wings because that's what birds do.

You're working with a fundamentally different medium, you should be playing to
its strengths, not trying to emulate the strengths of other mediums.

~~~
devcpp
But the reward is so great that we can't dismiss the possibility of finding
the special wings that are appropriate for that priceless plane.

Think about it: we don't just get very cheap labor working 24/7 without moral
issues or personal needs. We can also make the robots take our jobs _very_
efficiently, and technological progress will be unstoppable (particularly if
they can improve themselves). It's the essence of singularity.

~~~
Houshalter
It's also _really, really_ dangerous. Imagine an amoral being that is hundreds
or thousands of times more intelligent than humans. Do you think you could
control it?

That's not unrealistic. Computers already have clock cycles many thousands of
times faster than the human brain (our brains get away with it by being
massively parallel), have instant access to the entire world's knowledge base,
can be scaled up indefinitely (just add another processor), and most
importantly, can modify their code at will. If they see an improvement they
can make to their own intelligence, they instantly become more intelligent,
and get better at finding even more improvements, or designing even better
AIs.

~~~
hackinthebochs
>Do you think you could control it?

There always seems to be an assumption in popular thought that any intelligent
machine would necessarily be like us: i.e., have drives, motivations, self-
preservation instinct, etc. This just isn't the case. Even in us, our rational
nature is an appendage to our more base drives and instincts. Intelligence
does not come with these motivations and drives, they are completely separate.
There is no reason to think that a general AI would have any of these things,
thus concerns about it deciding that it's better off without us are completely
unfounded.

There is a concern however, that someone would program an AI specifically with
these motivations. In that case we do have everything to worry about.

~~~
pdonis
_Intelligence does not come with these motivations and drives, they are
completely separate._

Do you know of any examples of intelligent beings that don't have _any_
motivations and drives? (Note that "motivations and drives" is a very general
term; I agree they don't have to be _human_ motivations and drives, but that's
not the same as saying the AI has none at all.)

 _There is no reason to think that a general AI would have any of these
things, thus concerns about it deciding that its better off without us are
completely unfounded._

If an entity doesn't have _some_ kind of motivation and drive, how can it be
intelligent? Intelligence doesn't just mean cogitating in a vacuum; it means
taking information in from the world, and doing things that have effects in
the world. (Even if the AI just answers questions put to it, its answers are
still actions that have effects in the world.) So an AI has to at least have
the motivation and drive to take in information and do things with it;
otherwise it's useless anyway.

So given that the AI at least has to "want" to take in information and do
things with it, how do you know the things it will want to do with the
information are good things? ("Good" here means basically "beneficial to
humans", since that's why we would want to build an AI in the first place.) We
can say that we'll design the AI this way; but how do we know we can do that
without making a mistake? A mistake doesn't have to be "oh, we programmed the
AI to want to destroy the world; oops". A mistake is anything that causes a
mismatch between what the AI is actually programmed to do, and what we really
want it to do. Any programmer should know that this will _always_ happen, in
any program.

~~~
hackinthebochs
>Do you know of any examples of intelligent beings that don't have any
motivations and drives?

My computer, for certain definitions of intelligent.

> Intelligence doesn't just mean cogitating in a vacuum; it means taking
> information in from the world, and doing things that have effects in the
> world.

I agree with this; but behavior does not have to be self-directed to be
intelligent. Again, computers behave quite intelligently in certain
constrained areas, yet their behavior is completely driven by a human operator.
There is no reason a fully general AI must be self-directed based on what we
would call drives.

>So an AI has to at least have the motivation and drive to take in information
and do things with it

I don't see this to be true either. Its (supposed) neural network could be
modified externally without any self-direction whatsoever. An intelligent
process does not have to look like a simulation of ourselves.

The word "being" perhaps is the stumbling point here. Perhaps it is true that
something considered a "being" would necessarily require a certain level of
self-direction. But even in that case I don't see why a being that was, say,
programmed to enjoy absorbing knowledge would necessarily have any
self-preservation instinct, or any drives whatsoever outside of
knowledge-gathering. All the "ghosts in the machine" nonsense is pure science
fiction. I don't think there is any programming error that could turn an
intended knowledge-loving machine into a self-preserving amoral humanity
killer. The architecture of the two would be vastly different.

~~~
pdonis
_for certain definitions of intelligent_

Yes, but I would argue that those definitions are not really relevant to this
discussion. You say...

 _behavior does not have to be self-directed to be intelligent_

...which is true, but the whole point of AI is to get to a point where
computers _are_ self-directed; where we don't have to laboriously _tell_ the
computer what to do; we just give it a general goal statement and it figures
out how to accomplish it. If we have to continuously intervene to get it to do
what we want, what's the point? We have that now. So this...

 _Its (supposed) neural network could be modified externally without any self-
direction whatsoever._

...is also not really relevant, because the whole point is to develop AI's
that can modify their own neural networks (or whatever internal structures
they end up having) as an ongoing process, the way humans do.

(Btw, one of the reasons I keep saying this is "the whole point" is that
developing such AI's would confer a huge competitive advantage, compared to
the "intelligent" machines we have now, that only exhibit "intelligent"
behavior with continuous human intervention. So it's not realistic to limit
discussion to the latter kind of machines; even if you personally don't want
to take the next step, somebody else will.)

 _The word "being" perhaps is the stumbling point here._

No, I think it's the word "intelligent". See above.

 _I don't think there is any programming error that could turn an intended
knowledge-loving machine into a self-preserving amoral humanity killer._

I don't think you're trying hard enough to imagine what effects a programming
error could have. Have you read any of Eliezer Yudkowsky's articles on the
Friendly AI problem?

 _The architecture of the two would be vastly different._

Why would this have to be the case? Human beings implement both behaviors
quite handily on the same architecture.

~~~
hackinthebochs
I have a feeling our disagreement is largely one of terminology.

>the whole point of AI is to get to a point where computers are self-directed;
where we don't have to laboriously tell the computer what to do;

There is much in between laboriously telling it what to do and having a self-
directed entity traipsing in and out of computer networks. A system that is
smart enough to figure out how to accomplish a high level goal on its own
doesn't have to be a self-directed entity. It just needs to be infused with
enough real-world knowledge that the supposed optimization problem has a
solution.

>because the whole point is to develop AI's that can modify their own neural
networks (or whatever internal structures they end up having) as an ongoing
process, the way humans do.

I think you give us too much credit. We may guide our learning processes, but
the actual modification of our neural networks is completely out of our
control. The distinction here may seem useless, but in this case it is
important. A supposed non-self-directed-but-intelligent being would simply not
have the ability to direct its learning processes. We would still have to
bootstrap its learning algorithms on a particular dataset to increase its
knowledge base. But this isn't contradictory to general AI, nor would it be
useless. In fact, I would say the only thing we would lose is the warm fuzzies
that we created "life". It's still just as useful to us if we're in control of
its growth.

~~~
pdonis
_A system that is smart enough to figure out how to accomplish a high level
goal on its own doesn't have to be a self-directed entity._

But a system that can do this and _is_ a self-directed entity provides, as I
said, a competitive advantage over a system that can do this but isn't self-
directed. So there will be an incentive for people to make the latter kind of
system into the former kind.

 _We may guide our learning processes, but the actual modification of our
neural networks is completely out of our control._

As you state it, this is false, because guiding our learning processes _is_
controlling at least some aspects of the modification of our neural networks.
But I agree that our control over the actual modification of our neural
networks is extremely coarse; _most_ aspects of it are out of our control.

 _It's still just as useful to us if we're in control of its growth._

No, it isn't, because if we're in control of its growth, its growth is limited
by our mental capacities. An AI which can control its own growth is only
limited by its own mental capacities, which could exceed ours. Since one of
the biggest limitations on human progress is limited human mental capacity, an
AI which can exceed that limit will be highly desirable. However, the price of
that desirable thing is that, since by definition the AI's mental capacity
exceeds that of humans, humans can no longer reliably exert control over it.

~~~
hackinthebochs
>No, it isn't, because if we're in control of its growth, its growth is
limited by our mental capacities

I don't know why you think this is true. In fact, we do this all the time.
For just about any decent-sized neural network we train, we are incapable of
comprehending how it functions. Yet we can bootstrap a process that results in
the solution just the same. As long as we are able to formulate the problem of
"enhance AI intelligence", it should still be able to solve such a problem,
despite our lack of intellect to comprehend the solution.

~~~
pdonis
_I don't know why you think this is true. In fact, we do this all the time.
For just about any decent-sized neural network we train, we are incapable of
comprehending how it functions. Yet we can bootstrap a process that results in
the solution just the same._

That's because we can define what the solution looks like, in order to train
the neural network. We don't understand exactly how, at the micro level, the
neural network operates, but we understand its inputs and outputs and how
those need to be related for the network to solve the problem.
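
A minimal sketch of that point in Python (scikit-learn is my choice here purely for illustration; the comment names no library): we define the problem entirely by example inputs and desired outputs, and the trained network's internals stay a black box.

    # Sketch: the problem is specified only as input/output pairs (XOR);
    # we never inspect or understand the learned weights themselves.
    from sklearn.neural_network import MLPClassifier

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # inputs we understand
    y = [0, 1, 1, 0]                       # outputs we want (XOR)

    net = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                        max_iter=5000, random_state=1)
    net.fit(X, y)                          # the micro-level operation stays opaque

    print(net.predict(X))                  # ideally [0 1 1 0]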

 _As long as we are able to formulate the problem of "enhance AI
intelligence", it should still be able to solve such a problem, despite our
lack of intellect to comprehend the solution._

You've pulled a bait and switch here. Above, you said we didn't comprehend the
internal workings of the network; now you're saying we don't comprehend the
solution. Those are different things. If we can't comprehend the solution, we
can't know how to train the neural network to achieve it.

If the problem is "enhance AI intelligence", then we will only be able to do
if we _can_ comprehend the solution, enough to know how to train the neural
network (or whatever mechanism we are using). At some point, we'll hit a
limit, where we can't even define what "enhanced intelligence" means well
enough to train a mechanism to achieve it.

------
valtron
Aside: The Loebner prize is a terrible implementation of a Turing test because
the judges have no idea what they're doing. You can't just try to have a
normal conversation and see which one sounds more human. You _have_ to try
tricky things like the article mentions; beside syntactically ambiguous
sentences, you might also try things that would be obvious to a human but that
a computer would easily get confused by unless it was programmed to check for
it. Likenotusingspaces. O.r. .i.n.t.e.r.l.e.a.v.i.n.g. .l.e.t.t.e.r.s.
.w.i.t.h. .s.o.m.e. .r.a.n.d.o.m. .c.h.a.r.a.c.t.e.r.

Also, both the participants being tested (AI and human) should be trying to
convince the judge that they're the human and the other is the AI. Given that
chatterbots that insert "one-liners" and other non-sequitors do well, I doubt
that's happening, so again, the Loebner test is useless.

~~~
seiji
"Is it an AI" tests based only on text are pretty silly anyway. There are
probably a few billion people alive who can't form complete written sentences.
If you have a job, you probably encounter dozens of people a year who can't
write coherent emails.

The Turing Test concept appeared when everybody thought intelligence was
mostly words, and we could probably easily make computers spit out meaningful
sequences of words. Nobody believes intelligence is how well you can write and
read anymore (except fad-based media reports "OMG SIRI IZ INTELLUGENT AI
SKYNET!"). The Turing Test concept persists because it's simpler than dirt to
explain, even though it has never been a valid method of evaluating anything.

~~~
feral
I don't think you are being fair on the Turing test, or on the intelligence of
its creator.

In describing the setting of the test, Turing first describes an imitation
game where a judge must tell apart a man and a woman. He writes "In order that
tones of voice may not help the interrogator the answers should be written, or
better still, typewritten. The ideal arrangement is to have a teleprinter
communicating between the two rooms. Alternatively the question and answers
can be repeated by an intermediary." So the text format is out of the
practicalities of defining a workable test, not because Turing thought
'intelligence is how well you can write and read'.

Your other objection is that there are humans who can't form written
sentences. Well, they could use the intermediary, if they can't write.

But, even if you mean that they can't put a verbal sentence together, the test
is framed such that a machine that can pass it can be considered to think, not
so that any machine (or human) that can think is supposed to be able to pass
the test.

Again, from Turing's paper: "May not machines carry out something which ought
to be described as thinking but which is very different from what a man does?
This objection is a very strong one, but at least we can say that if,
nevertheless, a machine can be constructed to play the imitation game
satisfactorily, we need not be troubled by this objection."

In other words, passing the test - succeeding at the imitation game - is
designed to be a sufficient test to demonstrate intelligence, but not a
necessary one.

~~~
seiji
You are quite right. Oddly, I think there's a distinction between the mass-
media version of what a Turing Test is and the actual imitation game.

Drifting culture impacts what counts as "passing" too. Way Back Then,
everything was slightly more formal, proper, and precise. These days, I have
SMS chats with some people who rarely reply with more than one or two words
(or maybe an emoji if they're feeling really communicative at the moment).

~~~
dandelany
Sorry, I normally hate nitpicking but since I saw it three times: it's Turing,
not Turning.

~~~
seiji
Wow, I didn't intend that at all (and in all three places too, that's quite a
failure). Thanks for noticing. Either my fingers are misbehaving or
autocorrect needs a bigger dictionary.

------
RyanMcGreal
Essentially, the article accuses AI researchers of searching for their lost
keys under streetlamps. Fair enough, but I wonder if the best use of AI
research is to get computers to solve problems that a person could easily
solve. I thought the point of, e.g., a search engine is to solve problems a
person can't easily solve.

~~~
fragsworth
> I wonder if the best use of AI research is to get computers to solve
> problems that a person could easily solve

That's pretty short-sighted. These "easy" problems are stepping stones to much
more difficult problems. For instance, you cannot tell a machine something
like "build my website" without it being able to solve the easy stuff first.

~~~
bad_user
Oh come on, sure you can tell a computer " _build my website_ ". You can use
language, or you can click a button.

Oh, but the result is not what you want? Even when communicating with a human,
you need to provide some specs, right? But then, natural language is way too
imprecise and open to interpretation, coupled with the fact that people don't
really know for sure what they want, so even when speaking with a developer,
you're going to go through several iterations. So for the ultimate flexibility
and automation, the user interface of such a smart appliance will end up being
essentially a programming language (which is the same reason why lawyers and
mathematicians have their own language that's anything but _natural_ ).

Truth of the matter is, computers answering to commands aren't really that
interesting, as they already do this. What we really want are computers
capable of _ideas_.

Also, natural language processing is hard because it's based on an incredible
amount of implicit context that life taught us ever since we were born and
when communicating to our fellow humans, it's never about just the actual
words being spoken. When saying something like " _build my website_ " your
spouse probably knows you're talking about a website meant for marketing
yourself and she probably knows your tastes and values too; she observed your
fears and dreams while living your mundane life, even if you've never talked
with her explicitly about such issues (speaking of women, they are incredibly
good at reading between the lines).

Solving natural language processing is basically akin to creating sentient
beings.

~~~
derefr
> So for the ultimate flexibility and automation, the user interface of such a
> smart appliance will end up being essentially a programming language

No; when a client asks me to build them a website, I don't demand a
specification. I _interrogate_ them in regular natural language about each
thing, until I'm reasonably sure which things they care about being a certain
way (I'll make those things that way to the best of my ability) and which
things they don't care how they end up (I'll make those things however I
"like" to make them, or randomly if I have no opinion either.)

I think the one thing we're lacking in software UX right now is the concept of
a _dialogue_ -- you and the computer _both_ asking questions to clarify your
mental model of what the other agent currently has in mind, and adding facts
to correct misconceptions until those models are aligned.

------
scotty79
My thoughts about [artificial] intelligence.

Senses do not provide the data for intelligence. Their sole purpose is to
provide syncing between reality and internal representation. Intelligence
perceives and operates on this internal representation only.

Language is just another sense that helps with syncing of internal
representation with some sort of reality (might be physical reality, might be
reality of social relationships, might even be internal representation of
another intelligent being).

I've seen logs of conversations between an AI and a human, and between two
humans. What struck me was how much more verbose the AI and the human talking
to it were. Humans between themselves used such shorthands that the
conversation was almost unintelligible to me, just because I'm not American or
even a native English speaker.

What useful AI systems to date lack is the right internal representation. The
current interaction between the real world and AI systems resembles the
interaction between application components and unit-testing mockups. The
interface is sort of correct, but the mockups lack the actual meat inside.

AI research should IMHO go towards simulating reality, physical and social, in
a flexible enough way, and towards finding ways of syncing this simulation to
reality using narrow data channels such as language and heavily filtered
(moving) images and sounds.

~~~
hackinthebochs
I disagree that an intelligence operates on distinct internal
representations of things. I believe that our minds mostly operate on our
memories of sensory data. When I remember a song in my head, I'm not iterating
through an internal representation, I'm remembering my sensory experience of
it. There is an important component of syncing internal representation when
communicating, but I think that internal representation is mostly just
recordings of sensory information; it's not fundamentally different from what
we sensed at some point in the past.

~~~
scotty79
Memories don't work like that. Research on eyewitness accounts indicates that
people remember very little. They fill in the gaps with information that they
acquired after the event, and, what's more important for my point, with
information reasoned from the things they actually remember. And they are not
aware that they filled anything in. They remember the things they made up in
exactly the same way as the actual memories of the event.

Each time you remember something, it's being reconstructed, re-imagined by your
brain. A method of treating people with stressful memories is based on that.
You allow them to recall the things that bother them in a safe, positive
environment, and after a few such recalls a large part of the stress associated
with the memory disappears, because reading a memory is a sort of "destructive
read": to keep the memory, the brain has to write it again after the read, and
it is not written back exactly as it was read.

When you look at what's in front of you, you don't see the whole image, even
though you feel that you do. You see only the thing you are focused on
(inattentional blindness); the rest is filled in for you. You might argue that
this comes directly from previous sensory input. I think you could design an
experiment where you show a person a misleading image while influencing the
brain with TMS to interpret it the wrong way, focus the person's attention on
some part other than the misleading one, and then stop influencing the brain so
that it could analyze the previously recorded sensory input properly. I'd bet
that the person would still see the rest of the image as he saw it while his
brain was being influenced, until you allowed him to focus attention on the
misleading parts. Only then would he be able to readjust.

Perception of illusions also indicates the existence of an internal
representation to me. With the
[http://en.wikipedia.org/wiki/Spinning_Dancer](http://en.wikipedia.org/wiki/Spinning_Dancer)
illusion you can feel the moment when your brain switches between two
completely different representations of an ambiguous image.

When you hear sounds, what you hear and remember is not just sensory input.
What you hear depends on the context. An awesome demonstration of that effect:
[http://www.youtube.com/watch?v=8T_jwq9ph8k&feature=player_de...](http://www.youtube.com/watch?v=8T_jwq9ph8k&feature=player_detailpage&t=565)
When you hear the song backwards for the first time, you hear nothing. But when
you see the text you were supposed to hear, you can actually hear those words.
I think that just presenting you with the text and asking you to remember
whether you heard this text the first time wouldn't change what you remember
hearing. Try pausing when the text for the reversed version shows up and see if
your memory of the reversed song matches any of that text.

I think this effect happens because the senses are very thin, and sensory input
is aggressively used to construct an internal representation (with much of the
representation created from experience) and is then discarded.

~~~
hackinthebochs
I don't disagree with any of your examples, but I would interpret them
differently. There is certainly a fair amount of "extrapolating" going on
subconsciously. Our brains attempt to extract higher level meaning from
sensory input (such as rotation or relative size of objects). This is a sort
of knowledge that is based on the totality of sensory input received up until
that point (i.e. the experience that a silhouette is likely a 3D object that
is spinning in a particular direction). But I don't consider this knowledge to
be distinct from the sensory input itself, but rather an abstraction over a set
of similar inputs that give it meaning.

Personally, when I imagine a horse, I don't imagine some abstraction of a
horse. My subconscious mind pieces together chunks of images from my
experiences with horse-images and puts together something reasonably close.
The stuff of mental computation to an extent is our memories of sensory inputs
themselves, or abstractions over similar classes of inputs.

Thinking about it further, our ideas may not be as far apart as they seem.

Do you consider the "sensory input" as, say, the light waves hitting the
retina, or the set of neural states triggered that induces a "qualia"
experience of sight? In my explanation I was considering the qualia as the
sensory input rather than the frequencies of light. Perhaps you're using the
other definition?

~~~
scotty79
> Do you consider the "sensory input" as, say, the light waves hitting the
> retina, or the set of neural states triggered that induces a "qualia"
> experience of sight?

I consider sensory input to be everything from the retina up to the point when
you become aware that the horse just passed you.

I think that only this high-level information gets stored and is used for all
intellectual activity. The actual sight, sound and smell of a horse is stored
only to the extent that allows you to recognize horses better in the future,
but it's not part of any reasoning you might have later about why the horse was
there, where it was going and whether it would be cool to own a horse. You use
an abstract representation of a horse for all those thoughts.

> Personally, when I imagine a horse, I don't imagine some abstraction of a
> horse. My subconscious minds pieces together chunks of images from my
> experiences with horse-images and puts together something reasonably close.

You feel that, but if you tried to draw or sculpt a horse you'd see how many of
the pieces you thought you recalled you actually made up, or have no idea how
they really look. If I'm not mistaken, you admit that the horse you try to
imagine gets rebuilt from bits and pieces that are stitched together. In my
opinion the foundation of that construct is the internal abstract
representation of the horse concept.

> (i.e. the experience that a silhouette is likely a 3D object that is
> spinning in a particular direction)

In my opinion the brain doesn't switch between "spinning right" and "spinning
left", but between "this person is slightly above me" and "this person is
slightly below me". The change in the direction of rotation is just what tells
you very clearly that your brain just switched. Not only does the perception of
the 3D object change, but the whole scene does, the relation between the
observer and the object.

~~~
hackinthebochs
>If I'm not mistaken you admit that the horse you try to imagine gets rebuilt
from bits and pieces that are stitched together. In my opinion foundation of
that construct is that internal abstract representation of a horse concept.

The way I imagine this works is that our sensory input fires some particular
set of neurons which accounts for our sensory experience of the horse. When we
recall a mental image of a particular horse, our brain attempts to recreate as
best it can the neural firing pattern from the actual sensory input. Of
course, this pattern gets distorted as we do not remember specific images as a
whole (unless one has a photographic memory), but pieces of images that
represent certain abstractions over portions of a subject. These patterns are
recreated by firing certain "bootstrap" neurons (memories units) that
downstream cause the recreated pattern.

Expanding on this further, I can imagine our image-storage system being
something like a many-dimensional quadtree, except instead of just spacial
dimensions it also extracts colors, shapes, patterns, textures, etc. So
different meaningful concepts are stored in different layers of the neural
network, and some approximation to the original can be recreated on demand.
This can certainly be considered an abstract representation, yet it is still
tied to and semantically similar to a raw 2d mapping of the image. The
difference is mainly storage efficiency due to compressing similar concepts
learned from our experiences.

------
dvt
_Levesque saves his most damning criticism for the end of his paper. It’s not
just that contemporary A.I. hasn’t solved these kinds of problems yet; it’s
that contemporary A.I. has largely forgotten about them. In Levesque’s view,
the field of artificial intelligence has fallen into a trap of “serial silver
bulletism,” always looking to the next big thing, whether it’s expert systems
or Big Data, but never painstakingly analyzing all of the subtle and deep
knowledge that ordinary human beings possess._

This is very true. But I think that recent AI experts (as opposed to those
doing this work in the 70s) have realized that trying to tackle linguistic
analysis is very very (very) hard. The problem with language (or more
correctly, _discourse_ ) analysis is that even outside the realm of computing,
it still hasn't been fully explicated.

A couple of months ago I took a graduate philosophy of language seminar
(taught by the brilliant Sam Cumming at UCLA) in which we looked at various
theories of discourse. It would be an understatement to say that these
theories vary wildly. We have the classical RST (Rhetorical Structure Theory)
by Mann and Thompson[0] (renowned linguists at USC & UCSB), Jan Van
Kuppevelt's erotetic model[1], Andrew Kehler's _Coherence, Reference, and the
Theory of Grammar_[2], and a
half-dozen or so more that I don't even remember.

So let's forget about computers for a second. We don't even know how _humans_
process discourse. My term paper was about the parallel relation, which is a
very talked-about topic (almost as much as anaphora; see The New Yorker
article) in the academic community; not only are such linguistic phenomena
difficult to theoretically model, they are nigh impossible to practically
implement (say, in some sort of AI schema).

So I'm not even surprised most AI folks just started doing work on SVM's or
ANN's, or Markov Chains, or what-have-you. It seems more practical to do work
on stuff that could actually benefit from machine learning, as opposed to
trying to solve incredibly difficult (and mostly theoretical) problems like
discourse analysis.

The bottom line is that we're still a ways off from having computers like
those in Star Trek - computers that understand anaphora, parallelism,
ellipses, etc, etc.

[0] [http://www.sfu.ca/rst/](http://www.sfu.ca/rst/)

[1] [http://www.jstor.org/stable/4176301](http://www.jstor.org/stable/4176301)

[2] [http://www.amazon.com/Coherence-Reference-Theory-Grammar-
And...](http://www.amazon.com/Coherence-Reference-Theory-Grammar-
Andrew/dp/1575862158)

~~~
jliechti1
>> So let's forget about computers for a second. We don't even know how humans
process discourse...

>> It seems more practical to do work on stuff that could actually benefit
from machine learning, as opposed to trying to solve incredibly difficult (and
mostly theoretical) problems like discourse analysis.

The thing I am not sure about is how much it makes sense to attempt to create
discourse analyses vs doing more "practical" work. The reason is that a
complete and accurate discourse analysis seems nearly impossible. Having read
through a few reference grammars and phonological analyses of various
languages myself, it seems any rules linguists create are always fraught with
copious exceptions and special cases (though certainly not all! There are
examples of phonological rules that are very very regular across languages).
Some of these analyses were extremely intricate.

I got the impression that many of them suffered from the mistake of trying to
impose patterns on data that does not have any.

~~~
dvt
I completely agree. Having worked with some AI systems in the past (mostly
ANNs although I'd love to even begin to understand SVMs), I think that there's
more "practicality" in just building robots/software that (based on various
pattern analyses/etc/etc) figures out where a malignant tumor is, for example,
rather than a robot that can understand coherence relations or what-have-you.

So I don't fault the CS community for stepping away from linguistic puzzles
and getting on with more useful stuff. After all, that's what engineering is.
It looks like Levesque disagrees, but I'm not sure he's right.

~~~
textminer
SVMs are lickety-split simple. You're drawing the best line/plane/hyperplane
between some points. You can turn this into a nice convex optimization
program, given some conditions. If this thing isn't fully separable, you can
fudge it a little with some penalty terms, or you can cheaply project these
points into some space where such a separation does exist. The hard part will
always be your feature extraction, labeled data collection, and the parameter
tuning for everything I just waved my hand at.
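
A toy version of that in Python with scikit-learn (my choice of library; the comment doesn't name one): a linear SVM finding the separating line between two point clouds, with C as the slack penalty for points that can't be cleanly separated.

    from sklearn.svm import SVC

    # Two tiny clusters in the plane; the SVM finds the maximum-margin line
    # ("the best line/plane/hyperplane") between them.
    X = [[0, 0], [1, 1], [1, 0], [8, 8], [9, 9], [8, 9]]
    y = [0, 0, 0, 1, 1, 1]

    clf = SVC(kernel="linear", C=1.0)   # C: penalty term for misclassified points
    clf.fit(X, y)

    print(clf.predict([[2, 1], [7, 8]]))   # expected: [0 1]
    print(clf.coef_, clf.intercept_)       # the learned hyperplane
    # For data that isn't linearly separable, kernel="rbf" is the usual way to
    # "project the points into some space where such a separation does exist".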

------
blauwbilgorgel

      Can an alligator run the hundred-metre hurdles?
    

[http://www.youtube.com/watch?v=pjslsKZYXQ8](http://www.youtube.com/watch?v=pjslsKZYXQ8)
(Can alligators jump and climb? Yes, they can!)

[http://www.animalquestions.org/reptiles/alligators/can-
allig...](http://www.animalquestions.org/reptiles/alligators/can-alligators-
climb-fences/) (many have thought it was a myth, but it is true that alligators are
in fact able to climb fences.)

These anaphora questions are better solved through natural language parsing,
for example turning them into predicate logic. The article is right when it
says that big data algo's have a problem with these sentences. But that is not
to say they can't help. In the article:

    
    
      The town councillors refused to give the angry 
      demonstrators a permit because they feared violence. Who
      feared violence?
    

You can use big data to calculate the semantic closeness between "town
council" and "fear violence". If that is closer than between "demonstrators"
and "fear violence" you can make a good guess.

From the Wikipedia article on anaphora:

    
    
      We gave the bananas to the monkeys because they were hungry.
      We gave the bananas to the monkeys because they were ripe.
      We gave the bananas to the monkeys because they were here. 
    

A quick Google search for "bananas were ripe", "monkeys were ripe", "monkeys
were hungry" and "bananas were hungry" and counting the results will solve
this.

Using semantic closeness can also work against you in the case of ambiguity:

    
    
      The robber sits on the bank.
    

Is "bank" a furniture here? A money bank? A river bank? Semantic closeness
might picture the robber sitting on a money bank. A second or third pass is
necessary: calculate the chance that a person will sit on a building vs. the
chance that a person will sit on furniture.

This is a decades-old hard problem that is close to philosophy. Is the robot
inside Searle's Chinese room intelligent? Is it really understanding what it
is saying? Or is it "faking" it?

~~~
lisper
> You can calculate semantic closeness between "town council" and "fear
> violence". If that is closer than between "demonstrators" and "fear
> violence" you can make a good guess.

Only for that particular example. It's easy to come up with examples where
that heuristic fails:

"The town council denied the protestors a permit because they were in a bad
mood."

"The factory failed to produce the car on time because it was undergoing
maintenance."

~~~
hackinthebochs
You're sort of validating the technique: in those examples the meaning of the
sentence is absolutely ambiguous. The fact that a google search wouldn't
return a conclusive result is expected.

~~~
austinz
Not for the second one; "car undergoing maintenance" will probably return more
results than "factory undergoing maintenance", but one parsing of the sentence
(the one where the factory is being maintained) makes far more sense than the
other.

------
eschaton
Interesting that there seems to be no mention of Cyc, which is explicitly
intended to be good at problems like those mentioned.

~~~
teddyh
But _is_ it any good at those problems?

~~~
seiji
It's best to just let Cyc people continue to think they matter and let them
stay busy expanding their knowledge base in the church of first order logic.
It'll never go anywhere, but it's better to keep them all contained in one
place rather than polluting other fields of study.

~~~
ScottBurson
See, this is what I mean (see my reply to eschaton).

I take it then, seiji, that you also completely disagree with Levesque's
program as expressed in section 4.4 of his paper?

~~~
seiji
Absolutely. 4.4 is a full endorsement of Cyc (and don't get me started on
"Cyc" vs "OpenCyc"). Second 4.4 looks a little like "graphplan reborn" or
"generic AI approaches from the 80s."

Discrete codified knowledge is not the stuff the universe is made of. The
problem Cyc will never solve is the "a picture is worth a thousand words"
problem. Describing everything in a relational, hierarchical predicate calculus
arrogantly ignores the unrelated multidimensionality of, well, everything.

------
ilaksh
This guy should look into research labeled as artificial general intelligence
(AGI) or deep learning. If he really understood for example what Watson can do
or the leading research using things like autoencoders or hierarchical
temporal memory for natural language understanding, then he would have a
better informed and less pessimistic attitude.

~~~
mturmon
The piece actually seems like a backhanded critique of the optimism shown by
the deep learning community (which of course Winograd is aware of). It also
seems like an appreciative backward glance at topics that interested Winograd
earlier in his career, but are less fashionable now.

------
iandanforth
I've thought about this a lot, here's my best answer as of today.

Part 1 - Existence proofs

We think we can build 'intelligent' machines because human intelligence
exists. We don't have a perfect idea of what it means to be intelligent, but
we know that whatever we have exists, so it's not impossible to create. Some
disagree with this stance on philosophical or religious grounds, and others
resort to pointing out even if it's possible it's really really hard. Fair
enough.

Part 2 - Methods

Usually we start by picking the thing we think best typifies intelligence and
we work to solve that. Chess, Written Language Comprehension, Visual Pattern
Recognition, Spoken Language, etc. These are _all_ called AI. All well and
good, but so far we get exceptional single purpose systems (the narrower the
domain the better) and then look around and say 'but that's not _really_
intelligent.'

Lately people have started to think more about the process by which things
_become_ intelligent rather than just the end behavior. Somehow the machine
should naturally transition to being intelligent as opposed to being
explicitly programed with intelligence. Machine learning is so-hot-right-now
because of this. But, again, in the end you get a well trained system that
does whatever it used as its error metric, like differentiating cats and
sailboats. "But that's not really intelligent because it can't [play
mozart/read poetry/paint a picture]."

There are infinite criticisms, but they can be summed up as a lack of
'generalness.'

Part 3 - Representation

What people searching for 'intelligence' are looking for is a system that can
process data from at least as many sources in at least as many contexts as a
human. The hard part there, and the one thing the brain does really really
well, is being able to relate sight to sound, touch to taste, past to present,
and present to future. In us there is a shared language of representation that
encodes experience.

In AI so far it's an unusual system that tries to relate many senses, keep a
lifelong memory, work in a noisy and incomplete environment, and
constantly make predictions about what will happen next.

Part 4 - Data

It is an unusual AI researcher who has all the data they want. As computer
people we are impatient, and so waiting 30 years for a robot to collect the
1.4 PB of visual information a human does by that age, or the 1.8 TB of audio
information just isn't done. We use existing datasets that are computationally
tractable (meaning you can run them in minutes, hours, or days).

And yet we do not have an existence proof that intelligence of the general
human-like kind can exist without years of exposure to the world. It's
reasonable to expect that it's possible, we just don't have pre-existing
knowledge of that fact.

Part 5 - The Future

So how will we get there from here? We're probably going to have to do it the
hard way. Create something that can sense the world in the ways that we can
comprehend, and painstakingly rear it, collecting data, automating and hard-
coding what we can, until we have a set of error metrics, motivations, data,
and environment in which we begin to see the thousand different skills called
'intelligence' that humans take for granted. In short, we'll probably end up
thinking a bit more like parents, and a bit less like computer scientists.

~~~
electrograv
_> In us there is a shared language of representation that encodes
experience._

What you express in the quote above is a very "computationalist" view of AI.
FWIW, in contrast, I'm more of a "connectionist" [1]. This means that when I
look at the same incredible "general intelligence" of humans -- this almost
"synesthetic" ability to abstract and connect ideas -- I do not see a powerful
translation and symbolic reasoning machine with some seemingly magical
universal language buried deep within. In fact, I believe there can be no such
language without severely compromising the reasoning system's generality.

Instead, I see a powerful "connection machine", where concepts, thoughts,
language, etc. are all the result of some incredibly versatile connective
learning/creativity process. Of particular importance, I see this connection
machine thrive and prosper within systems where there are no axioms, no ground
truth, no single agreed-upon notion of what separates "this" from "that"
(you'll see this in humans when studying philosophy). The analogy to this
notion from a computationalist perspective would be a machine whose
instruction set, or core reasoning language, is always in a state of flux.

I will freely admit though that one view does not necessarily have any more
explanatory power than the other; they both rely on some "magic" unknown
assumptions. For a connectionist, the "magic" is the connective learning
algorithm. For a computationalist, the "magic" is in this universal
language/symbolic system.

Disclaimer: I'm by no means an AI expert (still have a lot more to learn!),
but I always enjoy thinking/discussing these topics and the surrounding
philosophy.

[1]
[http://en.wikipedia.org/wiki/Connectionism](http://en.wikipedia.org/wiki/Connectionism)

~~~
maaku
Connectionism is not a predictive theory. Rather it is the manifestation of a
depressingly common fallacy in science: assigning a sacred mystery[1] to an
as-yet unexplained phenomenon.

How do your connectionist interconnected networks of simple units actually
give rise to general AI? Answer that and you'll have the "shared language of
representation" that the OP was talking about.

[1]
[http://lesswrong.com/lw/iv/the_futility_of_emergence/](http://lesswrong.com/lw/iv/the_futility_of_emergence/)

~~~
electrograv
I completely agree that "super magic emergent intelligence" is not an
explanation, but a mystery. But I think it's worth noting that the same
applies for this "super magic universal language of representation" \-- it's
not an explanation, it's a mystery.

It's also important to realize that these aren't beliefs, or truth claims, or
scientific claims. They're philosophical perspectives; no more, no less. They
might guide the intuition, but have no bearing on the science itself. Someone
who doesn't clearly understand the distinction between the philosophy and
science of a topic may definitely risk either contributing to the
"depressingly common fallacy in science" you mention, or risk misinterpreting
a philosophical argument for a scientific one, and hence through blurred
vision believe they see fallacy when in fact there is none.

One way of looking at it is both views are different philosophical angles of
the same thing (or at least the same problem/mystery). A connectionist sees
this conception of a "shared language of representation" as assigning a sacred
mystery (what does this language actually consist of, precisely?) to an as-
yet unexplained phenomenon, in the same way a computationalist sees this of
the connectionist's learning algorithm (how does this learning algorithm work,
precisely?)

The reason I highlight this philosophical symmetry is to emphasize that these
are merely different intuitive mindsets developed towards approaching the
common mystery of general intelligence.

The bottom line is so long as human-like "general intelligence" is a mystery
(to the extent that we can't replicate it 100%+ effectively in computers),
it's going to be an "unexplained phenomenon", and thus any theories developed
around it will have some "magic" hole somewhere -- some key element devoid of
predictive power. (Because if there were no such hole, then by definition,
we'd already have it all figured out.)

------
eboyjr
I think we could help improve the field by making CAPTCHAs with these kinds of
questions.

~~~
saltvedt
Won't work. Assuming that the right answer is a word in the question, the
computer can just brute force the answer in a couple of guesses.

~~~
Houshalter
If you fail 50% of guesses, that should flag you as a spammer.

If you ask a few questions with multiple possible answers it's pretty unlikely
it would get them all correct through random guesses anyways.

~~~
nawitus
The bot can use only 1 guess per website.

------
MichaelMoser123
"The large ball crashed right through the table because it was made of
Styrofoam. What was made of Styrofoam? (The alternative formulation replaces
Stryrofoam with steel.) a) The large ball b) The table"

And a very large Styrofoam ball can crash through a comparatively small table
made of wood; it all depends on the relative definition of 'large', right?

I think the problem is that everybody can pick his favorite feature and define
it as the key to 'general intelligence'. Some say anaphora, some say machine
learning; I like Hofstadter: he says that it is all about analogy.
[http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-
Thinkin...](http://www.amazon.com/Surfaces-Essences-Analogy-Fuel-
Thinking/dp/0465018475)

Also: The problem is that these statements can't be proven; it is all about
opinions and dogmas; I think the argument is a question of power: the one who
wins the argument has power over huge DARPA funds, or over whoever else gives
out grants for this type of research. The rush after defining problems (expert
systems, big data) in AI might have something to do with the funding problem.

The 'Society of Mind' argument says that there are many agents that together
somehow miraculously create intelligence.
[http://en.wikipedia.org/wiki/Society_of_Mind](http://en.wikipedia.org/wiki/Society_of_Mind)
This argument sounds good, but it makes it hard to search for general
patterns/universal explanations of intelligence.

On the one hand they have to focus on some real solvable problem, on the other
hand that makes it very hard to ask and find answers to general questions; I
don't know if there will be some solution to this dilemma.

~~~
MichaelMoser123
Maybe Western civilization is not very good at answering this type of general
big question; maybe Indian civilization has a better chance. After all, they
invented structural linguistics some 2500 years ago (so that was not Chomsky
at all ;-)

[http://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini](http://en.wikipedia.org/wiki/P%C4%81%E1%B9%87ini)

Maybe the problem needs an idle class of Brahmins who can ask questions and
ponder them without end, without having to worry about questions of funding?

------
ahk
Didn't Google recently roll out what they called conversational search which
did the pronoun resolution thing?

The example I recall they were able to answer correctly was 'Who's the
president of the USA?' And then 'How tall is he?'

------
dnautics
I wanted to arrange for Watson to go on "Are You Smarter Than a 5th Grader" and
give it all sorts of questions like these (but even easier).

For example:

"Think of a giant flightless bird. Now, name a color that starts with the same
first letter."

~~~
foobarbazqux
Eggplant. Or blue.

------
graycat
The article concentrates on the fact that current artificial intelligence (AI)
software doesn't really _understand_ the _meaning_ in natural language. Right.

So, then, maybe one approach to AI is via natural language -- program a
computer to understand natural language, just typed input should be sufficient
initially.

How to do that? My two kitty cats understand some natural language, and I have
to suspect that I roughly understand how they did that and how I could program
some of it.

Human babies learn natural language, and we have to suspect that the effort is
a _bootstrap_ where they learn some really simple things -- e.g., "Ma Ma" and
"Da Da" -- and then build on those. "Nice". "Bad". "Ma Ma nice." "Da Da bad".
"Food". "Hungry". "Hungry want food.".

"When hungry and want food, go to master, reach up with front paw and use
claws to pull on shirt but don't pull on skin." -- my kitty cats both already
figured this out either independently or learned from each other.

"I can see." "Can he see me?". Kitty cats know that very well, and if they
want to scratch or bite (one cat long ago, just rescued), they know to wait until
the target can't see the claw or mouth about to bite.

So, to come in from the back porch, they wait until there is noise indicating
that I'm at the kitchen sink and take a position on the porch where they can be
seen -- then I will let them back in.

Then build on such simple things.

That's what I thought long ago. Once I asked DARPA about it, and they had no
response.

The author of the OP has another article on how birds and babies learn to
understand language. So, maybe more than one person is thinking along those
lines.

Doing it first with just text input should show the core problems and be
sufficient.

One problem: Kitty cats have great internal 3-D geometry. E.g., if the mouse
runs clockwise around a packing box, then the cat can be smart enough to run
counterclockwise. So, the cat understands the 3-D box and paths in space.
They're not stupid you know! How to program that? Hmm ...!

~~~
hackinthebochs
How did you "ask DARPA"?

~~~
graycat
Found an appropriate DARPA problem sponsor and sent him e-mail.

------
jlhawn
In the first lecture of the Artificial Intelligence course at UC Berkeley, one
slide says: "A better title for this course would be: 'Computational
Rationality'"

------
timedoctor
I think the ultimate test (and application) for artificial intelligence will
be to tell the computer "Make me money" and then it figures out thousands of
ways to do it and starts executing on these strategies. Potentially very
dangerous outcomes, however, without morality.

Or another initial application of artificial intelligence I think is in
trading financial markets, and distilling every point of data to create models
for predicting markets and making obscene amounts of money.

~~~
kowdermeister
And an intelligent answer would be "go find a job" :) Or give you a
motivational quote from Tumblr :)

------
abecedarius
Levesque works in knowledge representation, and all the sample questions hinge
on some shallow inference from default knowledge. It's understandable to think
your own field of research is central -- and it was, in the 80s. What makes it
especially productive to focus on now? That's what I'd like to have seen
covered here.

The Turing test seems a red herring, since afaik it's not a big part of
research evaluation currently.

------
SeanLuke
This whole article, and [I think] Levesque's IJCAI article, seem to think
that artificial intelligence is just about language.

~~~
hackinthebochs
It's not that it's just about language, but I do think that a system that could
pass the Turing test would necessarily be AGI. To put it another way, to fully
solve the NLP problem one would have to have created AGI. To pass the Turing
test a system would need to simulate or acquire the knowledge of an entire
life of experiences of a person and be able to convincingly converse about
those experiences with an actual person. This would undoubtedly be AGI.

~~~
SeanLuke
1\. I think "AGI" is a ridiculous recent rebranding of "Hard AI" and doesn't
represent any meaningful scientific or engineering pursuit.

2. Even if one accepted that NLP is AI-complete: this is a sufficient
condition and not a necessary condition. But the claim being made in the
articles is that it is a necessary condition: that is, that other AI
disciplines are distractions because they do not lead to the AI grail that NLP
supposedly leads to. This is classic GOFAI hogwash.

~~~
SeanLuke
Hard AI -> Strong AI. Brain not firing on all four cylinders.

------
segmondy
Anyone interested enough in building an AGI (Artificial General Intelligence)
system? I've always been curious about this field, but never done anything
about it or have any experience. Nevertheless, if there are any hackers who
are into AI, curious, or want to build such systems, contact me; let's explore
the possibilities.

------
bsaul
isn't that criticism the exact point chomsky was trying to make in his proxy
discussion with norvig from google? i found the example given in the article
excellent though. it's another reason why computers won't be able to fix
grammar mistakes 100% correctly (at least in Latin languages).

~~~
anaphor
Chomsky doesn't think we should be trying to observe how people use language
on a mass scale, so screw him imo. See:
[http://languagelog.ldc.upenn.edu/nll/?p=3180](http://languagelog.ldc.upenn.edu/nll/?p=3180)
"The other argument has to do with the methods of science: Chomsky argues for
"very intricate experiments that are radically abstracted from natural
conditions". "

------
eli_gottlieb
What, no actual experts are going to comment in here? I was looking forward to
the inevitable flamewars.

------
dnautics
any thoughts here on the parallel terraced scan, as implemented by Hofstadter
in "fluid concepts and creative analogies"?

------
techaddict009
So who is going to make a real AI algo which is Artificial by nature and
Intelligent in how it works?

------
iUnderstand
I have thought about this long and hard. There was a time in my life when I
thought I could create something that had intelligence. Here are some of my
thoughts.

Intelligence isn't something that is instant; it is something that is acquired.
To acquire intelligence, the being must have an environment in which to
interact and learn. My idea was to mimic the way that animals and plants become
adaptive to their environment. They all can die, and only the strong survive.
So, each being, or thread, would have the ability to die. Well, what happens if
they all die? Then the experiment would be over, and that wouldn't be fun,
would it? So, threads would also have to have the ability to reproduce. And
what good would it be if all the threads were the same? They must be able to
mutate. What about intelligence? Forget intelligence; it just represents the
ability to survive. All the threads that can't survive will die off anyway, so
what we are left with is a pool of survivors.

Uh oh. The environment exists in a virtual realm, so my threads won't be very
beneficial in the real-world realm that you and I live in. We need eyes and
ears into the real world. OK, no problem, we will purchase cameras to capture
light, and microphones for sound. Now we have a money issue. I may be able to
afford a couple hundred microcontrollers, microphones, and cameras. Not bad for
a little experiment, but wait: "Scientists estimate that there are one
quadrillion (1,000,000,000,000,000) ants living on the earth at any given
time," according to hypertextbook.com
[http://hypertextbook.com/facts/2003/AlisonOngvorapong.shtml](http://hypertextbook.com/facts/2003/AlisonOngvorapong.shtml).
How can my intelligence be smarter than an ant if there are this many ants, who
are trying to survive and have the ability to reproduce and mutate? I don't
think it can.

I began thinking about this more and more. We would have to speed up the
evolution of our intelligent being. What are the different things we could do
to speed it up? Perhaps we could create more mutations, but not too many; we
don't want our bot to become extinct. Oh crap, how is it going to reproduce? I
totally forgot about that. Each of our species would have to have an
electronics factory built inside it.

I know the solution: don't try to create your own intelligence. You can create
as much ARTIFICIAL intelligence like search engines as you want, but this is
not intelligence in any way, shape or form. It looks like intelligence, but it
is not. To achieve something truly more intelligent than a human, you will need
to take what nature has already created and mutated, and then provide
environment enhancements (schools?). There, I said it. Schools and all these
GNC drugs that are supposed to make your brain function at an increasing rate.
I am afraid we are already trying to become more intelligent every single day
in the world. And yes, it is working, but we can only work as fast as nature
allows us.

On a closing note, computers will never be able to think like a human, because
they cannot reproduce. Nature is much more efficient.

~~~
Houshalter
What you describe is exactly the same as genetic algorithms (also genetic
programming) or artificial life (Alife.) If you are interested there is a lot
of information about these things available on the internet and plenty of
implementations you can experiment with. And it does work to do amazing
things. The problem is evolution is slow. Incredibly slow. It just tries
random things, and it takes a while before it finds something that actually works.
You say that nature is more efficient, but it's not. Not at all. It's simply
had way more time than humans, and it has population sizes in the billions
(way more for some species.)
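
A minimal genetic-algorithm sketch in Python (the target, population size and mutation rate are arbitrary choices of mine) showing the try-random-things loop described above: mutation plus selection slowly climbs toward a target bitstring.

    import random

    TARGET = [1] * 20                      # toy goal: the all-ones bitstring

    def fitness(genome):                   # how many bits match the target
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.05):         # flip each bit with small probability
        return [1 - g if random.random() < rate else g for g in genome]

    random.seed(0)
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

    for generation in range(200):
        population.sort(key=fitness, reverse=True)          # selection pressure
        if fitness(population[0]) == len(TARGET):
            break
        parents = population[:10]                           # only the fittest breed
        population = [mutate(random.choice(parents)) for _ in range(50)]

    best = max(population, key=fitness)
    print(generation, fitness(best))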

But with the same amount of resources, human engineers are way better at
designing things than evolution. If you give humans enough time, we can figure
out how to make intelligence, and we can probably do a way better job than
nature. At worst we simply need to reverse engineer how nature did it, at best
we find an even better way.

