

Artificial Intelligence Could Be on Brink of Passing Turing Test - leejw00t354
http://www.wired.com/wiredscience/2012/04/turing-test-revisited/?utm_source=twitter&utm_medium=socialmedia&utm_campaign=wiredscienceclickthru

======
walexander
The Turing Test was a very clever way of describing an AI without having to
get into dead-end philosophical arguments about what is or isn't intelligence
(ants have very complex social structures and engineering abilities, but are
they intelligent?).

Turing picked something uniquely human and used it as a baseline.
Unfortunately, what we got is machines passing a cargo-cult Turing test, as
described in the Chinese Room experiment.

I think what we all had hoped for, however, was HAL. What we are going to get
is more and more iterations of cleverbot.

~~~
Tuna-Fish
I have never actually understood the "Chinese Room experiment".

As formulated in WP, the point is supposed to be that while the room is
working, the man in it cannot understand Chinese, so similarly the computer
cannot understand it. But isn't this completely pointless, since regardless of
what the man and the computer can or cannot understand, the room or the
program can? The man (or the computer) is just a cog in a larger system, and
cannot be expected to understand the system, just as none of my individual
neurons can understand English.

> I think what we all had hoped for, however, was HAL. What we are going to
> get is more and more iterations of cleverbot.

And what is the difference? When we can build a program that can parse natural
language and use it to access information, it will open a whole new
technological revolution.
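The "parse natural language and use it to access information" idea can be sketched as a toy keyword matcher over a tiny fact base. Everything here (the fact base, the matching rule) is invented for illustration and is nothing like a production NLP system:

```python
# Toy sketch: answer a natural-language question by matching keywords
# against a small hand-built fact base (all entries hypothetical).
FACTS = {
    ("capital", "france"): "Paris",
    ("capital", "japan"): "Tokyo",
}

def answer(question: str) -> str:
    """Return a fact whose keywords all appear in the question."""
    words = set(question.lower().replace("?", "").split())
    for keywords, fact in FACTS.items():
        if set(keywords) <= words:
            return fact
    return "I don't know."
```

Even this trivial matcher "accesses information" from language; the hard part, of course, is everything it papers over.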

~~~
guccimane
Searle calls your objection the "systems reply", and his response can be found
here (2a): <http://www.iep.utm.edu/chineser/#H2>

~~~
bermanoid
Pretty much everything Searle has ever written on the subject can be predicted
by starting with the argument "but only humans can understand meaning!" and
working from there.

In part c of that link, he lays it out quite clearly: even if you manage to
build a detailed working simulation of a human brain, even if you then insert
it inside a human's head and hook it up in all the right ways, you still
haven't "done the job", because a _mere simulation_ of a brain can't have
mental states or understand meaning. Because it's not a human brain.

In other words, he's an idiot. Or at least he's so committed to being "right"
on this issue that he's willing to play the dirty philosophy game of sneakily
redefining words behind the scenes until he's right by default.

But in any case, he's not talking about any practical or measurable effect or
difficulty related to AI. He's arguing that even if you built HAL, he wouldn't
acknowledge it as intelligent, because his definition of "intelligent" can
only be applied to humans.

~~~
gambler
Is it Searle who redefines consciousness because he doesn't like computers,
or is it you, because you like them? His argument is quite brilliant, because
it's both _clear_ and _non-trivial_. Most of the self-appointed internet
philosophers lack both of these qualities.

For example, people who say that there is no difference between understanding
addition and merely running an addition algorithm are wrong. Dead wrong. You
don't need complex philosophy to show that. Yes, the results of computations
would be the same, but the consequences for the one doing computing are not.
We all know that a person who _understands_ something can do much more with it
than a person who merely memorized a process. Everybody agrees to this when it
comes to education, so why is this principle suddenly reversed when it comes
to computers?

~~~
zerostar07
_Most of the self-appointed internet philosophers lack both of these
qualities_ : What use is attacking the man here?

You are also misrepresenting Searle's argument. In the case of addition, the
machine would not only be able to perform it, but also _answer any conceivable
question_ that regards the abstract operation of addition. It would be able to
do everything a human would do, excluding nothing. The underlying argument is
that "understanding" is a fundamentally and exclusively human property (this
will not be fully rebutted until we discover in full the processes underlying
learning and memory in humans).

Granted, a huge list of syntactic rules will probably not result in any useful
intelligence, but a brain simulator would be exactly equivalent to a human
(and Searle's response to that argument is completely unfounded).

~~~
gambler
I don't think I misrepresent his argument. I just interpret it using different
examples. He uses a huge example, like speaking Chinese, which seems to
confuse a lot of people. I use something much simpler, like doing addition.

His argument is based on the notion that doing something and understanding
what you do are two different things. I don't see why this needs an elaborate
thought-experiment when we all have experienced doing things without
understanding them. We don't need to compare humans to computers to see the
difference.

Problem is, this difference becomes apparent only when you go beyond the scope
of the original activity/algorithm. And that's exactly where modern AI
programs fail, badly. You take a sophisticated algorithm that does wonders in
one domain, throw it into a vastly different domain, and it starts to fail,
miserably, even though that second domain might be very simple.

~~~
zerostar07
His argument is that, while a human can do something with or without
understanding it (e.g Memorizing), a machine can only do the former and will
never do the latter. The argument may hold for the current (simplistic) AI,
but not for a future full brain simulator.

------
tomelders
One thing I've always wondered about Turing tests: wouldn't AIs need to lie a
hell of a lot in order to pass?

For example, if I asked someone to tell me the capital city of every country
in the world, I'd be very surprised if they could. However, a half decent AI
could do this easily. But if I pushed it further and started to ask really
complex maths questions (something computers are much better at than humans),
it would become clear very quickly that I'm talking to a machine.

Also, humans have holes in their knowledge. For example, given the question
"Who is the prime minister of the Netherlands?" the answer for most people is
going to be, "I don't know". Or what about "Which team won the first ever FA
Cup?". Despite not knowing the answer (Wanderers, who beat the Royal
Engineers in the 1872 final), most people would hazard a guess (Manchester
United, Liverpool) and be wrong.

Programming an AI to play dumb would be relatively easy. But what use is an AI
that lies? Passing the test may well be possible, but what use is artificial
intelligence that pretends to be as dumb as humans?
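For what it's worth, "playing dumb" really is cheap to fake. A toy sketch, with all probabilities and behaviors invented purely for illustration, of degrading a machine-perfect answer so it reads more like a human's:

```python
import random
import time

def humanlike_answer(true_answer: str) -> str:
    """Degrade a perfect answer so it looks human (toy sketch)."""
    # Humans hesitate; an instant answer gives the machine away.
    # (Kept to milliseconds here; a real impostor would wait seconds.)
    time.sleep(random.uniform(0.01, 0.05))

    # Humans have holes in their knowledge: sometimes just shrug.
    if random.random() < 0.3:
        return "I don't know."

    # Humans make arithmetic slips: occasionally return a near-miss.
    if true_answer.isdigit() and random.random() < 0.2:
        return str(int(true_answer) + random.choice([-1, 1]))

    return true_answer
```

Trivial to write, which is exactly the commenter's point: the deception is easy, it's just unclear why you'd want it.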

~~~
drcube
You assume that any AI worth the label will already be as capable as current
PCs.

But perhaps there is a tradeoff? Maybe becoming "intelligent" in the way we
understand it is incompatible with the "dumb calculator/encyclopedia"
capabilities of regular computers? Maybe true AI will _necessarily_ lose the
ability to look anything up instantly or calculate large columns of numbers?

I don't really believe that, but it is a possibility.

~~~
tomelders
I've sort of had the same idea. I've wondered whether the power required for a
robot to process everything it needs to in order to move around and interact
with the world, responding to all the different stimuli (optical, audio,
kinetic), would leave many CPU cycles for doing the superhuman things we're
used to computers doing.

------
invalidOrTaken
I find it hilarious that _half an hour_ after this is posted here, there are
no comments. Because, you're right, O HN Reader: they're not even close. But
you already knew that without reading TFA, didn't you? And just to satisfy
your curiosity, they're talking about Watson-style machines, but then
following this talk with quotes about "huge challenges" remaining, which are
apparently too insignificant to mention in the title.

------
stungeye
Like some others have said, the Turing Test is more a human mimicry test than
a test of intelligence or consciousness. We humans love to anthropomorphize,
so tricking us into believing a machine is human shouldn't be how we gauge the
effectiveness of our AI.

I ran across these "Fundamental Principles of Cognition" that might do a
better job:

Principle 1. Object Identification (Categorization)

Principle 2. Minimal Parsing ("Occam’s Razor")

Principle 3. Object Prediction (Pattern Completion)

Principle 4. Essence Distillation (Analogy Making)

Principle 5. Quantity Estimation and Comparison (Numerosity Perception)

Principle 6. Association-Building by Co-occurrence (Hebbian Learning)

Principle 6½. Temporal Fading of Rarity (Learning by Forgetting)

See: <http://www.foundalis.com/res/poc/PrinciplesOfCognition.htm>

Also, Hofstadter suggests some similar "essential abilities for intelligence".

1\. To respond to situations very flexibly.

2\. To make sense out of ambiguous or contradictory messages.

3\. To recognize the relative importance of different elements of a situation.

4\. To find similarities between situations despite differences which may
separate them.

5\. To draw distinctions between situations despite similarities which may
link them.

------
cs702
Despite its hyped-up title, this Wired news piece contains no real news of
scientific advances. In fact, I can summarize it in one sentence: "with more
and more data coming online, and with sophisticated techniques for collecting,
organizing, and processing all this data, computers might soon be able to pass
the Turing Test."

~~~
ppod
What's the difference between collecting, organizing, and processing data, and
intelligence?

~~~
zoul
The ability to solve problems that fall outside the envelope of your harvested
and processed data?

~~~
ppod
Do you think that you have the ability to solve problems that fall outside the
envelope of your harvested and processed data? I certainly don't think I have
that ability, and I'm human (i swear).

------
debacle
TL;DR: Artificial Intelligence Probably Not on Brink of Passing Turing Test

~~~
cpeterso
But Wired says it _could be_ on brink of passing Turing Test! ;)

------
leejw00t354
Here is one of my favourite articles on the Turing Test.

The method described in this article appears similar in its approach,
"Suppose, for a moment, that all the words you have ever spoken, heard,
written, or read, as well as all the visual scenes and all the sounds you have
ever experienced, were recorded and accessible".

<https://sites.google.com/site/asenselessconversation/>

------
drcube
I think of the Turing test not as an actual experiment that can be performed,
but more of a first crack at a working definition of intelligence.

Sort of like Shannon said "Let's leave 'meaning' to the psychologists, and
define 'information' based on properties inherent in the message itself" and
ended up revolutionizing information theory.

Turing is saying "Stop bickering over 'comprehension' and 'intent'. Can we
just agree that if a machine can fool an intelligent human being into thinking
it is also an intelligent human being - based only on its information output
rather than its physical shape - that machine deserves the label
'intelligent'?"

And I agree. Philosophers can argue about the internal state of that mind all
they want. But if I can converse and crack jokes with my new computer buddy, I
have no qualms about calling him intelligent. At least until he blue screens
and finally fails the test by spitting out a hex dump.

------
jakeonthemove
We already have a sort of AI that can be used for real world applications -
it's called the Internet. Using Google, Facebook, Wikipedia and dozens of
other sites, it's relatively easy to create a robot that can do quite a lot of
things - the problem is that creating the actual physical body of the robot is
expensive - humans are cheaper and still do everything better.

We don't even need AI, we need robots for specific tasks that would also be
programmed to work around any potential issues (most of which can be
identified and programmed if the field of application is narrow enough) -
making them create workarounds/solutions for new problems would be awesome and
all, but it's not necessary, IMO.

------
nextparadigms
A few months ago there was a rumor about a Google X AI project that passes the
Turing Test 93% of the time in an IM conversation:

<http://www.webpronews.com/is-google-x-all-about-highly-intelligent-robots-2011-12>

~~~
tomerv
Wouldn't passing the Turing Test 93% of the time mean that the machine does a
better job of pretending to be human than an actual human? I'd expect 50%
success to be the target.

~~~
batista
It could just mean the result number is readjusted against a baseline
figure.
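One plausible reading of "readjusted against a baseline" (purely a guess at the arithmetic; the article says nothing about how the 93% was computed): normalize the machine's judged-human rate by the rate at which actual humans are judged human, so that matching the human baseline scores as 1.0:

```python
def adjusted_pass_rate(machine_rate: float, human_baseline: float) -> float:
    """Express the machine's 'judged human' rate as a fraction of the
    rate at which real humans are judged human (hypothetical formula)."""
    return machine_rate / human_baseline

# If judges call real humans "human" only 93% of the time, a machine
# also judged human 93% of the time scores a perfect 1.0 on this scale.
```

Under that reading, "93%" would not mean "better than human", just "93% of the way to the human baseline" on some raw scale.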

------
tambourine_man
The contemporary approach: no new theoretical breakthrough? Pick an old model
and throw more data at it.

------
bfrs
If Siri 5.0 or Cyc 1000000.0 passes the Turing test, it will be the same story
all over again... the goal posts will be moved one more time. People will say
that passing the Turing test is not really a mark of true intelligence, it's
just an imitation!

------
bandy
Ah, another rehash of the "AI springs forth from complexity" argument. This is
entertainingly laid out in "The Moon is a Harsh Mistress", "Colossus: The
Forbin Project", "The Adolescence of P1", "Man Plus", etc.

------
nsomaru
Such a load of speculation, psssht. Why is this on the front page?

------
parsnips
As soon as Deep Blue was mentioned as AI, I closed that browser tab. Another
"journalist" trying to justify a paycheck.

------
aashu_dwivedi
Well, I remember a post from a few months back saying artificial intelligence
probably needs a reboot.

~~~
leejw00t354
What exactly does it mean to reboot a field of study?

Take a step back and try to find new, possibly easier approaches?

~~~
excuse-me
It's generally instigated as an alternative scenario to merely buzzword
compliant ongoing forward dynamic enhancement using synergies between
interdisciplinary field approaches.

See: all you need is perl, rand and /usr/dict to create an AI that, while it
can't pass as human, can at least obtain a research grant
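The joke is real enough to run. A sketch in Python rather than the perl the comment names, with a short inline word list standing in for /usr/dict:

```python
import random

# A few grant-proposal-grade buzzwords, standing in for /usr/dict.
WORDS = ["synergy", "paradigm", "interdisciplinary", "dynamic",
         "enhancement", "scalable", "emergent", "framework"]

def grant_proposal(n_words: int = 6) -> str:
    """Chain random dictionary words into something that sounds fundable."""
    sentence = " ".join(random.choice(WORDS) for _ in range(n_words))
    return sentence.capitalize() + "."
```

Swap in the full system dictionary and you have, as promised, an AI that can't pass as human but might pass peer review.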

------
drallison
We are moving into a post-literate world. "Artificial Intelligence" is a field
of endeavor, not an entity that can try to pass a test. The headline is not as
bad as movie stars and news anchors using objective pronouns as the subject of
a sentence, but it still shows that the language is changing, and not for the
better, IMHO.

------
lectrick
This article is _wildly_ speculative. :/

------
iRobot
One of these posts was from an AI. Can you tell which one?

