
The Chinese Room Argument - fiaz
http://plato.stanford.edu/entries/chinese-room/
======
mindstab
Seriously, why won't this twenty-plus-year-old argument die? It's sophomoric
and gets picked apart in second-year cognitive systems classes by
19-year-olds. It glaringly misunderstands and misrepresents the field. If
people are genuinely interested in AI, they should take a basic course in it
at university or pick up a real book on the subject.

~~~
Esspe
In my opinion, this argument has existential consequences: either it's not
possible to emulate a mind on a Turing machine, or we don't have qualia and
consciousness (it's just an illusion that we have them). And I don't like the
latter.

~~~
camccann
_we don't have qualia and consciousness (it's just an illusion that we have
them)_

An illusion experienced by whom?

What's the difference between having an illusion of consciousness and having
the real thing?

~~~
jerf
"What's the difference between having an illusion of consciousness and having
the real thing?"

Primarily, the word "illusion".

At this point, having read up on these issues for a while, anytime I see the
word "illusion" now I tend to just shut the book/leave the webpage/whatever.
Have you ever, once, seen someone sit down and say what they _mean_ by
"illusion", or give you a way to distinguish "illusion" from "reality"? Maybe
you have (no sarcasm intended), but I've never seen it, and without that the
word basically just marks someone who is trying to sound insightful without
doing the hard work of actually being insightful, of saying something that
doesn't leave an enormous linguistic void right in the center of the
argument.

~~~
DrJokepu
Whether consciousness is only an illusion is of no consequence. This line of
thought (just like Berkeley's subjective idealism, or its modern form, the
Matrix films) leads nowhere (besides madness), hence it can be classified as
solipsism.

In my opinion, what matters is that you experience consciousness, life,
happiness, or sadness. Have great meals, love the people who love you, do
cool stuff as a 'hacker'. Your life will be as 'real' as it can get, whether
it's an illusion or not.

------
pvg
Richard Gabriel has a nice write-up on this, well worth reading especially if
you're in the sputtering 'zomg this is SO dumb' camp. It's also a much quicker
read than the entirety of the Stanford page.

<http://www.dreamsongs.com/Searle.html>

"Searle's argument is subtle in a way that seems to confuse intelligent
readers."

~~~
Herring
> _Most of the critics of Searle's argument fail to respond directly to his
> points. This is actually not all that surprising in that most of the critics
> are scientists and Searle's argument is philosophical, but it means that
> doing a point by point critique of the argument is unlikely to be very
> useful._

Well now, that says something about philosophy's relevance to science.

> _Many people consider the concept of gender inseparable from its fleshly and
> biological origin and nature._

And many people don't. The argument is just philosophical, what's the big
deal?

------
camccann
I've never been able to figure out how this (and most other philosophical
arguments against AI) doesn't apply just as well to individual neurons.
Though I suppose if one wants to frame it as an argument that neither
computers nor human brains are capable of intelligence, I might be persuaded.

In fact, for sake of argument, I claim that I am, in fact, just an elaborate
system of symbolic manipulation with no actual comprehension or conscious
experience, a bunch of meaningless neural impulses with no greater
understanding of English than the "Chinese Room" understands Chinese; and I
invite anyone to attempt to persuade me otherwise.

~~~
benpbenp
I think the fact that the argument is equally applicable to the human brain
gets at the heart of the general problem of consciousness, of which AI is just
a special case. Consciousness is _not_ an observable phenomenon (excepting
one's experience of one's own consciousness), so it seems unlikely to me that
we can ever understand it in terms of observable phenomena-- i.e. as a
function/result of things like neurons or transistors.

------
tlb
I think you could make the same argument for lots of things computers do.
Could Grand Theft Auto 4, including the character AI and per-pixel 3D
rendering, be implemented by people following instructions on cards? Yes,
because Turing machines yadda yadda. But it's inconceivable to non-
programmers. The Chinese room argument is convincing for the same reason:
doing something AI-like requires billions of steps and non-programmers can't
imagine building up something that complex from primitive operations.
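That rule-following picture can be made concrete with a toy sketch (my own
illustration, not Searle's actual setup, and the rules here are invented):
a lookup-table "room" whose operator matches incoming symbol strings against
a rule book and copies out the listed response, with no interpretation of the
symbols anywhere in the process.

```python
# Toy "Chinese Room": the operator mechanically matches input symbols against
# a rule book and emits the listed response. Nothing in the process
# interprets the symbols; a real conversational rule book would of course
# need to be astronomically larger.
RULE_BOOK = {
    "你好": "你好！",            # a greeting maps to a greeting
    "你会说中文吗？": "会。",     # "Do you speak Chinese?" maps to "Yes."
}

def operate_room(symbols: str) -> str:
    """Return the rule book's response for an input string.

    The operator never understands the symbols; unknown inputs get a
    canned fallback ("Please say that again").
    """
    return RULE_BOOK.get(symbols, "请再说一遍。")

print(operate_room("你好"))
```

The entire "conversation" reduces to table lookups, which is exactly the kind
of primitive operation a Turing machine (or a person with cards) can carry
out; the question the thought experiment trades on is whether scaling this up
by billions of steps changes anything.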

~~~
lg
Well, it's inconceivable to Searle.

Searle's problem is that, much as we can't imagine how silicon can be
conscious, we can't imagine how squishy physical brains are capable of
conscious experiences either, even though we know they are, because we are.
His solution is that biological stuff just has these special properties,
that's a "brute fact" about biological stuff and it's not true of silicon
stuff. And I think most philosophers of mind (who are, by and large,
nonprogrammers) think that's too 'magical' to be true.

In fact a very popular idea now in philosophy is that consciousness is (in
some sense) a fundamental physical property, and when physical things are
arranged in the right way, they become conscious like us or any other animal.
And the proper arrangement of particles to invoke these high-level conscious
properties must help organisms survive, or else it'd be unlikely that we have
them. And so machines structured the right way can probably be conscious too.

------
rms
From my reading over the last few months, I think the unanswered question of
subjective experience/qualia comes down to the Born Probabilities.
<http://lesswrong.com/lw/py/the_born_probabilities/>

~~~
lg
Thinking out loud here I guess, but I'd like to hear Eliezer's opinion on the
interpretations of QM that do reconcile conscious experience with these
probabilities in a way that doesn't require splitting worlds, like these:
<http://arxiv.org/pdf/quant-ph/0603027>

------
jeffcoat
I don't understand why people are willing to even accept the premise. The
system as described (a book of instructions for manipulating Chinese symbols)
can't usefully answer the question "what time is it?"; why should I believe it
could carry on a lucid-but-very-slow conversation in Chinese?

Imagine that also in the room is a triangle with four sides. Now the Chinese
Room Argument disproves AI and geometry! What subject do you want to demolish
next?

~~~
neilk
I think this is the only real objection to Searle.

People get handwavy in these thought experiments, assuming that such a system
can just be built. Well, no, we can't build one, not even in theory. It's
trivial to show that a Chinese Room that could converse for more than 15
minutes would have to contain more elementary particles than the entire
planet.

There is an analogy here with Chomsky's linguistics. You can disagree with the
specifics of his theory, but he did show that there was no way that people
were learning sentences like parrots and repeating them. There had to be some
computational / grammatical process.

Anybody can imagine mechanical alternatives for doing any information
processing. To an extent, that drives the whole of the programming industry.
The whole question of intelligence is that we can do it with a few pounds of
biological material. Anything that doesn't address this is missing the point.

~~~
afterz
" It's trivial to show that a Chinese Room that could converse for more than
15 minutes would have to contain more elementary particles than the entire
planet."

I don't believe it, can you show it?

~~~
DougBTX
Since you could fit someone who speaks Chinese in the room... assuming people
are made of elementary particles.

~~~
afterz
I still don't believe you. So the numbers are made up? "15 minutes" -> "entire
planet"? How much for 10 minutes? And for 1 minute? Is that obvious too?

So you think it's very obvious that it's impossible to build a chatbot that
speaks Chinese for 15 minutes on a computer like the one I'm using now (even
one that claims to be a child, for example). And why couldn't we just clone a
Chinese person's brain and call it a computer? Why do you think a brain
snapshot would require the size of a planet?

~~~
DougBTX
_I still don't believe you._

Ah, sorry, the "more elementary particles than the entire planet" claim wasn't
made by me. My first comment was too vague, here is another try:

Yes, it sounds like a very strange claim, since biological material is made
from elementary particles, and clearly you could fit someone who speaks
Chinese into the room. Obviously that assumes that people are made of
elementary particles, but if someone is going to argue with that, I'm going
to need a better definition of elementary particles to make any progress.

------
tybris
I have a grudge against written philosophy. The meaning of words is
subjective; it is defined by whatever relations you draw in your brain when
you read or hear a word. This not only makes words quite unimportant, it
turns your mind into a philosophical minefield, one constantly operating
under the assumption that words have some universal meaning. People go
seeking the "intelligence" or "truth" they have in their heads, but they're
constantly redefining it based on new insights obtained in the search. As a
result, all these concepts seem unattainable.

If you take the focus off the word, much of your prejudice disappears. You
see the relationships, the structure, the observations, the logic, the
nuance. You can see that you used two distinct meanings of the word
"intelligence": one is a set of expected reactions, the other is your
consciousness. Now this gives rise to a logical question: are these two
things the same? The Chinese Room shows that's not necessarily the case.
However, might that feeling you call consciousness be a side effect of the
particular type of Chinese Room that's going on in your head? That's how you
get interesting philosophy.

~~~
telemachos
_The meaning of words is subjective._

No. Or at least, not entirely, not in any significant way, and mostly just no.

If language were truly subjective, we should expect almost all attempts at
communication to fail. If language were radically subjective, then two people
speaking to each other in the same language would be for all intents and
purposes like a speaker of (only) English trying to communicate to a speaker
of (only) Chinese. It's manifestly the case that this isn't so. Beyond this,
if language were truly subjective, we wouldn't be able to think (since the
person-to-person communication problem would emerge even for one person).

The real world strikes again.

------
ilaksh
The field Searle is so clumsily attempting to dismiss is now referred to as
artificial general intelligence (AGI), not AI.

Also, philosophers should try to realize that we have this thing called
science now.

