
The Chinese Room Argument - danso
http://plato.stanford.edu/entries/chinese-room/
======
ThomPete
The Chinese Room Argument is everything that is wrong with analytical
philosophy. The fact that it keeps popping up is depressing.

Of course the man in the room doesn't know Chinese; neither do your
individual neurons. If anything is conscious here, it's the entire room.

The real answer of course is we don't know.

~~~
hyperpape
I'm baffled... you give a capsule version of the argument that many, many
philosophers have given, minus all the details that are necessary to have a
theory, and then conclude that this is the problem with analytic philosophy?

The vast majority of analytic philosophers think the Chinese Room argument is
unsound. Some think it is obviously and utterly unsound. Others think it is
subtly unsound.

~~~
ThomPete
Searle is an analytical philosopher, and analytical philosophers have a
tendency toward circular reasoning from simplified premises because of their
reliance on clarity in language.

There are, in my view, exceptions, but mostly analytical philosophy relies
too much on clear definitions of language rather than investigating the
premises of those definitions and accepting that the line is always fuzzy.

This often leads them down paths that are rational from the standpoint of
their premise, but with a simplified premise. (The man in the room isn't
conscious of Chinese, therefore...)

Kuhn, Lakatos, and Feyerabend are welcome exceptions, although Lakatos and
Feyerabend perhaps went too far.

There are many good and useful things about analytical philosophy; the
Chinese Room argument is just an example of what isn't so good about it.

The real answer to the Chinese Room argument is simply: we don't know. But to
most analytical philosophers this is heresy. Their reliance on language to
provide us with truth is too strong.

~~~
jsnk
Sweeping generalizations, a lack of rational arguments against the
philosophical claims, and concern instead with the appearance and fashion of
an argument. You sure sound like you enjoy reading Derrida </sarcasm>

~~~
ThomPete
I enjoy reading philosophers obsessed not with truth but with perspective.

Sure I am generalizing. This is a comment in a thread not a philosophical
paper.

That does not mean my generalizations are wrong.

You are welcome to disagree with me about either the Chinese room or my
generalization about analytical philosophers. But why not then argue against
what I say instead of doing what you accuse me of?

------
Smaug123
The LessWrong counterargument doesn't seem to be represented on that page.
Broadly speaking, it asks the question, "where did the giant lookup table come
from?"

The answer to that question is, of course, "something conscious put it there",
so the Chinese Room isn't itself conscious but is something more akin to the
diary of a conscious person.

For a proper explanation of the argument, including replies to some obvious
counterarguments, see Book IV ("Mere Reality") of _Rationality: from AI to
Zombies_ [1], which may be downloaded free of charge. Specifically, Part R
("Physicalism 201"), in the neighbourhood of essay 224 ("GAZP vs GLUT"),
although of course there is a reason that the book is presented with Book IV
as the fourth book rather than the first, and Essay 224 as the 224th essay
rather than the first.

[1]: [https://intelligence.org/rationality-ai-zombies/](https://intelligence.org/rationality-ai-zombies/)

~~~
erostrate
Isn't this counterargument similar to the reply labelled (1) in part 4?

Quoting the text:

(1) Some critics concede that the man in the room doesn't understand Chinese,
but hold that nevertheless running the program may create something that
understands Chinese. These critics object to the inference from the claim that
the man in the room does not understand Chinese to the conclusion that no
understanding has been created. There might be understanding by a larger, or
different, entity. This is the strategy of The Systems Reply and the Virtual
Mind Reply. These replies hold that the output of the room reflects
understanding of Chinese, but the understanding is not that of the room's
operator.

~~~
danbruc
I don't think they are the same, and the idea mentioned by Smaug123 is not a
refutation at all. A native Chinese speaker is also made and prepared by his
parents and teachers and so on to understand Chinese, but if you placed him
in the room you would certainly not attribute his capability to understand
the questions and reply to them to his parents and teachers; they passed this
capability on to him and are no longer relevant. The same holds for the rule
book: its creator codified his understanding of Chinese and the world in it,
but he himself is no longer relevant. He is certainly not part of an entity
that does the understanding in this setup, because he has no knowledge of the
questions passed into the room and might as well have been dead for a long
time.

~~~
Smaug123
> you would certainly not attribute its capability to understand the questions
> and reply to them to its parents and teachers

Really? Try doing that with a native English non-Chinese-speaker, and you'll
see why perhaps you should consider doing so :)

~~~
danbruc
I am not sure which exact scenario you are suggesting, but if someone taught
me to understand Chinese, he would of course be part of the reason why I can
understand Chinese. He would not, however, be involved in the process of
understanding a question handed to me, because I obviously retain this
ability even after my teacher's death.

~~~
TeMPOraL
And so the book-person system retains the ability that was put in place by
whoever made the book. I think this counterargument to the Chinese Room
should be easy for programmers to understand - the book is just the "Chinese
language understanding" part of the brain/knowledge refactored out. The rules
of this thought experiment are vague enough to allow a rulebook _that_
complicated.

~~~
danbruc
_[...] the book is just the "Chinese language understanding" part of the
brain/knowledge refactored out._

Actually, the rule book is the entire brain, not only the part understanding
Chinese. Decoding the Chinese symbols is the easiest part; after that you
have to understand the meaning of the question, find the answer to it in
your model of the world, and finally spell out the answer in Chinese again.

The focus on the Chinese language is something of a red herring and obscures
the amount of information about, and understanding of, the world that has to
be encoded in the rule book to allow answering arbitrary questions. That the
questions are, and the answers have to be, in Chinese is really not much more
than a minor twist, a minor annoyance. The whole setup would not lose much if
the questions, answers, and rule book were all in the first language of the
operator, because his ability to understand the questions and think about
them is absolutely irrelevant if he blindly follows the rule book.
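
To make that concrete in programmer terms, here is a minimal sketch assuming
a toy lookup-style rule book (a real one would have to encode an entire world
model, not two hypothetical entries). The operator never interprets the
symbols he matches, so nothing about his role changes if the book is in his
own language:

    # Toy rule book; these entries are hypothetical stand-ins for an
    # enormous table that would have to encode a whole model of the world.
    RULES = {
        "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "天空是什么颜色?": "蓝色。",      # "What colour is the sky?" -> "Blue."
    }

    def operator(symbols: str) -> str:
        # The operator only matches shapes and copies out the prescribed
        # reply; whether he can read them plays no part in the computation.
        return RULES.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."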

------
stickfigure
I listened to an audiotape of Searle's Chinese Room lecture some years ago.
It was excruciating. The obvious retort is _the room speaks Chinese_ (the
"Systems Reply"), which Searle rejects with a weak aesthetic argument. Searle
doesn't like the idea, but that doesn't make it invalid.

If a strong AI ever emerges, I expect that future generations will look on
this argument the way we today look at phrenology and the ugly history of
attempts to justify slavery as some sort of natural order. ST:TNG's Mr. Data,
who spends every other episode agonizing over wanting to be human, is equally
offensive. How arrogant are we as a species to assume that every thinking
thing in the universe would want to be us?

~~~
ThomPete
Exactly.

I have always found it ironic that most scientifically minded people accept
the idea that out of inanimate matter came single cells, then multicellular
organisms, then simple plant and animal life, then more complex life, and in
the end us.

But when it comes to the idea that we might not be the end, that it's
possible for something to be more than us, they start arguing like religious
people did against Darwin's theories.

We are special, but there is nothing whatsoever that says we won't be
superseded. In fact, if genes are somehow carriers of information, it looks
like technology would be a much better carrier. And if you, like me, believe
technology is a part of nature, then it's obvious where we are heading.

But for now we just don't know, and that's OK. We will figure it out with
time. The Chinese Room, however, brings nothing constructive or conclusive to
this debate.

~~~
stickfigure
Using the word _superseded_ feels like the other side of the same judgement
trap. We don't know what a strong AI would be like. _Better_? Hard to define,
let alone predict.

~~~
ThomPete
It doesn't matter what it would be like. What matters is that, if it does
indeed appear, it's going to be stronger than us in many ways, no matter how
we turn that around.

------
ZeroFries
I was never really convinced by the counterargument "the combination of the
man plus the rules understands Chinese, not either individually". To
understand something is to have a conscious experience of the feeling of
understanding. I'm sure most people would agree that no conscious experience
of understanding would emerge from the combination of man + rule set. Many
people involved with computers like to discount qualia, but it is real and
part of the thing we call consciousness.

Furthermore, on a more classical computational, non-dualist level, one needs
not only to know the rule set but generally also to have some meta-knowledge
of the rules themselves before one can say one understands a thing. I can
program, but I can also explain why I'm writing the program I am as I'm doing
it, how the different parts will interact to produce the whole, etc. I have
some knowledge of my own inner black box, something the Chinese Room setup
lacks.

~~~
ekidd
_I'm sure most people would agree no conscious experience of understanding
would emerge from the combination of man + rule set._

There's a subtle dishonesty in the Chinese room experiment, because it's
asking you to imagine a system which is many orders of magnitude too small to
have an intelligent conversation in Chinese (or in any other language).

Using some back-of-the-envelope numbers from Moravec, based on his research
into visual algorithms that duplicate the human retina, the absolute _minimum_
number of operations to do what the brain does would appear to be around 10^15
operations per second (and possibly much, much higher).

If we assume that man in the room can perform one operation per second, and
that we need to provide at least 10 seconds of normal conversation, that means
the man in the room will need to work non-stop for 310 million years. (If you
said, "No, more like 30 billion," I'd say, "Sure, that's totally plausible."
Or even 30 trillion.)
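
For what it's worth, a quick check of that headline figure (using the 10^15
ops/sec estimate from above; all the inputs are rough assumptions):

    # Back-of-the-envelope check of the figure above.
    ops_per_second = 1e15        # Moravec-style lower bound for the brain
    conversation_seconds = 10    # ten seconds of normal conversation
    total_ops = ops_per_second * conversation_seconds  # 1e16 operations

    seconds_per_year = 3.15e7    # roughly the number of seconds in a year
    print(total_ops / seconds_per_year)  # ~3.2e8, i.e. 300+ million years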

Now, if we assume that the man spends, say, 5 million years working down one
cortical column and up another, modelling complex patterns of visual
recognition, language analysis, vocabulary selection, and so on, then my
intuition no longer tells me whether or not there's an immense, glacial
consciousness playing out inside that room. While we wait for an answer, new
phyla will evolve, entire species will arise and go extinct, and so on. If
we've even slightly underestimated the computation required, the sun will turn
into a red giant and wipe out life on the planet before the Chinese room
responds, "Hey, how are you doing?"

There's a second possibility here: Maybe the Chinese room isn't a very slow
computer. Maybe it's a giant lookup table, containing every possible Chinese
conversation. I'm pretty sure a lookup table isn't conscious. But if we use a
lookup table, then it's going to need to be _unimaginably_ larger than our
entire universe.
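
To put a hedged number on "unimaginably larger": assume, say, a working
vocabulary of 3,000 characters and conversations capped at a mere 100
characters. The table then needs on the order of 3000^100 entries, against
roughly 10^80 atoms in the observable universe:

    import math

    vocab = 3000   # assumed working vocabulary of Chinese characters
    length = 100   # assumed cap on conversation length, in characters

    # Possible conversations ~ vocab ** length; work in log10 so the
    # result stays printable.
    log_entries = length * math.log10(vocab)  # ~347.7 -> ~10^348 entries
    print(f"~10^{log_entries:.0f} table entries vs ~10^80 atoms")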

If Searle wants to appeal to intuition, he needs to make his appeals
realistic. He can't sweep 300+ million years under the rug, for example. Or
hide a lookup table which makes the entire universe look like a speck of dust
in comparison.

------
xiaoma
I've never thought much of this argument since I first heard it over a decade
ago. Kurzweil and Dennett both took it apart pretty convincingly.

------
hyperpape
A slightly more interesting question, in my opinion: is the United States
conscious?

[http://schwitzsplinters.blogspot.com/2011/05/group-consciousness.html](http://schwitzsplinters.blogspot.com/2011/05/group-consciousness.html)

[http://schwitzsplinters.blogspot.com/2011/10/is-united-states-conscious.html](http://schwitzsplinters.blogspot.com/2011/10/is-united-states-conscious.html)

------
tim333
I find things like the Chinese Room Argument quite persuasive for the idea
that computers will be able to think and understand. If the opponents of that
idea have spent decades trying to disprove it, and the Room argument is the
best they can come up with, then there probably aren't any decent
refutations.

~~~
throwupper247
There's a simple argument. Computers are machines and hence passive. They are
programmed, they are run, etc., but whatever they think, I came up with. If
you [read] this, it's logically and grammatically sound to assume that it's
not the display, the CPU, or whatever speaking to you, but me, throwupper247,
a person. We are talking about personification, and that's an issue much
older than AI.

~~~
JoeAltmaier
But isn't the point of AI that it trains itself from input? Not from what you
tell it. I thought of having a baby and my wife agreed; whatever this baby
thinks, we came up with? Clearly not. It learns from the world on its own.

~~~
throwupper247
The NN doesn't train itself; you train it, whereas the baby is first of all
trained by intrinsic stimuli, its consciousness and emotions. Sure, you might
not exactly know how the NN works either, but there's a middle ground between
_deterministic shells driven by higher forces_ and _inexplicable
black-boxes_. ...

~~~
JoeAltmaier
Semantic quibbling. I have to hold my baby's hand at first too. Doesn't matter
who is the conduit for the inputs; the AI is learning from them and not from
me.

~~~
throwupper247
No, you don't always have to; it has emotions all by itself, as you said. Are
you unable to make a connection from _holding its hand_ to _being responsible
for its actions_? Actions include holding hands as well as thinking. What's
in question overall is the degree of activity.

Sure, it's not black and white, but it won't ever be exactly the same for
living beings and machines. As said before, _animism_ is old hat.

------
Retric
Individual neurons don't understand Chinese; does that mean people are
incapable of thought?

~~~
Chathamization
The argument seems to try to deliberately obfuscate the issue. We generally
think of an individual as knowing a language, whereas we wouldn't consider a
non-conscious object (dictionary, translation program) to know a language,
even if it contains much of the information. So we put the two together in
order to produce an output that can only come from the two (the man on his own
is not producing the output in Chinese), and then ask only about the man
(because the idea of a book-man system knowing something or a book knowing
something goes against the way we use the word "know").

It tries to intentionally confuse the reader in an effort to make up for a
very weak argument.

A clearer example would be asking something like: does a company (the system)
know how to make product X when no individual in that company knows how to
make product X (each individual knows only a small part of the production
process)? Of course, a clearer example isn't used because it wouldn't elicit
the desired response.

~~~
TeMPOraL
The company example is awesome! Thanks for bringing it up. It shows the gist
of what's wrong with the Chinese Room Argument, but it doesn't sound so
abstract. That we should switch to considering the man+book system may not be
obvious to some, but I think anyone living in our civilization will
implicitly understand that, e.g., no single person at Boeing has enough
knowledge to build a 787, and yet the company as a whole knows everything
needed to do it.

------
cryoshon
The violent reaction to the Chinese Room argument always startles me. There
are a lot of people who deeply, really hate Searle for coming up with it.

The field has been advanced a lot by responding to the Chinese Room problem!

~~~
lern_too_spel
In what ways? Responding to it appears to me to be a distraction from real
work, just the same as responding to other obvious fallacies.

------
jhallenworld
A small variant of this argument highlights the mystery of our perception of
now:

Does the universe exist in the past and future? Suppose the universe is akin
to a sequential state machine or computer. You have the previous state, some
rules which are executed by the machinery of the universe, and the next
state. In principle you could record all of the sequential states of the
universe - each state is one page of a book, for example. In all of this,
where does the sense of now come from? Is it in the execution of the rules?
Why? Or is it the existence of the state itself that leads to "now"? But that
would imply that one page of the book (the "now" page) is somehow more
special than any of the other pages. Why?

~~~
FeepingCreature
A page is "special" in the sense that a location in memory is special when
it's the current address. It's not the page, it's the relation between the
page and the words on the page that are referring to the page. "Now" is a
state of relation between a mind and a timeslice. This is confusing because
the state is two-dimensional - time is a line from past to future, but each
mind in each point on it contains a model of time that has its own past and
future. The sense of ordering of time that arises in conscious experience is a
side effect of causality - if you think about how brains work, the question
"why is the present the present and not the past or the future" is actually
incoherent, a confusion about mind instead of a confusion about reality. By
definition, we perceive the present as the point at which we are "currently"
introspecting about perception.

Imagine somebody moved the entire universe and all matter in it thirty minutes
into the future. Would you notice any difference?

Brains run on physics. Every story in your mind has, by necessity, a purely
physical narrative behind it.
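
A rough sketch of that relational point (the data layout here is purely
illustrative): no snapshot in the list is globally marked "now"; each one
merely contains a mind whose own model points back at it.

    # Toy model: the universe as a list of snapshots. "Now" is not a
    # property of any snapshot; it is the relation between a mind's model
    # and the snapshot that contains it.
    snapshots = [{"t": t, "mind_believes_now_is": t} for t in range(100)]

    for s in snapshots:
        # From inside any snapshot, "now" always checks out - just as the
        # current address is simply whatever the pointer happens to hold.
        assert s["mind_believes_now_is"] == s["t"]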

------
tomlock
This argument was _kind of_ explored in the movie Ex Machina! If these themes
strike a chord, I'd suggest you go watch it now!

------
rwmj
[https://www.youtube.com/watch?v=4tK8jNVX_4Y](https://www.youtube.com/watch?v=4tK8jNVX_4Y)
(9 mins 36 secs)

For British people of a certain age, this is how the BBC covered the Chinese
Room argument on Horizon back in 1987. I think it's Searle himself being
interviewed.

------
mcguire
Dualism is a funny thing. I'm sure Searle wouldn't agree that he's saying that
he has a tiny homunculus in his head that understands English.

------
Noughmad
The real question here is: what difference does it make? The person in the
room is useful for the people outside.

I like the Dijkstra quote here: "The question of whether machines can think is
about as relevant as the question of whether submarines can swim." No matter
how you define "swimming", the submarine works and is useful.

~~~
tim333
>what difference does it make?

I guess it touches on whether machines will be able to have conscious
experience similar to ours and if so will we be able to achieve immortality
through uploading.

~~~
cryoshon
Will machines be able to have a conscious experience that is similar to ours?
Probably not, as they're working with different starting materials which are
handled in entirely different ways. Still conscious in some ways? Probably
yes. Conscious enough to discuss it with us and iron out the subjective
differences? Maybe eventually.

Will we be able to upload ourselves and maintain our biological
consciousness? No; neuron systems are the only substrate from which we can
currently confirm that consciousness arises - the current problem with saying
machines are conscious still applies.

~~~
danbruc
_No; neuron systems are the only substrate from which we can currently
confirm that consciousness arises [...]_

You are making an arbitrary choice here - why the neurons? Why not the whole
brain? Some structure within the brain? The molecules making up the neurons?

~~~
cryoshon
We know people can have only 50% of a full brain and yet be conscious, and we
also know that destruction of a few tiny pieces of the brain can result in a
loss of consciousness in the sense we mean it here. The commonality is that
neuron systems are required; individual neurons are not enough, nor is an
entire intact animal brain without certain structures.

The line is arbitrary, but it doesn't matter so much. The factual reality is
that neuron systems are what we can currently (currently! this may change)
prove give rise to consciousness, regardless of philosophical attempts at
violence against the science.

~~~
ThomPete
Sorry, but that's actually wrong.

Read this:

[http://www.rifters.com/crawl/?p=6116](http://www.rifters.com/crawl/?p=6116)

~~~
cryoshon
This merely supports my hypothesis that neuron systems are what is needed...
you will notice nobody is claiming that there are conscious people with 0%
brain matter remaining.

~~~
ThomPete
You wrote this:

"We know people can have only 50% of a full brain"

And so the question is how you define a full brain.

Neuron systems are networks. It may well be that a network is what's needed,
not the neurons per se.

------
murbard2
Vitalism for the 21st century...

~~~
fixermark
And the late 20th. The Chinese Room has been around for a while.

~~~
murbard2
Yes, but that didn't have quite the same ring to it...

