
A Short Rebuttal to Searle (1984) [pdf] - headalgorithm
http://ai.stanford.edu/~nilsson/OnlinePubs-Nils/General%20Essays/OtherEssays-Nils/searle.pdf
======
omazurov
Almost 20 years before Searle, a short fictional story by A. Dneprov, "The
Game", was published in the Soviet pop-science magazine "Knowledge is Power".
Not only did it describe the essence of the Chinese room setup, it also came
to the same conclusion:

 _> If you, being structural elements of some logical pattern, had no idea of
what you were doing, then can we really argue about the ‘thoughts’ of
electronic devices made of different parts which are deemed incapable of any
thinking even by the most fervent followers of the electronic-brain concept?
You know all these parts: radio tubes, semiconductors, magnetic matrixes, etc.
I think our game gave us the right answer to the question ‘Can machines
think?’ We’ve proven that even the most perfect simulation of machine thinking
is not the thinking process itself, which is a higher form of motion of living
matter._ [1]

[1] A Russian Chinese Room story antedating Searle’s 1980 discussion
http://www.hardproblem.ru/en/posts/Events/a-russian-chinese-room-story-antedating-searle-s-1980-discussion/

------
mabbo
The last sentence seems to sum up my viewpoint on Searle's Chinese Room
nicely:

> For all I know, Searle may only be behaving as if he were thinking deeply
> about these matters. But, even though I disagree with him, his simulation is
> pretty good, so I’m willing to credit him with real thought.

------
Upvoter33
I've often thought Searle's Chinese room argument was vapid, and have been
unsure why so many seem to hold it in high regard. This article was helpful in
this regard; Searle seems to think that he can presume that there is a set of
rules that allows for translation; without knowing those rules (i.e., the
"program"), the argument becomes uninteresting.

~~~
jimhefferon
> vapid

I'm not sure that is the right word, at least for me.

I teach, and every once in a while a person asks a question that I just do not
understand. It is a bad feeling; I am left standing in front of everyone
saying "Could you repeat it in other words? Could you give another example?"

I have that same feeling on reading Searle's paper, and on hearing him talk
about it on In Our Time. He says a number of things that I agree with. Then he
says, "Therefore obviously computers cannot understand Chinese." I'm left
slackjawed, wondering whether some kind of type error happened, or whether I'm
just too dumb. (Entirely possible, no doubt.)

I don't have the "vapid" sense that I understand the argument and that it is
incorrect; rather, I'm left with a sense of "What just happened?"

~~~
IWeldMelons
He stressed on many occasions that computers could understand Chinese, just
not the typical digital ones programmed as simple symbol manipulators.

~~~
naasking
Sounds like nonsense. There's no meaningful difference between digital
computers and other kinds of physically realisable computers.

~~~
IWeldMelons
Well, the burden of proof is on you. The Church-Turing thesis is not a
theorem; it is just a conjecture. It might be true, it might not. Now, if you
define a physical computer as a Turing machine and nothing else, then yeah, we
humans might as well not be computers at all.

But that was not my point. Digital computers could, in principle, be
conscious, but the idea that you can construct a mind by writing a symbolic
program is flawed. The human brain could, in fact, be a digital computer, but
the mind in it is not the result of some vulgar symbolic computation; it is
the result of some mysterious computational process that very possibly has
nothing to do with manipulations of abstract symbols.

~~~
naasking
> Now, if you define a physical computer as a Turing machine and nothing else
> than yeah, we, humans might as well be not computers at all

We're not even Turing machines. The Bekenstein bound (a finite region of space
can hold only finitely many distinguishable states) establishes that humans
are finite state automata.

> Human brain could be, in fact, a digital computer, but the mind in it is not
> a result of some vulgar symbolic computation, but a result of some
> mysterious computational process, that very possibly has nothing to do with
> manipulations with some abstract symbols.

So a magical computer? What purpose in a brain's computation does the magic
serve, exactly?

~~~
IWeldMelons
The very definition of a "symbol" presupposes that there is some entity which
assigns meaning to that symbol. Well, unless you assume that Nature itself is
conscious, it does not operate in terms of symbols and symbolic computations.
You could claim that elementary particles are Nature's symbols, and that all
physical processes are computations involving such symbols, but that would be
a degenerate case which won't help your argument. Mind is a natural
phenomenon, which arises as the result of a certain physical process, in
matter organized in a certain way. That is all. Until we learn why it happens
in that type of matter, we will not be able to recreate minds.

~~~
naasking
> The very definition of a "symbol" presupposes that there is some entity,
> which assigns meaning to that symbol

No, that's not what's meant at all. Searle is making a distinction between
syntax and semantics, where computers process only syntax (symbols are
meaningless tokens), but humans can reason semantically, where propositions
carry meaning. Both perform symbol processing, so "symbolic computation" is
not a useful distinction here.

Further, Searle is asserting that semantics cannot arise from syntax. But this
is clearly false, as various methods of computational induction demonstrate:
given only syntactic analysis of output bits, you can build a semantic model
to predict subsequent bits.
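The claim can be sketched with a toy example (purely illustrative, not from
the thread): a context-based frequency predictor that sees only a stream of
bit symbols, assigns them no meaning, and still learns to predict what comes
next.

```python
from collections import Counter, defaultdict

def train_predictor(bits, k=3):
    """For each k-bit context, count which bit follows it.
    The predictor operates on raw symbols only; no meaning is assigned."""
    counts = defaultdict(Counter)
    for i in range(len(bits) - k):
        counts[bits[i:i + k]][bits[i + k]] += 1
    return counts

def predict(counts, context):
    """Return the most frequent successor of a context, defaulting to '0'."""
    if context in counts:
        return counts[context].most_common(1)[0][0]
    return "0"

# A purely syntactic stream with a hidden regularity: "01" repeated.
stream = "01" * 50
model = train_predictor(stream)
print(predict(model, "010"))  # prints "1": the pattern was induced from syntax alone
print(predict(model, "101"))  # prints "0"
```

Whether such an induced model counts as "semantics" is of course exactly what
is in dispute; the sketch only shows that predictive structure can be
extracted from the symbols alone.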

So what gives the symbols of the semantic model meaning? The correspondence of
each symbol to real, observable objects.

> Until we learn why it happens in that type of matter, we would not be able
> to recreate minds.

You're assuming that consciousness requires some special property of matter.
There's no justification for this assumption.

Furthermore, you are arguing that we cannot create something that we do not
fully understand. This too seems false. We created fire long before we
understood it, for instance.

~~~
IWeldMelons
> Both perform symbol processing

Not sure about that.

> So what gives the symbols of the semantic model meaning? The correspondence
> of each symbol to real, observable objects.

And what decides what symbol corresponds to what real object?

> You're assuming that consciousness requires some special property of matter.
> There's no justification for this assumption.

Really? I thought that was something we had agreed upon. You are claiming that
any system structured as a digital computer (hence, having a special property)
is conscious; I am saying that that is not enough, that it has to be
structured in some special way.

> We created fire long before we understood it, for instance.

This is a laughable argument: "We reproduce (as living beings do), therefore
we create minds."

~~~
naasking
> You are claiming that any system structured as a digital computer (hence,
> has a special property) is conscious

Nowhere did I make that claim. You're clearly confused about the arguments
being made, so I suggest you reread this thread.

> And what decides what symbol corresponds to what real object?

An inductive inference algorithm, like I said.

> This is a laughable argument. "We reproduce (as living beings do). therefore
> we create minds".

Exactly right. Therefore your claim that we must understand something before
we can make it is trivially incorrect.

------
erokar
Old and not a particularly interesting rebuttal. Searle later stressed how the
Chinese room argument demonstrates how programs, in and of themselves, cannot
generate consciousness. This important aspect is not addressed in the
rebuttal.

For an updated overview of the argument and replies, see:
https://plato.stanford.edu/entries/chinese-room/

~~~
mannykannot
Searle claims that the argument demonstrates this, but his response to the
'systems reply' shows that he does not appear to understand the challenge that
it presents to the argument:

 _"My response to the systems theory is quite simple: let the individual
internalize all of these elements of the system. He memorizes the rules in the
ledger and the data banks of Chinese symbols, and he does all the calculations
in his head. The individual then incorporates the entire system. There isn't
anything at all to the system that he does not encompass. We can even get rid
of the room and suppose he works outdoors. All the same, he understands
nothing of the Chinese, and a fortiori neither does the system, because there
isn't anything in the system that isn't in him. If he doesn't understand, then
there is no way the system could understand because the system is just a part
of him."_

Searle does not seem to understand that the systems reply is not dependent on
where or how the components of the system are implemented. He apparently
cannot conceive that, while the subject's conscious mind is solely occupied
with mechanically performing the operations of the algorithm, the system that
her conscious mind is part of actually understands something, and that this
would be so even if the system were implemented entirely within the physical
body of the subject. It is an outlandish concept, but the premise of the
experiment is itself outlandish, and a philosopher should not expect ordinary
intuitions to be a reliable guide to what would happen in such cases.

~~~
anongraddebt
There are two types of expanded responses to the Systems Theory: the first is
a line of argument extending into modal logic and the second is a line of
argument that blurs the lines between the Chinese Room Argument and other
arguments that purport to show computation as insufficient for 'mind'. Of
course - as often happens over decades of discourse in analytical philosophy -
the boundary between even these two lines of argument that I just classified
is rather fuzzy (shoutout to Zadeh).

The SEP entry mentions Schaeffer (2009) and Nute (2011) for the line of
disagreement extending into a discussion of modality.

Regarding the second line of argument, the entry mentions Harnad (2012) and
says that he appears to follow Searle in, "linking understanding and states of
consciousness" as well as arguing "that the core problem of conscious
"feeling" requires sensory connections to the real world."

The primary issue in all of this is whether certain non-biological systems
(e.g. a computer) are sufficient for the understanding generated by certain
biological systems (e.g. humans). This has little to do with physicalism or
dualism, no matter what certain responses by non-experts (Penrose, Kurzweil,
etc.) would have some believe. The fact that Cole even mentions Kurzweil is,
arguably, a disservice to those outside the relevant domain.

The take-away is this: in analytical philosophy, many famous thought
experiments are important not for their intended conclusions, but for the
expanded dialectic they generate, and which lasts for decades (if not longer).
The expanded dialectic brings additional clarity and raises important
questions that had either gone unnoticed or were impossible to see before the
thought experiment.

~~~
mannykannot
Decades of arguments attempting to short-circuit the scientific process, by
claiming to show that minds cannot possibly work this way or that way, do not
seem to have contributed much, if anything, to our understanding of how they
do work. It all seems very self-referential, in that there is a lot of arguing
about arguments.

~~~
anongraddebt
I think the suggestion that lengthy arguments in the philosophy of mind are
attempting to short-circuit scientific progress is a bit uncharitable.

There is a small minority of individuals on both sides that seem to want to
slow scientific progress (whether intentionally or not). Now, the vast
majority of the time it is presumed people like Searle fit this description.
People like Dennett fit this description as well, though. Anyone _truly_
making definitive claims about having resolved the question of (say)
consciousness is not taking the right tack from a scientific perspective.

~~~
mannykannot
I did not intend to suggest that there is anything wrong with attempting to
cut the Gordian knot with an insightful analysis, it is just that, in these
cases, it does not seem to have led to anything useful. And Dennett's book
'Consciousness Explained' is certainly presumptuously titled, though he is at
least looking for explanations, when he is not confronting those who claim it
can't be done. Meanwhile, the state of technology has advanced to the point
where we can at least imagine Searle, in his room, competing successfully in a
Chinese-language game of Jeopardy.

No-one ever disproved vitalism, but its implicit threat to the progress of
biology simply dropped by the wayside.

------
Gormisdomai
The Chinese room thought experiment is more like Searle's attempt to get you
on board with his intuitions about the philosophy of mind; taken on its own as
an argument, it's pretty unsound.

His real argument exists in a bunch of stuff he wrote about intentionality
(the real argument probably still fails - but it's more nuanced than the
Chinese room stuff everyone talks about).

If you want a really good paper about the problems around machines and minds,
check out "Troubles with Functionalism" by Ned Block.

~~~
sethev
"Thought experiment" is the right term for it. It applies equally to a brain,
if you think that the chemical and electrical processes in the brain are
identical to understanding. Searle believes that consciousness is a
fundamental fact.

------
orangeeater
> While I agree that AI research has a long way to go (perhaps several
> decades) before it might produce responsible machines

...written in 1984. I do not believe we've made any progress in terms of
creating "responsible machines."

My general feeling about the Chinese Room argument is that it discusses the
"psychological phenomenon of understanding." We don't really know what that
phenomenon _is_; it's a feeling or sensation from our perspective (or at
least from mine), and so I'm uncomfortable ascribing much importance to it.
Until we understand how our minds actually work, the "Turing test approach"
makes sense to me: if we can't tell the difference, then it's "responsible."

------
scandox
> Whatever the key to self-reflection turns out to be, it clearly will involve
> the processing of internal symbolic representations.

Clearly?

~~~
AnimalMuppet
Depends on your definition of "symbolic".

If you mean symbolic in the sense that computers use symbols, then no, it
won't involve that. Humans are self-reflective (at least some of the time),
and they don't think that way (with the possible exception of mathematicians).

But words can also be considered symbols, and when we think, we usually think
in terms of words. "I" is a symbol, and it's hard to self-reflect without
using it.

~~~
rusk
I think in this context he's talking about a very specific meaning of
_symbolic_, as opposed to _connectionist_, the two main classes of models of
thought. Symbolic means something where you can make out the structure of
what's going on (a requirement for analysis or "introspection"; hence
"clearly"). The alternative is bottom-up modelling, e.g. with neural networks,
where only the outcome is known and the internal processes can't really be
analysed.

~~~
Gibbon1
My frank thought on this is that the AI researchers who placed their bets on
symbolic logic were betting on the idea that symbolic logic operates at a
higher level than connectionist logic.

If you're not emotionally invested, that argument is laughable. Its total
basis is that humans suck at math.

~~~
YeGoblynQueenne
The original artificial neuron, the McCulloch-Pitts neuron, was a
propositional logic circuit, conceived entirely on the basis of a model of
biological neurons as logic gates, with a threshold function that controlled
their true or false value. It was even called the Threshold _Logic_ Unit. The
paper where it was first described is titled "A _Logical_ Calculus of the
Ideas Immanent in Nervous Activity". In fact, the perceptron too was a logic
circuit: a function with two outputs, "positive" and "negative", or "true"
and "false".

AI researchers have always used the tools that all scientists use to model
their subject, in this case intelligence: maths. Logic is maths, just like
probability, calculus, algebra, etc.

------
roflc0ptic
After reading "I Am a Strange Loop" several years ago, I got on an ouroboros
joke kick. I wrote one about John Searle. I'm posting it here because, so far,
underneath this PDF is the only place it has ever seemed apropos.

What did the ouroboros say to John Searle?

"Let me out of here, you know I can't speak Chinese!"

