The Chinese Room Argument (stanford.edu)
29 points by headalgorithm 19 days ago | 63 comments



I have a lot of strong thoughts and feelings when I think deeply about this experiment. But one thing bugs me just about the way it's posed. You're told to just assume you've got a functional database of Chinese symbols; maybe it's just a big shelf of old dusty boxes with helpful headings. And you should just assume you've got this book of instructions; maybe it's so big you need a stepladder to read its big pages. But it is, as you've been instructed, just a book. And when you use them together, just assume they do something amazing.

Haha, I fooled you, dummy: something has to be missing here; the thing it does can't really be THAT amazing, because you were told it was just boxes and books; you can get those anywhere.

You're told right away to imagine a set of comprehensible mundane objects, assume they can do this thing in the form you've been given, and as a result, the thing must be mundane and incomplete.

I know there's more depth to it than that, but I don't think you need to go that deep to find large flaws.


The analogy is supposed to emphasize that AI proponents (and others) presuppose there's a fully materialistic explanation for consciousness. There's no wizard behind the curtain: it's all just deep neural networks ("boxes and books").


Yes, and then the analogy is used to refute this notion through a thinly disguised "but it is obvious! [that this isn't real]" fallacy (the fallacy is that it is not at all obvious that the Chinese Room is not sapient).


Yes, that's one way out: claiming that the Books+Person Room-System is itself sentient.

You can make that case (and philosophers have!), but it seems very counter-intuitive to the way we experience consciousness as a unitary, embodied being.


> it seems very counter-intuitive to the way we experience consciousness as a unitary, embodied being

I don't see how it's counter-intuitive to anyone who is aware of various brain functions, for example. It's abundantly clear by now that it's not a unitary thing, but a set of traits.

I think the reason why people keep coming up with those "it's not a real thing" retorts is the same reason why (other) people reject the notion that some animals might be sapient enough for us to care about. We like being uniquely sapient, and that's just impossible with any sort of rational - i.e. mysticism-free - approach to intelligence and self-awareness and all that stuff.


Maybe others disagree, but I think it's quite obvious that there's a spectrum of sentience: animals like chimpanzees are very similar to ourselves, something like a cat or a dog is less sentient, and a barnacle or clam has very little sentience.

It's much less convincing to me to say that a computer (or an algorithm running on that computer?) is somewhere on that same spectrum.


Well, we are literally working on simulating living beings on computers, e.g.:

https://en.wikipedia.org/wiki/OpenWorm

If we do that, wouldn't said simulation be an algorithm that ends up on your spectrum?

But also, I'm curious why your spectrum is defined the way it is - i.e. why a clam can have sentience, but not a laptop? Because one is organic, and the other one is not? But doesn't that imply that your definition of sentience inherently excludes anything artificial?


Theoretically, yes, I would assume a 100% accurately-simulated brain to be as sentient as the real thing (of course, then you can never turn it off without committing murder). Given that we don't fully understand the properties of a single neuron yet, I am deeply skeptical of the accuracy of these simulations.

Even in that wikipedia article you linked:

> These early attempts of simulation have been criticized for not being biologically realistic. Although we have the complete structural connectome, we do not know the synaptic weights at each of the known synapses. We do not even know whether the synapses are inhibitory or excitatory. To compensate for this the Hiroshima group used machine learning to find some weights of the synapses which would generate the desired behaviour. It is therefore no surprise that the model displayed the behaviour, and it may not represent true understanding of the system.

The common thread running through many of these comments is that people who don't appreciate just how complex biology is, and how much we don't understand, are nevertheless confidently asserting that some Turing machine they can half-imagine will obviously be able to "think" the way a brain does.


The common thread is materialism. If you don't introduce any mystical entities such as a soul, it follows that it's possible to construct something that operates indistinguishably from what we consider natural life - it's a question of engineering and biological research at that point. The question of whether it's 100 years removed from my laptop, or 1000 years, is a completely different one. It took humans literally millennia to get from horses to cars, yet both are on the same spectrum.

The Chinese Room argument essentially tries to make a point that it's impossible at any level of complexity. The only way that can be true is if you reject materialism entirely, and adopt the notion that there's something else that "bestows" intelligence on a brain, that cannot be replicated by any artificial physical means (but is somehow still replicated in utero). It's not really a verifiable claim, which is evident from the very design of the experiment - it says that there are things that appear intelligent to any possible test, but still aren't, because common sense, essentially.


That things can move just through the application of chemicals or air pressure is counter-intuitive.

It seems like you've got a philosophy that takes a person through a series of reasoning steps, based on stringing together intuitions, and ends up in the same place religion takes us.


Sure, I guess. Most of the responses in this thread are computer people trying to skirt around the "hard problem" of consciousness: http://www.consc.net/papers/facing.html

If you want to claim that what your brain does is basically a more complicated version of what your laptop does, then you're free to make that case, but I don't think you honestly believe it yourself. You do recognize your own inner life.


Well, while I'd both disagree with "hard problem" arguments and not see them as interesting, at least "hard problem" proponents will say directly there's some magic in humans that a computer can't have. Indeed, the "philosophical zombie" idea shows the magic is so transcendent that no evidence can determine whether a person has the magic (of qualia) or whether they are a dreaded Zombie.

In contrast, the Chinese Room argument jumps between "hard problem" metaphysics and the appearance of a materialistic perspective.


I recognize my inner life and consciousness, of course, but how and why would that preclude believing that they're manifestations of something that is only quantitatively different from my laptop? Quantity does matter.


Well, IMHO, it's counter-intuitive because a room full of index cards that can generate a reasonable reply to any Chinese sentence is counter-intuitive. I.e., unrealistic thought experiment is unrealistic.


The index cards are irrelevant. Pretend he wrote "a really fancy computer" if you like that better.


But the index cards are relevant! If we rewrite it thus:

> There is a really fancy computer, and a person who has the switch. When the person turns on the switch, you can converse with the computer in Chinese. Does this fancy computer understand Chinese?

Then, well, at least I would say "Well, wasn't that the exact question you were trying to answer in the first place? What changed?"


No, it isn’t at all counterintuitive. E.g. my brain is a system of neurons and that system experiences consciousness. That is perfectly intuitive.


It could be a weaker argument: one can theoretically make a fully materialistic device that acts in a way indistinguishable from consciousness.

Which still preserves the question: if you can reliably fake knowing Chinese, do you know Chinese?


Yeah, the thing about the argument is it mixes together two close but distinct arguments in a way that makes it harder to argue with either.

On the one hand, there's the implication that because it's just books and boxes & whatever, it can't do the impressive feat of speaking Chinese.

On the other hand, there's the implication that if it is a "room" or "computer" or something doing the speaking of Chinese, it can't "really" be doing it because it doesn't have the spark of life or a soul or whatever magic one might name.

And because the argument has a bit of both these claims to it, it is hard to address since a proponent can go back and forth between them.

That this argument was taken seriously by philosophers was rather disheartening to me when I discovered it, once it became obvious there really wasn't any more there than a colorful (but not particularly relevant) metaphor.


I don't understand the premise of the Chinese Room argument. The human in the room is a red herring - he's acting as a computer executing a provided program. A computer can be made from integrated circuits, discrete transistors, vacuum tubes, mechanical components, or even a man in a room following simple instructions and writing down the results.

The man is not the source of the intelligence, the program is (or rather, the execution of the program in an emergent manner - obviously, if the program does not get executed, it cannot "understand" Chinese). Remove the program and the man is unable to process the Chinese input, just like a modern CPU if you remove a program from memory.

The choice of conducting the experiment in Chinese is just another misdirection. The only thing that the experiment is truly positing is, assume a human is executing a program that passes the Turing test rather than a traditional silicon computer.
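
A toy sketch of that point (the rulebook, names, and replies below are invented for illustration, and are nothing like a real Turing-test-passing program): the "man" is a bare interpreter, and all of the behaviour lives in the rulebook he mechanically consults.

    # Hypothetical, minimal "room": the interpreter knows nothing about the
    # symbols; every response comes from the rulebook it is handed.
    RULEBOOK = {
        "你好": "你好！",              # "hello" -> "hello!"
        "你会说中文吗？": "会一点。",  # "do you speak Chinese?" -> "a little."
    }

    def man_in_room(note: str, rulebook: dict) -> str:
        # Mechanically match the incoming note against the rulebook;
        # no meaning is attached to the symbols at this level.
        return rulebook.get(note, "？")

    print(man_in_room("你好", RULEBOOK))   # the behaviour comes from the rulebook
    print(man_in_room("你好", {}))         # remove the rulebook and nothing is left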


The Chinese Room is a reductio against Strong AI. Strong AI is the idea that a programme that is able, for example, to converse in Chinese thereby understands Chinese. The Room, if successful, drives a wedge between "acting as if one knows Chinese" and "understanding Chinese". This may be applied to any number of programs. But with such a wedge, we do not actually have Strong AI—the appearance of intelligence (grounded in syntax) is different from actual intelligence (presumably grounded in semantics).

In other words, assume strong AI. Strong AI implies machines have understanding. But the Chinese Room argues machines can never have understanding. So, strong AI is not possible.
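
Put schematically (the labels S and U are mine, not Searle's wording), with S for "Strong AI is true" and U for "the system thereby understands Chinese":

    S \rightarrow U       (the Strong AI thesis: running the right program suffices for understanding)
    \neg U                (the intuition the Room is meant to pump)
    \therefore\ \neg S    (modus tollens)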

In the background is Turing's attempt to provide an account of intelligence. Turing skirts the issue in his famous paper "Computing Machinery and Intelligence"[1]: instead of defining intelligence, Turing proposes a test. But the Chinese Room argues that even if we had a machine pass this test, we would not thereby be bound to say that the machine was intelligent.

[1] https://www.csee.umbc.edu/courses/471/papers/turing.pdf


You have presupposed the answer.

Replace the book with a person in that example and have the human walk up to that person and ask a question. Now, in that case, would saying that the person moving the note does not understand something mean the room does not contain something that does understand? Nope.

Therefore, saying the human in the original example does not understand something does not in fact answer your question. My lungs don’t speak English, but I can’t speak English without them.


I do not believe I have presupposed the answer; I have simply articulated how I believe the reductio to work.

The key, of course, is whether we think that the person in the Room actually understands Chinese. The inference Searle wants us to draw, based on our intuition, is that the person does not understand Chinese just because the person is following a lookup table. The way to get around the argument is to claim either (i) that the inference doesn't work—that is, Strong AI does not imply machines understand, or (ii) that the Chinese Room does not imply machines do not understand.

I think (i) is a reasonable claim that follows from understanding what is intended by the term "Strong AI." (ii) is the tricky one. It seems to me the best route out is to find a way to substantiate a claim alluded to by another commentator, viz. the property of intelligence does not exist (though I would say "understanding does not apply" or something like that). The thing is, it does seem to me that understanding is a reasonable category for this case; and anyone who thinks this is likely to feel the force of the argument.


> The key, of course, is whether we think that the person in the Room actually understands Chinese.

As I said, that’s no more relevant than whether the paint on the walls understands Chinese. You can’t answer the question of whether the room understands something by saying whether a single element understands something or not.

Or consider this, does Microsoft the company understand French? It seems like a simple question, but you can easily support yes or no. In some situations it can respond to a French speaker, but not all situations.


To be fair, I did not ask the question of whether "the room understands." The question was whether "the person in the room understands," which seems perfectly reasonable.

In order to grasp what you are saying about "Microsoft the company understand French" one needs to define what we mean by "understand." As you (correctly) say, our answer will depend on that definition.

But to say everything depends on our definition of understanding is to miss the point of the Chinese Room. The point of the argument is to support the claim that the sort of thing we normally classify as understanding—such as when we say someone understands Chinese—is not a property of the person following a lookup table (or by analogy, a machine with instructions). Thus, neither the person in the room, nor the machine, understands in the same sense as when we say "this person understands Chinese."

This is how the Chinese Room is supposed to work against Strong AI. Strong AI supposes that when you have appropriate instructions, a machine is said to understand in the same sense in which you say a human understands. The Chinese Room argument is meant to prompt the claim that the machine does not understand—or at the very least does not understand in the same sense that a human understands.


While you say you are only talking about “the machine”, it’s rather pointless. Fine, my hair is not sentient either; again, that’s not an argument.

Strong AI is a system not a machine. The person using a lookup table is just a portion of the system and thus can’t be used to limit the entire system.

Twins don’t necessarily understand the same languages, based on their training. Referring to the human in “the room” as the machine following specific instructions is the same as saying human DNA doesn’t understand English. It might be true, but it’s definitely irrelevant.

Machine code is binary data used by computers that ideally maps 1:1 with ASM. A human can know ASM and, with the aid of a computer, produce, read, edit, etc. machine code they don’t directly understand. In effect they are part of a system that understands something that they themselves don’t.

Thus “the Chinese Room” has an inescapable though easily missed failure in logic. Saying “the machine is not the room” may seem like an unreasonable objection, but talking about the machine alone is just as relevant as saying that people without exposure don’t know specific languages.

PS: Starting with a glass of water, you can separate it into smaller pieces up to a point. But in the end water is H2O; if you just look at the H, that’s not water, because you’ve reached the point where subdivision results in a different substance. Saying “H is not water, thus water does not exist” is a silly argument.


The argument is not about simple decomposition.

The argument is that non-speaker+room doesn't write Chinese, but lungs+rest-of-person does speak Chinese.

The Chinese room is intended to be a decomposition that shows something different than decomposing a person.


The argument is exactly of the form: a person follows instructions to build an IKEA desk. They don’t understand the instructions as a whole, only the individual steps. Therefore they don’t understand the construction of the desk. Therefore the system of them plus the instructions does not understand how to construct a desk.

Except in this case it’s clearer that the person does understand how to get the outcome: just follow the instructions. Thus the system of them plus the instructions understands how to build a desk.

Sure, computer hardware just like human DNA for example does not understand English. But, that says nothing about systems created by human DNA or computer hardware.


Wait, so you're claiming that being able to mechanistically follow instructions to get to desired result X counts as the "system" of instructions+follower understanding X?

So if I follow a React.js tutorial and get to a working Todo App example, that means me + tutorial page "understands" React.js? What am I missing?


Being able to get the outcome of the tutorial is not the same as understanding React.js.

Basically, cookbooks allow people to create food that they don’t know how to create without the cookbook. But that only applies to what’s actually listed in the cookbook. For a tutorial to allow someone to ‘know’ React it would need to encompass all edge cases etc.

Given all necessary tools and resources, like reference materials, someone might be able to create arbitrary React.js pages, which is closer to your example. The remaining difference is time to completion and quality of output. But if the tools allow someone to get the same quality of output in the same time frame as someone who knows React.js, then the system effectively knows React.js.

Consider: someone knows ASM but not machine code. Given the right tooling that converts back and forth 1:1 to machine code, it’s functionally identical to someone knowing machine code. Which is why nobody learns machine code over ASM, as there is no benefit to learning machine code.
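
To make that 1:1 mapping concrete, here's a toy sketch (the instruction names and byte values are invented, not a real ISA): translation in both directions is pure table lookup, so the person plus the table can produce and read machine code that no part of the system "directly" understands.

    # Hypothetical toy ISA: a 1:1 lookup table between ASM mnemonics and
    # machine-code bytes, convertible in either direction with no understanding.
    ASM_TO_CODE = {
        "LOAD R1": 0x01,
        "ADD R1 R2": 0x02,
        "STORE R1": 0x03,
        "HALT": 0xFF,
    }
    CODE_TO_ASM = {code: asm for asm, code in ASM_TO_CODE.items()}

    def assemble(lines):
        # One table lookup per line of ASM.
        return [ASM_TO_CODE[line] for line in lines]

    def disassemble(codes):
        # The exact inverse lookup.
        return [CODE_TO_ASM[c] for c in codes]

    program = ["LOAD R1", "ADD R1 R2", "STORE R1", "HALT"]
    assert disassemble(assemble(program)) == program   # round-trips exactly (1:1)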


Why would the room being successful reveal anything useful? Why does it matter if a program is executed on pencil and paper or electronically? The thought experiment does not bring any novel insights and just misdirects the reader by focusing on the human.

You could execute the Strong AI program on any Turing complete system. Asking if the human understands Chinese is just as absurd as asking if a universal Turing machine understands Chinese given the same program. The machine just follows the instructions given; the philosophical questions reside entirely on the side of the program and the execution of it.


I think you have put your finger on exactly what is at issue: it does not matter whether the program is run with pencil and paper or electronically. It's an analogy. The point is that we do not (says Searle) impute understanding to the person in the room; therefore we should not impute understanding to a computer running a program. But Strong AI requires us to impute understanding; and this is exactly what is impossible. Therefore Strong AI does not exist. I take that to be the force of the argument.


The entire argument is flawed, as it doesn't actually prove anything at all. Here is the full argument, with all assumptions explicit:

Assume properties 'consciousness' and 'intelligence' exist. I'm not going to define them because I can't. Assume that a human has these properties. Now I demonstrate that a machine can't have these properties, since they were never actually defined. Since by assumption a human has them, a human has intelligence and a machine not.

The type of ideas that can lead to this reasoning are the same as those of the people who rejected evolution, thinking that man could never have descended from monkeys. The only thing the argument proves is the close-mindedness of its author.


> Assume properties 'consciousness' and 'intelligence' exist. I'm not going to define them because I can't. Assume that a human has these properties.

But this is not what the argument says, is it? The argument says imagine a man that can't understand Chinese. The assumption here is that some humans can understand Chinese while others can not.


It is, because the term 'understanding' is not defined in a useful manner. What does it mean to understand something? If you can ever define it, I can bet I can make a machine satisfy your definition. What the argument does here is to simply avoid the definition, so that it cannot be disproved.


I take the Chinese Room to be a reaction to the Turing Test. The Turing Test also avoids defining understanding but it seems to say that we don't need a definition of understanding - if something appears to understand then it does in fact understand.

The Chinese Room attempts to refute that idea by saying that you can get the effect of understanding with a methodology that obviously lacks the gestalt that we associate with understanding a subject.

Personally I'm a skeptic of hard A.I. in general. Consciousness to me seems like a magic trick - in that the only magical thing about it is that it will disappear once you know how it's done. Which makes it seem like a black box that can only be created if you don't know how or what you're doing.


I have always considered Searle's Chinese room to be an intellectually dishonest argument.

It argues that a system (the room) consisting of a non-Chinese-speaking human and an arbitrarily large and complex rules system cannot "understand" Chinese when the human clearly doesn't understand Chinese. The human is just a distraction.

You could instead imagine a variant system where the room contains two people (and no filing cabinets): a non-Chinese-speaking scribe and a Chinese speaker. The scribe laboriously transcribes messages without understanding them and passes them to and from the Chinese speaker, who clearly does.

Obviously the addition of the non-understanding human in that example does not make the system unintelligent. Similarly, the (non-)intelligence of the system in the classic argument is independent of the understanding of the human stuck inside it.

There are interesting questions to ask about the nature of intelligence, but the Chinese Room argument draws our attention to none of them and simply obfuscates the issue with an irrelevant misdirection.


Another way to highlight the folly of The Chinese Room is to consider a different alternative:

Imagine you create a system of filing cabinets that encodes the rules to compute AlphaGo on encrypted inputs. This is _clearly_ conceptually possible (if not practical, just due to the sheer cost and size).

This system would, as AlphaGo does, be the world's strongest Go player, even though the human inside may not understand Go at all (and certainly wouldn't understand the game they were helping to play, as all the moves would be encrypted).

In this form we see The Chinese Room not as evidence against AI as intended; rather, if the argument actually held (and wasn't just a form of intellectually low misdirection), it would be very strong evidence that its idea of "understanding" is of no practical use. ... since what would "understanding" even mean if we had to say that the world's best Go player didn't understand the game at all?


I'm happy to say that the world's best Go playing algorithm doesn't understand the game at all. AlphaZero doesn't understand anything because there's no semantic content or metacognition to anything it does. It just takes inputs and optimizes outputs. It's a very fancy probabilistic calculator.


As a non-philosopher, this argument seems flawed in that one could argue the computer in the Chinese room "understands" Chinese. The Turing test, as I see it, would metaphorically ask if someone or something in the Chinese room understood Chinese, and you could argue the answer is yes.


You are right, if you add quotes to the word 'understood' in the second sentence. So it is an issue of what understanding is in the context of humans, and in what way this is different from what understanding is in the context of computers/symbolic language.


This highlights how fuzzy terms like intelligence and understanding are. How exactly is human understanding/reasoning different to that of a dog, a monkey or a computer?

How do you value each one of these? Should an animal be valued higher than a constructed machine that is otherwise identical? What about humans and machines?


Does a light bulb know that it is on or off? That it exists?

Normally, in order to answer these questions we would look at the empirical evidence, and the evidence in favour of inanimate objects having self-awareness is completely non-existent.

On the other hand, the evidence that animals have self-awareness is overwhelmingly strong.

Thus, it is reasonable to conclude that inanimate objects aren't self-aware but that people and animals are. (And of course this also means that is NOT reasonable to conclude otherwise.)



We attribute self-awareness (or other attributes) based on the results of behavioral tests that we perform on the test subject. Assume, for the sake of the argument, that I built a machine that exhibits the same kind of self-awareness as an elephant. So it would recognize itself in a mirror test etc... So if it manages to pass all these tests, is it self-aware? According to Ernst Mach, two things that are indistinguishable by observation are actually equal.

This is essentially a part of the Chinese Room Argument: how can you draw a distinction between two things, when you cannot observe it?


The Turing test is an attempt to define intelligence operationally, from the outside, as a black box. Searle's argument is that the test is ineffective, since he is hypothetically satisfying it without "intelligence".

Unfortunately, his argument applies to a human being as well, so...


Indeed. In a lot of practical cases with humans we have people who talk a good talk but are unable to put the ideas they verbalise into practice. We don't say "this person doesn't understand [the language]", we say "this person doesn't understand [the thing they are talking about]". It seems reasonable to say the Chinese room has a perfect grasp of Chinese, it just happens to be pretty stupid and probably not reliable.

It would be interesting to do a study of a group like judges or politicians and compare them to a hypothetical Chinese Room. It seems likely that they would often need to speak briefly on subjects outside their actual understanding, but they clearly have a deep understanding of the language.


not the computer but the room itself


Perhaps it would have been better to phrase it as "an 'understanding' of Chinese exists in the room".


Kurzweil had a nice end-run around this problem. It doesn't matter whether the machine is sentient/intelligent/understands/etc. It matters whether the machine can convince humans that it is doing these things. There is of course still room for skepticism, as a researcher could uncover new, testable ways in which the machine isn't actually living up to our definition of any of these concepts. But if all that is left to decide the question is the way in which the thing doing the convincing happens to be implemented, then you're in the land of xenophobia to claim that as a determining factor.

AFAICT this is one step further than the Turing test which is blind (or perhaps double blind). In this test the human starts interacting with something that is clearly a machine, then comes away convinced that the machine possesses non-machine qualities like sentience, a soul, etc. I don't remember Kurzweil addressing whether the machine itself would argue this on its own behalf, but that seems likely.

Edit: clarification


Not a neurologist, so excuse the potential naïveté here, but can one also make the argument that we can reduce a Chinese speaker's brain's ability to respond into a series of "manipulating symbols and numerals" as well, albeit an extremely complicated one?


I think this is still an open question. Some people have claimed quantum effects may be at play, but this is very controversial: https://www.quantamagazine.org/a-new-spin-on-the-quantum-bra...

Most neuroscientists feel strongly that the brain is not (just?) a computer https://aeon.co/essays/your-brain-does-not-process-informati...


My understanding is that the brain, to the extent that it is a computer, is not anywhere near Turing complete. It is not something more than a very fast (Turing complete) computer, it is something less. The brain makes up for the deficiency by being extremely fast and well optimized for the tasks it needs to do.

But as a consequence, we cannot draw the metaphor about the brain being a "computer" too far. The "software" in the brain is specific to the given brain, and not hardware independent.

Sure, if we imagine a Turing machine with infinite speed and memory, it might be able to run the "software" from every human brain, but we do not know if it is even possible to build such a computer within our current universe (if we require it to operate at real time). And if it is possible, I would guess that it would be so many orders of magnitude less efficient than a non-Turing complete machine that there would be little reason to build it.

Penrose and others try to smuggle in something like dualism through QM, but I don't see why that is necessary at all.

I think we just need to throw out most of the "Computer" metaphor that requires Turing completeness, and deal with the brain for what it is.


It's chemistry and physics from there down.

Searle's argument is closet dualism.


I feel like this fails at the scope level. Of course, excluding the program, the computer is unable to understand Chinese. Just like a transistor alone is unable to perform the sum of two integers. Either you analyse the complete system or you are just drawing false parallels.
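
As a toy illustration of that scope point (a gate-level model in Python, not real hardware): no single gate can add, but a small system of gates can.

    # Hypothetical gate-level sketch: a half adder built only from NAND gates.
    def nand(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    def xor(a: int, b: int) -> int:
        # XOR composed purely from NAND gates.
        t = nand(a, b)
        return nand(nand(a, t), nand(b, t))

    def half_adder(a: int, b: int):
        # Returns (sum bit, carry bit); carry is AND(a, b) = NOT(NAND(a, b)).
        return xor(a, b), 1 - nand(a, b)

    assert half_adder(1, 1) == (0, 1)   # 1 + 1 = 10 in binary
    assert half_adder(1, 0) == (1, 0)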

EDIT: Spelling


If you assume that the brain of a person speaking Chinese is fundamentally different than a computer executing an instruction set, then you can also prove it based on that assumption. How is this significant?


Edit: misremembered the argument. Ignore me :)


It’s not an automated translation machine. The Chinese Room receives input in Chinese and produces output in Chinese.


[flagged]


It seems to me that the argument is not meant to address the value judgements of people doing work in applied ML.

The basic idea of the Chinese Room is that the Turing Test is inadequate—such that even if a machine could pass the test, we would not be permitted to infer that such a machine is intelligent. This is significant for the distinction between syntax and semantics, as well as any mechanical account of mind that integrates the results of computational logic. And indeed it does seem like many people do wonder about the nature of machine intelligence and about whether we can give a coherent account of the mind from a mechanical perspective. So perhaps some people doing work in applied ML do not care about these questions; but it does not follow from this that such questions are without qualification uninteresting or unworthy of being considered.


But it's garbage. The whole proposition is that if we reduce the human to the role of an automaton (while still retaining our prior knowledge of their humanity) we can get out of explaining how this perfect Chinese conversation-system is supposed to work by saying 'ha, there was a human in the machine the whole time!' - an extremely obvious bait-and-switch.

Also, reflect on the fact that while this isn't supposed to work in real time, the thought experiment calls for the human to be able to operate this conversation-machine by following the exhaustive instructions but without absorbing any information about the system they're operating, such that if they are let out of the Chinese room and handed some Chinese text on paper, it will supposedly be meaningless to them.

The whole idea has so many inherent flaws that I'm perplexed that it was ever taken seriously.


Good evaluation. Searle’s Chinese room argument always seems to hit a nerve with many on HN, with lots of outright dismissal. Whether or not it’s wrong, its intent is often just misunderstood. If I remember correctly, Searle isn’t arguing against the possibility of a computational machine equal in function to a biological “brain”. So no real need to take it as a slight against ML/AI work. Searle’s interests are in philosophy of mind and language, so his interest in questions of mind/consciousness and semantic meaning isn’t going to have much bearing on applied ML.


This is the best response of all. It hinges on the idea that the program is simple or trivial. That a simple recipe that a human might carry out would be able to pass a Turing test. The entire premise is a strawman.


Unless things have changed recently, my understanding is that everyone working in "applied ML" has long ago given up on strong AGI and instead focused on redefining image categorization algorithms as "learning".



