Haha, I fooled you, dummy! Something has to be missing here; the thing it does can't really be THAT amazing, because you were told it was just boxes and books, and you can get those anywhere.
You're told right away to imagine a set of comprehensible mundane objects, assume they can do this thing in the form you've been given, and as a result, the thing must be mundane and incomplete.
I know there's more depth to it than that, but I don't think you need to go that deep to find large flaws.
You can (and philosophers have!) make that case, but it seems very counter-intuitive to the way we experience consciousness as a unitary, embodied being.
I don't see how it's counter-intuitive to anyone who is aware of various brain functions, for example. It's abundantly clear by now that it's not a unitary thing, but a set of traits.
I think the reason why people keep coming up with those "it's not a real thing" retorts is the same reason why (other) people reject the notion that some animals might be sapient enough for us to care about. We like being uniquely sapient, and that's just impossible with any sort of rational - i.e. mysticism-free - approach to intelligence and self-awareness and all that stuff.
It's much less convincing to me to say that a computer (or an algorithm running on that computer?) is somewhere on that same spectrum.
If we do that, wouldn't said simulation be an algorithm that ends up on your spectrum?
But also, I'm curious why your spectrum is defined the way it is - i.e. why a clam can have sentience, but not a laptop? Because one is organic, and the other one is not? But doesn't that imply that your definition of sentience inherently excludes anything artificial?
Even in that wikipedia article you linked:
> These early attempts of simulation have been criticized for not being biologically realistic. Although we have the complete structural connectome, we do not know the synaptic weights at each of the known synapses. We do not even know whether the synapses are inhibitory or excitatory. To compensate for this the Hiroshima group used machine learning to find some weights of the synapses which would generate the desired behaviour. It is therefore no surprise that the model displayed the behaviour, and it may not represent true understanding of the system.
The common thread running through many of these comments is that people who don't appreciate just how complex biology is, and how much we don't understand, are nevertheless confidently asserting that some Turing machine they can half-imagine will obviously be able to "think" the way a brain does.
The Chinese Room argument essentially tries to make a point that it's impossible at any level of complexity. The only way that can be true is if you reject materialism entirely, and adopt the notion that there's something else that "bestows" intelligence on a brain, that cannot be replicated by any artificial physical means (but is somehow still replicated in utero). It's not really a verifiable claim, which is evident from the very design of the experiment - it says that there are things that appear intelligent to any possible test, but still aren't, because common sense, essentially.
It seems like you've got a philosophy that takes a person through a chain of reasoning, strung together from intuitions, and ends up in the same place religion takes us.
If you want to claim that what your brain does is basically a more complicated version of what your laptop does, then you're free to make that case, but I don't think you honestly believe it yourself. You do recognize your own inner life.
In contrast, the Chinese Room argument jumps between "hard problem" metaphysics and the appearance of a materialistic perspective.
> There is a really fancy computer, and a person who has the switch. When the person turns on the switch, you can converse with the computer in Chinese. Does this fancy computer understand Chinese?
Then, well, at least I would say "Well, wasn't that the exact question you were trying to answer in the first place? What changed?"
Which still preserves the question: if you can reliably fake knowing Chinese, do you know Chinese?
On the one hand, there's the implication that because it's just books and boxes & whatever, it can't do the impressive feat of speaking Chinese.
On the other hand, there's the implication that if it is a "room" or "computer" or something doing the speaking of Chinese, it can't "really" be doing it because it doesn't have the spark of life or a soul or whatever magic one might name.
And because the argument has a bit of both these claims to it, it is hard to address since a proponent can go back and forth between them.
That this argument was taken seriously by philosophers was rather disheartening to discover, once it became obvious there really wasn't any more to it than a colorful (but not particularly relevant) metaphor.
The man is not the source of the intelligence, the program is (or rather, the execution of the program in an emergent manner - obviously, if the program does not get executed, it cannot "understand" Chinese). Remove the program and the man is unable to process the Chinese input, just like a modern CPU if you remove a program from memory.
The choice of conducting the experiment in Chinese is just another misdirection. The only thing the experiment truly posits is this: assume that a human, rather than a traditional silicon computer, is executing a program that passes the Turing test.
In other words, assume strong AI. Strong AI implies machines have understanding. But the Chinese Room argues machines can never have understanding. So, strong AI is not possible.
In the background is Turing's attempt to provide an account of intelligence. Turing skirts the issue in his famous paper "Computing Machinery and Intelligence": instead of defining intelligence, Turing proposes a test. But the Chinese Room argues that even if we had a machine pass this test, we would not thereby be bound to say that the machine was intelligent.
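To make the "the program, not the man, is the source" point concrete, here is a deliberately tiny Python sketch. Everything in it is made up (a real rulebook would be astronomically larger, and would pattern-match rather than use a plain dictionary), but it shows where the behavior lives:

```python
# A toy "rulebook": input strings mapped to canned replies. The clerk
# running follow_rules() needs no idea what any of the strings mean.
RULEBOOK = {
    "你好": "你好！",              # entries invented for illustration;
    "你会说中文吗？": "会一点。",   # a real rulebook would be astronomically larger
}

def follow_rules(rulebook, message):
    """Mechanical lookup; all of the behavior lives in the table."""
    return rulebook.get(message)

print(follow_rules(RULEBOOK, "你好"))  # the room "answers" in Chinese
print(follow_rules({}, "你好"))        # program removed: the clerk outputs nothing (None)
```

Swap the dict for instruction memory and the clerk for a CPU, and the observation is unchanged: remove the program and nothing answers.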
Replace the book with a person in that example, and have the human walk up to that person and ask the question. In that case, would saying that the person moving the note doesn't understand something mean the room doesn't contain something that does understand? Nope.
Therefore, saying the human in the original example does not understand something does not in fact answer your question. My lungs don’t speak English, but I can’t speak English without them.
The key, of course, is whether we think that the person in the Room actually understands Chinese. The inference Searle wants us to draw, based on our intuition, is that the person does not understand Chinese just because the person is following a lookup table. The way to get around the argument is to claim either (i) that the inference doesn't work—that is, Strong AI does not imply machines understand, or (ii) that the Chinese Room does not imply machines do not understand.
I think (i) is a reasonable claim that follows from understanding what is intended by the term "Strong AI." (ii) is the tricky one. It seems to me the best route out is to find a way to substantiate a claim alluded to by another commentator, viz. the property of intelligence does not exist (though I would say "understanding does not apply" or something like that). The thing is, it does seem to me that understanding is a reasonable category for this case; and anyone who thinks this is likely to feel the force of the argument.
As I said, that's no more relevant than asking whether the paint on the walls understands Chinese. You can't answer the question of whether the room understands something by asking whether a single element understands it.
Or consider this, does Microsoft the company understand French? It seems like a simple question, but you can easily support yes or no. In some situations it can respond to a French speaker, but not all situations.
To grasp what you are saying about whether "Microsoft the company understands French," one needs to define what we mean by "understand." As you (correctly) say, our answer will depend on that definition.
But to say everything depends on our definition of understanding is to miss the point of the Chinese Room. The point of the argument is to support the claim that the sort of thing we normally classify as understanding—such as when we say someone understands Chinese—is not a property of the person following a lookup table (or by analogy, a machine with instructions). Thus, neither the person in the room, nor the machine, understands in the same sense as when we say "this person understands Chinese."
This is how the Chinese Room is supposed to work against Strong AI. Strong AI supposes that when you have appropriate instructions, a machine is said to understand in the same sense in which you say a human understands. The Chinese Room argument is meant to prompt the claim that the machine does not understand—or at the very least does not understand in the same sense that a human understands.
Strong AI is a system, not a machine. The person using a lookup table is just a portion of the system and thus can't be used to limit the entire system.
Twins don't necessarily understand the same languages; it depends on their training. Referring to the human in "the room" as the machine following specific instructions is the same as saying human DNA doesn't understand English. It might be true, but it's definitely irrelevant.
Machine code is binary data used by computers that ideally maps 1:1 with ASM. A human who knows ASM can, with the aid of a computer, produce, read, and edit machine code they don't directly understand. In effect they are part of a system that understands something that they themselves don't.
Thus "the Chinese Room" has an inescapable, though easily missed, failure in logic. Saying the machine is not the room seems like an unreasonable objection, but talking about the machine alone is just as relevant as saying that people without exposure to a specific language don't know it.
PS: Starting with a glass of water, you can separate it into smaller parts up to a point. But in the end water is H2O; if you just look at the H, that's not water, because you've reached the point where subdivision results in a different substance. Saying "H is not water, thus water does not exist" is a silly argument.
The argument is that non-speaker + room doesn't really write Chinese, but lungs + rest-of-person does speak Chinese.
The Chinese Room is intended to be a decomposition that shows something different from decomposing a person.
Except in this case it's clearer that the person does understand how to get the outcome: just follow the instructions. Thus the system of the person plus the instructions understands how to build a desk.
Sure, computer hardware, just like human DNA, does not understand English. But that says nothing about the systems built on top of human DNA or computer hardware.
So if I follow a React.js tutorial and get to a working Todo App example, that means me + tutorial page "understands" React.js? What am I missing?
Basically, cookbooks allow people to create food that they don't know how to create without the cookbook. But that only applies to what's actually listed in the cookbook. For a tutorial to allow someone to "know" React it would need to encompass all the edge cases, etc.
Given all the necessary tools and resources, like reference materials, someone might be able to create arbitrary React.js pages, which is closer to what your example describes. The remaining difference is time to completion and quality of output. But if the tools allow someone to get the same quality of output in the same time frame as someone who knows React.js, then the system effectively knows React.js.
Consider: someone knows ASM but not machine code. Given the right tooling that converts back and forth 1:1 between them, that's functionally identical to someone knowing machine code. Which is why nobody learns machine code over ASM; there is no benefit to it.
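A toy sketch of that claim, using a made-up two-instruction ISA (not any real architecture), where assembly and machine code round-trip exactly 1:1:

```python
# A made-up two-instruction ISA with an exact 1:1 encoding in each direction.
OPCODES = {"INC": 0x01, "DEC": 0x02}
MNEMONICS = {code: name for name, code in OPCODES.items()}

def assemble(asm_lines):
    """Human-readable mnemonics -> bytes: what the tooling provides."""
    return bytes(OPCODES[line.strip()] for line in asm_lines)

def disassemble(machine_code):
    """Bytes -> mnemonics; the round trip loses nothing."""
    return [MNEMONICS[byte] for byte in machine_code]

program = ["INC", "INC", "DEC"]
blob = assemble(program)
assert disassemble(blob) == program  # person + tool "knows" machine code
```

Because the mapping is lossless in both directions, the person-plus-assembler system is functionally indistinguishable from a person who reads raw bytes.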
You could execute the Strong AI program on any Turing-complete system. Asking if the human understands Chinese is just as absurd as asking if a universal Turing machine understands Chinese given the same program. The machine just follows the instructions given; the philosophical questions reside entirely on the side of the program and its execution.
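A minimal sketch of that substrate-independence, assuming an invented one-state "invert the bits" program: the interpreter below plays the role of the human in the room, and every bit of the behavior lives in the transition table it blindly follows.

```python
# A minimal Turing-machine interpreter. The interpreter knows nothing about
# any particular task; all behavior comes from the transition table.
def run(table, tape, state="start", head=0, halt="halt"):
    cells = dict(enumerate(tape))              # sparse tape, "_" = blank
    while state != halt:
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A made-up program: invert a binary string, halting at the first blank.
INVERT = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run(INVERT, "10110"))  # -> "01001_" (trailing blank written at the halt step)
```

Ask whether run() "understands" inversion and the question is plainly aimed at the wrong place; the same holds for the man mechanically stepping through Searle's rulebook.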
Assume the properties 'consciousness' and 'intelligence' exist. I'm not going to define them because I can't. Assume that a human has these properties. Now I demonstrate that a machine can't have these properties, since they were never actually defined. Since by assumption a human has them, a human has intelligence and a machine does not.
The kind of thinking that leads to this reasoning is the same as that of the people who rejected evolution because man could never have descended from monkeys. The only thing the argument proves is the close-mindedness of its author.
But this is not what the argument says, is it? The argument says imagine a man that can't understand Chinese. The assumption here is that some humans can understand Chinese while others can not.
The Chinese Room attempts to refute that idea by saying that you can get the effect of understanding with a methodology that obviously lacks the gestalt that we associate with understanding a subject.
Personally I'm a skeptic of hard A.I. in general. Consciousness to me seems like a magic trick, in that the only magical thing about it is that it will disappear once you know how it's done. Which makes it seem like a black box that can only be created if you don't know how or what you're doing.
It argues that a system (the room) consisting of a non-Chinese speaker and an arbitrarily large and complex rule system cannot "understand" Chinese, on the grounds that the human clearly doesn't understand Chinese. The human is just a distraction.
You could instead imagine a variant system where the room contains two people (and no filing cabinets): a non-Chinese-speaking scribe and a Chinese speaker. The scribe laboriously transcribes messages without understanding them and passes them to and from the Chinese speaker, who clearly does.
Obviously the addition of the non-understanding human in that example does not make the system unintelligent. Similarly, the (non-)intelligence of the system in the classic argument is independent of the understanding of the human stuck inside it.
There are interesting questions to ask about the nature of intelligence, but the Chinese Room argument draws our attention to none of them and simply obfuscates the issue with an irrelevant misdirection.
Imagine you create a system of filing cabinets that encodes the rules to compute AlphaGo on encrypted inputs. This is _clearly_ conceptually possible (if not practical, just due to the sheer cost and size).
This system would, as AlphaGo does, be the world's strongest Go player, even though the human inside may not understand Go at all (and certainly wouldn't understand the game they were helping to play, as all the moves would be encrypted).
In this form we see the Chinese Room not as evidence against AI, as intended; if the argument actually held (and wasn't just a form of intellectually cheap misdirection), it would be very strong evidence that its notion of "understanding" is of no practical use. After all, what would "understanding" even mean if we had to say that the world's best Go player didn't understand the game at all?
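Here's a toy stand-in for that encrypted setup (a trivial rot13 "cipher" and an invented two-entry strategy table, nothing like the real AlphaGo): the operator does a mechanical lookup on opaque tokens and never sees a move in the clear.

```python
import codecs

def encrypt(move):
    """Trivial substitution cipher, purely illustrative."""
    return codecs.encode(move, "rot13")

decrypt = encrypt  # rot13 is its own inverse

# An invented "strategy book", compiled entirely into ciphertext:
plain_rules = {"D4": "Q16", "Q16": "C3"}  # hypothetical move -> reply pairs
cipher_rules = {encrypt(k): encrypt(v) for k, v in plain_rules.items()}

def operator(ciphertext):
    """The human in the room: mechanical lookup on opaque tokens."""
    return cipher_rules[ciphertext]

print(decrypt(operator(encrypt("D4"))))  # "Q16": a correct reply the operator never saw in plaintext
```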
How do you value each one of these? Should an animal be valued higher than a constructed machine that is otherwise identical? What about humans and machines?
Normally, in order to answer these questions we would look at the empirical evidence, and the evidence in favour of inanimate objects having self-awareness is completely non-existent.
On the other hand, the evidence that animals have self-awareness is overwhelmingly strong.
Thus, it is reasonable to conclude that inanimate objects aren't self-aware but that people and animals are. (And of course this also means that it is NOT reasonable to conclude otherwise.)
This is essentially a part of the Chinese Room Argument: how can you draw a distinction between two things, when you cannot observe it?
Unfortunately, his argument applies to a human being as well, so...
It would be interesting to do a study of a group like judges or politicians and compare them to a hypothetical Chinese Room. It seems likely that they would often need to speak briefly on subjects outside their actual understanding, but they clearly have a deep understanding of the language.
AFAICT this is one step further than the Turing test which is blind (or perhaps double blind). In this test the human starts interacting with something that is clearly a machine, then comes away convinced that the machine possesses non-machine qualities like sentience, a soul, etc. I don't remember Kurzweil addressing whether the machine itself would argue this on its own behalf, but that seems likely.
Most neuroscientists feel strongly that the brain is not (just?) a computer https://aeon.co/essays/your-brain-does-not-process-informati...
But as a consequence, we cannot draw the metaphor about the brain being a "computer" too far. The "software" in the brain is specific to the given brain, and not hardware independent.
Sure, if we imagine a Turing machine with infinite speed and memory, it might be able to run the "software" from every human brain, but we do not know if it is even possible to build such a computer within our current universe (if we require it to operate at real time). And if it is possible, I would guess that it would be so many orders of magnitude less efficient than a non-Turing complete machine that there would be little reason to build it.
Penrose and others try to smuggle in something like dualism through QM, but I don't see why that is necessary at all.
I think we just need to throw out most of the "Computer" metaphor that requires Turing completeness, and deal with the brain for what it is.
Searle's argument is closet dualism.
The basic idea of the Chinese Room is that the Turing Test is inadequate—such that even if a machine could pass the test, we would not be permitted to infer that such a machine is intelligent. This is significant for the distinction between syntax and semantics, as well as for any mechanical account of mind that integrates the results of computational logic. And indeed it does seem like many people do wonder about the nature of machine intelligence and about whether we can give a coherent account of the mind from a mechanical perspective. So perhaps some people doing work in applied ML do not care about these questions; but it does not follow from this that such questions are without qualification uninteresting or unworthy of being considered.
Also, reflect on the fact that while this isn't supposed to work in real time, the thought experiment calls for the human to be able to operate this conversation-machine by following the exhaustive instructions but without absorbing any information about the system they're operating, such that if they are let out of the Chinese room and handed some Chinese text on paper, it will supposedly be meaningless to them.
The whole idea has so many inherent flaws that I'm perplexed that it was ever taken seriously.