I don't understand why people have any respect at all for Searle's "argument"; it's just a bare assertion that "machines can't think", combined with some cheap misdirection. Can anyone argue that having Chinese characters instead of bits going in and out is something other than misdirection? Can anyone argue that having a human being acting like a CPU instead of an actual CPU is something other than cheap misdirection?
I think you might be missing what the Chinese Room thought experiment is about.
The argument isn’t about whether machines can think, but about whether computation alone can generate understanding.
It shows that syntax (in this case, the formal manipulation of symbols) is insufficient for semantics, or genuine meaning. That means that whether you're a machine or a human being, I can teach you every grammatical and syntactic rule of a language, but that is not enough for you to understand what is being said or for meaning to arise, just as in his thought experiment. From the outside it looks like you understand, but the agent in the room has no clue what meaning is being imparted. You cannot derive semantics from syntax.
Searle is highlighting a limitation of computationalism and of the idea of 'Strong AI': no matter how sophisticated you make your machine, it will never achieve genuine understanding, intentionality, or consciousness, because it operates purely through syntactic processes.
This has implications beyond the thought experiment; the idea has influenced Philosophy of Language, Linguistics, AI and ML, Epistemology, and Cognitive Science. To boil it down, one major implication is that we lack a rock-solid theory of how semantics arises, whether in machines or in humans.
Slight tangent, but you seem well informed, so I'll ask you (I skimmed the Stanford site and didn't see an obvious answer):
Is the assumption that there is internal state, and that the rulebook is flexible enough to produce the correct output even for things that require learning and state?
For example, the input describes the rules of a game and then initiates the game with some moves, and the Chinese Room is expected to produce the correct output?
It seems that without learning+state the system would fail to produce the correct output so it couldn't possibly be said to understand.
With learning and state, at least it can get the right answer, but that still leaves the question of whether that represents understanding or not.
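To make the scenario concrete, here's the kind of toy I have in mind (my own sketch, obviously not from Searle's paper): the operator only ever looks up (state, symbol) pairs and copies out whatever the table says, yet the answers come out right.

    # Toy "rulebook with internal state": the hypothetical game is to answer
    # ODD or EVEN for how many times the token has appeared so far.
    RULEBOOK = {
        ("even", "TOKEN"): ("odd",  "ODD"),
        ("odd",  "TOKEN"): ("even", "EVEN"),
    }

    def room_step(state, symbol):
        # Pure lookup: no step requires knowing what any symbol means.
        return RULEBOOK[(state, symbol)]

    state = "even"
    for symbol in ["TOKEN", "TOKEN", "TOKEN"]:
        state, reply = room_step(state, symbol)
        print(reply)   # ODD, EVEN, ODD -- correct answers, no understanding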
We don't have continual-learning machines yet, so understanding new things, or linking ideas further, isn't quite there yet. I've always taken understanding to mean taking an unrefined idea, or incomplete information, applying experimentation and doing, and coming out with a more complete model of how to do the action in question.
Like understanding how to bake a cake. I can have a simplistic model, for example making a cake from a boxed mix, or a more complex one, using the raw ingredients in the right proportions. Both involve some level of understanding of what's necessary to bake a cake.
And I think AI models have this too. When they have some base knowledge of a topic and you ask a question that requires a tool, without asking for a tool directly, they can suggest a tool to use, which at least to me makes it appear that the system as a whole has understanding.
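Roughly the behaviour I mean, sketched with made-up names (not any particular vendor's API), where the "model" decides on its own that a tool fits the question:

    import json

    TOOLS = {"calculator": lambda expr: str(eval(expr))}   # toy registry; eval only because this is a toy

    def model(prompt):
        # Stand-in for the language model: imagine it chose, unprompted,
        # to answer this question via the calculator tool.
        return json.dumps({"tool": "calculator", "args": "12 * 7"})

    def run(prompt):
        reply = model(prompt)
        try:
            call = json.loads(reply)
            return TOOLS[call["tool"]](call["args"])   # dispatch the suggested tool
        except (json.JSONDecodeError, KeyError, TypeError):
            return reply                               # plain answer, no tool needed

    print(run("What is a dozen sevens?"))   # 84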
You're anticipating the modern AI angle and that's a good move.
In Searle's Chinese Room, we're asked to imagine a system that appears intelligent but lacks intentionality, the capacity of mental states to be about or directed toward something. In his setup there is no learning and no internal state; instead we have a static rulebook that manipulates symbols purely according to syntactic rules.
What you're suggesting is that if the rulebook, or maybe the agent, could learn and remember, then it could adapt, come closer to an intelligent system, and in turn have understanding. That is something Searle anticipated.
Searle covered this idea in the original paper and in a series of replies: Minds, Brains, and Programs (1980, anticipation p. 419 + peer replies), Minds, Brains, and Science (1984), Is the Brain’s Mind a Computer Program? (1990), The Rediscovery of the Mind (1992), and many more clarifications in lectures and interviews. Replies came from Dennett, the Churchlands, Hofstadter, Boden, Clark, and Chalmers (which you may want to look up if you're going deeper).
To try and summarize Searle: adding learning or state only complicates the syntax; the system is still purely rule-governed symbol manipulation; there is no semantic content in the symbols; and the learning or internal changes remain formal operations (not experiences or intentions).
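As a rough illustration of that last point (my own toy, not Searle's): "learning" in the room amounts to adding rows to the rule table, which is itself just another formal operation on uninterpreted symbols.

    rulebook = {}   # (state, symbol) -> reply; every entry is an opaque token

    def learn(state, symbol, reply):
        # "Learning": the operator copies a new row into the book.
        # This is table maintenance, not insight into what the symbols mean.
        rulebook[(state, symbol)] = reply

    def answer(state, symbol):
        return rulebook.get((state, symbol), "NO-RULE-FOR-THIS-SQUIGGLE")

    learn("S0", "squiggle", "squoggle")
    print(answer("S0", "squiggle"))   # squoggle -- new behaviour, still pure syntax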
So, zooming out: even with learning and state added, we're still dealing with syntax, and no amount of syntactic complexity will get us to understanding. Of course, this draws pushback from functionalists like Putnam, Fodor, and Lewis. That's close to what you're pointing at: they would say that if a system with internal state and learning can interpret new information, reason about it, and act coherently, then it functionally understands. And I think this is roughly where people are landing with modern AI.
Searle’s deeper claim, however, is that the mind is non-computational. Computation manipulates symbols; the mind means. And the best evidence for that, I think, lies not in metaphysics but in philosophy of language, where we can observe how meaning continually outruns syntax.
Phenomena such as deixis, speech acts, irony and metaphor, reference and anaphora, presupposition and implicature, and reflexivity all reveal a cognitive and contextual dimension to language that no formal grammar explains.
Searle’s view parallels Frege’s insight that meaning involves both sense (how something is presented) and reference (what it designates), and it also echoes Kaplan’s account of indexicals in Demonstratives (1977), where expressions such as I, here, now, today, and that take their content entirely from the context of utterance: who is speaking, when, and where. Both Frege and Kaplan, in different ways, reveal the same limit that Searle emphasizes: understanding depends on an intentional, contextual relation to the world, not on syntactic form alone.
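To give Kaplan's point a toy rendering (my own illustration, with made-up names): the character of an indexical is a rule from context to content, so the same word picks out different contents in different contexts of utterance.

    CHARACTER = {
        "I":    lambda ctx: ctx["speaker"],
        "here": lambda ctx: ctx["place"],
        "now":  lambda ctx: ctx["time"],
    }

    def content(expression, context):
        # Character (the linguistic rule) plus a context of utterance yields
        # the content; the bare string "I" fixes nothing by itself.
        return CHARACTER[expression](context)

    ctx_a = {"speaker": "Ana",   "place": "Lisbon", "time": "2024-05-01"}
    ctx_b = {"speaker": "Bilal", "place": "Lahore", "time": "2025-01-10"}

    print(content("I", ctx_a), content("I", ctx_b))   # Ana Bilal -- same word, different contents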
Before this becomes a rambling essay, we're left with Frege's tension of coextensivity (A = A and A = B), where logic treats them as equivalent but understanding does not. If the functionalists are right, then perhaps that difference, between meaning and mechanism, is only apparent, and we’re making distinctions without real differences.
> Frege's tension of coextensivity (A = A and A = B)
I googled and now reading up on this one. I really enjoy how things that seem basic on the surface can generate so much thoughtful analysis without a clear and obvious solution.
I understand the assertion perfectly. I understand why people might feel it intuitively makes sense. I don't understand why anyone purports to believe that saying "Chinese characters" rather than bit sequences serves any purpose other than to confuse.