
I have never actually understood the "Chinese Room" thought experiment.

As formulated on Wikipedia, the point is supposed to be that while the room is working, the man inside it does not understand Chinese, so by analogy the computer cannot understand it either. But isn't this completely beside the point? Regardless of what the man or the computer can or cannot understand, the room or the program as a whole can. The man (or the computer) is just a cog in a larger system and cannot be expected to understand the system, just as none of my individual neurons can understand English.

> I think what we all had hoped for, however, was HAL. What we are going to get is more and more iterations of cleverbot.

And what is the difference? When we can build a program that can parse natural language and use it to access information, it will usher in a whole new technological revolution.




There is a whole line of philosophers of mind who viscerally reject the idea that a deterministic system can 'understand'. They will keep arguing about that even after human-level sentient agents are created. I call these people cryptodualists, because they claim to be non-dualists, yet are willing to accept arbitrary hogwash such as qualia, consciousness as a fundamental physical property, and the quantum correlates of consciousness.


A key element of the Chinese Room experiment, though, is that it excludes the biochemical properties of the brain, which Searle strongly believes are the foundation of human consciousness.

Searle's main point: a deterministic, symbol-manipulating machine can never give rise to the equivalent of human consciousness, because the traits essential to human consciousness are rooted in the biochemical properties of the brain.


I don't think so. Here is what he says in his rebuttal of the "Brain Simulator" reply:

> The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.

(http://pami.uwaterloo.ca/tizhoosh/docs/Searle.pdf)

I believe he would say the same thing with "neuron firings" replaced by "biochemical processes". This is a slippery argument that only religious arguments can equal. Can you take your simulation down to the level of individual molecules and keep looking for "intentional states"? Molecules are pretty dumb.


This assumes the very thing CAN be simulated. Well, you can simulate the interactions between water molecules in a program, but that doesn't get you actual wetness.

So, what if this "consciousness" property depends on the biochemical processes and substances of the brain, in the way wetness depends on actual water?

If I simulate molecules moving rapidly and hitting each other, I get a simulation of what happens when heat is produced, but not actual heat --my simulation cannot light a match, for instance.
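
To make the analogy concrete, here is a toy sketch in Python: a made-up 1D "ideal gas" whose "temperature" is just a number computed from simulated velocities (the particle count, masses and units are arbitrary, purely illustrative):

    import random

    # Toy "heat" simulation: N point particles in 1D with random velocities.
    # The "temperature" is just a number derived from the simulated kinetic
    # energy (arbitrary units) -- it warms nothing outside the program.
    N = 1000
    mass = 1.0
    velocities = [random.gauss(0.0, 1.0) for _ in range(N)]

    kinetic_energy = sum(0.5 * mass * v * v for v in velocities)
    simulated_temperature = 2.0 * kinetic_energy / N  # k_B = 1, 1D ideal gas

    print("simulated temperature:", round(simulated_temperature, 3))

The printed number tracks the bookkeeping of heat, but it will never light a match.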


Wetness = the sensory brain state generated when thermo- and tactile receptors fire as a hand touches water. Connect your simulator to those nerves and you've got actual wetness. Conversely, your simulated heat could light a simulated match. These kinds of linguistic arguments are inherently dualistic (i.e. they falsely assume that words like 'wetness' have meaning outside the realm of the human CNS, in a parallel universe perceived by the mind but not the brain).

Consciousness, OTOH, is a brain state felt by the brain, so it should in principle be simulatable, just like any physical system. Unless we believe it's metaphysical.


>Wetness = the sensory brain state generated when thermo- and tactile receptors fire as a hand touches water. Connect your simulator to those nerves and you've got actual wetness.

Only this "wetness" wouldn't soak an actual napkin.

Nothing linguistic about it.

Physical objects have physical properties --you can simulate those, but then you have to simulate the whole surroundings (or the universe in the extreme) to get the effects of those properties on other items.


>Only this "wetness" wouldn't soak an actual napkin.

No "wetness" will ever do it. That would require real water, right? What causes the soaking are electrical forces, there's no such thing as "wetness" in nature. I believe it's just a word that humans invented for the properties of water.

>you have to simulate the whole surroundings (or the universe in the extreme) to get the effects of those properties on other items

Agreed, but what's the point? The important thing is that the agent is able to communicate its internal state to us. That's what humans do with each other (the other-minds problem), and we typically assume that there is "understanding".


>No "wetness" will ever do it. That would require real water, right?

You don't say! (as the meme goes).

I mean, of course, that I'm using the word "wetness" to refer to the physical effects of the presence of water.

So, to return to the actual thing under discussion: we can simulate stuff from the physical world, and the simulation might capture some of the same information and calculations (e.g. down to the individual positions of particles, exchanges of energy, etc.), but it doesn't have the same properties unless you simulate the whole of its environment.

>Agreed, but what's the point? The important thing is that the agent is able to communicate its internal state to us.

What I'm implying is that you might not be able to get an intelligent agent to even have a sufficiently advanced "internal state" unless you mimic and simulate the whole thing. Not just "this neuron fires now", but also things like the neuron's materials, physical characteristics and responses, etc. Those could be essential to how accurately (or not) information such as memories and thoughts is stored, how it is recalled, the timing of neuron firings, and so on.


>What I'm implying is that you might not be able to get an intelligent agent to even have a sufficiently advanced "internal state" unless you mimic and simulate the whole thing

You might not, but current research is more hopeful. The current consensus is that you have simulated a neuron well enough once you get down to the level of chemical reaction kinetics, and that description appears to be accurate enough to recreate the electrical properties of neurons. So far there are no neuronal phenomena that can't be explained within this framework, so the consensus is more "we might" than "we might not".
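
For a sense of what "recreating the electrical properties" means in code, here is a deliberately crude sketch in Python -- a leaky integrate-and-fire neuron, far simpler than the reaction-kinetics models referred to above, with constants that are illustrative rather than fitted to any real neuron:

    # Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
    # integrates an input current, and emits a "spike" at threshold.
    dt = 0.1          # ms, time step
    tau = 10.0        # ms, membrane time constant
    v_rest = -65.0    # mV, resting potential
    v_thresh = -50.0  # mV, spike threshold
    v_reset = -70.0   # mV, post-spike reset
    R = 10.0          # MOhm, membrane resistance
    I = 2.0           # nA, constant input current

    v = v_rest
    spike_times = []
    for step in range(int(200 / dt)):        # simulate 200 ms
        dv = (-(v - v_rest) + R * I) / tau   # leaky integration
        v += dv * dt
        if v >= v_thresh:                    # threshold crossing -> spike
            spike_times.append(step * dt)
            v = v_reset

    print(len(spike_times), "spikes in 200 ms")

Kinetic-level models (Hodgkin-Huxley and its descendants) add the ion-channel chemistry on top of this; the point is only that the electrical behaviour is ordinary, simulatable physics.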


Blindsight by Peter Watts hits many of this thread's themes. It sometimes gets a mention on HN, so here's a reference for the uninitiated: http://www.hnsearch.com/search#request/all&q=blindsight+...


Searle calls your objection the "systems reply", and his response can be found here (2a): http://www.iep.utm.edu/chineser/#H2


Pretty much everything Searle has ever written on the subject can be predicted by starting with the argument "but only humans can understand meaning!" and working from there.

In part c of that link, he lays it out quite clearly: even if you manage to build a detailed working simulation of a human brain, even if you then insert it inside a human's head and hook it up in all the right ways, you still haven't "done the job", because a mere simulation of a brain can't have mental states or understand meaning. Because it's not a human brain.

In other words, he's an idiot. Or at least he's so committed to being "right" on this issue that he's willing to play the dirty philosophy game of sneakily redefining words behind the scenes until he's right by default.

But in any case, he's not talking about any practical or measurable effect or difficulty related to AI. He's arguing that even if you built HAL, he wouldn't acknowledge it as intelligent, because his definition of "intelligent" can only be applied to humans.


Is it Searle who redefines consciousness because he doesn't like computers, or is it you, because you like them? His argument is quite brilliant, because it's both clear and non-trivial. Most of the self-appointed internet philosophers lack both of these qualities.

For example, people who say that there is no difference between understanding addition and merely running an addition algorithm are wrong. Dead wrong. You don't need complex philosophy to show that. Yes, the results of the computations would be the same, but the consequences for the one doing the computing are not. We all know that a person who understands something can do much more with it than a person who has merely memorized a process. Everybody agrees with this when it comes to education, so why is the principle suddenly reversed when it comes to computers?
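
To make the contrast concrete, here is a toy "merely running an algorithm" adder in Python: it shuffles digit symbols around using a memorized table and a carry rule, much like the rulebook in the Chinese Room, and whether anything in it "understands" addition is exactly the question being debated:

    # A purely syntactic adder: it manipulates digit characters via a memorized
    # single-digit table and a carry rule, with no concept of quantity.
    # (The table is generated here only for brevity -- think of it as the
    # rulebook handed to the man in the room.)
    DIGITS = "0123456789"
    ADD_TABLE = {(a, b): (str((i + j) // 10), str((i + j) % 10))
                 for i, a in enumerate(DIGITS)
                 for j, b in enumerate(DIGITS)}

    def add_numerals(x, y):
        width = max(len(x), len(y))
        x, y = x.zfill(width), y.zfill(width)
        result, carry = "", "0"
        for a, b in zip(reversed(x), reversed(y)):
            c1, s = ADD_TABLE[(a, b)]      # add the column digits
            c2, s = ADD_TABLE[(s, carry)]  # then add the incoming carry
            carry = "1" if (c1, c2) != ("0", "0") else "0"
            result = s + result
        if carry != "0":
            result = carry + result
        return result.lstrip("0") or "0"

    print(add_numerals("478", "596"))  # -> 1074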


>Most of the self-appointed internet philosophers lack both of these qualities

What use is attacking the man here?

You are also misrepresenting Searle's argument. In the case of addition, the machine would not only be able to perform it, but also to answer any conceivable question about the abstract operation of addition. It would be able to do everything a human can do, excluding nothing. The underlying argument is that "understanding" is a fundamentally and exclusively human property (and this will not be fully rebutted until we discover in full the processes underlying learning and memory in humans).

Granted, a huge list of syntactic rules will probably not result in any useful intelligence, but a brain simulator would be exactly equivalent to a human (and Searle's response to that argument is completely unfounded).


I don't think I misrepresent his argument. I just interpret it using different examples. He uses a huge example, like speaking Chinese, which seems to confuse a lot of people. I use something much simpler, like doing addition.

His argument is based on the notion that doing something and understanding what you do are two different things. I don't see why this needs an elaborate thought-experiment when we all have experienced doing things without understanding them. We don't need to compare humans to computers to see the difference.

Problem is, this difference becomes apparent only when you go beyond the scope of the original activity/algorithm. And that's exactly where modern AI programs fail, badly. You take a sophisticated algorithm that does wonders in one domain, throw it into a vastly different domain, and it starts to fail, miserably, even though that second domain might be very simple.


His argument is that, while a human can do something either with or without understanding it (e.g. by memorizing), a machine can only do the latter and will never manage the former. That may hold for current (simplistic) AI, but not for a future full brain simulator.


And he completely misses the point of the argument.

"There isn’t anything in the system that isn’t in him." This small sentence just completely shows his ignorance of virtual machines. Yes my tinyxp system is running within Ubuntu, that doesn't mean my Ubuntu system is a tinyxp system. “[it’s just ridiculous to say] that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might” He is, yet again, falling back and just disregarding the argument. As I said before, he is arguing that because my processor cannot do maths would the aid of the rest of my computer it is ridiculous to assume that when combined properties might emerge. Sort of similar to how Sapience is an emergent property of our bodies really...


I think, put quite simply, the systems reply completely devastated the Chinese Room argument. I don't know why we bother even bringing it up any more.



