I think there's a fundamental cognitive-shift problem with Searle's argument, because I remember encountering it in my tweens and wondering why anyone thought there was any 'there' there.

I think that -- from memory -- this is approximately what Searle believes himself to be saying:

1. Imagine that you're having a conversation with a person in another room, in Chinese. You're writing stuff down on little scraps of paper and getting little scraps of paper back. It's 100% clear to you that this is a real live person you're talking to.

2. Except here's the thing, it isn't. There's actually just this guy in the other room who doesn't speak, read, or write Chinese at all. He just has a whole bunch of books and ledgers that contain rules and recorded values that let him transform any input sentence in Chinese into an output sentence in Chinese that is completely indistinguishable from what a real live person who spoke Chinese might say.

3. So, it's ridiculous to imagine that someone could actually simulate consciousness with books and ledgers. There's no way. Since the guy doesn't understand Chinese, he isn't "conscious" in the sense of this example, so we can't describe him as conscious. And the idea that the books are conscious is ridiculous, because they're just information without the guy. So there actually can't be any consciousness there, even though it seems like it. Since consciousness can't be simulated by some books, it's clear that we're just interacting with the illusion of consciousness.

Meanwhile, this is what people like myself hear when he tries to make that argument:

1. Imagine that you're having a conversation with a person in another room, in Chinese. You're writing stuff down on little scraps of paper and getting little scraps of paper back. It's 100% clear to you that this is a real live person you're talking to.

2. Except here's the thing, it isn't. There's actually a system made up of books and ledgers of rules and values in the other room. There's a guy there who doesn't read or write Chinese; he just takes your input sentence in Chinese and applies the rules, noting down values as needed, transforming it until the rules say to send it back to you (there's a toy sketch of this after the list). That's the sentence that you get back. It's completely indistinguishable from what a real live person who spoke Chinese might say.

3. So, it's ridiculous to imagine that someone could actually simulate consciousness with books and ledgers, but we're doing it for the sake of argument because it's a metaphor that we can hold in our heads. No one would claim that the guy following the rules in the other room is the "conscious" entity that we believe ourselves to be communicating with. And no one would claim that static information itself is conscious. So either the "conscious" entity must be the system of rules and information as applied and transformed, or else there is no conscious entity involved. If there is no conscious entity involved, then, since this is a metaphor, we can replace "books and ledgers" with "individual nerve cells with synaptic connections" and "potentials of activation", and the conclusion will still hold: there will still be no consciousness there.

4. However, we feel that there is a consciousness there when we interact with a system of individual nerve cells, synaptically connected with various thresholds of potentiation: even if it's a system smaller by an order of magnitude or so* than the one in our skulls, like our dog has. Thus we must conclude either that the "conscious" entity is the system of rules and information as applied and transformed, or that the notion of consciousness is ill-founded and inarticulate, that our understanding of consciousness is incomplete, and that our sense of "knowing" that we or another person are conscious is likely an illusion.

*I am fudging on the figure, but essentially we're comparing melon volumes to walnut volumes, as dogs have thick little noggins.
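
For concreteness, here's a minimal toy sketch of the room-as-program reading. It is entirely my own illustration, not Searle's: the rule table, the sentences, and the names (RULES, room, ledger) are invented placeholders, and the real thought experiment assumes rules rich enough to cover any input.

    # Toy "books and ledgers": a rule table plus a scratch ledger.
    # The operator matches symbols without understanding them and
    # copies out whatever the matching rule dictates.
    RULES = {
        "你好": "你好！有什么可以帮你的吗？",  # canned reply to "hello"
        "再见": "再见！",                      # canned reply to "goodbye"
    }

    def room(message, ledger):
        reply = RULES.get(message, "请再说一遍？")  # default: "say again?"
        ledger.append((message, reply))  # the "recorded values"
        return reply

    ledger = []
    print(room("你好", ledger))  # looks like a live speaker to the sender

Nothing in this sketch understands anything, which is the shared starting intuition of both readings; the disagreement is only over what that intuition licenses.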




That's what's weird about Searle. He posited a great straw man that exposes the fallacy of taking "machines can't think" as an axiom, but he claims the straw man is a steel man. It is as though he is a sacrificial troll, making himself look silly to encourage everyone who can reject this straw man to do so.


> [...] we must conclude [...] that the notion of consciousness is ill-founded and inarticulate, that our understanding of consciousness is incomplete, and that our sense of "knowing" that we or another person are conscious is likely an illusion.

I think I mostly agree with you, but I would argue that if your notion of consciousness is ill-founded and inarticulate, you can't really decide whether it's an illusion either. After all, the subjective experience quite definitely does happen and is real, and is thus obviously not an illusion; meanwhile, the interpretation offered for that subjective experience is incoherent, so there is no way to decide whether it's describing an illusion or not.


Interesting. It's also unclear to me why a system of books and ledgers of rules couldn't be conscious if it is self-modifying (see the sketch below). Who knows what property of the system inside our heads gives it this sense of "self", and how could you even test that it has one?
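
A minimal sketch of what self-modification might mean here, assuming the books can instruct the operator to write new rules into them (the rules and names are mine and purely hypothetical):

    # Toy self-modifying rule book: unknown inputs cause the rules to
    # tell the operator to write a new rule, so the book itself changes.
    rules = {}

    def step(message):
        if message in rules:
            return rules[message]
        rules[message] = "echo: " + message  # the book rewrites itself
        return "noted"

    print(step("hello"))  # -> noted        (a new rule was written)
    print(step("hello"))  # -> echo: hello  (future behaviour changed)

After the first exchange the book itself has changed, so the system's future behaviour depends on its own history, which is at least one ingredient people usually want from a sense of "self".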



