
Are you seriously bringing up Searle's Chinese Room as an argument against AI research? According to Wikipedia, "The Chinese room argument is primarily an argument in the philosophy of mind, and both major computer scientists and artificial intelligence researchers consider it irrelevant to their fields." Right on.

First off, philosophy of mind is 100% irrelevant to modern AI research, which is more concerned with creating algorithms that act as if intelligent at a human level than with creating algorithms that recreate human states of mind. You guys might not get that, but every person working on AI does.

Even that is a very charitable reading of Searle's "work": most of us consider Searle a fucking idiot at best, and a troll in the most likely case. The Chinese Room analogy is tortured and pretty much assumes dualism from the start. To me, if a dude in a closet pushing papers around could fake understanding Chinese as far as any outside observer is concerned, we'd have solved strong AI, so I don't care whether Searle thinks we've succeeded or not.
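To make the setup concrete, here's a toy sketch of the room as a program (my own illustration, nothing from Searle's paper, and the rulebook entries are made up): the whole thing is a purely syntactic lookup table, and nothing in the loop understands the symbols it shuffles.

    # Toy Chinese Room in Python: mechanical rule-following over symbols.
    # Illustrative entries only; a "real" room would need a rule for every
    # possible input, which is exactly where the thought experiment strains.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
        "你会说中文吗？": "会一点儿。",  # "Do you speak Chinese?" -> "A little."
    }

    def room(message: str) -> str:
        # Match the input against the rulebook and emit the prescribed
        # output; the operator never interprets what the symbols mean.
        return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

    print(room("你好吗？"))  # prints: 我很好，谢谢。

If that lookup were actually good enough to fool a native speaker, that fact alone would be the strong AI result, whatever Searle says about the operator.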

Re: Nagel, I don't know his stuff, but having read http://en.wikipedia.org/wiki/What_Is_it_Like_to_Be_a_Bat%3F, I'm not too interested; there's so much vagueness there that it feels like just more bullshit questioning whether we've achieved "real" understanding or just a mechanical approximation. And again, I don't care. I want a program that acts as if it's intelligent, and it needs to pass most normal people's bar for intelligence, not some dipshit philosopher's bar for being human.

Philosophers have always been misinterpreting AI research's goals, which is why nobody in AI has ever paid them any attention, and which is also why they'll never be relevant to anything. Even if they're right, they're not asking questions that anyone cares about.



I like this deconstruction of Searle's Chinese Room, by Scott Aaronson. He calls Searle's argument a "non-insight": http://www.scottaaronson.com/democritus/lec4.html


That link was the most interesting thing I've read on HN all day. Thank you!

My favorite quote: "As a historical remark, it's interesting that the possibility of thinking machines isn't something that occurred to people gradually, after they'd already been using computers for decades. Instead it occurred to them immediately, the minute they started talking about computers themselves. People like Leibniz and Babbage and Lovelace and Turing and von Neumann understood from the beginning that a computer wouldn't just be another steam engine or toaster -- that, because of the property of universality (whether or not they called it that), it's difficult even to talk about computers without also talking about ourselves."


Sure thing. I should have mentioned that you don't need to read the whole thing (Searle's Chinese Room argument is discussed in only one section of the linked lecture).

You'll probably enjoy his other material. Scott Aaronson is a prolific expositor; he's been blogging for nearly a decade (since before he was hired by MIT's CS dept).


Searle's primary contribution to philosophy is that he forces every CS student to ponder this important question: "Is this famous philosopher correct, that AI is impossible, or does a nobody like me actually understand the concept of emergence better than he does?"


> Philosophers have always been misinterpreting AI research's goals, which is why nobody in AI has ever paid them any attention, and which is also why they'll never be relevant to anything. Even if they're right, they're not asking questions that anyone cares about.

That's totally unfair. Serious philosophers object to the misapplication of AI research to answer philosophical questions about the mind (not necessarily even by AI researchers), not AI in general. It's basically the same complaint you have against Searle being invoked against AI. Don't sink to the level of the person you're replying to with mindless tribalism. Your definition of 'anyone' appears to be AI researchers.

Fwiw, although Searle made it onto undergraduate phil mind courses, he isn't really taken that seriously by contemporary philosophers.


That's fair - I'm biased by having had too many conversations with self-professed philosophers telling me that any push towards AGI is wasted effort because of X, where X just means that it wouldn't satisfy whatever they think is special about humans.

I don't think AI researchers have anything to offer philosophy. The thing is, AI researchers rarely engage at all except when philosophers pop up and tell them that what they're doing is impossible. AI researchers generally don't give a shit about philosophy, whereas there is a ton of noise coming from the other direction.

You may be right in your implicit suggestion that the people bringing up Searle are really just amateurs, though. I don't recall anyone with bona fide credentials in philosophy ever mentioning the guy as anything more than a sad amusement...


Philosophy often borrows cases from other fields in order to provide concrete illustrations of quite abstract ideas.

Unfortunately, this often gets misunderstood (both by practitioners of those fields and people with an axe to grind) as being critical of the field. The criticism is usually really directed at another philosophical position.

I think the reason the Chinese Room argument gets so much attention is that it's an argument against a position that was popular in the 1970s --- that mental states are identical (as in, strict identity) to classical computational states --- while being easy to understand and criticise. As you say, it assumes its own conclusion.

To be fair to Searle, I should point out that while the Chinese Room argument isn't taken seriously, he did other, unrelated work that is still relevant!


I care about this question. I think that aiming for 'algorithms that act as if intelligent at a human level' rather than 'algorithms that recreate human states of mind' is part of the problem, and acts as a limitation on our AI. I am actively interested in AI that has a sense of self and is capable of developing preferences rather than mere opinions.

That doesn't mean I buy Searle's argument; indeed, I have refuted it below. But I do think that dismissing the questions asked by philosophy out of hand is a mistake, and it is causing us to overlook opportunities. The most likely place for general AI to develop is on mobile devices - not in a "Her"-type human fashion, though.



