
How is this article different from a tired rehashing of Searle's "Chinese Room" argument, which never made much sense to begin with?

People argued the same way about computer chess: "it doesn't really understand the board, it is just checking all possibilities", etc. People like Chomsky used to say that a computer would never beat a master chess or Go player because it "lacks the imagination to come up with a strategy", etc. No one makes that argument anymore. Von Neumann already remarked in the 1940s that AI is a moving goalpost: as soon as something is achieved, it no longer seems intelligent.

Chomsky's arguments were already debunked by Norvig a decade ago. Instead of bothering to respond, Chomsky writes another high-brow dismissal in flowery prose.




The Chinese Room argument always made sense to me. Machine translation only understands the rules for translating X to Y. It does not understand what X and Y mean, as in the way humans apply language to the world and themselves. How could it?

LLMs are a step beyond that, though, in that they do encode the meanings of words in their weights. But they still aren't connected to the world itself. Things are only meaningful in terms of word relations, because that's how humans created language.


How do you know that I understand X and Y rather than just applying some mechanistic rules for producing this text? Even in the Chinese Room, to make it reasonably efficient, you'd need some shortcuts, some organization, some algorithm to do it. How is that different from some kind of understanding?


Because we have bodies that interact with the world and each other, and that's what language is based on. It's like computer science people completely forget how we evolved and created languages. Or how kids learn.


> Even in the Chinese Room, to make it reasonably efficient

That's the point - the brain isn't a Chinese room...


What if I gave you a complete description of how the brain of a person who speaks both Chinese and English is organised? You could then simulate what happens when that person reads Chinese after being told to translate it into English. Does that mean the person cannot translate from Chinese to English, just because you could (in theory, of course) do it without speaking Chinese yourself?

Yes, the algorithm is much more complicated, and we obviously don't have the capacity to map a brain like that, but to imply that there's anything other than the laws of physics governing it is... well, not very scientific.


I never said the system couldn't translate Chinese to English, only that it doesn't understand the meanings of the words it's translating, because they're ungrounded symbols. Words have meanings because they're about something. Searle never said a machine couldn't understand in principle, only that symbol manipulation isn't enough.

Obviously if we made something like Data from Star Trek, it would understand language.


I totally agree. The Chinese Room and, in general, philosophical arguments about the limits of AI always seem to come down to a belief in human exceptionalism.





