Hacker News

I've seen a lot of Chinese room comparisons in these threads, and I just want to point out that the Chinese room is meant to be a thought experiment, not something you're supposed to actually build. If you take a step back, a working Chinese room is arguably more impressive than a human who "merely" understands Chinese: such a room can store any kind of information in unlimited quantities, whereas the human will always be limited to producing language. In a way, the room is a more general form of intelligence than the human.

imo LLMs represent a form of super-human AGI that has been artificially limited by its training context. I don't think it's really accurate to call LLMs "narrow" AI, because they likely generalize as much as is theoretically possible given their data and training setup, and are only narrow due to the lack of external context and grounding.




I'm always surprised that the Chinese room is considered an argument *against* understanding. It seems self-evident to me that this is exactly what understanding is.


Honestly, GPT seems so much more amazing than the Chinese room in the sense that we see it do language translation at an amazing level... for something that's not a language translator. It's not a Chinese room, it's an every-language room.

At this point the entire thought experiment is nearly dead, and I expect that after we see multimodal models evolve, we'll look back and go "yep, that was totally wrong".



