
I don't think I misrepresent his argument; I just illustrate it with a different example. He uses a large, elaborate example, speaking Chinese, which seems to confuse a lot of people. I use something much simpler: doing addition.

His argument is based on the notion that doing something and understanding what you do are two different things. I don't see why this needs an elaborate thought-experiment when we all have experienced doing things without understanding them. We don't need to compare humans to computers to see the difference.

The problem is that this difference becomes apparent only when you go beyond the scope of the original activity or algorithm. And that's exactly where modern AI programs fail badly. You take a sophisticated algorithm that does wonders in one domain, throw it at a vastly different domain, and it starts to fail miserably, even though that second domain might be very simple.
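
To make the addition example concrete, here is a minimal sketch (my own illustration, not anything from Searle; all names are made up): a program that "adds" by looking up memorized single-digit answers, the way the room operator looks up rules. It performs perfectly inside its table and fails the moment the input steps outside it.

    # A "rote adder": memorized answers for single-digit sums,
    # analogous to the Chinese Room's rule book.
    memorized_sums = {(a, b): a + b for a in range(10) for b in range(10)}

    def rote_add(a, b):
        # Pure lookup; no concept of addition is involved.
        return memorized_sums.get((a, b))  # None outside the memorized domain

    print(rote_add(3, 4))    # 7    -- looks like it "understands" addition
    print(rote_add(12, 30))  # None -- fails just beyond the original scope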




His argument is that, while a human can do something either with or without understanding it (e.g. by rote memorization), a machine can only ever do it without understanding, never with. The argument may hold for current (simplistic) AI, but not for a future full-brain simulator.



