
Daniel Dennett is a philosopher who made this sort of his favorite subject: competence without comprehension. I won't attempt to explain it better than he does.

But the general idea is that these models are more elaborate versions of ELIZA; they aren't different in principle. They work like my child: when I quiz him on the contents of a page of a bedtime story I've just read to him, he will try to come up with plausible answers, but I can tell whether he was actually paying attention and knew the words, or wasn't and didn't.

Now, I don't know whether Dennett would claim this, so it's my thought, not his. I believe a fundamental difference lies in motivation / value judgement. That is, humans do things for reasons they construct based on what they perceive to be more valuable to them. Models aren't built to have values or wishes. They don't want to be able to answer more "why?" questions about whatever they "say".

I mean, your reflection in the mirror is a very realistic counterfeit human being, and in many respects is indistinguishable from you, but it's nothing like a human being.




If you request a bot to write code that can accomplish a specified task, it needs to understand the task, understand the meaning of the code, and then write code that accomplishes the task. I don’t care about appeals to sentience or what it means to “understand”. Fundamentally the meaning of these things must be embedded in a way that leads to a synthesis of new code that solves the problem. Comparisons to ELIZA are naive.

Too many philosophers waste time debating the definitions of words that they assert must represent intrinsic concepts. But they’re wrong. Sentience is just a word. It’s not well defined. You haven’t accomplished anything by finding more examples where its lack of definition is made more clear.

You could say the same about comprehension, but on some level the pedantry feels so obvious and unnecessary that it’s just annoying to hear. Ffs. These models clearly embed a comprehension of many things.


> If you request a bot to write code that can accomplish a specified task, it needs to understand the task

No, it doesn't. It may simply be a coincidence. For example, I can write a bot that always outputs print('hello world'), and then ask it to write a hello world program in Python.
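
To make the point concrete, here is a minimal sketch of such a "bot" (hypothetical function name, just for illustration): it produces the requested program without any model of the request at all.

    # A "bot" that ignores its prompt entirely and always emits the same program.
    def bot(prompt: str) -> str:
        return "print('hello world')"

    # It "answers" this request correctly without understanding anything about it.
    print(bot("Write a hello world program in Python"))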

Comparisons to ELIZA aren't naive. They underscore the fact that more complex models of the same kind as GPT-3 use the same matching approach; they just have a bigger database of matches with more complex rules. They don't derive their answers from first principles. They don't have anything like abstract concepts or models in any useful form. Which was the goal of AI all along! AI was the search for those models and concepts, so that we could automatically establish the truth of questions nobody knows the answer to. Models like GPT-3 don't know, and cannot possibly search for, the answers to questions nobody knows the answer to, because they are aggregators of existing knowledge.
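
For reference, the "matching approach" in ELIZA looks roughly like this, a sketch with made-up rules rather than Weizenbaum's originals: responses come from pattern-to-template lookups, not from any representation of what the words mean.

    import re

    # Hypothetical ELIZA-style rules: (pattern, response template) pairs.
    rules = [
        (re.compile(r"I need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"I am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r".*"), "Please tell me more."),
    ]

    def respond(utterance: str) -> str:
        # Return the template of the first rule whose pattern matches.
        for pattern, template in rules:
            match = pattern.match(utterance)
            if match:
                return template.format(*match.groups())

    print(respond("I need a vacation"))  # -> "Why do you need a vacation?"

The claim in the comparison is that a bigger rule base and fancier matching change the scale, not the kind, of this mechanism.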

> Too many philosophers waste time debating

I bet you aren't one of them though.



