
I said it here before and I will repeat it: unless it solves the Abstraction and Reasoning Corpus (ARC; see https://twitter.com/fchollet/status/1636054491480088823), you cannot say that ChatGPT is able to think or abstract.
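For readers unfamiliar with ARC: each task is published as a small JSON file of input/output grid pairs (per the fchollet/ARC repo format). A rough sketch of that structure, with grids invented purely for illustration rather than taken from a real task:

    # Rough shape of one ARC task as distributed in the fchollet/ARC repo:
    # "train" and "test" are lists of {"input", "output"} pairs, and each
    # grid is a 2D list of integers 0-9, each integer standing for a colour.
    # The solver sees the train pairs and must produce the test output.
    import json

    task = {
        "train": [
            {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
            {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
        ],
        "test": [
            {"input": [[5, 0], [0, 5]], "output": [[0, 5], [5, 0]]},
        ],
    }
    print(json.dumps(task, indent=2))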


Is it really surprising that a text model can't solve graphical puzzles?


Wow, that's a really high bar to clear! I consider myself to be a non-dumb person, and it would take genuine concentrated thought to figure out the task in the example image.

But I think even if GPT-X solves it, some people will say it's just regurgitating whatever words, images, and associations it has seen in training.

There was a time when natural language conversation was considered the gold standard for AI. Now it's "just statistics".


> some people will say it's just regurgitating whatever words and images and associations it has seen in training

That's why I specifically mentioned ARC: its test set is novel and fully private (even to us humans), so the model would need real abstraction capabilities in order to solve it.



