
And it will work.

I just wish the people who believe LLMs can actually reason and generalize would see that they don't.



If that were evidence that current AI doesn't reason, then the Thatcher effect would be evidence that humans don't: https://en.wikipedia.org/wiki/Thatcher_effect

LLMs may or may not "reason", for certain definitions of the word (there are many), but this specific thing doesn't differentiate them from us.


Being tricked by optical illusions is more about the sensory apparatus and image-processing faculties than about reasoning, but detecting optical illusions is definitely a reasoning task. I doubt it's an important enough task to train into general models, though.


At this point I think all "reasoning" really means is having seen enough of the right training data to make the correct inferences, and they're just missing some training data.



