Hacker News

Unless I'm missing something, this blog post never really defines "reason", which makes the question it asks completely pointless.



I don't rigorously define reason in the article, and I state that it is hard to draw clear boundaries. I'm relying more on the reader's intuitive understanding of the idea, which is perhaps not a good thing to do.

I wouldn't say the question is completely pointless, though. There are a number of data points in the post that you can use to inform a conclusion about whether you think LLMs can reason or not.


Maybe you could train it explicitly on modus ponens et alia.
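One way to do that would be with synthetic fine-tuning data. Here's a minimal sketch of what generating modus ponens training pairs might look like; the subject/property/consequence lists, the prompt layout, and the `make_example` helper are all hypothetical choices for illustration, not anything from the article:

```python
# Hypothetical sketch: generate synthetic modus ponens (prompt, completion)
# pairs that could be used for supervised fine-tuning.
import random

SUBJECTS = ["the battery", "the server", "the valve", "the sample"]
PROPERTIES = ["is overheating", "is offline", "is open", "is contaminated"]
CONSEQUENCES = ["it must be replaced", "requests will fail",
                "pressure will drop", "the test is invalid"]

def make_example(rng: random.Random) -> dict:
    """Build one pair instantiating modus ponens:
    from 'if P then Q' and 'P', conclude 'Q'."""
    p = f"{rng.choice(SUBJECTS)} {rng.choice(PROPERTIES)}"
    q = rng.choice(CONSEQUENCES)
    prompt = (f"Premise 1: If {p}, then {q}.\n"
              f"Premise 2: {p.capitalize()}.\n"
              "Conclusion:")
    return {"prompt": prompt, "completion": f" Therefore, {q}."}

rng = random.Random(0)
dataset = [make_example(rng) for _ in range(1000)]
print(dataset[0]["prompt"], dataset[0]["completion"])
```

Whether a model trained on this learns the inference rule itself or just the surface pattern is, of course, exactly the question under debate.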



