
Thank you for picking at this.

A lot of people appear to be setting the bar for "reasoning", often not consciously or intentionally, at a level many or most people would not meet.

Sometimes that is just a reaction to wanting an LLM that produces results good enough for their own level. Sometimes it reveals a view of fellow humans that would be quite elitist if stated outright. Sometimes it's a kneejerk attempt at setting the bar at a point that would justify the claim that LLMs aren't reasoning.

Whatever the reason, it's a massive pet peeve of mine that the bar is rarely made explicit in these conversations, and that makes a lot of them pointless because people keep talking past each other.

For my part, a lot of these models often clearly reason by my standard, even if poorly. People also often reason poorly, even when they demonstrably attempt to reason step by step, either because they have motivations to skip over uncomfortable steps or because they don't know how to do it right. But we would still rarely claim they are incapable of reasoning.

I wish more evaluations of LLMs would establish a human baseline to test them against, for much the same reason. It would be illuminating in terms of actually telling us how LLMs match up to humans in different areas.

Computers have forever been doing stuff people can't do.

The real question is how useful this tool is, and whether it is as transformative as investors expect. Understanding its limits is crucial to answering that.



