If an LLM states an answer and then provides a justification for that answer, the justification is entirely irrelevant to the reasoning the bot actually used to produce it. The semantics of the justification might happen to align with the implied logic of the internal vector space, but that is at best a manufactured coincidence. It's no different from you stating an answer and then telling the bot to justify it.

If an LLM is told to reason first and then state the answer, the answer is all but guaranteed to be derived from the previously generated reasoning, since the answer tokens are generated conditioned on those reasoning tokens.
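To make that concrete, here is a toy sketch of autoregressive decoding (pure Python, not any real model's API; `sample_next_token` is a made-up stand-in for a forward pass). Each new token conditions only on the tokens already emitted, so an answer placed after the reasoning is conditioned on that reasoning, while a justification placed after the answer can only rationalize it.

```python
import random

# Toy stand-in for an LLM forward pass (not any real model's API): the next
# token is a function of the tokens generated so far and nothing else.
def sample_next_token(context):
    vocab = ["yes", "no", "because", "therefore", "."]
    rng = random.Random(hash(tuple(context)))  # conditions only on prior tokens
    return rng.choice(vocab)

def generate(prompt_tokens, n):
    tokens = list(prompt_tokens)
    for _ in range(n):
        tokens.append(sample_next_token(tokens))
    return tokens[len(prompt_tokens):]

# Answer-first: the justification tokens are sampled *after* the answer token,
# so they condition on the answer but cannot have contributed to it.
answer_first = generate(["Q", ":", "answer", "then", "justify"], 8)

# Reasoning-first: the answer token comes last, conditioned on every reasoning
# token emitted before it.
reasoning_first = generate(["Q", ":", "think", "step", "by", "step"], 8)

print(answer_first)
print(reasoning_first)
```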

The answer will likely match what the reasoning steps lead to, but that doesn't mean the computations the LLM actually performed to get that answer are well approximated by the reasoning steps it outputs. E.g. you might have an LLM trained on many examples of Shakespearean text. If you ask it who the author of a given passage is, it might give a detailed stylistic rationale for why it is Shakespeare, when the real answer is "I have a large prior for Shakespeare".
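A minimal numerical sketch of that "large prior" point (invented numbers, plain Bayes): if the prior on Shakespeare is strong enough, the posterior picks Shakespeare even when the passage itself barely discriminates between candidate authors, so whatever stylistic rationale appears in the output need not be what actually decided the answer.

```python
# Invented numbers purely for illustration.
priors = {"Shakespeare": 0.90, "Marlowe": 0.05, "Jonson": 0.05}

# Likelihood of the observed passage under each author, deliberately near-flat:
# the text itself hardly favors any one candidate.
likelihoods = {"Shakespeare": 0.30, "Marlowe": 0.28, "Jonson": 0.27}

unnormalized = {a: priors[a] * likelihoods[a] for a in priors}
z = sum(unnormalized.values())
posterior = {a: p / z for a, p in unnormalized.items()}

print(posterior)  # Shakespeare ~0.91: the prior, not the "evidence", carries the verdict
```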