>Really for LLMs you just need to have the model put its output into an internal buffer, read that buffer and make sure it makes sense, then output that to the end user.
Makes sense to what? The LLM doesn't have a goal, other than to spew text that looks like it should be there.
The analogy is that, as with evolution through natural selection, deliberate intelligence (an organism's ability to comprehend reality) is not the objective; something else entirely is.
For evolution, it's fitness. For LLMs, it's the next token.
Yet despite that, the ability to reason emerges as a means to an end: serving the terminal or instrumental goal of the statement the model is working on.
Question to the LLM: "I have one hundred and eleven eggs in the store, and another two hundred and twenty-two are showing up in an hour. How many eggs will I have in total?"
Internal response: "This looks like a math problem that requires addition. The answer is 333. Use a calculator to validate 111 + 222. (Send 111 + 222, receive 333.) The tool returns 333, validating the previous response."
External response: "The answer is 333."
This chain of logic is internally consistent, and hence it makes sense.
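As a rough illustration, here is a minimal Python sketch of the loop the quoted comment describes: draft into an internal buffer, validate the draft against a calculator tool, and only then show the result to the user. The function names (generate_draft, calculator, answer) are hypothetical stand-ins, and the sketch assumes the question contains digit numerals rather than spelled-out numbers.

```python
import re

def calculator(expression: str) -> float:
    """Stand-in for a calculator tool call; evaluates a simple 'a + b' sum."""
    a, b = (float(x) for x in expression.split("+"))
    return a + b

def generate_draft(question: str) -> dict:
    """Stand-in for the model's internal response (the buffer contents)."""
    numbers = [int(n) for n in re.findall(r"\d+", question)]
    return {"expression": " + ".join(map(str, numbers)), "claimed": sum(numbers)}

def answer(question: str) -> str:
    draft = generate_draft(question)           # write the draft to the internal buffer
    checked = calculator(draft["expression"])  # validate the buffer with a tool call
    if checked == draft["claimed"]:            # "makes sense": the tool agrees with the draft
        return f"The answer is {int(checked)}"
    return "Let me re-check that."             # otherwise revise instead of emitting

print(answer("I have 111 eggs in the store and another 222 are showing up in an hour"))
# -> The answer is 333
```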