Hacker News

Sorry, I should have said "analogy" and not "mental model", that was presumptuous. Maybe I also should have replied to the GP comment instead.

Anyway, since we're here, I personally think giving LLMs agency helps unlock this latent knowledge, as it gives the agent more mobility when walking the manifold. It has a better chance of avoiding or escaping local minima/maxima, among other things. So I don't know if agentic loops are entirely off-topic when discussing the latent power of LLMs.
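Not an LLM, obviously, but the escape-the-local-optimum intuition can be sketched with a toy search loop. Everything here is hypothetical illustration: `score` is a made-up objective with a local maximum at the start state, and `agentic_search` stands in for an agent that keeps several candidate states alive per step (where a real agent would call a model to propose next moves) instead of committing to one greedy path:

```python
def score(state: int) -> float:
    # Toy objective: global maximum at 10, plus a bump that makes the
    # start state (0) a local maximum a single greedy step can't leave.
    return -abs(state - 10) + (3 if state == 0 else 0)

def agentic_search(start: int = 0, steps: int = 30, beam: int = 3) -> int:
    # Keep a small beam of candidate states alive each step; that extra
    # "mobility" lets the search walk off the local maximum at the start,
    # whereas a greedy climber from 0 would reject both neighbors and stall.
    frontier = {start}
    best = start
    for _ in range(steps):
        frontier = {s + d for s in frontier for d in (-1, 1)}
        frontier = set(sorted(frontier, key=score, reverse=True)[:beam])
        best = max(frontier | {best}, key=score)
    return best  # reaches the global maximum at 10
```

A pure greedy loop from 0 scores both neighbors worse than 0 and gets stuck; carrying a few live candidates is the (very loose) analogue of the extra mobility an agentic loop gives the model.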



