CuriouslyC 3 months ago | on: Large Language Models Are Neurosymbolic Reasoners
Chain of thought is basically reasoning as humans do it; the only difference is that, unlike humans, the model can't see that its output is wrong, abandon a line of reasoning, and re-prompt itself (yet).
naasking 3 months ago
Various attempts at feeding a model's output back in so it can check itself have shown marked improvements in accuracy.
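Roughly this shape, as a sketch; `call_llm` and the prompts are hypothetical stand-ins, not any particular API:

    # Minimal sketch of the feedback loop described above. `call_llm` is a
    # hypothetical stand-in for whatever model client you use.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client here")

    def answer_with_self_check(question: str, max_rounds: int = 3) -> str:
        # First pass: ordinary chain-of-thought answer.
        answer = call_llm(f"Question: {question}\nThink step by step, then answer.")
        for _ in range(max_rounds):
            # Feed the answer back in and ask the model to critique it.
            critique = call_llm(
                f"Question: {question}\nProposed answer: {answer}\n"
                "List any errors in the reasoning, or reply OK if it holds up."
            )
            if critique.strip().upper().startswith("OK"):
                break  # the model found nothing to fix
            # Revise using the critique, then loop back to re-check.
            answer = call_llm(
                f"Question: {question}\nPrevious answer: {answer}\n"
                f"Critique: {critique}\nWrite a corrected answer."
            )
        return answer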
skenderbeu 3 months ago
Multi-agent LLMs talking to each other can already do this. It's just not cost-feasible yet, because it can lead to infinite loops with no solution.
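A hard round cap is the usual guard against that; a sketch of the two-agent version, reusing the hypothetical `call_llm` from above:

    # Two agents critique each other, with a round cap so the exchange can't
    # loop forever; hitting the cap is the "no solution" failure mode.
    def debate(question: str, max_rounds: int = 4) -> str | None:
        transcript = f"Question: {question}"
        for _ in range(max_rounds):
            proposal = call_llm(f"{transcript}\nAgent A: propose or revise an answer.")
            verdict = call_llm(
                f"{transcript}\nAgent A says: {proposal}\n"
                "Agent B: reply AGREE if it is correct, otherwise explain the flaw."
            )
            transcript += f"\nA: {proposal}\nB: {verdict}"
            if verdict.strip().upper().startswith("AGREE"):
                return proposal  # consensus reached
        return None  # budget exhausted without agreement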