Hacker News

"your comment confidently states this is unfixable - presumably based on the frequency you've seen similar text on the internet. why should anyone believe the veracity of your statement?"

No, it's because GPT is based on transformers.



and you aren't?

Aren't you just a function of your input and memories (stuff you've read, sensory input) as run through/managed by some neural network?

What makes you think the rest isn't just emergent properties?

And what makes you think you can't hook up the LLM with some algorithms or layers that handle the rest of what your brain does?


Yep, the idea of grounding seems interesting to me. Everything in an LLM is just a statistical dream at this point, with no 'reality basis'. I wonder if it's possible to give the language model grounding points of things that are real and to build a truth model from that.


No, our brains aren't based on transformers.

And the issue of lost uncertainty is inherent to this, yes.

To fix this, a new type of LLM would have to be invented. This particular branch of development may very well be a dead end.
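The "lost uncertainty" point can be made concrete: a transformer LM does compute a full probability distribution over next tokens, but standard decoding collapses that distribution to a single token, and its spread (e.g. its entropy) is not surfaced in the output text. A minimal sketch, using made-up logits for a hypothetical 4-token vocabulary:

```python
import math

def softmax(logits):
    # Convert raw model scores into a probability distribution over tokens.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits (not from any real model).
logits = [2.0, 1.5, 0.5, 0.1]
probs = softmax(logits)

# The model's uncertainty lives in this distribution: its entropy is
# nonzero whenever probability mass is spread over several tokens.
entropy = -sum(p * math.log(p) for p in probs if p > 0)

# Greedy decoding collapses the distribution to one token index,
# and the entropy is simply discarded.
token = max(range(len(probs)), key=lambda i: probs[i])
```

Whether this loss is truly inherent, or could be recovered by exposing per-token probabilities or calibrated confidence to the user, is exactly the open question in the thread.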



