Good point. In some ways this kind of error makes it feel like its process is very comparable to how humans think.



Yeah, I'm counting this as yet another piece of weak but positive evidence that we've hit on something fundamental here - if not universal, then at least fundamental to the path evolution took that led to the human mind.


Yeah, I agree. There is exactly one non-LLM entity in existence that can reason this generally and this well in human languages, and that's us, human beings. We built LLMs by taking inspiration from the human brain, approximating it with neural networks that often achieved intelligent-ish performance on tasks in narrow domains, until eventually we stumbled on an architecture that is truly general, even if it's generally dumb. It would be an absurd coincidence to me if that architecture, LLMs, actually had nothing in common with how humans think. None of that means it is the best architecture for thinking like humans, or that we just need to scale it up to get to super-intelligence, or that it is currently as smart as a human being. But it just doesn't seem plausible that it behaves so much like a human mind if there's really nothing in common underneath.



