
It's modeling patterns found across the massive corpus of text it was trained on -- not the concepts the words refer to as humans understand them. If you don't believe me, ask ChatGPT some bespoke geometry brain teasers and see how far it gets.

I want to be clear that the successful scale-up of this training and inference methodology is nonetheless a massive achievement -- but it is inherently limited by the nature of its construction and is in no way indicative of a system that exudes agency or deliberative thought, nor one that "understands" or models the world as a human would.




> [...] no way indicative of a system that exudes agency or deliberative thought, nor one that "understands" or models the world as a human would.

Certainly not - its architecture doesn't model ours. But in terms of capabilities it took a huge step in our direction between early and late 2022.

As its reasoning improves, even a conversation with itself could become a kind of deliberative thought.
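
To make that concrete: a minimal sketch of such self-deliberation is a critique-and-revise loop, where the model's own output is fed back to it as input. Everything here -- ask_model, the prompt wording, the number of rounds -- is a hypothetical illustration, not any real API:

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for a call to a chat model;
        # the loop structure, not the model API, is the point here.
        return f"(model response to: {prompt[:40]!r}...)"

    def deliberate(question: str, rounds: int = 3) -> str:
        """Have the model critique and then revise its own answer."""
        answer = ask_model(question)
        for _ in range(rounds):
            critique = ask_model(
                f"Question: {question}\nAnswer: {answer}\n"
                "List any flaws in this answer."
            )
            answer = ask_model(
                f"Question: {question}\nAnswer: {answer}\n"
                f"Critique: {critique}\nWrite an improved answer."
            )
        return answer

    print(deliberate("A cube is cut by a plane; what cross-sections are possible?"))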

Also, as more data modalities are combined - text with video and audio, human-generated material and recordings of the natural world, etc. - and math is included more systematically, its intuition for solving bespoke geometry problems, and other kinds of problems, is likely to improve.

Framing a problem is a large part of solving it. And we frame geometry with a sensory-driven understanding that the current ChatGPT isn't being given.



