
>Ever since GPT-3 was released, I've been convinced that language comprehension isn't nearly as big a component of general intelligence as people are making it out to be

"X is the key to intelligence"

computers do X

"Well actually, X isn't that hard..."

rinse and repeat 100x

At some point you have to stop and reflect on whether your concept of intelligence is faulty. All the milestones that came and went (arithmetic, simulations, chess, image recognition, language, etc.) are facets of intelligence. It's not that we're discovering intelligence isn't this or that computational feat, but that intelligence is just made up of many computational feats. Eventually we will have them all covered, much sooner than the naysayers think.



> All the milestones that came and went (arithmetic, simulations, chess, image recognition, language, etc.) are facets of intelligence.

Why should I have to care about those weird milestones that some other randos came up with once upon a time? I've never espoused any of those myself, so how is this supposed to prove anything about my thought process?

> It's not that we're discovering intelligence isn't this or that computational feat, but that intelligence is just made up of many computational feats. Eventually we will have them all covered, much sooner than the naysayers think.

Well, it certainly appears to me like there's a big qualitative difference between the capabilities you mentioned (arithmetic and simulations are just applications of predefined algorithms; chess, image recognition, and language are memorization, association, and analogy on a massive scale) and the kind of ad-hoc multi-step logical reasoning that I'd expect from any AGI. You can argue that the difference is purely illusory, but I'll have a very hard time believing that until I see it with my own eyes.


>so how is this supposed to prove anything about my thought process?

Because it's the same thought process that animated theorists of the past. Unless you have some novel argument for why language isn't a feature of intelligence, despite its wide acceptance as one pre-LLMs, the claim can be dismissed as an instance of this pernicious pattern. Just because computers can do it, and it isn't incomprehensibly complex, doesn't mean it's not a feature of intelligence.

>Well, it certainly appears to me like there's a big qualitative difference between the capabilities you mentioned... and the kind of ad-hoc multi-step logical reasoning that I'd expect from any AGI.

I don't know what "qualitative" means here, but I agree there is a difference in the kind of computation. Still, I expect multistep reasoning to be a variation on the kinds of computation we already know how to do. Multistep reasoning is a search problem over semantic space: LLMs handle mapping the semantic space, and our experience from solving games can inform a heuristic search over it. Multistep reasoning will fall to a meta-computational search through semantic space. ChatGPT can already do passable multistep reasoning when guided by the user; an architecture with a meta-computational control mechanism could learn to do this through self-supervision. The current limitations of LLMs are not fundamental limits of Transformers but architectural ones, i.e. the kinds of information-flow paths that are allowed. In fact, I will be so bold as to say that such a meta-computational architecture will be conscious.
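
Concretely, a minimal sketch of what that controller could look like: an LLM proposes candidate next steps, a scoring model acts as the heuristic, and a best-first loop plays the meta-computational controller. Here propose_steps, score_chain, and is_solution are hypothetical stand-ins for the real models, stubbed with toy logic so the sketch actually runs:

    # Multistep reasoning as best-first search over semantic space.
    # propose_steps and score_chain are hypothetical stand-ins for an
    # LLM and a learned heuristic; stubbed with toy logic here.
    import heapq

    def propose_steps(chain):
        # Stand-in for an LLM sampling candidate next reasoning steps.
        return [chain + [f"step{len(chain)}.{i}"] for i in range(3)]

    def score_chain(chain):
        # Stand-in for a learned heuristic: higher = more promising.
        return -len(chain)  # toy heuristic: prefer shorter chains

    def is_solution(chain):
        # Stand-in for checking that the chain answers the question.
        return len(chain) >= 3

    def reason(max_expansions=100):
        # The meta-computational controller: always expand the most
        # promising partial chain, as judged by the heuristic.
        frontier = [(0, [])]  # (priority, partial chain of steps)
        for _ in range(max_expansions):
            if not frontier:
                return None
            _, chain = heapq.heappop(frontier)
            if is_solution(chain):
                return chain
            for nxt in propose_steps(chain):
                heapq.heappush(frontier, (-score_chain(nxt), nxt))
        return None

    print(reason())  # ['step0.0', 'step1.0', 'step2.0']

Swap the stubs for a real LLM sampler and a trained value model and you get something like a tree-of-thoughts-style guided search.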



