Hacker News

> "language models don't really understand anything"

I have a sneaking suspicion that, if blinded, the crowd of people saying variations of that quote would also identify the vast majority of human speech as regurgitated ideas.

> I see no reason that this technology couldn't smoothly scale into human-level intelligence

Yup, the OpenAI scaling paper makes this abundantly clear. There is currently no end in sight to how far we can scale GPT: we can literally just throw compute at the problem and GPT gets smarter. That's never been seen before in ML. Last time I ran the numbers, I estimated that, everything else being equal, we'd reach GPT-human (a GPT with a similar parameter scale to a human brain) in about 20 years. And that's everything else being equal; it's more than likely that over the next twenty years innovation will make GPT, and the platforms we use to train and run models like it, more efficient.
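The commenter doesn't show their calculation, but a back-of-envelope version of it is easy to sketch. Everything below is an illustrative assumption, not a figure from the thread: GPT-3's published parameter count, a synapse count as a crude proxy for "human-brain parameter scale," and a guessed sustained growth rate in model size.

```python
import math

# Assumptions (not from the thread):
gpt3_params = 175e9       # GPT-3's published parameter count
human_synapses = 100e12   # rough synapse count, used as a proxy for "GPT-human"
annual_growth = 1.37      # assumed ~37%/yr sustained growth in model size

# How big is the gap, and how long does it take to close at that rate?
scale_gap = human_synapses / gpt3_params                 # ~571x
years = math.log(scale_gap) / math.log(annual_growth)    # ~20 years
print(f"gap: {scale_gap:.0f}x, years to close: {years:.1f}")
```

The point of the sketch is only that the "20 years" figure is extremely sensitive to the assumed growth rate: at ~37%/yr the gap closes in about two decades, while at the much faster growth actually seen between GPT-2 and GPT-3 it closes far sooner.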

And the truly terrifying thing is that, to me, GPT-3 has about the intelligence of a bug. Yet it's a bug whose whole existence is human language. It doesn't have to dedicate brain power to spatial awareness, navigation, its body, handling sensory input, etc. GPT-human will be an intelligence the size of a human brain, but whose sole purpose is understanding human language. And it's been to every library to read every book ever written. In every language. Whatever failings GPT may have at that point, it will be more than capable of compensating for them in sheer parameter count, and by leaning on the ability to combine ideas across the _entire_ human corpus.

All available through an API.



Sorry for my limited knowledge of GPT, but isn't it limited by the training data set, like all other models?





