This may have been true elsewhere, but I don't think it holds for GPT-4.

I suspect that complex intelligence has emerged that cannot be directly attributed to the structure of the underlying LLM. My guess is that it has to do with the use of language itself: at sufficient scale, this property exists in both humans and models.




It's interesting that you can only suspect.

Seeing how everyone is so divided on this really highlights how it's almost purely a philosophical argument about what intelligence actually is.


A lot of the experimentation I've done is too long and complex to fit nicely in an Ask HN post. People have a tendency to move the goalposts when assigning intelligence to AI. GPT-4 is different. Here is a post from earlier today that might be more convincing.

https://www.reddit.com/r/ChatGPT/comments/12l9nwx/really_imp...


GPT-4 is no different from any other deep neural network: fundamentally, they are black boxes with no capability for reasoning. What we are seeing in GPT-4 is it regurgitating text it has been trained on.

Not even the researchers who created it can get it to transparently explain its decisions.



