Hacker News

I too noticed (for a use case totally unrelated to chess, namely code generation) that GPT-3.5 gave better answers than GPT-4. The 3.5 answer was exactly what I wanted; GPT-4's was wrong.

Does that mean we have plateaued?



It's inevitable that LLMs will plateau. They'll keep improving in certain areas, but ultimately core flaws of their architecture and training approach will likely require another rethink. It's unclear yet what that rethink looks like (though Yann LeCun seems to think world models are the path forward).

We've gone through the "hype" phase. Now I suspect the next few years will see a lot of growth in figuring out how to apply LLMs, building good interfaces for them, and running them cheaply. Paying OpenAI for API access without true fine-tuning, etc. is a hard sell.


I think they invested the extra parameters into supporting multimodal inputs (images).



