All the AI founders (e.g., Dario Amodei) seem to believe that we're nowhere near the end of seeing performance improvements in LLMs as they are trained on more data (i.e., LLM scaling laws) - at least that's what they say publicly, but they obviously have skin in the game. Curious what knowledgeable people who aren't incentivized to make optimistic public statements think.
What I really want to know is: assuming capital/compute is not a constraint, will we continue to see order-of-magnitude improvements in LLMs, or is there some kind of "technological" limit you think exists?
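For context on what "scaling laws" means quantitatively, here's a rough illustrative sketch of the Chinchilla-style loss fit from Hoffmann et al. (2022), L(N, D) = E + A/N^alpha + B/D^beta. The constants are the published fit and should be treated as approximate; this is just to show the shape of the curve the founders are betting on, not anything from the posts above.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022):
#   L(N, D) ~ E + A / N**alpha + B / D**beta
# N = parameter count, D = training tokens.
# Constants below are the published fit; treat them as approximate.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss under the fitted power law."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Each 10x in parameters (with ~20 tokens per parameter, the
# compute-optimal ratio from the same paper) shaves a shrinking
# amount off the loss, asymptoting toward the irreducible term E.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"N={n:.0e}, D={20*n:.0e}: loss ~ {predicted_loss(n, 20*n):.3f}")
```

The point of the toy numbers: under this fit the loss keeps improving with scale but with diminishing absolute returns, which is why the debate is less about whether the curve continues and more about whether the remaining gains translate into qualitatively better capabilities, and whether enough data exists to stay on the curve.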
This isn't necessarily a hard limit, though. It's possible there are clever approaches to leverage much more data, whether through AI-generated (synthetic) data, other modalities (e.g. video), or another approach altogether.
This is quite a good accessible post on both sides of this discussion: https://www.dwarkeshpatel.com/p/will-scaling-work