I really don’t see any evidence whatsoever that LLMs couldn’t be a cornerstone/building block to future levels of AI
Why are you so strong-minded that it cannot be this way? Genuinely curious, as I’ve personally never seen more than conjecture that it should be this way.
> I really don’t see any evidence whatsoever that LLMs couldn’t be a cornerstone/building block to future levels of AI
I've also not seen any evidence it can be. The reality is that we don't really know, because evidence one way or the other pretty much amounts to either 1) having a detailed and accurate understanding of human intelligence, or 2) building the thing to demonstrate the point.
I'm fairly certain 1) won't be happening any time soon, and I'm skeptical that 2) will happen any time soon, given the current limitations, but on this I'm far less certain. I don't think anyone can be certain, and anyone stating things one way or the other with absolute certainty is wrong.
I think the key limitation is that language is not intelligence, and that much of the progress has either been centred on language or has been on comparatively simple problems.
There is definitely evidence that self-supervised prediction using e.g. Transformers is helpful for AGI. The brain has on the order of 100k cortical columns that, to the best of our knowledge, predict the next state given the current one. We've seen how these models can be used on all modalities: text, audio, images, and video. It's a small part of what's necessary, but to say there's "no evidence" is complete hyperbole.
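For concreteness, the objective being described (predict the next token given everything before it) is easy to state in code. Here's a minimal sketch in PyTorch; the model, sizes, and hyperparameters are all illustrative, not taken from any particular system:

    import torch
    import torch.nn as nn

    class TinyCausalLM(nn.Module):
        """Toy next-token predictor: embed, causal Transformer, project to vocab."""
        def __init__(self, vocab_size=256, d_model=64, n_heads=4, n_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, tokens):
            # Causal mask: each position may only attend to earlier positions.
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            h = self.encoder(self.embed(tokens), mask=mask)
            return self.head(h)

    model = TinyCausalLM()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    # The "labels" are just the input shifted by one position; no human
    # annotation is involved, which is what makes the objective self-supervised.
    tokens = torch.randint(0, 256, (8, 32))      # a toy batch of token ids
    inputs, targets = tokens[:, :-1], tokens[:, 1:]

    logits = model(inputs)
    loss = nn.functional.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                       targets.reshape(-1))
    loss.backward()
    opt.step()

The same loss applies unchanged to audio, image, or video data once it's tokenised, which is the sense in which the objective is modality-agnostic.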