Because LLMs are on a path towards AGI. A generally intelligent system can brute-force its way through problems with just memory.
Once a model, reasoning with chain-of-thought (CoT), recognizes a weakness when posed a certain problem and has the agency to adapt and solve it, that's a precursor to real AGI capability!