People like to assume that progress is this steady upward line, but I think it's more like a staircase. Someone comes up with something cool, there's a lot of amazing progress in the short-to-mid term, and then things kind of level out. I mean, hell, this isn't even the first time that this has happened with AI [1].
The newer AI models are pretty cool but I think we're getting into the "leveling out" phase of it.
The main problem with the current technology, to my eye, is that you need these huge multi-dimensional models with extremely lossy encodings to implement the system on a modern CPU, which is effectively a 2.5D piece of hardware that ultimately reads and writes a 1D array of memory.
Your exponential problems have exponential problems of their own. Scaling this system is combinatorially hard.
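To make the "1D array of memory" point concrete, here's a minimal sketch of how a multi-dimensional tensor actually lives in hardware: as one flat buffer, with every N-dimensional index reduced to stride arithmetic over a linear address space (the shape and row-major layout here are just illustrative assumptions).

```python
def strides(shape):
    """Element strides for a row-major (C-order) layout."""
    s = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        s[i] = s[i + 1] * shape[i + 1]
    return s

def flat_index(idx, shape):
    """Map a multi-dimensional index onto the flat 1D buffer."""
    return sum(i * st for i, st in zip(idx, strides(shape)))

# A "4D" weight tensor of shape (2, 3, 4, 5) is really just
# 120 contiguous numbers in memory:
shape = (2, 3, 4, 5)
buffer = list(range(2 * 3 * 4 * 5))

print(flat_index((1, 2, 3, 4), shape))  # -> 119, the last slot in the buffer
```

The dimensionality only exists in the indexing math; the silicon never sees anything but the linear buffer.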
[1] https://en.wikipedia.org/wiki/AI_winter