The strengths and weaknesses of the algorithmic niche that artificial NNs occupy haven't changed a bit since a decade ago. They are still bad at the things I'd actually want to use them for, the things you'd imagine actual AI would be good at. The only thing that has changed is people's perception. LLMs found a market fit, but notice that compared to last decade, when DeepMind and OpenAI were competing at actual AI in games like Go and StarCraft, they've pretty much given up on that in favor of hyping text predictors. For anybody in the field, it should be an obvious bubble.
Underneath it all, there is some hope that an innovation might come along to keep the wave going, and indeed a newly discovered branch of ML could revolutionize AI and actually be worthy of the hype that LLMs have now, but that would have nothing to do with the LLM craze itself.
It's cool that we have them, and I also appreciate what Stable Diffusion has brought to the world, but in terms of how much LLMs have influenced me, they have only shortened the time it takes me to read documentation.
It's not that I think machines cannot be more intelligent than humans, or that running on linear algebra and mathematical functions makes computers inferior to us. I just think the current algorithms suck. I want better algorithms so we can have actual AI instead of this trash.
It's very difficult to understand this statement. What meaning of "qualitatively" could possibly make it true?