I made the direct comparison to neural nets because it uses a very similar method (weights as parameters, minimizing a cost function via gradient descent) while being simpler (no layers, neurons, activation functions, etc.).
I never stated "AGI means solving polynomials". Based on how far LLMs have come, function approximation seems to play a role in it.
The first concern is whether this method can express all the functions that a NN can express. High-order polynomials usually have huge spikes outside the region where they are fitted, whereas the functions learned by NNs usually interpolate more smoothly. Those high exponents make me very worried.
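For what it's worth, the spike problem is easy to see with Runge's classic example. A quick numpy sketch (the degree, nodes, and evaluation points here are just illustrative choices on my part):

```python
import numpy as np

# Runge's phenomenon: interpolate f(x) = 1/(1 + 25x^2) on [-1, 1]
# with a degree-10 polynomial through 11 equally spaced points,
# then compare against the true function on a fine grid.
def f(x):
    return 1.0 / (1.0 + 25.0 * x**2)

nodes = np.linspace(-1, 1, 11)
coeffs = np.polyfit(nodes, f(nodes), deg=10)  # fits the 11 points exactly

grid = np.linspace(-1, 1, 1001)
max_err = np.max(np.abs(np.polyval(coeffs, grid) - f(grid)))
print(f"max error inside the fitted interval: {max_err:.2f}")

# Just outside the fitted interval the polynomial blows up even faster:
print(f"polynomial value at x = 1.5: {np.polyval(coeffs, 1.5):.1f}")
```

Even *inside* [-1, 1] the degree-10 fit oscillates wildly near the edges (error around 2, against a function whose values stay in (0, 1]), and a short distance outside the interval it explodes — exactly the extrapolation behavior that worries me.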
The second concern is whether it's possible to train them. I use Excel's solver to fit a lot of experimental data to theoretical formulas with few parameters. In my experience, it's important to guess initial parameter values that are close enough to the best ones; otherwise gradient descent just finds a horrible local minimum that is completely unrelated to the solution you are looking for.
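Here is a toy numpy version of that failure mode (the model, step size, and starting points are my own choices, purely for illustration): fitting y = sin(3x) by gradient descent on the frequency alone. The loss surface in the frequency is full of local minima, so the starting guess decides everything.

```python
import numpy as np

# Data generated from y = sin(w_true * x) with w_true = 3.
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(3.0 * x)

def fit(w0, lr=1e-3, steps=5000):
    """Plain gradient descent on mean squared error, starting from w0."""
    w = w0
    for _ in range(steps):
        r = np.sin(w * x) - y                      # residuals
        grad = 2.0 * np.mean(r * x * np.cos(w * x))  # d(MSE)/dw
        w -= lr * grad
    return w

print(fit(2.9))  # starting near the true value: recovers w close to 3
print(fit(0.5))  # starting far away: trapped in an unrelated local minimum
```

The first call converges to the global minimum; the second settles into one of the many side-lobe minima far from 3, no matter how many steps you give it — the same behavior I see with Excel's solver when the initial guess is poor.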
In conclusion, it's important to show that this newly proposed method works well in practice on a few non-trivial problems that NNs can solve, or at least that it can solve some problems that NNs can't.
But sure, if you think AGI means solving polynomials, you've done it!