A Simpler Alternative to Neural Nets (medium.com/sohackinp)
2 points by hacksoi 14 days ago | 5 comments



Yes, we know that there are simpler ways to approximate polynomials than neural networks.

But sure, if you think AGI means solving polynomials, you've done it!


I made the direct comparison to neural nets because it uses a very similar method (i.e., weights as parameters and minimizing a cost function via gradient descent) but is simpler (it gets rid of layers, neurons, activation functions, etc.).
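For a concrete sense of what I mean, here's a minimal sketch in numpy (my own toy illustration, not the article's exact code). The "weights" are just polynomial coefficients, and training is plain gradient descent on a squared-error cost:

    import numpy as np

    # Toy data sampled from a known cubic, so we can check the result.
    x = np.linspace(-1, 1, 100)
    y = 0.5 * x**3 - x + 0.2

    degree = 3
    w = np.zeros(degree + 1)        # the "weights": polynomial coefficients
    X = np.vander(x, degree + 1)    # columns: x^3, x^2, x, 1
    lr = 0.1

    for _ in range(5000):
        pred = X @ w
        grad = 2 * X.T @ (pred - y) / len(x)  # gradient of mean squared error
        w -= lr * grad                        # plain gradient descent step

    print(w)  # approaches [0.5, 0, -1, 0.2]

No layers, neurons, or activation functions anywhere; the whole model is the coefficient vector w.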

I never stated "AGI means solving polynomials". Given how far LLMs have come, function approximation seems to play a role in getting there.


It needs to show an LLM built with it, I guess.


I agree. Expanding your comment:

I see two possible problems.

The first is whether this method can express all the functions that a NN can express. High-order polynomials usually have huge spikes outside the region where they are fitted, while the functions in a NN usually give smoother interpolations. Those high exponents make me very worried.
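You can see the spikes directly with numpy (a quick illustration; the exact numbers depend on the noise, and polyfit may warn about poor conditioning at this degree):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 30)
    y = np.sin(3 * x) + rng.normal(0, 0.05, x.shape)  # smooth data, small noise

    coeffs = np.polyfit(x, y, deg=15)  # high-order polynomial fit

    print(np.polyval(coeffs, 1.0))   # close to sin(3) inside the fitted region
    print(np.polyval(coeffs, 1.5))   # typically blows up just outside it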

The second is whether it's possible to train them. I use Excel's solver to fit a lot of experimental data with theoretical formulas with few parameters. In my experience, it's important to guess initial values of the parameters that are close enough to the best values. Otherwise, the gradient descent method just gets stuck in a horrible local minimum that is completely unrelated to the solution you are looking for.
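The same failure is easy to reproduce with plain gradient descent (a toy sketch, assuming a model of the form a*sin(b*x); exact values will vary):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 200)
    y = 2.0 * np.sin(1.5 * x) + rng.normal(0, 0.1, x.shape)

    def fit(a, b, lr=0.001, steps=20000):
        for _ in range(steps):
            r = a * np.sin(b * x) - y                    # residuals
            ga = np.mean(2 * r * np.sin(b * x))          # d(MSE)/da
            gb = np.mean(2 * r * a * x * np.cos(b * x))  # d(MSE)/db
            a -= lr * ga
            b -= lr * gb
        return a, b

    print(fit(1.5, 1.4))  # good guess: converges near (2.0, 1.5)
    print(fit(1.0, 5.0))  # bad guess: typically stuck in an unrelated minimum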

In conclusion, it's important to show that this newly proposed method works well in practice on a few non-trivial problems that a NN can solve, or at least that it can solve some problems that a NN can't.


Yeah, people like to see cool things, can't blame them.


