
Since neural networks are universal function approximators, they can learn any protocol.



I've read this notion before, but whenever I've actually looked into it, the universal approximation results only seem to cover continuous (or at most integrable) functions on compact domains, not arbitrary computable functions.

More importantly, the fact that any such function can be represented as a NN does NOT mean that it can be learned through known NN training mechanisms, not even in principle (i.e. given arbitrarily many, but finitely many, examples and unbounded but finite time).
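
To make that concrete, here is a minimal sketch (assuming numpy and scikit-learn; the bit width and network sizes are arbitrary choices) of a function that a small network can provably represent, namely the parity of the input bits, but that gradient-based training typically fails to learn from samples:

    # Parity is representable by a small network by construction, yet an
    # MLP trained by gradient descent on random samples usually stalls
    # near chance accuracy on held-out data.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_bits = 16
    X = rng.integers(0, 2, size=(20_000, n_bits))
    y = X.sum(axis=1) % 2  # parity label of each bit vector

    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200)
    clf.fit(X[:16_000], y[:16_000])
    print(clf.score(X[16_000:], y[16_000:]))  # typically close to 0.5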

And of course, there is always the remote possibility (to which I don't personally subscribe) that the target function is not even Turing computable, in which case it certainly couldn't be approximated by an NN.


I am really really tired of this stupid, half-educated claim that gets repeated again and again by NN fanboys.

There has never been a dearth of universal function approximators: polynomials can do it, splines can do it, sines and cosines can do it. Being a universal approximator is hardly unique or special.
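
For instance, a plain least-squares polynomial fit already approximates a smooth function on a compact interval, which is all the Weierstrass-style results promise. A minimal sketch, assuming numpy (the target function and degree are arbitrary choices):

    import numpy as np

    x = np.linspace(0, np.pi, 200)
    coeffs = np.polyfit(x, np.sin(x), deg=9)   # degree-9 least-squares fit
    approx = np.polyval(coeffs, x)
    print(np.max(np.abs(approx - np.sin(x))))  # max error is tiny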

There are absolutely things that are special about DNNs, but being a universal approximator is not one of them.

Being able to learn a function from data is very different from being able to represent that function.


Well maybe, but good luck breaking encryption this way.

Being a universal function approximator doesn't magically solve every problem.
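
As a concrete illustration, here's a hedged sketch (assuming numpy and scikit-learn; the hash, input width, and network size are arbitrary choices) that tries to predict one output bit of SHA-256 from random inputs. Held-out accuracy stays at chance, exactly as you'd expect for a cryptographic function, universal approximator or not:

    import hashlib
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X = rng.integers(0, 2, size=(20_000, 32)).astype(np.uint8)
    # Label = lowest bit of the first byte of sha256(input bytes)
    y = np.array([hashlib.sha256(row.tobytes()).digest()[0] & 1 for row in X])

    clf = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=100)
    clf.fit(X[:16_000], y[:16_000])
    print(clf.score(X[16_000:], y[16_000:]))  # ~0.5, i.e. chance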



