
AI researchers allege that machine learning is alchemy - caprorso
http://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy
======
Nomentatus
As I've argued in the discussion here of this paper:
https://blog.acolyer.org/2018/03/28/deep-learning-scaling-is-predictable-empirically/

The most reasonable conclusion from the findings at that acolyer link is that
_all_ neural net training algorithms are essentially evolutionary, with the
data by itself providing enough randomness to power the search. Some
algorithms do better at first, when the going is easiest, but all get you to
the same place at roughly the same rate overall.
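
(As a toy illustration of that point - my own sketch in Python, not anything
from the paper - here is plain single-example SGD, where the only stochastic
ingredient is the order the data arrives in; that alone makes every weight
update a noisy "mutation" around the average gradient direction:)

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy regression data: y = 3x + noise.
    X = rng.normal(size=200)
    y = 3 * X + rng.normal(scale=0.1, size=200)

    w, lr = 0.0, 0.05
    for epoch in range(20):
        order = rng.permutation(len(X))  # the data order IS the randomness
        for i in order:
            grad = 2 * (w * X[i] - y[i]) * X[i]  # gradient on one example
            w -= lr * grad

    print(f"learned w = {w:.3f} (true value 3)")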

We've really discovered a single method of arriving at working neural nets:
evolution. (A method I used back in the '80s to build a neural net tic-tac-toe
player on a 12 MHz machine - but one which, unfortunately, I couldn't get
Hinton to look at when we met face-to-face.)
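
(For the curious, a minimal Python sketch of that evolutionary approach - a
reconstruction in spirit, not the original '80s code, and on XOR rather than
tic-tac-toe to keep it short: perturb the weights at random and keep the
perturbation only when the error drops.)

    import numpy as np

    rng = np.random.default_rng(0)

    # XOR truth table.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def forward(w, X):
        # 2-4-1 net: tanh hidden layer, sigmoid output.
        W1, b1, W2, b2 = w
        h = np.tanh(X @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def loss(w):
        return np.mean((forward(w, X) - y) ** 2)

    w = [rng.normal(0, 1, (2, 4)), np.zeros(4),
         rng.normal(0, 1, (4, 1)), np.zeros(1)]

    # Greedy (1+1) search; an unlucky seed may stall in a local optimum.
    best = loss(w)
    for step in range(20000):
        # Mutation: jitter every weight a little.
        cand = [p + rng.normal(0, 0.1, p.shape) for p in w]
        c = loss(cand)
        if c < best:  # selection: keep only improvements
            w, best = cand, c

    print(f"final MSE: {best:.4f}")
    print("outputs:", forward(w, X).ravel().round(2))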

"Trial and error" isn't an unfortunate bug in the method, it _is_ the method.
The rest of the algorithmic arabesques do speed that process of evolution a
bit in the initial stages, but they don't change the underlying effective
process substantially and get the same results. We are where we are.

