
How to Explain Deep Learning Using Chaos and Complexity - ceperez
https://medium.com/intuitionmachine/how-to-explain-deep-learning-using-chaos-and-complexity-33de81c321de#.5hzuzsbla
======
[http://nuit-blanche.blogspot.com/2017/01/understanding-deep-...](http://nuit-blanche.blogspot.com/2017/01/understanding-deep-learning-requires.html)

Here is an interesting paper that pinpoints the influence of regularization on learning with neural networks. From the paper:

> Our central finding can be summarized as:
>
> Deep neural networks easily fit random labels.

and later:

> While simple to state, this observation has profound implications from a statistical learning perspective:
>
> 1. The effective capacity of neural networks is large enough for a brute-force memorization of the entire data set.
>
> 2. Even optimization on random labels remains easy. In fact, training time increases only by a small constant factor compared with training on the true labels.
>
> 3. Randomizing labels is solely a data transformation, leaving all other properties of the learning problem unchanged.
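The first finding is easy to reproduce at toy scale. Below is a minimal sketch (plain NumPy; the one-hidden-layer tanh network, the sizes, and the hyperparameters are my own choices for illustration, not the paper's setup) that trains a small network on purely random labels over random inputs and typically reaches perfect training accuracy, i.e. brute-force memorization:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random inputs and *random* labels: there is no true structure to learn,
# so any fit is pure memorization.
n, d, h = 20, 10, 64
X = rng.standard_normal((n, d))
y = rng.integers(0, 2, n).astype(float)

# One-hidden-layer tanh MLP with a logistic output, plain gradient descent.
W1 = rng.standard_normal((d, h)) * 0.5
b1 = np.zeros(h)
W2 = rng.standard_normal(h) * 0.5
b2 = 0.0

lr = 1.0
for step in range(5000):
    Z = np.tanh(X @ W1 + b1)                  # (n, h) hidden activations
    p = 1.0 / (1.0 + np.exp(-(Z @ W2 + b2)))  # (n,) predicted P(y = 1)
    g = (p - y) / n                           # dLoss/dlogit for mean logistic loss
    gW2 = Z.T @ g
    gb2 = g.sum()
    gZ = np.outer(g, W2) * (1.0 - Z ** 2)     # backprop through tanh
    gW1 = X.T @ gZ
    gb1 = gZ.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Evaluate on the training set with the final weights.
Z = np.tanh(X @ W1 + b1)
p = 1.0 / (1.0 + np.exp(-(Z @ W2 + b2)))
acc = float(((p > 0.5) == (y > 0.5)).mean())
print(f"training accuracy on random labels: {acc:.2f}")
```

With 64 hidden units against only 20 points the network is heavily overparameterized, which is exactly the regime the paper discusses: capacity alone suffices to drive the training error on noise to zero.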

I subscribe to Schmidhuber's notion that algorithms that rely upon randomness to select parameters have an illusory "simplicity": they merely externalize their complexity into the PRNG.
[http://people.idsia.ch/~juergen/newai/](http://people.idsia.ch/~juergen/newai/)

