
Neural Networks seem to follow a puzzlingly simple strategy to classify images - LaserToy
https://medium.com/bethgelab/neural-networks-seem-to-follow-a-puzzlingly-simple-strategy-to-classify-images-f4229317261f
======
kozikow
So why not scramble the training images during training and feed them in as negative examples, to force the network to learn more "global" features? A sketch of what that could look like is below.
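
A minimal sketch of the idea, assuming a PyTorch setup: shuffle an image's non-overlapping patches so only local texture survives, then push the scrambled copy toward a dedicated extra "scrambled" class. The patch size, the extra-class formulation, and all names here are illustrative assumptions, not anything from the article.

    # Hypothetical sketch: patch-scrambled images as negative examples.
    # The extra "scrambled" class index (num_classes) is an assumption.
    import torch
    import torch.nn.functional as F

    def scramble_patches(images: torch.Tensor, patch: int = 16) -> torch.Tensor:
        """Randomly permute non-overlapping patch x patch tiles of each image.

        images: (B, C, H, W) with H and W divisible by `patch`.
        """
        b, c, h, w = images.shape
        # Cut into tiles: (B, C, H, W) -> (B, n_tiles, C, patch, patch)
        tiles = (images
                 .unfold(2, patch, patch)
                 .unfold(3, patch, patch)
                 .permute(0, 2, 3, 1, 4, 5)
                 .reshape(b, -1, c, patch, patch))
        # Independent random permutation of tiles per image
        perm = torch.argsort(torch.rand(b, tiles.size(1)), dim=1)
        tiles = tiles[torch.arange(b).unsqueeze(1), perm]
        # Reassemble the scrambled image
        tiles = tiles.reshape(b, h // patch, w // patch, c, patch, patch)
        return tiles.permute(0, 3, 1, 4, 2, 5).reshape(b, c, h, w)

    def loss_with_scrambled_negatives(model, images, labels, num_classes):
        """Ordinary cross-entropy plus a term mapping scrambled copies to a
        hypothetical extra "scrambled" label at index num_classes; the model
        is assumed to output num_classes + 1 logits."""
        pos_loss = F.cross_entropy(model(images), labels)
        scrambled = scramble_patches(images)
        neg_labels = torch.full_like(labels, num_classes)
        neg_loss = F.cross_entropy(model(scrambled), neg_labels)
        return pos_loss + neg_loss

One could also keep the original label space and instead penalize confident predictions on scrambled inputs (e.g. with a uniform-target KL term); the extra-class variant above is just the simplest way to write the suggestion down.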

