

Intriguing Properties of Neural Networks [pdf] - jeremynixon
http://arxiv.org/pdf/1312.6199.pdf

======
jostmey
The reason the neural networks suffer from blind spots is that they were never
trained as generative models. A purely discriminative classifier cannot be
expected to correctly identify something it has never seen before, so of
course you can find adversarial examples. Generative models are
computationally more expensive to build, which is why they are not always used.

------
SlipperySlope
This paper demonstrates that deep neural networks have surprising blind spots
when inputs are only slightly perturbed in a certain manner.

I wonder whether these algorithmically generated adversarial examples could be
fed back into the training set with correct labels, to make the network more
robust to its blind spots?
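
That loop can be mechanized even on a toy model. A minimal sketch in numpy,
assuming a logistic-regression "network" on two Gaussian blobs and a simple
gradient-sign perturbation rather than the paper's box-constrained L-BFGS
search; the data, the epsilon, and the attack are all illustrative
assumptions, not from the paper. It crafts adversarial points, then retrains
on the union of the clean set and the correctly labelled adversarial points:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=300, lr=0.5):
    """Plain gradient-descent logistic regression (stand-in for a network)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def accuracy(X, y, w, b):
    return float(((sigmoid(X @ w + b) > 0.5) == y).mean())

def adversarial(X, y, w, b, eps=0.8):
    """Perturb each input along the sign of the loss gradient w.r.t. the input
    (a simpler attack than the paper's, assumed here for brevity)."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)   # per-sample dLoss/dx for this linear model
    return X + eps * np.sign(grad_x)

# Toy data: two well-separated Gaussian blobs, one per class.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
               rng.normal(1.0, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = train(X, y)
X_adv = adversarial(X, y, w, b)
acc_clean = accuracy(X, y, w, b)
acc_adv = accuracy(X_adv, y, w, b)   # small perturbations fool the model

# The proposal above: put the adversarial points, correctly labelled,
# back into the training set and retrain.
w2, b2 = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
acc_adv_retrained = accuracy(X_adv, y, w2, b2)
print(acc_clean, acc_adv, acc_adv_retrained)
```

Whether the retrained model is genuinely more robust (rather than just
memorizing these particular perturbations) is exactly the open question; the
sketch only shows the mechanics of the augmentation loop.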

~~~
anko
I think the problem will remain, because the "bits" of representation
(perceptrons etc.) are always fewer than the inputs they summarize. So the
learned representation is lossy, which also means it cannot be perfect.

As humans, we probably have fewer "blind spots" because we have learned extra
double-checking mechanisms, such as applying logic and our world knowledge
when analyzing an image. We still fall prey to optical illusions, though.

