
Generative Adversarial Examples [pdf] - stablemap
https://arxiv.org/abs/1805.07894
======
jwatte
I think generative adversarial examples are great at pointing out models that
are overfitted and/or undertrained (not enough perturbed input in training,
for example).

However, everyone who says "this shows deep learning models can't work," or
draws a similar conclusion, is missing the point.

More training data and more augmentation of existing data will increase
robustness. Ideally, models would be measured on robustness against attack as
well as precision and recall.
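To make "measured on robustness against attack" concrete, here is a minimal sketch of that idea: report accuracy on adversarially perturbed inputs alongside clean accuracy. It uses a toy fixed linear classifier and an FGSM-style perturbation (the sign of the loss gradient, which for a linear model is just the sign of the weights); the weights and epsilon are made-up values for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "model" with fixed, hypothetical weights.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(X):
    # Class 1 if the logit w.x + b is positive.
    return (X @ w + b > 0).astype(int)

def fgsm(X, y, eps):
    # FGSM-style attack for a logistic-loss linear model: the gradient of
    # the loss w.r.t. the input is proportional to -w for label 1 and +w
    # for label 0, so step each feature by eps in that sign direction.
    sign = np.where(y[:, None] == 1, -np.sign(w), np.sign(w))
    return X + eps * sign

# Synthetic inputs, labeled by the model itself so clean accuracy is 1.0.
X = rng.normal(size=(200, 3))
y = predict(X)

clean_acc = (predict(X) == y).mean()
adv_acc = (predict(fgsm(X, y, eps=0.5)) == y).mean()
print(f"clean accuracy: {clean_acc:.2f}")
print(f"robust accuracy at eps=0.5: {adv_acc:.2f}")
```

The gap between the two numbers is the robustness metric: a model can score perfectly on clean data while much of it sits close enough to the decision boundary that a small perturbation flips the prediction.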

