Better captchas that are optimized to be hard for machines, but easy for humans.
Getting around automated systems that discriminate content, like copyrighted-song detection.
Training on these images improves generalization. Essentially these images add more data, since you know what class they should be given. But they're optimal in a certain sense: they probe the things the NN is getting wrong, or find the places where its decision function has bad discontinuities.
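A minimal sketch of that augmentation idea, using a toy logistic-regression stand-in for the NN (the data, the eps, and the gradient-sign attack here are all invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a NN: logistic regression on made-up 2-D points.
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=500, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

w = train(X, y)

# Gradient-sign perturbation of each input toward the wrong class,
# keeping the ORIGINAL label: we know what class it should be given.
eps = 0.3
grad_X = (sigmoid(X @ w) - y)[:, None] * w[None, :]  # dLoss/dX
X_adv = X + eps * np.sign(grad_X)

# Augment and retrain: the adversarial points are "free" extra data.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
w_robust = train(X_aug, y_aug)
```

The labels are copied unchanged, which is the whole trick: each adversarial point is a new labeled example sitting exactly where the model was wrong.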
"Better captchas that are optimized to be hard for machines, but easy for humans."
Nope, not gonna work. You'd have to have the classifier/ANN parameters in the first place in order to locate its adversarial counterexamples. Otherwise, the perturbations would likely be irrelevant noise.
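Right: the standard way to find the perturbation is to follow the gradient of the loss with respect to the input, which requires the model's weights. A toy white-box version against a made-up linear classifier (the weights, input, and eps are all invented for illustration):

```python
import numpy as np

# Made-up "classifier" parameters; the attacker must know these.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)  # P(class 1)

# Gradient-sign attack: for logistic loss with true label y,
# dLoss/dx = (p - y) * w, so the perturbation needs w itself.
def adversarial(x, y, eps=0.5):
    grad_x = (predict(x) - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.2, -0.1, 0.3])  # classified as class 1
x_adv = adversarial(x, y=1.0)   # now classified as class 0
```

Without `w`, the attacker can't compute `grad_x`, and a random sign pattern of the same size would usually leave the prediction alone.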
The discovery of the paper was that these adversarial examples work on other neural networks, including ones trained on entirely different datasets. They are not specific to a single NN.
Well... not really. They split the MNIST dataset and trained on disjoint halves. Which is to say, I wouldn't generalize from two networks trained on far less data than 10x their parameter counts all the way to all neural networks in existence, but of course your opinion may vary...
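For what it's worth, the disjoint-training-set experiment is cheap to replicate in miniature. A toy version with two logistic models trained on disjoint halves of made-up data (none of this is from the paper; it just shows the shape of the experiment):

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up linearly separable data; split into disjoint halves.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
XA, yA = X[:100], y[:100]
XB, yB = X[100:], y[100:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=500, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

wA, wB = train(XA, yA), train(XB, yB)

def accuracy(w, X, y):
    return np.mean((sigmoid(X @ w) > 0.5) == y)

# Craft adversarials against model A only...
eps = 0.4
grad = (sigmoid(XA @ wA) - yA)[:, None] * wA[None, :]
XA_adv = XA + eps * np.sign(grad)

# ...then check whether they also fool model B, which never saw
# them and was trained on the other half of the data.
print(accuracy(wB, XA, yA), accuracy(wB, XA_adv, yA))
```

In this linear toy the transfer is unsurprising (both models learn nearly the same boundary), which is arguably the same caveat: similar models trained on similar data sharing adversarial examples is weaker evidence than it first sounds.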