Security/ML is a fairly new area of research, but I think it's going to be pretty important in the next few years. There's even a very timely Kaggle competition about this (https://www.kaggle.com/c/nips-2017-defense-against-adversari...) run by Google Brain. I hope that this blog post will help make this really neat area of research slightly more approachable/accessible! Also, the attacks don't require that much compute power, so you should be able to run the code from the post on your laptop.
May "guacamole" become as prominent as "Alice and Bob".
Keep up the good work!
Hopefully, this means research will focus on more robust classifiers based on weaknesses identified by adversarial approaches!
> Adversarial training seeks to improve the generalization of a model when presented with adversarial examples at test time by proactively generating adversarial examples as part of the training procedure. This idea was first introduced by Szegedy et al. [SZS13] but was not yet practical because of the high computational cost of generating adversarial examples. Goodfellow et al. showed how to generate adversarial examples inexpensively with the fast gradient sign method and made it computationally efficient to generate large batches of adversarial examples during the training process [GSS14]. The model is then trained to assign the same label to the adversarial example as to the original example—for example, we might take a picture of a cat and adversarially perturb it to fool the model into thinking it is a vulture, then tell the model it should learn that this picture is still a cat. An open-source implementation of adversarial training is available in the cleverhans library, and its use is illustrated in the following tutorial.
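For anyone who wants to see what that looks like concretely, here's a minimal sketch of the fast gradient sign method plus an adversarial training step. I've written it in PyTorch for brevity (the actual cleverhans implementation is TensorFlow-based); the function names, the epsilon value, the clamp to [0, 1] pixel range, and the equal weighting of clean and adversarial losses are all illustrative assumptions on my part, not taken from the paper or the library.

```python
# Sketch of FGSM [GSS14] and an adversarial training step in PyTorch.
# Illustrative only; not the cleverhans API.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Perturb input x to increase the model's loss on the true label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Single step in the direction of the sign of the input gradient.
    # This one cheap step is what makes generating large batches of
    # adversarial examples during training practical.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in a valid range

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """Train the model to assign the original label to both the clean
    and the adversarially perturbed input."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clears gradients accumulated by the attack
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that `x_adv` is detached before the training loss is computed, so gradients flow only through the model's predictions, not back through the attack itself.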
Bonus: if you tackle this problem, you get several semi-orthogonal technologies for "free".
1 - https://www.usenix.org/system/files/conference/cset16/cset16...
The end result will vary depending on whether the adversarial input is a child or the rear end of a truck illuminated by sunshine at a certain angle.
So far only the second one has been tested IRL, but for some reason I'm not really fond of the idea that we should be gathering more field data on adversarial inputs...
My main fear with current autonomous driving applications of ML is that they might not be ready for prime time, and that we are only a few deadly accidents away from a major setback in public trust in autonomous driving.