There is other work in the literature describing faster algorithms to compute these perturbations, which makes it possible to use them while training. See, e.g.: https://arxiv.org/abs/1412.6572
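For a concrete sense of why it's cheap enough to use during training: the linked paper's fast gradient sign method only needs one extra backward pass per batch. A minimal PyTorch-style sketch, assuming a standard classifier; the function name and the `epsilon` value are illustrative, not taken from the paper's code:

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, epsilon=0.01):
    """Fast gradient sign method sketch: perturb each input pixel by
    epsilon in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One gradient step per batch -- cheap enough to mix adversarial
    # examples into ordinary training.
    return (x + epsilon * x.grad.sign()).detach()
```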
IMO, (at least) two pieces of research on the subject mean that the short answer really is "yes". Maybe not the exact technique used in the paper in the original post, but conceptually similar techniques.
Attention, localized gain, etc. would not have this effect, but they tend to allow a smaller network to perform more sophisticated tasks.
We have two 'cameras' and they scan the image they are looking at by jumping around the image at 20–200 ms intervals. The perceived image is an integration of many of these jumps, and it's constantly changing.
I wonder what it is about these high-saturation, stripy-spiraly bits that these networks are responding to.
Is it something inherent in natural images? In the training algorithm? In our image compression algorithms? Presumably, the networks would work better if they weren't so hypersensitive to these patterns, so finding a way to dial that down seems like it could be pretty fruitful.
(This is totally unsubstantiated though)
> The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
The paper unpacks that explanation pretty well, with actual pictures and a discussion of how they relate to the classification boundary.
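A minimal sketch of what that "single direction" means in practice, assuming a PyTorch classifier and a precomputed universal perturbation tensor `v` of the same shape as one image (all names here are illustrative):

```python
import torch

def fooling_rate(model, images, v):
    """Fraction of images whose predicted label changes when the *same*
    perturbation v is added to every image -- the single exploitable
    direction the quoted passage describes."""
    with torch.no_grad():
        clean = model(images).argmax(dim=1)
        perturbed = model(images + v).argmax(dim=1)
    return (clean != perturbed).float().mean().item()
```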
Also, self-driving cars have distance sensors and wouldn't just drive into oncoming traffic because of one sensor anomaly.
Humans seem really good at being impervious to these, due to millions of years of ignoring things.