
Adversarial Patch [pdf] - isp
https://arxiv.org/abs/1712.09665
======
isp
Key idea: it's possible to design a "patch" (a small region of an image) that
is so salient to a neural network that pasting it into a larger image fools
the network into misclassifying the whole image. The patch can even be
printed out as a real-life sticker. See Figure 1 on page 2.

~~~
jwilk
https://i.imgur.com/wF1THeV.png

Video demonstration:
https://www.youtube.com/watch?v=i1sp4X57TL4

------
chatmasta
A similar real-world trick is this truck with a video screen of cars on the back:
https://i.pinimg.com/originals/4b/b7/78/4bb778ec36038e6f88ae5d92318c34ca.jpg

This is going to become an increasingly serious problem. And it isn't limited
to image recognition: ML training data sets can be poisoned too.

