
Adversarial patch - godelmachine
https://blog.acolyer.org/2018/03/29/adversarial-patch/
======
web007
If you put a sticker that looks like a toaster next to a banana, your
classifier will see the toaster and not the banana.

I don't understand what the novelty is - Is the "attack" that you're making an
image that's TOASTER++ in a small area?

~~~
Spivak
Attacks against computer vision algorithms are pretty much always going to be
of the form of "I can make a change to the scene or image which makes no
difference to a human but which causes an erroneous classification."

This is a huge problem if we expect autonomous systems to do anything
meaningful with vision outside of tightly controlled environments. You're
going to be crucified after an autonomous vehicle merrily plows into a semi
and the explanation for the crash is "well the truck was white and had the
Hidden Valley logo on it which was misinterpreted as clouds." Or if an
autonomous car runs a red because there was billboard somewhere behind the
light which sorta-kinda looks like a green circle.

~~~
psychometry
Or meddling kids put stickers on stop signs that make them look like no
parking signs to an Uber automated car.

------
jwilk
Demonstration video:
[https://www.youtube.com/watch?v=i1sp4X57TL4](https://www.youtube.com/watch?v=i1sp4X57TL4)

------
donventure
This may sound stupid but I am curious what would be the effect on facial
recognition. Could a person place a sticker on a hat or side of the face, and
would that keep confuse the software? or even on the plate of your car?

~~~
overlordalex
This is a really interesting question, and makes me wonder how robust the
patch is to scaling, different lighting, etc.

This also reminds me of using makeup and hair to prevent automatic facial
recognition. I believe this is the project:
[https://cvdazzle.com/](https://cvdazzle.com/)

> The name is derived from a type of World War I naval camouflage called
> Dazzle, which used cubist-inspired designs to break apart the visual
> continuity of a battleship and conceal its orientation and size. Likewise,
> CV Dazzle uses avant-garde hairstyling and makeup designs to break apart the
> continuity of a face. Since facial-recognition algorithms rely on the
> identification and spatial relationship of key facial features, like
> symmetry and tonal contours, one can block detection by creating an “anti-
> face”.

~~~
number6
This is why in the cyberpunk genre there are theese crazy hairstyles :D

------
lixtra
They should point out the similarity to the natural phenomenon:
[https://en.m.wikipedia.org/wiki/Eyespot_(mimicry)](https://en.m.wikipedia.org/wiki/Eyespot_\(mimicry\))

~~~
jwilk
Non-mobile link:

[https://en.wikipedia.org/wiki/Eyespot_(mimicry)](https://en.wikipedia.org/wiki/Eyespot_\(mimicry\))

------
beaconstudios
isn't the main issue here that the classifier that's being attacked is looking
for a single subject, not trying to identify the items in a scene? I'm fairly
certain a classifier looking for all the items in a scene would still see the
banana in the first example.

------
joelthelion
Can't you just add a few of these patches to the training set and teach
networks to ignore them?

------
crawfordcomeaux
How can I get fabric printed with adversarial patches all over it?

