
Psychedelic stickers that interfere with AI image recognition - zipwitch
https://techcrunch.com/2018/01/02/these-psychedelic-stickers-blow-ai-minds/
======
dasil003
It actually does kind of look like a toaster.

~~~
waynecochran
Indeed ... looks like a toaster... I would say the AI is pretty dang good.

~~~
ascorbic
I think the point is that even though it's smaller and less obvious than the
other objects, it's still sufficient to "hijack" the recognition for the whole
image. If they had a little sticker with a normal picture of a toaster on it,
it's unlikely that it would've prevented the banana from being recognised; the
image only looks a bit like a toaster, whereas the banana is unambiguously
recognisable.

------
nopinsight
It appears to me that this mostly fools whole-image classification algorithms.
If the system performs object segmentation first and then applies
classification to each object in the scene, this method is unlikely to be
effective.

One can paste such a sticker on top of a face or other object to be disguised,
and it might reduce recognition accuracy a bit, but applying an inpainting
algorithm to fill in the region covered by the sticker would basically remove
its effect. That is, unless it is used to cover some prominent features,
although that is impractical in many circumstances.
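To make the idea concrete, here is a toy sketch of the fill-in step (my own illustration, not from the article; a real pipeline would use something like OpenCV's `cv2.inpaint` or a learned model rather than this averaging loop). It repeatedly replaces masked "sticker" pixels with the mean of their neighbours, so the covered region converges back toward the surrounding image:

```python
def inpaint(img, mask, iters=50):
    """Toy diffusion inpainting.

    img:  2D list of floats (grayscale image)
    mask: 2D list of bools, True where the sticker covers the image
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    # Average the 4-neighbourhood (Jacobi-style update).
                    vals = [out[ny][nx]
                            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                            if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(vals) / len(vals)
        out = nxt
    return out

# A flat grey image with a 2x2 "sticker" pasted in the middle: after
# filling, the hole converges back to the surrounding value, erasing
# whatever adversarial pattern was there.
img = [[0.5] * 6 for _ in range(6)]
mask = [[False] * 6 for _ in range(6)]
for y in (2, 3):
    for x in (2, 3):
        img[y][x] = 0.0   # sticker pixels
        mask[y][x] = True

filled = inpaint(img, mask)
```

Against a textured background the reconstruction would be blurrier, but the point stands: the classifier then never sees the adversarial pattern at all.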

~~~
asdfaefasdf
This sticker only works against _this_ classifier. If you start changing the
algorithm, you'd need to change the attack to match.

If you think you can write a better image classifier by first segmenting the
image before using ML, then I encourage you to get your own computer vision
paper published and see how that works for you.

------
ted_dunning
I think that the technical term for these should be "squirrels".

[https://www.youtube.com/watch?v=SSUXXzN26zg](https://www.youtube.com/watch?v=SSUXXzN26zg)

------
ryandrake
A potential application of this could be some kind of “privacy sticker” that
you’d wear on your hat or your face in order to disable automated facial
recognition systems.

~~~
oneweekwonder
Not sure if it can fool all cameras, but LEDs seem too bright for some
cameras, and you can buy hats with embedded LEDs. [0]

Unfortunately this will make you stand out like a Christmas tree on video,
but it should cripple automated facial recognition.

[0]: [http://odditymall.com/justice-caps-hide-your-face-from-surveillance-cameras](http://odditymall.com/justice-caps-hide-your-face-from-surveillance-cameras)

~~~
dingo_bat
I think infrared LEDs can solve the problem quite easily.

------
Iv
Psychedelic stickers that interfere with one specific model used in AI image
recognition.

If necessary, these systems could learn to ignore them by next week.

~~~
kaybe
Check out the 34C3 talk on adversarial AI. They found that the fooling rates
remain high even if one gets the adversary model completely wrong when
designing the attack-generating network, so the attacks seem surprisingly
stable.
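This transferability can be shown even with toy models (my own numbers below, not from the talk): craft a gradient-sign perturbation against a surrogate classifier A, then check that it also fools a different deployed classifier B the attacker never saw:

```python
def score(w, x):
    """Linear classifier: positive score = one class, negative = the other."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def fgsm(w, x, eps):
    """FGSM-style step: for a linear model the gradient of the score
    w.r.t. x is just w, so step each feature against sign(w) to push
    the score toward the other class."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w_a = [1.0, -1.0]    # the attacker's surrogate model
w_b = [0.9, -1.1]    # the deployed model, unknown to the attacker
x = [0.3, 0.1]       # both models classify x as positive

x_adv = fgsm(w_a, x, eps=0.15)   # crafted using only w_a
```

Here `score(w_b, x_adv)` also flips negative: because A and B learned similar decision boundaries, a perturbation aimed at A carries over to B, which is the basic intuition behind black-box transfer attacks.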

~~~
Iv
I think that working around this type of fooling is easy but not really
worthwhile for now. After all, adversarial models are designed to improve the
performance of the models.

Also in the article, they test a detector that has to identify a single object
in an image that contains two: place an actual toaster next to the banana and
call it fooled.

~~~
solarkraft
The point there may have been that the sticker over-powered the banana. The
article does leave me with more questions than answers.

~~~
Iv
Ask them, I may have some answers.

------
ModernMech
It's pretty funny seeing this post directly under this: "Beijing bets on
facial recognition in a big drive for total surveillance"

I guess we'll see people sticking these on their faces?

~~~
QAPereo
I would guess that if that becomes an issue, such behavior will rapidly be
criminalized, and in China, harshly punished.

------
tw1010
These are the types of fun details that are totally going to be part of future
history lessons (no matter which direction, positive or negative, all of this
is heading towards).

~~~
zitterbewegung
Hardening production ML systems is going to be fun. I think exploiting ML
will be the new kid on the block, just like how we were introduced to XSS
exploits. See [https://github.com/cchio/deep-pwning](https://github.com/cchio/deep-pwning)

~~~
mtgx
Indeed. It's already starting to happen:

[https://www.csail.mit.edu/news/fooling-googles-image-recognition-ai-1000x-faster](https://www.csail.mit.edu/news/fooling-googles-image-recognition-ai-1000x-faster)

~~~
username223
Trying to "harden" million-parameter models trained on a relatively small
number of relevant examples will be a nightmare that makes web security look
easy.

PS -- NIPS is in Long Beach now? What a shame.

~~~
robotresearcher
It left Vancouver a few years ago and has been moving around. It's getting too
big for many cities now.

~~~
username223
Too bad... Vancouver/Whistler was an awesome place to have a conference, even
if the weather was rainy and the skiing so-so. A NIPS too big to be hosted by
anything but a mega-city sounds depressing.

------
vog
That reminds me of the old days when automatic "self-learning" SPAM
classification began.

Back then, spammers sent deliberately gibberish messages. The goal was that
users (rightfully) marked those as SPAM, somehow disturbing the machine
learning and thus weakening the overall SPAM recognition.

Alas, I don't know whether this actually worked, and if so, how large the
effect was. This would be an interesting bit of history.

------
dghughes
When I saw the "oily" legs on reddit I was curious if such an illusion could
be used to fool AI camera surveillance. The recent article on China's
surveillance network came to mind.

Oily legs illusion
[https://i.imgur.com/14U9rqn.jpg](https://i.imgur.com/14U9rqn.jpg)

------
wallstprog
Interestingly, William Gibson includes something very much like this as a plot
point at the end of "Zero History."

