
Fooling Production-Grade Classification Systems with Adversarial Traffic Signs [pdf] - mpweiher
https://arxiv.org/ftp/arxiv/papers/1907/1907.00374.pdf
======
kuu
Just if someone is interested, here is the link to the "landing" page, with
the metadata and abstract of the paper:

[https://arxiv.org/abs/1907.00374](https://arxiv.org/abs/1907.00374)

------
bb123
Makes me think that there is still a long way to go here. All of the changes
they made to the signs leave them still easily readable by humans. One could
see how reflections, weather and other environmental factors would distort a
road sign more than this. Speed limit signs are possibly the hardest example
though, because they require the system to recognise the exact number on the
sign, rather than just the sign's presence. I wonder how much harder it would
be to get it to misclassify a stop sign.

~~~
FeepingCreature
The road sign has to be modified in the specific manner of the attack; not
just any distortion will do.

------
Merrill
The training set for the classifiers evidently doesn't contain real-world
traffic signs that are weathered, rusted, peeled, bent, smudged, or
spray-paint tagged.

~~~
logfromblammo
Don't forget perforated by shotguns.

And stickered.

And those ones you sometimes find in malls, apartment complexes, or office
parks that are an architectural standard sign with the traffic control sign
symbol painted on one side, so you can't identify what the sign is from other
angles. I hate those.

------
lolc
Wow, I've read about adversarial images before but didn't think attacks would
be this practical. It's weird that the added smudges have any effect on the
result when there is so much contrast. The task only appears easy to us
because we're constantly training, cross-checking, and adjusting to conditions
without much conscious effort.

At some point, not now, we will accept what the machine says. Machines will be
better (faster and more accurate) at number recognition than us. But this
paper shows how far behind they still are. In the context of cars, I think the
machines will simply refuse to drive in conditions we would accept. They
(really their builders) will demand much better controlled conditions to
operate safely.

~~~
quickthrower2
We can still be fooled by sirnple things. What's a siRNple?

~~~
mannykannot
Dealing with typos is the sort of robustness that people have, but that these
image recognition programs are not displaying.

------
theon144
Nice! I really appreciate the real-world testing and not just "proving" the NN
can be fooled by running it against image files.

This always seemed to me like a huge omission in reports about adversarial
attacks: sure, a network can be fooled with pixel-perfect changes to the input
data, but would the attack survive literally any "real-world"-like
environment? I am quite surprised it does!
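For anyone curious what a "pixel-perfect" attack actually computes, here is a
minimal gradient-sign sketch against a toy linear classifier. This is a
hypothetical illustration of the general technique, not the paper's method:
the model, weights, and names below are all made up, and real attacks target
deep networks rather than a linear model.

```python
# Minimal FGSM-style sketch: flip a toy linear classifier's decision with a
# small, uniform (L-infinity bounded) perturbation along the sign of the
# loss gradient. Hypothetical toy model, not the paper's target system.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))       # weights for 2 classes over 16 "pixels"
x = rng.normal(size=16)            # a flattened input "image"

def predict(img):
    return int(np.argmax(W @ img))

true_label = predict(x)
other = 1 - true_label

# Margin between the winning class and the runner-up, and its input gradient:
grad = W[true_label] - W[other]
margin = float(grad @ x)           # > 0 because true_label won

# Smallest uniform per-pixel step that crosses the decision boundary:
eps = 1.1 * margin / np.sum(np.abs(grad))
x_adv = x - eps * np.sign(grad)    # each pixel changes by at most eps

print(true_label, predict(x_adv))  # the tiny perturbation flips the label
```

The point of the sign-of-gradient step is that every pixel moves by the same
tiny amount, so the change is spread thinly across the whole image instead of
being one visible blotch; that is why such perturbations can be hard for a
human to notice while still crossing the model's decision boundary.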

------
b_tterc_p
[https://m.xkcd.com/1958/](https://m.xkcd.com/1958/)

Not quite the same problem, but the result is pretty much the same. Computer
vision being fooled by attacks like these is a lower risk than we might
imagine. The more important question is how likely such conditions are to
occur naturally and randomly.

Edit: also, the summary comment about this being the first time a physical
object was used to fool a commercial classification system sounds...
dubious...

~~~
dexen
The XKCD you linked is reasonable on the surface of it, but if you mull it
over a bit more, you'll see three glaring omissions:

1) people care much, much less about "hurting" automated systems, and

2) people feel much less responsibility for accidents & failures if they are
separated by a level of indirection of automation

3) people feel much less responsible for mass actions that affect anonymous
people far away

Together, those three factors will indeed cause more incidents of people
attempting to fool self-driving cars, especially if done by displaying
malicious images via mass devices like light-up info boards, electronic
billboards, etc. I don't expect many more actual accidents, though.

~~~
empath75
Also, people aren't generally fooled by bad road lines.

~~~
b_tterc_p
Perhaps not. And they certainly aren’t fooled by adversarial speed signs. But
they are fooled by plenty of other things which are equally easy to implement
and equally dangerous.

------
Retric
This seems like a meaningless attack. If you’re modifying a sign, you can
simply replace it with any other sign.

What exactly is the utility?

~~~
theon144
The near-invisibility of the attack to a layman's (human) eye. I would think
this kind of attack would take non-negligible time to discover, unlike simply
replacing a sign, which is obvious to any random passerby.

~~~
Retric
Why would you think so? The signs that worked would look very odd when driven
past. Tiny pictures on a screen don’t do these things justice.

On the other hand a different sign is just a different sign.

