
Adversarial design printed on a shirt to fool object recognition algorithms - ciccionamente
https://www.vice.com/en_us/article/evj9bm/adversarial-design-shirt-makes-you-invisible-to-ai
======
scrumper
Lovely. A bit of social proof hacking could go a long way to making these
kinds of adversarial designs more common on the streets - hire some actors to go
round the city with CV-defeating makeup on, or these T-shirts, or these
garments: [https://www.vice.com/en_ca/article/qvgpvv/adversarial-
fashio...](https://www.vice.com/en_ca/article/qvgpvv/adversarial-fashion-
clothes-that-confuse-automatic-license-plate-readers) (though I wonder if
those designs might be shut down by copyrights on license plate designs?)

(As an aside I got a kick out of reading "some kind of hypebeast Supreme x MIT
collab")

~~~
SkyBelow
Don't forget to have them carry around 3D printed full auto assault turtles.

------
isthispermanent
So 100 human-sized objects get detected by the algo, and then one, wearing
this t-shirt, that fits most of the parameters, doesn't. Very, very easy to
adjust the algorithm to account for a t-shirt. This is cute, at best.

It's also then super easy to say that the individual wearing the shirt is
likely trying to evade monitoring. In practice, this sort of thing will
likely make you a more prominent target for monitoring, along the lines of
"what do you have to hide?".

Not that I agree at all with large-scale monitoring or think anyone should
prove that they don't have something to hide. Only that it paints the target
on your back.

~~~
crooked-v
> Very, very easy to adjust the algorithm for a t-shirt.

The operative point here is not 'a shirt', but a visual pattern that tricks
deep learning-style classifiers into wildly misidentifying something. There's
no 'very easy' way to counteract that other than retraining on a new dataset
or switching entirely away from a deep learning system.

~~~
Kalium
As I understand it, adversarial designs generally work on one specific
recognition system. So working around this attack would be very achievable
with three or more recognition systems and a consensus check.

This particular paper is based around attacking YOLOv2.
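A minimal sketch of that consensus check, assuming purely hypothetical per-detector outputs (the detector names and results below are illustrative, not taken from the paper):

```python
# Majority vote across independent detectors: an adversarial pattern tuned
# against one model (here YOLOv2) is unlikely to transfer to all of them.
def consensus_detect(per_detector_labels, min_agree=2):
    """Report a 'person' only if at least min_agree detectors saw one."""
    votes = sum(1 for labels in per_detector_labels if "person" in labels)
    return votes >= min_agree

# Hypothetical frame: the shirt fools YOLOv2 but not the other two models.
frame = {
    "yolov2": [],                    # fooled -- reports nothing
    "faster_rcnn": ["person"],
    "ssd": ["person", "backpack"],
}
print(consensus_detect(frame.values()))  # True: 2 of 3 detectors agree
```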

~~~
polishTar
I think these types of adversarial attacks are even easier to foil than that,
because they're specific to one particular set of _weights_. Even really,
really small changes in the training data or model could invalidate the
attack, if I understand correctly.

~~~
andrewflnr
I know there has been work in generating adversarial images that work against
multiple models. That kind of thing is probably only going to get better, to
say nothing of particular sets of weights in a single model.

------
godelski
> No one's going to start carrying cardboard patches around

Uhhhh... why not? You can put them on hats, backpacks, arm patches, or a lot
of things. I get that they are suggesting it would be uncomfortable to have a
stiff shirt, but there are easy solutions here.

I'm not trying to undermine the research here (because it is good research)
but I think the reporting could be a little better.

As for the research, I wish they had compared it to more accurate models. I
think this would greatly help a reader to understand the limitations of the
work. YOLO and faster-RCNN are great for "real-time" but don't have the
greatest accuracy. They trade accuracy for speed (more accurate models are
pretty slow). While I do think YOLO is more similar to what would be used in a
real life setting, it would be great to know how the design works for more
accurate models (this wouldn't require significantly more work either, since
you're just testing against pretrained models). If the researchers stumble
across this comment I would love to know if you actually did this and what the
results were (or if you see this comment and try it against a more accurate
model). (I do also want to say to the researchers that I like this work and
would love to see more)

------
cc439
The joke about Juggalo facepaint is both true and funny, but I think there is
some actual merit to that idea. Camo clothing (and I don't mean the kind you
see everyone wearing at rural WalMarts) goes in and out of fashion every
couple of years. Military-style jackets, boots, and caps (think of a
stereotypical anarchist style) are also perennially in style with certain
crowds. I don't think it's too far-fetched to imagine a future where camo
facepaint becomes fashionable enough to be widespread; there's also a lot of
artistic potential available in non-traditional patterns and colors.

I can't really see a way for AI cameras to get around properly applied
facepaint, especially varieties that are IR absorbent or reflective. I hold
the human brain in very high regard when it comes to pattern/symbol/shape
recognition, and if facepainting techniques are good enough to trick human
visual processing, they're going to be good enough to fool any existing AI. For
an example of what I mean by proper technique, refer to this video:
[https://youtu.be/YpzUr3twW4Q](https://youtu.be/YpzUr3twW4Q)

The trick is in getting enough people to adopt such a strategy that you can't
be identified through simple exclusion. I think the idea of camo/other
facepaint isn't so foreign and unappealing as to never come into common
fashion.

~~~
RogerL
> I can't really see a way for AI cameras to get around properly applied
> facepaint,

In video people move, and 3D information can be recovered unless their faces
are painted with something like Black 2.0. At which point why not just wear a
mask?

~~~
pazimzadeh
Can a person's gait be used as identifying information?

~~~
Jamwinner
Yes, and the 'rock in the shoe' model has been trained against as well. Good
luck.

------
floatingatoll
The arXiv paper contains images of the shirts and methodologies:

[https://arxiv.org/pdf/1910.11099.pdf](https://arxiv.org/pdf/1910.11099.pdf)

------
3pt14159
Colour me skeptical. There are multiple ways to capture features, and the
shirt may fool one set of algorithms, but I highly doubt it'll fool them all.

~~~
jadell
Like any good security protocol, this wouldn't be the only line of defense. A
combination of adversarial clothing, makeup, hair style, and accessories would
be used, and constantly evolving, making countermeasures harder. Security is
always reactive; you can't defend against an attack you've never seen
before.

~~~
notus
> Security is always reactive; you can't defend against an attack you've
> never seen before

Yes you can; that's part of the appeal of applying machine learning to
security. Such systems don't rely on things like signatures or existing
heuristics to identify things as malicious.

~~~
danShumway
Machine learning does rely on heuristics, it just builds the heuristics on its
own. If it runs into an attack that doesn't use any of the attack vectors it's
learned to guard against, it will fail.

Think of it like your body. It learns to identify viruses. Does that mean
you're immune from novel viruses or new strains of the flu?

~~~
notus
I think it was implied that I meant heuristics that humans have added
themselves. The point of it all is to allow models to make generalizations
about things it hasn't seen before. This can be done with a combination of
supervised and unsupervised techniques.

~~~
danShumway
> heuristics that humans have added themselves

I don't think this is a meaningful distinction. Who cares whether the new
heuristic is being added by a machine or a human?

You still need to keep feeding the neural network data to learn from, and it
will still choke when it sees novel data that doesn't align with the
heuristics it developed.

That's the entire reason adversarial AI works. The reason the trippy t-shirt
makes you invisible to some current AI systems is that it exploits the
heuristics they've built, using data that these systems are unfamiliar with
and haven't learned to process yet. If it were possible to build an AI system
that could defend against novel attacks, the trippy t-shirt wouldn't be able
to fool them.

------
papln
This t-shirt defeated two CV models, Faster R-CNN and YOLOv2.

We need better deployed testing suites that can test an adversarial model
against many popular classifiers, not just 2.

Even so, the paper itself shows that their t-shirt doesn't make the wearer
undetectable, only partially undetectable. A security system won't ignore you
just because it only saw you 10% of the time you were present (unless it's an
Uber self-driving car).
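To make the "partially undetectable" point concrete, here's a toy sketch of a tracker that flags anything detected in more than a small fraction of its frames (the threshold and frame counts are invented for illustration):

```python
def tracker_flags(frame_hits, min_rate=0.05):
    """Flag a track if the object was detected in more than min_rate of
    its frames -- being invisible most of the time isn't good enough."""
    rate = sum(frame_hits) / len(frame_hits)
    return rate > min_rate

# Wearer present for 100 frames but detected in only 10 of them (10%).
hits = [True] * 10 + [False] * 90
print(tracker_flags(hits))  # True: even a 10% detection rate gets flagged
```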

~~~
godelzilla
In a world of click-bait, "invisible to AI" is the same as defeating two
models most of the time.

------
gwbas1c
Would be nice if the article had bigger images of the shirt!

------
classified
I'm only waiting for the first SWATting incidents triggered by an algo
"recognizing" a turtle of mass distraction.

------
NovemberWhiskey
Straight out of William Gibson's "Zero History"!

~~~
tumba
In Zero History, the purpose of the shirt was not to fool the algorithm, but
to trip a deep "gentlemen's agreement" between intelligence agencies to make
invisible anyone bearing a certain pattern in order to protect the
intelligence apparatus.

It will always be difficult to sustainably defeat recognition algorithms and I
expect this to be an arms race along the same lines as other counter-
surveillance techniques.

Gibson's suggestion that deeply coded and secret exceptions to mass
surveillance might be used to protect state actors seems to me a plausible and
concerning aspect of these developments.

------
alep
We also tested fooling YoloV2 using t-shirts, but as mentioned in the paper,
we got mixed results. You can fool the object detection only if you get a
frontal exposure to the camera without any torsion / rotation / bending of the
t-shirt, which is pretty hard in real life. Would be interesting to see if you
can train adversarial examples robust to multiple angles. We decided to put
these t-shirts out for sale, for fun and to send a message: #donottrack.
[https://stealth.cool](https://stealth.cool)

------
jmartinpetersen
You can fool all the AIs some of the time, and some of the AIs all the time,
but you cannot fool all the AIs all the time.

~~~
nathancahill
Carl Sandburg said that.

~~~
nathancahill
_Deep_ dive in to the backstory behind the quote:

[http://www.taxhelp.com/lincoln.html](http://www.taxhelp.com/lincoln.html)

------
mdorazio
I’m confused how this helps beyond body recognition. It seems to me that the
focus these days is on facial recognition where you would be training your
model to look for facial features rather than whatever is on that shirt. Is
this supposed to somehow fool that as well by tricking it with false face
features or something?

~~~
Jaygles
I can imagine a scenario where a system doesn't attempt to look at a face
before it determines there's a full human in the frame.

Ultimately if a system is designed to only look at faces then this method
would likely not be effective.

------
mikece
Reminds me of the "hack" that was done to the Samaritan system in the
excellent TV series "Person of Interest." Granted, you have to suspend
disbelief on many points of AI to enjoy that show but I never understood why
they couldn't work around the bug that was placed in the system that prevented
the identification of seven people. In the examples cited, like tricking the
AI into thinking that a turtle was a gun, there's an easy fix once the
misclassification is noticed. I suspect the "t-shirt of invisibility" will
similarly be accounted for in the system and that people seen wearing it will
be targeted for MORE scrutiny as it could be presumed they are trying to hide
in plain sight and that there might be a nefarious reason for it.

~~~
WrtCdEvrydy
> they couldn't work around the bug that was placed in the system that
> prevented the identification of seven people

The explanation given was that one server per person would invalidate some
portion of the overall profile, so the identity would be misclassified (for
all main characters).

------
css
I have always wondered what the minimum amount of makeup needed to be
"invisible" to facial recognition would be. In some cyberpunk future I could
see people breaking up their features with thin black lines or something to
fool cameras.

~~~
pugworthy
What you're looking for is CV dazzle or dazzle makeup. Check out
[https://en.wikipedia.org/wiki/Computer_vision_dazzle](https://en.wikipedia.org/wiki/Computer_vision_dazzle)
and some more practical (?) information at
[https://cvdazzle.com/](https://cvdazzle.com/)

~~~
im3w1l
Do these still work? I expect designs like this to age quickly. Try reverse-
image-searching them :)

------
pgeezy
Where can I buy this merch?

~~~
papln
[https://www.cafepress.com/p/custom-holiday-
gifts](https://www.cafepress.com/p/custom-holiday-gifts)

------
darepublic
I know about adversarial attacks, but are they widely applicable? I would
think an attack that works on one algo might not work on another.

------
ackbar03
Baidu security gave out something similar at DEF CON Beijing. It was pretty
cool conceptually, but it really was just a gimmick.

------
calebm
This reminds me of that recent article about how zebra stripes have been shown
to reduce bug bites when painted on cows:
[https://news.ycombinator.com/item?id=21201807](https://news.ycombinator.com/item?id=21201807).
Probably a similar effect on object detection algorithms.

------
diego_moita
Interesting: all the authors have Chinese names. I wonder if any of them have
relatives in the Xinjiang Uyghur region.

------
pcstl
It's going to be fun to see this whole recognition-proof clothing trend turn
into a low-key "war" as states demand better recognition systems that can
defeat this kind of thing, and privacy activists keep developing new ways of
fooling AI.

------
kevin_thibedeau
The license plate shirts should be made with State Department diplomatic
country codes.

------
foxyv
I wonder what would happen if a self-driving car came across something like
this. Would it classify the pedestrian as "Nothing" then run them over?

------
floatrock
One of the most common uses of this tech in the US is automatic license plate
readers.

Without getting into a debate about expectations of privacy on public roads
vs. building a perpetual government database that effectively tracks where
every car is at all times of day, another application of this tech would be a
bumper decal.

I think most reasonable people would agree obscuring the license plate on a
public road is not the solution (well, with the exception of Florida Man who
racked up a $1MM fine when he was finally caught doing that through toll
booths for a year), but a decal like this wouldn't interfere with any
officer's human duties.

------
bitL
That should be easily solvable using 3D convolutions and processing a short
clip (~10 frames) instead of a single picture.
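A stdlib-only toy version of that idea (using a simple temporal average as a stand-in for actual 3D convolutions; the per-frame confidences are invented): even if the pattern suppresses the detector's score in some frames, aggregating over a short clip recovers the detection.

```python
from statistics import mean

# Hypothetical per-frame "person" confidences from a single-image detector.
# The adversarial pattern only works at certain angles, so some frames dip.
scores = [0.90, 0.10, 0.85, 0.05, 0.80, 0.90, 0.10, 0.95, 0.85, 0.90]

clip_score = mean(scores)               # aggregate over a ~10-frame clip
fooled_frames = sum(s < 0.5 for s in scores)

print(fooled_frames)      # 3 single frames fall below a 0.5 threshold
print(clip_score > 0.5)   # True: the clip as a whole still detects a person
```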

------
chrisa
It only works until those pictures are used to counter-train the AI, right? So
is this the high-tech arms race of the future?

~~~
ceejayoz
That, or they just ban it.

[https://en.wikipedia.org/wiki/Anti-
mask_law](https://en.wikipedia.org/wiki/Anti-mask_law)

------
cozzyd
Great way to get run over by an Uber self-driving car!

~~~
netsharc
You don't even need this shirt for that to happen..

------
Odenwaelder
A wearable captcha!

