
Google’s AI thinks this turtle looks like a gun - champagnepapi
https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed
======
falcolas
This is a problem, but perhaps not because innocent turtle owners will be
required to register with the government.

It's a problem because similar AI techniques are being used to classify
YouTube videos as advertiser friendly, or not. It's a problem because similar
AI techniques are used to classify your emails as spam or ham. To shut down
"abusive" Google accounts. To direct and classify search results (and to
present "facts" alongside them). To censor documents stored in Google Docs. To classify
people as homosexual or dangerous.

Even if you're lucky enough to get an appeal through to an actual human, it
can still take days, weeks, or months to get those AI classifications fixed.
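For context on how fragile these classifiers can be: the linked article's 3D-printed turtle exploits adversarial perturbations, where a small gradient-guided change to the input flips the model's prediction. Below is a minimal sketch of the idea (the Fast Gradient Sign Method) against a toy logistic classifier with made-up weights; real attacks work the same way against deep networks, using the gradient of the loss with respect to the input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM: step each input coordinate in the signed direction that
    increases the loss for the true label y (0 or 1)."""
    p = sigmoid(w @ x + b)   # model's confidence in class 1
    grad_x = (p - y) * w     # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy input confidently classified as class 1 (say, "turtle").
# Weights and inputs here are illustrative, not from any real model.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, -1.0, 1.0])
y = 1

p_before = sigmoid(w @ x + b)                  # high confidence in class 1
x_adv = fgsm_perturb(x, w, b, y, eps=2.0)
p_after = sigmoid(w @ x_adv + b)               # prediction flips to class 0
print(p_before > 0.5, p_after < 0.5)           # True True
```

In this 3-dimensional toy the step size is large, but in high-dimensional image space the same signed step can be imperceptibly small per pixel while still flipping the label, which is why a printed pattern on a turtle can read as "rifle."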

~~~
thisisit
The adversarial effect may hit hardest not with a YouTube video or a Google
Doc, but when an actual human acts on the classification. An example is the
Palantir story:
[https://news.ycombinator.com/item?id=15600859](https://news.ycombinator.com/item?id=15600859)

From the article:

"In fact, on two separate occasions, police shot at trucks misidentified as
belonging to Dorner, injuring three civilians. “We said [to Palantir], ‘We
need an application that can span multiple units within an agency…multiple
agencies within a county... and multiple counties within a state,’” says
Jackson. “[They developed an application] based on lessons learned from
Dorner.” That application, called ClueMan, short for Clue Manager, has just
gone live at JRIC."

Palantir, or any AI/ML software seller, will tout its system as infallible,
and mistakes, however rare they turn out to be, will have real chilling
effects.

------
sctb
Discussed:
[https://news.ycombinator.com/item?id=15601479](https://news.ycombinator.com/item?id=15601479)

