PatchAttack: A black-box texture-based attack with reinforcement learning (arxiv.org)
47 points by jchook 42 days ago | 10 comments



Help me out. I am naive about neural networks. I have skimmed the paper, read the abstract and the conclusion, and looked at the examples.

Doesn't this illustrate the fatal flaw in neural-network-based approaches to image recognition: that their failure modes are inscrutable?

80% or 95% of the time they do well, but in the corner cases where they do poorly, they fail in ways that are entirely unlike the way our brains fail. Unpredictably. So they can be useful for non-critical applications, but not critical ones. Like self-driving cars....

Stages of grief here... I was looking forward to my car with a cocktail cabinet that would drive me to parties and home again... I believed the hype five years ago. Is this why progress has stalled?


Not a full answer, but specifically for image recognition, there's been some exciting work by Chris Olah and others to visualize exactly what's going on in neural networks. Some of this work has been really fascinating, identifying what specific neurons or sub-networks seem to focus on.

One overview can be found here: https://distill.pub/2018/building-blocks/

So I think others in the space have the same frustrations about the lack of insight into these models, and we're working on ways to get better answers out of these black boxes.
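For anyone curious what that kind of visualization looks like in practice, here's a minimal sketch (not from the Distill article itself, and with an arbitrarily chosen layer and channel) of one technique it builds on: activation maximization, where you optimize an input image to maximally excite a single feature map in a pretrained model.

    import torch
    import torchvision.models as models

    # Pretrained VGG16 purely as an example model.
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

    activations = {}
    def hook(module, inp, out):
        activations["feat"] = out

    # Hook an intermediate conv layer (index 17 is just an illustration).
    model.features[17].register_forward_hook(hook)

    # Start from noise and optimize the image itself, not the weights.
    img = torch.randn(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([img], lr=0.05)

    channel = 42  # which feature map ("neuron") to visualize; arbitrary
    for step in range(200):
        optimizer.zero_grad()
        model(img)
        # Negative mean activation -> gradient ascent on the chosen channel.
        loss = -activations["feat"][0, channel].mean()
        loss.backward()
        optimizer.step()

    # `img` now roughly shows the pattern this channel responds to most strongly.

The Distill work layers a lot more on top of this (regularization, attribution, interfaces), but even this crude version gives some intuition for what individual units are detecting.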


Shower thought: vinyl car wrap with a bunch of pictures of stop lights at various angles


"Reckless endangerment: A person commits the crime of reckless endangerment if the person recklessly engages in conduct which creates substantial jeopardy of severe corporeal trauma to another person."

https://en.m.wikipedia.org/wiki/Endangerment


Here in New Zealand you cannot legally place images of official road signs next to or near roads. If you wanted to illegally stop vehicles, there are cheaper ways of blocking a road!


Some sort of rule on top could simply negate this by requiring a detection to cover a minimum amount of area relative to the image.
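Something like the following minimal sketch, assuming detections come back as (label, confidence, bounding box) tuples; the 2% threshold and the tuple format are hypothetical, not any particular detector's API:

    from typing import List, Tuple

    Detection = Tuple[str, float, Tuple[int, int, int, int]]  # label, score, (x1, y1, x2, y2)

    def filter_small_detections(detections: List[Detection],
                                image_w: int, image_h: int,
                                min_fraction: float = 0.02) -> List[Detection]:
        """Drop detections whose bounding box covers less than min_fraction of the image."""
        image_area = image_w * image_h
        kept = []
        for label, score, (x1, y1, x2, y2) in detections:
            box_area = max(0, x2 - x1) * max(0, y2 - y1)
            if box_area / image_area >= min_fraction:
                kept.append((label, score, (x1, y1, x2, y2)))
        return kept

    # Example: a tiny "stop light" printed on a car wrap gets filtered out.
    dets = [("traffic light", 0.91, (10, 10, 40, 60)),      # ~0.16% of a 1280x720 frame
            ("traffic light", 0.88, (500, 100, 700, 500))]  # ~8.7% of the frame
    print(filter_small_detections(dets, 1280, 720))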


Original title was "PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning" which seems better. The phrase "99% with black-box RL" is particularly inscrutable.


Changed now. Thanks. Submitted title was "PatchAttack: Image classifier adversarial attack, 99% with black-box RL".

Submitters: please use the original title and then, if you like, add a comment to the thread explaining what you think is important about the article. You'll have more room for your explanation that way, people won't complain, and you won't be breaking the site guidelines: https://news.ycombinator.com/newsguidelines.html


Is there a policy for shortening long original titles? Some research papers can be a bit wordy in their titles.


Just that the limit is 80 chars. There's almost always some fatty bit to cut - either the baity lede, or half the details after the inevitable colon.



