
As I understand it, adversarial patterns generally only work against one specific recognition system. So defeating this attack should be achievable by running three or more recognition systems and requiring a consensus check.

This particular paper is based around attacking YOLOv2.
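A rough sketch of what that consensus check might look like, assuming three independently trained detectors exposed as callables that return (label, box) detections; the detector interface and the `iou`/`consensus_detections` helpers here are hypothetical, just for illustration:

    # Hypothetical sketch: only accept a detection when a majority of
    # independently trained detectors (e.g. YOLO / SSD / Faster R-CNN)
    # report the same label on an overlapping box.

    def iou(a, b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter else 0.0

    def consensus_detections(image, detectors, min_votes=2, iou_thresh=0.5):
        """Keep a detection only if at least `min_votes` detectors agree."""
        per_model = [det(image) for det in detectors]  # one list per model
        kept = []
        for dets in per_model:
            for label, box in dets:
                votes = sum(
                    any(l == label and iou(box, b) >= iou_thresh for l, b in other)
                    for other in per_model
                )
                already = any(
                    l == label and iou(box, b) >= iou_thresh for l, b in kept
                )
                if votes >= min_votes and not already:
                    kept.append((label, box))
        return kept

The intuition is that a patch optimized against one model's weights (YOLOv2 here) is unlikely to fool the other, differently trained models on the same region.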

I think these types of adversarial attacks are even easier to foil than that, because they're specific to one particular set of weights. Even very small changes in the training data or model could invalidate the attack, if I understand correctly.


I know there has been work on generating adversarial images that transfer across multiple models. That kind of thing is probably only going to get better, to say nothing of attacks targeting a particular set of weights in a single model.
