Wouldn't having a small portion of adversarially modified images in your training set improve the robustness of the model?

Yes, doing this intentionally is a known technique (adversarial training) for producing models that are more resistant to adversarial attacks. One reason people don't do it more often is that generating the perturbed examples with PGD (projected gradient descent) is computationally expensive, but in this case your adversaries are doing it for you free of charge.
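
For the curious, here's a minimal sketch of what that looks like in PyTorch: a standard L-inf PGD attack plus a training step that perturbs only a fraction of each batch (the "small portion" the parent suggests). The model, optimizer, and the eps/alpha/steps values are illustrative assumptions, not taken from any particular codebase.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Standard L-inf PGD: start at a random point in the eps-ball,
        # take `steps` signed-gradient ascent steps on the loss, and
        # project back into the ball after each step.
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta), y)
            grad = torch.autograd.grad(loss, delta)[0]
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
            delta = delta.detach().requires_grad_(True)
        return (x + delta).detach().clamp(0, 1)  # keep pixels in [0, 1]

    def train_step(model, opt, x, y, adv_frac=0.25):
        # Perturb only a small slice of each batch; full adversarial
        # training (Madry et al.) would perturb all of it.
        k = max(1, int(adv_frac * x.size(0)))
        x = torch.cat([pgd_attack(model, x[:k], y[:k]), x[k:]])
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()

The cost mentioned above shows up in the `steps` parameter: each PGD example takes that many extra forward/backward passes per batch, which is exactly the work that scavenged adversarial images would save you.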
