
Stanford cs231n has a little demo illustrating how easy this is, if you know the model weights, called "Fooling images". In short, you backprop to the data to find out which pixels were important for the classification result. Then you ever so slightly modify these, right up to the point where the model misclassifies the image. [1]

[1]: https://nbviewer.jupyter.org/github/madalinabuzau/cs231n-con...
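The idea boils down to a few lines: compute the gradient of the target-class score with respect to the input pixels and take small steps in that direction until the prediction flips. A minimal sketch in PyTorch, assuming a pretrained classifier and a batched input image; the model choice, step size, and iteration count here are illustrative assumptions rather than the notebook's exact settings:

  import torch
  import torchvision.models as models

  # Any pretrained classifier with known weights works; resnet18 is just an example.
  model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

  def make_fooling_image(x, target_class, step=1.0, max_iters=100):
      """Nudge image x (shape [1, 3, H, W]) until the model predicts target_class."""
      x = x.clone().detach().requires_grad_(True)
      for _ in range(max_iters):
          scores = model(x)
          if scores.argmax(dim=1).item() == target_class:
              break  # the model is now fooled
          # Backprop to the data: gradient of the target-class score w.r.t. the pixels.
          scores[0, target_class].backward()
          with torch.no_grad():
              g = x.grad
              # Take a small normalized step toward the target class, then reset the gradient.
              x += step * g / g.norm()
              x.grad.zero_()
      return x.detach()

To a human the result is usually indistinguishable from the original image, which is what makes the demo so striking.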
