
1. Those are examples for a network that does not use blurring. You have to be careful because, remember, the adversary can tailor their examples to whatever preprocessing you use. So the adversarial examples for a network with blurring would look completely different, but they would still exist. Randomness might just force the adversary to use a distribution over examples, which could mean they fool you half the time instead of all the time. However, I wouldn't trust my intuition here: whether there is some randomized scheme that is provably resilient, whether all such schemes are provably vulnerable, or what error bounds on resilience can be proven, is really a question for machine learning theory researchers.

2. The premise of projecting an image onto a car's camera already implies you'd be able to sustain it for a few seconds.
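The point in (1) can be illustrated with a toy example: if the attack gradient is computed through the preprocessing step, blurring doesn't remove the adversarial direction, it just changes it. This is a minimal sketch, not any real defense or library API; the linear "model" and moving-average blur are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def blur(x, k=3):
    # Simple moving-average blur: a linear, fixed preprocessing step.
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="same")

w = rng.normal(size=32)   # toy linear "classifier" weights
x = rng.normal(size=32)   # clean input

def score(x):
    # The model only ever sees the blurred input.
    return w @ blur(x)

# Because the blur is linear with a symmetric kernel, the gradient of
# the score w.r.t. the raw input is just the blur applied to w.
grad = blur(w)

# FGSM-style perturbation tailored to the *blurred* pipeline: it differs
# from the perturbation you'd craft for the un-blurred model (sign(w)),
# but it still strictly lowers the score.
eps = 0.5
x_adv = x - eps * np.sign(grad)

print(score(x), score(x_adv))  # the adversarial score is strictly lower
```

The takeaway matches the comment: the blurred network has its own adversarial examples; the attacker just differentiates through the preprocessing.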



