Show HN: Robust-ML.org, a community-run reference for SOTA adversarial defenses (robust-ml.org)
2 points by logane on June 26, 2018 | 1 comment



Hi HN! I'm one of the authors of this resource. Here's a bit of background on why we created it:

Adversarial machine learning is a relatively new but rapidly developing field. It's easy to see why people are excited about this research area: ML systems are increasingly being deployed in the real world, and yet they're very easy to fool with maliciously perturbed inputs. There have been dozens of proposed attacks and hundreds of proposed defenses against such inputs. To help researchers keep up with developments in the field, we created this community-run reference for state-of-the-art adversarial example defenses.
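
To make "maliciously perturbed input" concrete, here's a minimal sketch of the classic fast gradient sign method (FGSM) in PyTorch. The model, input, and epsilon below are stand-ins for illustration; this isn't code from robust-ml.org:

    import torch
    import torch.nn as nn

    # stand-in classifier and input, just for illustration
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in image
    y = torch.tensor([3])                             # its true label
    eps = 0.03                                        # L-infinity budget

    # take one gradient step that *increases* the loss, then clamp
    # back to valid pixel range -- the result often looks identical
    # to a human but flips the model's prediction
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

The perturbation is bounded by eps in every pixel, which is what a typical L-infinity threat model allows.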

Unlike the objective in most subfields of ML, security is a negative goal: the aim is to produce a machine learning system that can't be fooled. Showing that a system can't be fooled is really hard.

Measuring progress in traditional machine learning can be done through a monotonically increasing objective: if a paper raises accuracy on a given benchmark from 94% to 95%, progress has been made. Future papers may improve on the benchmark, but accuracy will not decrease. In contrast, measuring progress in adversarial machine learning is exceptionally difficult. By definition, the right metric for a defense is accuracy under the best attack that respects the threat model, and the best attack may not be known at the time the defense is published. This is why later third-party analyses of defense techniques are so important.
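
As a rough sketch of what that metric looks like in code: robust accuracy counts an input only if the model survives every attack in the suite, so the number can only go down as stronger attacks are discovered. The attacks list here is hypothetical, standing in for whatever the strongest known attacks are at any given time:

    def robust_accuracy(model, attacks, data):
        # count an input as robust only if *every* attack in the
        # suite fails to flip the prediction away from the true label
        correct = 0
        for x, y in data:
            preds = [model(attack(model, x, y)).argmax(dim=1).item()
                     for attack in attacks]
            if all(p == y for p in preds):
                correct += 1
        return correct / len(data)

A defense paper can only evaluate against the attacks that exist when it's written; a new attack added to the suite later can send this number to zero, which is exactly what has happened to many published defenses.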

robust-ml.org lists current defenses along with third-party analyses of each one, making it easy to get a complete picture of which techniques have been shown to be broken and which currently seem to be working.



