A new subfield of adversarial ML that faces challenges similar to those in adversarial NLP: topological attacks on graphs for attacking graph/node classifiers.

Both problems (NLP and graph robustness) are much harder than adversarial attacks on image classifiers because of their combinatorial nature: the perturbation space is discrete, so the gradient-based attacks that work on pixels don't apply directly.

For graphs, canonical notions of robustness with respect to perturbation classes defined by lp norms on the adjacency matrix aren't so useful (e.g., consider perturbing a barbell graph by removing its bridge edge: a huge topological perturbation, but a tiny lp perturbation!)
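To make the barbell example concrete, here is a small sketch (assuming `networkx` is available; the variable names are illustrative). Removing the single bridge edge changes only two symmetric entries of the adjacency matrix, so the l1 perturbation is just 2, yet it disconnects the graph entirely:

```python
import numpy as np
import networkx as nx  # assumed dependency, used here only to build the graph

# Barbell graph: two K_5 cliques (nodes 0-4 and 5-9) joined by one bridge edge.
G = nx.barbell_graph(5, 0)
A = nx.to_numpy_array(G)

# Remove the bridge edge between the two cliques.
G2 = G.copy()
G2.remove_edge(4, 5)
A2 = nx.to_numpy_array(G2)

# lp view: tiny perturbation (two symmetric adjacency entries flip).
lp_dist = np.abs(A - A2).sum()
print(lp_dist)  # 2.0

# Topological view: huge perturbation (the graph falls apart).
print(nx.is_connected(G), nx.is_connected(G2))  # True False
```

Any lp ball on adjacency matrices large enough to contain this perturbation also contains many edits that barely change the topology, which is why lp balls are a poor proxy for "small" graph perturbations.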

I think investigating robustness for graph classifiers should also help robustness for practical NLP systems, and vice versa. For example, is there any work that investigates the robustness of NLP systems under classes of perturbations defined on the space of ASTs?
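For intuition on what an AST-level perturbation class might look like, here is a toy sketch using Python's stdlib `ast` module (the `RenameVar` class is hypothetical, purely for illustration). Renaming a bound variable is a large edit at the token level but a semantics-preserving identity at the program level, the mirror image of the barbell-graph problem:

```python
import ast

class RenameVar(ast.NodeTransformer):
    """Illustrative AST perturbation: rename one identifier everywhere."""

    def __init__(self, old, new):
        self.old, self.new = old, new

    def visit_Name(self, node):
        # Rename uses of the variable.
        if node.id == self.old:
            node.id = self.new
        return node

    def visit_arg(self, node):
        # Rename the function parameter itself.
        if node.arg == self.old:
            node.arg = self.new
        return node

src = "def f(x):\n    return x + 1"
tree = RenameVar("x", "z").visit(ast.parse(src))
perturbed = ast.unparse(ast.fix_missing_locations(tree))
print(perturbed)  # def f(z):\n    return z + 1
```

A robustness notion defined over such AST transformations (rather than over token edit distance) would ask that a classifier's output be invariant under these semantics-preserving rewrites, analogously to asking for invariance under topology-preserving graph edits.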



