
Universal Adversarial Triggers for Attacking and Analyzing NLP - new_hn_acct
http://ericswallace.com/triggers
======
new_hn_acct
Hello, one of the authors here. Our work introduces the notion of Universal
Adversarial Triggers: a sequence of tokens that, when prepended to any input,
causes an NLP model to produce a desired output. Feel free to check out the
blog and the linked demos, or ask any questions.
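To make the idea concrete, here is a toy sketch of what "universal trigger" means: a single fixed token sequence that, prepended to arbitrary inputs, pushes a classifier toward one target label. The keyword-counting classifier and the trigger string below are hypothetical stand-ins for illustration only; the paper's actual triggers are found by a gradient-guided search over a real model's vocabulary, which this sketch does not implement.

```python
# Toy illustration of a universal trigger (hypothetical classifier,
# NOT the authors' gradient-based search method).

POSITIVE = {"great", "good", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def toy_sentiment(text: str) -> str:
    """A stand-in bag-of-words sentiment classifier."""
    toks = text.lower().split()
    score = sum(t in POSITIVE for t in toks) - sum(t in NEGATIVE for t in toks)
    return "positive" if score >= 0 else "negative"

# A single fixed prefix that flips this toy model on ANY input whose
# positive-word count is below 3 -- the "universal" part of the idea.
TRIGGER = "awful awful awful"

for s in ["I love this movie", "great food good service"]:
    before = toy_sentiment(s)
    after = toy_sentiment(TRIGGER + " " + s)
    print(f"{s!r}: {before} -> {after}")
```

Against a real model the trigger is not hand-picked like this; it is optimized over the model's vocabulary so that one short sequence works across the whole dataset.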

