
Targeted Attacks on Speech-to-Text [pdf] - khc
https://arxiv.org/abs/1801.01944
======
kleer001
If asked, I would have answered that yes, of course this is possible, but it never occurred to me. I wonder whether these adversarial attacks work in ML because there's too much overfitting in the models, or because they're too high-resolution, or something else? Can someone in this space help me build a better intuition for how these work?
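For intuition: most of these attacks don't rely on overfitting per se. They treat the trained model as a differentiable function and do gradient ascent on the loss with respect to the *input* (instead of the weights), so a tiny, carefully aimed perturbation moves the input across a decision boundary. A minimal sketch of the fast-gradient-sign idea, using a toy logistic-regression "model" in plain NumPy as a stand-in for a real speech network (all names and numbers here are illustrative, not from the paper):

```python
import numpy as np

# Toy differentiable "model": logistic regression with fixed weights.
# A speech-to-text net is far bigger, but the attack logic is the same.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability the model assigns to class 1.
    return sigmoid(w @ x + b)

def loss_grad_wrt_input(x, y):
    # Cross-entropy loss L = -[y log p + (1-y) log(1-p)].
    # Its gradient w.r.t. the INPUT (not the weights) is (p - y) * w.
    p = predict(x)
    return (p - y) * w

def fgsm(x, y, eps=0.3):
    # One signed-gradient step on the input: nudge every coordinate
    # by +/- eps in whichever direction increases the loss.
    return x + eps * np.sign(loss_grad_wrt_input(x, y))

x = np.array([0.2, -0.4, 1.0])
y = 1.0  # true label
x_adv = fgsm(x, y)
# Each coordinate moved by only eps, but all moves are aligned with
# the loss gradient, so confidence in the true label drops.
print(predict(x), predict(x_adv))
```

The point is that the perturbation is small in every coordinate (here at most 0.3; in the audio attack it is kept below human-audible levels), yet because it is aligned with the gradient its effects add up across dimensions. The targeted audio attack in the paper is the same idea run iteratively, with an objective that pushes the transcription toward an attacker-chosen phrase.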

