I have a feeling that the inscrutability of deep learning models is going to provide a playground for fraud. There are some papers I can't find at the moment (Deep Flaws?) on fooling models with adversarial inputs, and on detecting that kind of model and input tampering.
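For context, a minimal sketch of the canonical attack (FGSM, from Goodfellow et al., "Explaining and Harnessing Adversarial Examples"): take the gradient of the loss with respect to the *input* and nudge the input in the direction that increases the loss. The toy model and random input below are stand-ins, assuming PyTorch; a real attack would target a trained model on real data.

```python
# Minimal FGSM sketch. The classifier and input are placeholders to
# show the mechanics, not a realistic attack setup.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy "MNIST" classifier
model.eval()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # its (assumed) true label

# Gradient of the loss w.r.t. the input, not the weights.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Step the input in the sign of the gradient to maximize the loss,
# then clamp back into the valid pixel range.
eps = 0.1
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

print("clean pred:", model(x).argmax(1).item(),
      "adv pred:", model(x_adv).argmax(1).item())
```

The unsettling part is that the perturbation is bounded by eps, so the adversarial input can look identical to a human while flipping the model's output, which is exactly the kind of thing that's hard to audit in an inscrutable model.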
I'm approaching ML from 20 years in infosec and am a bit undecided about where to focus my efforts: using ML as a security tool, or defending ML systems against threats. Ultimately both will be applicable.