
Hacking Neural Networks: A Short Introduction - jonbaer
https://github.com/Kayzaks/HackingNeuralNetworks
======
mholt
If you like this kind of thing, I wrote a little document summarizing the
security and privacy weaknesses of neural networks during grad school a couple
years ago:
[https://matt.life/papers/security_privacy_neural_networks.pd...](https://matt.life/papers/security_privacy_neural_networks.pdf)

I _think_ most of the content in there is still relevant/valid, but the field
moves so fast (dozens of papers are written every day on average; it's
actually one of the main reasons I switched my emphasis from ML to Internet
security)...

~~~
dijksterhuis
Nice read, thanks.

Yeah, I’ve always had a weird feeling about whether I should have pivoted to
deep fakes, cryptonets, or ML for security applications, ever since I was a
few months into my PhD.

Think I’ve found myself a good non-image niche though (thank Christ for the no
free lunch theorem).

------
s_Hogg
I think this sort of thing is handy for reinforcing to people that neural nets
are still just code, and should be treated as such. That means building
testing and reliability standards into deployment. The last exercise in
particular rams it home really well - you don't need to mount adversarial
attacks by inferring gradients and all that jazz when you can just spot a
buffer overflow. This is a great repo, which I (ML lead) am going to make my
co-workers read.
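For contrast, the "inferring gradients" route mentioned above is roughly the fast gradient sign method (FGSM): nudge each input feature in the direction that increases the model's loss. A minimal sketch, using a toy logistic-regression "model" as a stand-in for a trained network (the weights and the `predict`/`fgsm` helpers here are hypothetical, purely for illustration):

```python
import numpy as np

# Toy stand-in for a trained network: logistic regression with
# arbitrary fixed weights (illustrative only, not from the repo).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability of the positive class.
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # FGSM step: move x along the sign of the loss gradient.
    # For cross-entropy loss with this model, dL/dx = (p - y) * w.
    p = predict(x)
    grad = (p - y) * w
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(size=16)   # a benign input with true label y = 1
x_adv = fgsm(x, y=1.0, eps=0.3)

# The perturbed input lowers the model's confidence in the true class,
# even though each feature moved by at most eps.
print(predict(x), predict(x_adv))
```

The point of the comment stands: this white-box math is a lot more effort than noticing that, say, a model-file parser copies attacker-controlled bytes into a fixed-size buffer.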

------
ngcc_hk
It has come. I guess the trick is whether you understand your own neural
network well enough to both hack it and defend it.

------
cairo_x
NeuralOverflow... sweet hacker name.

