
DeepRobust: PyTorch library for adversarial attacks and defenses in deep learning - ChandlerBang
https://github.com/DSE-MSU/DeepRobust/#
======
KKKKkkkk1
I never understood the whole notion of attacks on deep-learning systems and
all the research effort spent on defending against such attacks. After all,
we're talking about systems that are brittle even on non-adversarial inputs,
e.g. [http://news.mit.edu/2019/object-recognition-dataset-stumped-...](http://news.mit.edu/2019/object-recognition-dataset-stumped-worlds-best-computer-vision-models-1210).

The language of attacks and defenses implies that we are approaching the kind
of robustness we expect from, say, a banking app, when in fact we are
light-years away from that.

~~~
0xab
I'm the lead author of the paper you cited. Glad you enjoyed our work :)

Sure, you can break systems. That doesn't mean that they aren't useful! In
many cases a system will see the same boring input many times over. People are
often willing to be a bit flexible and help out when it happens to misread
something. The fact that you can intentionally break systems like that, and
that you can break them in a particular direction, like making them always
think there's no danger in an image, is really worrisome.

Our work shows that your autonomous car won't always work well: its vision
system has systematic errors, which we can now characterize.
Adversarial attacks show that someone can intentionally make your car see a
lane, whenever and wherever they feel like it, and drive you off the road.
It's a whole different ballgame, and the language of attack and defense really
fits well.

------
ChandlerBang
DeepRobust is a PyTorch library for adversarial attack and defense methods on
images and graphs. Research has demonstrated that imperceptible perturbations
to the input data can fool deep-learning-based systems.
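DeepRobust's own API aside, the core idea behind gradient-based perturbation attacks such as FGSM can be sketched in a few lines. This is a minimal NumPy illustration against a toy logistic-regression model, not DeepRobust code; the weights, input, and epsilon below are made-up values for the demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon=0.1):
    """One FGSM step against a toy logistic-regression model (hypothetical).

    Adds epsilon * sign(dL/dx) to the input, where L is the cross-entropy
    loss, pushing the model's prediction away from the true label y.
    """
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y) * w         # dL/dx for cross-entropy loss
    return x + epsilon * np.sign(grad_x)

# Toy demo: a clean input that the model classifies correctly...
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.5, -0.5, 0.2])   # clean input, true label 1
p_clean = sigmoid(w @ x + b)

# ...is pushed toward the wrong class by a small signed perturbation.
x_adv = fgsm_perturb(x, 1.0, w, b, epsilon=0.6)
p_adv = sigmoid(w @ x_adv + b)
print(p_clean, p_adv)            # confidence drops after the attack
```

On real image models the same sign-of-gradient step is applied pixel-wise with a small epsilon, which is why the perturbation is visually imperceptible while still flipping the prediction.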

------
aaaaasd
I’ve been working on GCNs for a while, and I recently read something about the
robustness of GCNs; then I found this. It’s probably useful for getting a quick
start :)

------
hannxu
I think robustness might be a big issue for safe AI. I’m still trying to learn
about the area. Do you have any advice on where I should start?

