
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks - hardmaru
https://gandissect.csail.mit.edu
======
chairmanwow
This is somewhat tangential, but I really love how they published this
research. The page has lots of interactive media visually explaining the
results of their project. It was really easy to understand what they did, and
fun to play around with the provided examples. I wish more research would end
up online in a format like this!

~~~
ArtWomb
I agree. Magenta, TensorFlow, and Keras all have WebGL-accelerated client-side
JavaScript libraries, so interactive demos could become the norm.

The Scientific Paper is Obsolete

[https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/](https://www.theatlantic.com/science/archive/2018/04/the-scientific-paper-is-obsolete/556676/)

~~~
gmiller123456
I'd be willing to bet that Magenta, TensorFlow, Keras, and even WebGL will be
obsolete long before the scientific paper in its traditional form becomes
obsolete. Animations and interactivity are cool and all, but the timelessness
of plain text has already lasted thousands of years. And even though everyone
who spoke those languages died thousands of years ago, we're still able to
interpret the meaning of a lot of them.

------
joaorico
If you are interested in learning about GANs, check out GAN Lab, by Google
Brain and others:

[https://poloclub.github.io/ganlab/](https://poloclub.github.io/ganlab/)

It's similar to TensorFlow Playground, but for GANs:

[https://playground.tensorflow.org/](https://playground.tensorflow.org/)

~~~
maticaputti
Amazing playground, thanks for sharing!

------
mendeza
I am super excited about this work and network dissection. The idea of causal
units, i.e. finding neurons that correlate with objects in an image, will have
a huge impact on image manipulation tasks. I am excited to try this to control
how GANs generate images.
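
Roughly, the paper's causal intervention zeroes (to ablate) or raises (to
insert) a chosen set of feature-map channels during the generator's forward
pass. A minimal PyTorch sketch of the ablation side; the generator loader,
layer name, latent size, and unit indices below are all hypothetical:

    import torch

    def ablate_units(module, inputs, output, units=(12, 45, 101)):
        # Zero the chosen feature-map channels for every generated image.
        output = output.clone()
        output[:, list(units), :, :] = 0.0
        return output

    # generator = load_pretrained_gan()   # assumption: a conv GAN generator
    # layer = dict(generator.named_modules())["layer4"]  # hypothetical name
    # handle = layer.register_forward_hook(ablate_units)
    # z = torch.randn(1, 128)             # latent size is an assumption
    # image_without_concept = generator(z)
    # handle.remove()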

~~~
xiphias2
It's cool. It would be great if the neurons were classified by machine
learning with the names of the objects they encode (and the probability of
that encoding being correct), so that more than just a few object types could
be added / removed.
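
Network dissection does something close to this: it labels a unit by comparing
its thresholded, upsampled activation map with segmentation masks for each
object class and keeping the best IoU. A rough sketch of that scoring (the
tensor shapes and the top-1% threshold are assumptions, not the paper's exact
procedure):

    import torch
    import torch.nn.functional as F

    def label_unit(activation, seg_masks, class_names):
        # activation: (H, W) float feature map for one unit on one image
        # seg_masks:  (C, H_img, W_img) boolean masks, one per object class
        act = F.interpolate(activation[None, None], size=seg_masks.shape[-2:],
                            mode="bilinear", align_corners=False)[0, 0]
        act_mask = act > act.flatten().quantile(0.99)  # keep top-1% activations
        best_iou, best_name = 0.0, None
        for mask, name in zip(seg_masks, class_names):
            union = (act_mask | mask).sum().item()
            iou = (act_mask & mask).sum().item() / union if union else 0.0
            if iou > best_iou:
                best_iou, best_name = iou, name
        return best_name, best_iou  # e.g. ("tree", 0.31): a name plus a score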

~~~
SubiculumCode
Remember this? Grandmother Cell
[https://en.wikipedia.org/wiki/Grandmother_cell](https://en.wikipedia.org/wiki/Grandmother_cell)
[https://www.scientificamerican.com/article/one-face-one-neuron/](https://www.scientificamerican.com/article/one-face-one-neuron/)

Startling, but in the end, not a correct description of the brain. The
analogy here is obvious.

------
tobr
I would like to have a GAN brush in Photoshop. Let me pick a color, brush
size, etc. like usual, but gently reinterpret what I paint in the direction of
the GAN's output, similar to the effect of a textured brush.
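
The paper's interactive demo comes close: painting selects a spatial region,
and the intervention raises the units for a concept inside that region rather
than touching pixels directly. A hedged sketch of that hook (the unit indices,
activation level, and mask resolution are assumptions):

    import torch

    def paint_units(module, inputs, output, units=(7, 33), mask=None, level=10.0):
        # mask: (H, W) boolean brush mask at this layer's spatial resolution
        if mask is None:
            return output            # nothing brushed yet
        output = output.clone()
        for u in units:
            channel = output[:, u]   # one feature map as a view: (batch, H, W)
            channel[:, mask] = level # raise the concept's units inside the brush
        return output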

------
SubiculumCode
GANs seem to me the closest analogy to how the brain (or parts of it) might
function. Moreover, when I try to imagine something in my mind's eye, it feels
and looks something like what these GANs are producing.

~~~
taeric
This feels too much like an appeal to wishful thinking.

I can't say that you are wrong. But this doesn't exactly fill me with
confidence. Probably doesn't help that I don't really have a mind's eye. I am
probably clinging too heavily to some "underlying truth", as well.

Reminds me of the complaints I'll see, where folks bemoan that nobody learns
the reason math works anymore. Only, that doesn't really make sense. Few of us
ever really learned "why math works", because it turns out that is not nearly
as straightforward as folks assert it is.

~~~
SubiculumCode
Well, I did refer to it as an _analogy_. However, I do think there are
probably several useful analogies between machine learning and neuroscience.

For example: Sparse Encoding
[http://www.mit.edu/~9.54/fall14/Classes/class07/Palm.pdf](http://www.mit.edu/~9.54/fall14/Classes/class07/Palm.pdf)
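
For anyone unfamiliar, the sparse-coding idea in those notes is to represent a
signal with only a few active units drawn from an overcomplete dictionary. A
tiny NumPy illustration using ISTA (soft-thresholded gradient steps); the
dimensions and constants are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 256))      # dictionary: 256 atoms in 64-d space
    D /= np.linalg.norm(D, axis=0)      # unit-norm atoms
    x = rng.normal(size=64)             # signal to encode
    code = np.zeros(256)

    step, lam = 0.1, 0.2
    for _ in range(100):
        grad = D.T @ (D @ code - x)     # gradient of 0.5 * ||x - D @ code||^2
        code -= step * grad
        code = np.sign(code) * np.maximum(np.abs(code) - step * lam, 0.0)

    print(np.count_nonzero(code), "of 256 coefficients active")  # few units fire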

~~~
taeric
Oh, it is definitely an analogy. I should have stressed more that I don't
think you are wrong. Not just that I can't say you are wrong. I really don't
think you are.

My concern is that I'm just not sure how far that analogy helps. Unlike old
analytic models, we don't have many tools for analyzing these new models. We
can only talk about how well they perform on fixed data sets.

There are some interesting results in transfer learning. But I suspect most
of the truly amazing results have been essentially cherry-picked in the
process. (That is, blind pigs and troughs, and all of that.)

I hope I'm wrong. I really do.

------
rexpop
Great, another of the numerous ways for us to lie to each other.

