GAN Dissection: Visualizing and Understanding Generative Adversarial Networks (csail.mit.edu)
180 points by hardmaru on Nov 29, 2018 | 18 comments



This is somewhat tangential, but I really love how they published this research. The write-up has lots of interactive media visually explaining the results of their project. It was really easy to understand what they did, and fun to play around with the provided examples. I wish more research would end up online in a format like this!


I agree. Magenta, TensorFlow, and Keras all have WebGL-accelerated, client-side JavaScript libraries, so interactive demos could become the norm.

The Scientific Paper is Obsolete

https://www.theatlantic.com/science/archive/2018/04/the-scie...


I'd be willing to bet that Magenta, TensorFlow, Keras, and even WebGL will be obsolete long before the scientific paper in its traditional form becomes obsolete. Animations and interactivity are cool and all, but plain text has already proven its timelessness over thousands of years. Even though everyone who spoke those ancient languages died thousands of years ago, we're still able to interpret the meaning of much of what they wrote.


If you are interested in learning about GANs, check out GAN Lab, by Google Brain and others:

https://poloclub.github.io/ganlab/

It's similar to TensorFlow Playground, but for GANs:

https://playground.tensorflow.org/


amazing playground, thanks for sharing!


I am super excited about this work and network dissection. The idea of causal units, of finding neurons that correlate to objects in an image, will have a huge impact on image manipulation tasks. I am excited to try this to control how GANs generate images.
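For anyone who wants to try it, the control boils down to clamping selected channels of an intermediate generator layer. A minimal sketch with PyTorch forward hooks; the generator, layer name, and unit indices below are placeholders I made up, not the paper's code:

    import torch

    def clamp_units(layer, unit_ids, value=0.0):
        """Force selected channels of a layer's output to a fixed value
        (zero to ablate a concept, a large value to boost it)."""
        def hook(module, inputs, output):
            output = output.clone()
            output[:, unit_ids] = value
            return output                  # returned value replaces the layer output
        return layer.register_forward_hook(hook)

    # Hypothetical usage, assuming `generator` is some pretrained conv GAN:
    # handle = clamp_units(generator.layer4, unit_ids=[12, 87], value=0.0)
    # z = torch.randn(1, 128)              # latent code; size depends on the model
    # image_without_that_object = generator(z)
    # handle.remove()                      # restore normal behavior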


It's cool. It would be great if the neurons were classified by machine learning with the names of the objects they encode (and the probability of that encoding being correct), so that more than just a few object types could be added or removed.
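That is roughly what the dissection step does: each unit is scored against segmentation masks for a vocabulary of object classes and tagged with its best match. A rough sketch of that scoring, assuming activation maps and masks from some off-the-shelf segmenter (the shapes and the quantile threshold are my assumptions, not the paper's exact numbers):

    import numpy as np

    def label_unit(unit_acts, class_masks, quantile=0.99):
        """Label one generator unit with the object class it best matches.

        unit_acts:   (N, H, W) activation maps for the unit, upsampled to
                     the segmentation resolution over N generated images.
        class_masks: dict of class name -> (N, H, W) boolean masks.
        Returns (best_class, best_iou).
        """
        thresh = np.quantile(unit_acts, quantile)   # unit-specific threshold
        unit_mask = unit_acts > thresh

        best_class, best_iou = None, 0.0
        for name, mask in class_masks.items():
            inter = np.logical_and(unit_mask, mask).sum()
            union = np.logical_or(unit_mask, mask).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best_class, best_iou = name, iou
        return best_class, best_iou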


Remember this? Grandmother Cell https://en.wikipedia.org/wiki/Grandmother_cell https://www.scientificamerican.com/article/one-face-one-neur...

Startling, but in the end, not a correct description of how the brain works. The analogy here is obvious.


Is it just me, or are superlatives used more commonly in today's communication?


I would like to have a GAN brush in Photoshop. Let me pick a color, brush size, etc. as usual, but gently interpret what I paint in the direction of the GAN's output, similar to the effect of a textured brush.
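One naive way to approximate that effect: run the painted canvas through an image-to-image generator and blend toward its output only under the brush mask. A toy sketch, where the arrays, the generator, and the blend rule are all my own assumptions:

    import numpy as np

    def gan_brush(canvas, gan_render, brush_mask, strength=0.5):
        """Pull painted pixels toward a GAN's rendering of the same region.

        canvas:     (H, W, 3) float image in [0, 1], what the user painted.
        gan_render: (H, W, 3) output of some generator conditioned on the canvas.
        brush_mask: (H, W) float in [0, 1], soft mask of the brush stroke.
        strength:   how strongly to move toward the GAN output under the brush.
        """
        alpha = strength * brush_mask[..., None]     # per-pixel blend weight
        return (1.0 - alpha) * canvas + alpha * gan_render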


GANs seem to me the closest analogy to how the brain (or parts of it) might function. Moreover, when I try to imagine something in my mind's eye, it feels and looks something like what these GANs produce.


This feels too much like an appeal to wishful thinking.

I can't say that you are wrong. But this doesn't exactly fill me with confidence. It probably doesn't help that I don't really have a mind's eye. I am probably clinging too heavily to some "underlying truth", as well.

Reminds me of the complaints I'll see where folks bemoan that nobody learns why math works anymore. Only, that doesn't really make sense: few of us ever really learned "why math works", because it turns out that is not nearly as straightforward as folks assert it is.


Well, I did refer to it as an analogy. However, I do think there are probably several useful analogies between machine learning and neuroscience.

For example: Sparse Encoding http://www.mit.edu/~9.54/fall14/Classes/class07/Palm.pdf


Oh, it is definitely an analogy. I should have stressed more that I don't think you are wrong. Not just that I can't say you are wrong. I really don't think you are.

My concern is that I'm just not sure how far that analogy helps. Unlike old analytic models, we don't have much in the way of tools for analyzing these new models. We can only speak to how well they perform on fixed data sets.

There are some interesting results in transfer learning. But I suspect most of the truly amazing results have been essentially cherry-picked in the process. (That is, blind pigs and troughs, and all of that.)

I hope I'm wrong. I really do.


I agree. It is similar to pareidolia, or akin to how dreams are actually imperfect, but imperfect in ways that your neural network classifying the imagery is able to ignore and work with.


I would pick variational autoencoders (VAEs) as the closest analogy to how the brain functions. The brain observes data and encodes it to a latent vector. When we imagine something or dream, the brain decodes that latent vector back into imagery.

Moreover, our brains usually cannot imagine something as sharp and real as a GAN's output. It's more like the blurry image a VAE produces.
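For reference, the encode-then-decode loop the analogy rests on, as a minimal VAE sketch in PyTorch (layer sizes are arbitrary and not tied to any particular model):

    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        """Minimal VAE: observe x, encode to a latent vector, decode back."""
        def __init__(self, x_dim=784, z_dim=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
            self.mu = nn.Linear(256, z_dim)
            self.logvar = nn.Linear(256, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                     nn.Linear(256, x_dim), nn.Sigmoid())

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # "perceive"
            return self.dec(z), mu, logvar                           # "imagine"

Decoder outputs like this tend to be soft and blurry, which is exactly what makes the comparison to mental imagery tempting next to a GAN's crisp samples.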


Some suggest that a principled connection between VAEs and generative adversarial networks (GANs) can be made using adversarial variational Bayes:

https://avg.is.tuebingen.mpg.de/publications/mescheder2017ar...
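The rough idea: keep the VAE objective, but replace the analytic KL term with a discriminator T(x, z) trained to tell encoder samples from prior samples; at its optimum, T(x, z) approximates log q(z|x) - log p(z). A loss-level sketch under that reading, where the encoder, decoder, and discriminator networks are placeholders:

    import torch
    import torch.nn.functional as F

    def avb_losses(x, encoder, decoder, discriminator, z_dim=32):
        """Loss terms for one Adversarial Variational Bayes step, schematically.

        encoder(x, eps)     -> z        implicit posterior sample
        decoder(z)          -> x_recon  reconstruction in [0, 1]
        discriminator(x, z) -> logit T(x, z)
        """
        eps = torch.randn(x.size(0), z_dim)    # encoder noise (dimension assumed)
        z_q = encoder(x, eps)                  # z ~ q(z|x)
        z_p = torch.randn_like(z_q)            # z ~ p(z), standard normal prior

        # Discriminator loss: tell encoder samples from prior samples.
        # Step only the discriminator's parameters with this; detach z_q so
        # the term doesn't push gradients into the encoder.
        t_q = discriminator(x, z_q.detach())
        t_p = discriminator(x, z_p)
        d_loss = F.binary_cross_entropy_with_logits(t_q, torch.ones_like(t_q)) \
               + F.binary_cross_entropy_with_logits(t_p, torch.zeros_like(t_p))

        # Encoder/decoder loss: reconstruction plus T(x, z_q), which stands in
        # for the KL term of the usual VAE bound. Step only the encoder's and
        # decoder's parameters with this one.
        recon = F.binary_cross_entropy(decoder(z_q), x,
                                       reduction='none').sum(-1).mean()
        eg_loss = recon + discriminator(x, z_q).mean()

        return d_loss, eg_loss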


Great, yet another of the countless ways for us to lie to each other.



