
GANs seem to me the closest analogy to how the brain (or parts of it) might function. Moreover, when I try to imagine something in my mind's eye, it feels and looks to me something like what these GANs are producing.



This feels too much like an appeal to wishful thinking.

I can't say that you are wrong. But this doesn't exactly fill me with confidence. It probably doesn't help that I don't really have a mind's eye. I am probably clinging too heavily to some "underlying truth", as well.

Reminds me of the complaints I'll see where folks bemoan that nobody learns the reason math works anymore. Only, that doesn't really make sense. Few of us ever really learned "why math works", because it turns out that is not nearly as straightforward as folks assert it is.


Well, I did refer to it as an analogy. However, I do think there are probably several useful analogies between machine learning and neuroscience.

For example: Sparse Encoding http://www.mit.edu/~9.54/fall14/Classes/class07/Palm.pdf


Oh, it is definitely an analogy. I should have stressed more that I don't think you are wrong. Not just that I can't say you are wrong. I really don't think you are.

My concern is that I'm just not sure how far that analogy helps. Unlike old analytic models, we don't have much in the way of tools for analyzing these new models. We can only speak to how well they perform on fixed data sets.

There are some interesting results in transfer learning. But I suspect most of the truly amazing results have been essentially cherry-picked in the process. (That is, blind pigs and troughs, and all of that.)

I hope I'm wrong. I really do.


I agree. It is similar to pareidolia, or akin to how dreams are actually imperfect, but imperfect in ways that the neural network classifying the imagery would be able to ignore and work with.


I would pick variational auto-encoders (VAEs) as the closest analogy to how the brain functions. The brain observes data and encodes it into a latent vector. When we imagine something or dream, the brain decodes a latent vector back into video.

Moreover, our brain usually cannot imagine something as sharp and real as a GAN's output. It's more like a blurry image from a VAE's output.
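
For what it's worth, here's a minimal sketch of that encode/decode picture as a toy PyTorch VAE. The class name, layer sizes, and latent dimension are all illustrative assumptions on my part, not anything from the comments or the linked paper.

  # Toy VAE: encode observations to a latent vector, decode latents back to images.
  import torch
  import torch.nn as nn

  class TinyVAE(nn.Module):
      def __init__(self, input_dim=784, latent_dim=32):
          super().__init__()
          # "Observe data and encode it into a latent vector"
          self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
          self.to_mu = nn.Linear(256, latent_dim)
          self.to_logvar = nn.Linear(256, latent_dim)
          # "Decode the latent vector back into an image"
          self.decoder = nn.Sequential(
              nn.Linear(latent_dim, 256), nn.ReLU(),
              nn.Linear(256, input_dim), nn.Sigmoid())

      def encode(self, x):
          h = self.encoder(x)
          return self.to_mu(h), self.to_logvar(h)

      def reparameterize(self, mu, logvar):
          # Sample a latent vector; this stochasticity, plus the simple
          # pixel-wise likelihood, is part of why VAE outputs look blurry.
          std = torch.exp(0.5 * logvar)
          return mu + std * torch.randn_like(std)

      def forward(self, x):
          mu, logvar = self.encode(x)
          z = self.reparameterize(mu, logvar)
          return self.decoder(z), mu, logvar

  # "Imagining": decode a latent vector drawn from the prior.
  model = TinyVAE()
  z = torch.randn(1, 32)
  imagined = model.decoder(z)  # blurry, VAE-style output

The untrained model just produces noise, of course; the point is only the structure: perception maps to encode(), imagination/dreaming maps to decoding a sampled latent.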


Some suggest that a principled connection between VAEs and Generative Adversarial Networks (GANs) can be made using adversarial variational autoencoders.

https://avg.is.tuebingen.mpg.de/publications/mescheder2017ar...



