
Sounds like you got it, more or less. Current views of how object recognition works in the brain are a lot like current deep net models of object recognition (e.g. AlexNet and beyond): a hierarchical series of processing steps in which units at successive stages become more selective for specific things, and more invariant to image variation (size, position, lighting, etc.). One view of holistic face perception is that it is just the natural consequence of having units tuned to whole faces (or to large portions of the face). But why this should be implemented in humans as a specific category-selective patch of the brain is an open and fascinating question that I am now hopeful network modeling may inform.
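To make that "more selective, more invariant" idea concrete, here is a toy two-stage sketch (my own illustration, not any actual model): a selective first stage (a hand-built vertical-edge detector) followed by a pooling stage that throws away position, so the final response is the same wherever the feature appears.

```python
import numpy as np

def detect_edges(img, kernel):
    """Stage 1: naive valid-mode 2-D cross-correlation.
    Selective: responds strongly only where the kernel's pattern appears."""
    h, w = kernel.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def pool_max(feature_map):
    """Stage 2: global max-pool.
    Invariant: keeps the strongest response, discards where it occurred."""
    return feature_map.max()

# A crude vertical-edge detector (bright-to-dark transition)
vertical_edge = np.array([[1.0, -1.0],
                          [1.0, -1.0]])

# The same vertical edge placed at two different positions
img_a = np.zeros((6, 6)); img_a[:, 1] = 1.0
img_b = np.zeros((6, 6)); img_b[:, 4] = 1.0

resp_a = pool_max(detect_edges(img_a, vertical_edge))
resp_b = pool_max(detect_edges(img_b, vertical_edge))
# After pooling, the two responses are identical: position-invariant
```

Real networks stack many such selective/pooling stages, so later units end up tuned to complex patterns (in the holistic-face view, whole faces) while tolerating large image variation.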

Interesting. In the past, I've read researchers commenting that "neural networks" was mostly unfortunate terminology, because the resemblance between connections in the brain and connections between nodes in a neural network was only a surface similarity that probably didn't offer insights into how the brain really works.

But you're saying that there may be more similarity than we thought. I remember way back when there was some evidence of things like horizontal- and vertical-feature detection. It sounds as if there is still some evidence of this, but perhaps it's more plastic than was once imagined.

As the Marr intro chapter explains so beautifully, there are many levels of analysis in cognitive science and cognitive neuroscience. Units in deep nets are very different from actual neurons, and the backprop methods used to train deep nets have no resemblance to how human brains get wired up. But for the case of object recognition at the level of representation, there are striking similarities between deep nets optimized for invariant object recognition, and parts of the primate brain that carry out this task. See this brilliant and seminal paper: http://www.pnas.org/content/111/23/8619.long
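The kind of comparison that paper makes can be sketched with representational similarity analysis (a rough illustration with made-up data, not the paper's actual pipeline): from each system's responses to the same stimuli, build a matrix of pairwise dissimilarities between stimuli, then correlate the two matrices. A high correlation means the two systems carve up the stimulus space with a similar geometry, even though their units are physically very different.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up response matrices: rows = stimuli (images), cols = units.
n_stimuli = 8
model_responses = rng.normal(size=(n_stimuli, 50))        # e.g. a deep-net layer
neural_responses = model_responses + 0.1 * rng.normal(    # noisy "recordings"
    size=(n_stimuli, 50))                                 # with similar geometry

def rdm(responses):
    """Representational dissimilarity matrix:
    1 - correlation between each pair of stimulus response patterns."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (diagonal excluded)."""
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

score = rsa_score(rdm(model_responses), rdm(neural_responses))
# score near 1 => the two systems represent the stimuli similarly
```

Here the "neural" data is just the model plus noise, so the score comes out high by construction; the interesting empirical result in the linked paper is that this kind of match actually holds between task-optimized deep nets and primate IT cortex.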

I work with neural networks for complex scene processing and object detection on the roads. The best part of my job is watching a network "learn/train itself" to classify various object categories.

Are there any good theories on what happens during the training process of the brain (for example, while learning a new skill, or something very basic/simple) and how individual neurons are affected by this "learning" process? I understand that from a psychological perspective we see the brain as this beautiful system, but I am asking from a physiological perspective. What kind of changes can we observe in neurons when we learn something new?

P.S: Thanks a lot for your replies. Means a lot. :)

Here is one cool example: https://www.sciencedirect.com/science/article/pii/S089662731... And this one I have not read, but it is by reputable people in a good journal and looks fun: https://www.sciencedirect.com/science/article/pii/S089662731...

Thanks, both of them look really relevant. Will read. :)

Thanks! I look forward to diving into this.
