
Hey, I'm at lecture 2.6, where you're talking about the recognition of faces and its relationship with holistic image processing.

So, can I go ahead and abstract the ability of the mind being discussed? Basically, given a category, this vision-processing module in the brain processes different features of the image (feature in the machine-learning sense). And these categories can be hierarchical, like faces, humans, creatures: a hierarchy the brain may be consulting when it tries to identify a face and switches into a mode that needs a holistic view of the image rather than the activity of some isolated parts of the brain. I admit that imagining how this happens biologically (physiologically) is hard for me.

My question is, am I correct in the above inference? I want to suggest an experiment now :D :P




Sounds like you got it, more or less. Current views of how object recognition works in the brain are a lot like current deep net models of object recognition (e.g. AlexNet and beyond): a hierarchical series of processing steps in which units at successive processing stages get more selective for specific things, and more invariant to image variation (size, position, lighting, etc). One view of holistic face perception is that it is just the natural consequence of having units tuned to whole faces (or to large portions of the face). But why this should be implemented in humans as a specific category-selective patch of the brain is an open and fascinating question that I am now hopeful network modeling may inform.
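That selectivity-plus-invariance progression can be sketched in a few lines of numpy: a "simple cell" stage selective for a vertical edge, followed by a "complex cell" max-pooling stage whose response survives small shifts of the input. This is a toy sketch in the spirit of HMAX-style and CNN layer stacks, not a model of the actual biology:

```python
import numpy as np

def simple_cells(image, kernel):
    """Stage 1: units selective for one local feature (here, a vertical edge)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def complex_cells(responses, pool=4):
    """Stage 2: max-pooling over a neighborhood builds invariance to position."""
    h, w = responses.shape
    out = np.zeros((h // pool, w // pool))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = responses[i * pool:(i + 1) * pool,
                                  j * pool:(j + 1) * pool].max()
    return out

kernel = np.array([[-1., 1.], [-1., 1.]])   # vertical-edge detector

img = np.zeros((9, 9)); img[:, 4:] = 1.0        # edge at one position
shifted = np.zeros((9, 9)); shifted[:, 3:] = 1.0  # same edge, shifted left

r1 = complex_cells(simple_cells(img, kernel))
r2 = complex_cells(simple_cells(shifted, kernel))
# The simple-cell maps differ (selective to position), but the pooled
# complex-cell responses are identical (invariant to the shift).
print(np.allclose(r1, r2))
```

Stacking many such stages is exactly how deep nets trade off selectivity and invariance at each level.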


Interesting. In the past, I've read researchers commenting that "neural networks" was mostly unfortunate terminology: the resemblance between the physical wiring of the brain and the connections between nodes in a neural network was a surface similarity that probably didn't offer insights into how the brain really worked.

But you're saying that there may be more similarity than we thought. I remember way back when there was some evidence of things like horizontal- and vertical-feature detection. It sounds as if there is still some evidence of this, though the system is perhaps more plastic than was once imagined.


As the Marr intro chapter explains so beautifully, there are many levels of analysis in cognitive science and cognitive neuroscience. Units in deep nets are very different from actual neurons, and the backprop methods used to train deep nets have no resemblance to how human brains get wired up. But for the case of object recognition at the level of representation, there are striking similarities between deep nets optimized for invariant object recognition, and parts of the primate brain that carry out this task. See this brilliant and seminal paper: http://www.pnas.org/content/111/23/8619.long
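For a sense of how such model-to-brain comparisons are often quantified at the level of representation, here is a toy sketch of representational similarity analysis with made-up data. (The linked paper also uses regression-based neural predictivity; everything below, including the fake "model" and "neural" responses, is purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: responses of 50 model units and 30 "neurons" to 20 images.
# Both are noisy linear readouts of the same 5-dimensional latent structure,
# mimicking a model and a brain area that represent images similarly.
n_images = 20
shared = rng.normal(size=(n_images, 5))
model_feats = shared @ rng.normal(size=(5, 50)) + 0.1 * rng.normal(size=(n_images, 50))
neural_resp = shared @ rng.normal(size=(5, 30)) + 0.1 * rng.normal(size=(n_images, 30))

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between the
    response patterns evoked by each pair of images."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(a, b):
    """Correlate the upper triangles of two RDMs (ignoring the diagonal)."""
    iu = np.triu_indices(a.shape[0], k=1)
    return np.corrcoef(a[iu], b[iu])[0, 1]

score = rsa_score(rdm(model_feats), rdm(neural_resp))
print(round(score, 2))  # high, since both share the same latent structure
```

The point of the RDM trick is that model units and neurons never need to be matched one-to-one: only the geometry of the two representations is compared.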


I work with neural networks for complex scene processing and object detection on the roads. The best part of my job is watching a network "learn/train itself" to classify various object categories.

Are there any good theories on what happens during the training process of the brain (for example, while learning a new skill or something very basic/simple) and how individual neurons are affected by this "learning" process? I understand that from a psychological perspective we see the brain as this beautiful system, but I am asking from a physiological perspective. What kind of changes can we observe in neurons when we learn something new?

P.S: Thanks a lot for your replies. Means a lot. :)


Here is one cool example: https://www.sciencedirect.com/science/article/pii/S089662731... And this one I have not read but is by reputable people in a good journal and looks fun: https://www.sciencedirect.com/science/article/pii/S089662731...
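The textbook physiological picture behind learning-related changes like these is synaptic plasticity: connections between neurons that are repeatedly co-active get stronger (Hebbian learning, implemented biologically by mechanisms such as long-term potentiation). A deliberately minimal sketch of the basic Hebbian update rule, not drawn from either paper:

```python
import numpy as np

# Ten presynaptic inputs onto one postsynaptic cell; weights start at zero.
# Real synapses also show depression, spike-timing dependence (STDP), and
# homeostatic regulation -- this shows only the core "fire together,
# wire together" idea.
pattern = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0], dtype=float)
weights = np.zeros(10)
lr = 0.1

for _ in range(20):          # repeatedly "practice" the same input
    pre = pattern            # presynaptic activity
    post = 1.0               # assume the postsynaptic cell fires
    weights += lr * pre * post   # Hebbian update: dw = lr * pre * post

# Synapses from active inputs strengthened; inactive ones are unchanged.
print(weights)
```

After training, the cell responds most strongly to the practiced pattern, which is the simplest sense in which "what a neuron is for" can change with experience.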


Thanks, both of them look really relevant. Will read. :)


Thanks! I look forward to diving into this.



