Out of all the modalities, vision is easily the one we know the most about, and at a fairly deep level. The discussed work seems fine, but it's not the groundbreaking insight it's made out to be. Great PR work from the involved scientists (or their enterprising university marketing department).
(The book used to be freely available on a website hosted by Harvard Med, but I can't seem to find it anymore.)
FWIW, I work a little bit in computational neuroscience. While I think "ground breaking" is an exaggeration, and I wish the article spent more time explaining the general thinking in the field and why it matters, the content is not terribly written for an article of this type and length. And it should be emphasized that the point of this modeling is really understanding the biology of the primate visual system; what it says about the general problem of vision is a separate question.
Disclaimer: I was not involved in this work, but did collaborate with one of the scientists extensively in the past.
> Not only are LGN cells scarce — they can’t do much either. LGN cells send a pulse to the visual cortex when they detect a change from dark to light, or vice versa, in their tiny section of the visual field. And that’s all.
[I think the "scarcity" is real, but some areas have better coverage than others. I really don't remember anything similar to the other part of the model, though.]
This looks like a binary toggle encoding (implying that the receiving end must remember and count how many pulses it has received to know whether that part of the visual field is dark or light).
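To make that reading concrete, here is a minimal sketch (my interpretation of the quoted description, not the authors' model): if each pulse only signals a dark-to-light or light-to-dark transition, the receiver must track pulse parity from a known starting state.

```python
# Illustrative "binary toggle" decoder: each pulse means the patch
# flipped between dark and light, so the receiver must count pulses
# (really, track their parity) from a known initial state.

def decode_toggle(pulses, initial="dark"):
    """Return the inferred state of one patch after `pulses` transitions."""
    state = initial
    for _ in range(pulses):
        state = "light" if state == "dark" else "dark"
    return state

print(decode_toggle(3))  # odd number of pulses -> flipped: 'light'
print(decode_toggle(4))  # even number of pulses -> back to 'dark'
```

The fragility is visible immediately: miss a single pulse and every subsequent state inference is inverted, which is part of why this encoding seems implausible to me.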
What I vaguely remember is different: neurons in that part (or nearby) fire pulses periodically, and the interval between pulses shrinks (or grows?) as the light level increases. (Or perhaps the interval stays fixed, but double/triple/... pulses are used for more light.) There may also be some slow adaptation to the light level, so that after a while at a fixed light level the neuron returns to its default inter-pulse interval. I'm not sure about the actual encoding, but everything I remember is very different from the encoding described in the article.
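A toy version of that recollection (a rate code with slow adaptation; illustrative numbers only, not anything from the article) might look like this:

```python
# Illustrative rate code: brighter-than-adapted input -> shorter
# inter-pulse interval (faster firing). A slow adaptation variable
# drifts toward the current light level, so under a constant stimulus
# the interval relaxes back toward the baseline.

BASE_INTERVAL = 100.0  # ms between pulses at the fully adapted baseline

def inter_pulse_interval(light, adapted_level):
    """Interval shrinks as light exceeds the current adaptation level."""
    drive = max(light - adapted_level, 0.0)
    return BASE_INTERVAL / (1.0 + drive)

def adapt(adapted_level, light, rate=0.1):
    """Slowly move the adaptation level toward the current light level."""
    return adapted_level + rate * (light - adapted_level)

level = 0.0    # adapted to darkness
light = 4.0    # sudden step to a bright stimulus
intervals = []
for _ in range(50):
    intervals.append(inter_pulse_interval(light, level))
    level = adapt(level, light)

print(intervals[0])   # short interval right after the step
print(intervals[-1])  # relaxes back toward BASE_INTERVAL
```

Unlike the toggle scheme, a decoder here can read the current light level off any recent inter-pulse interval without an unbroken pulse history, which is the property that makes this kind of code more robust.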
This penultimate paragraph directly contradicts the headline. I know that writers don't have much control over their headlines but this is endlessly frustrating as a reader.
> Their work is the first of its kind.
No and no.
Shapley has great work in this area -- other biologically-plausible models of visual cortex indeed cite his work -- but this article is making grandiose claims with little knowledge of a deep field of work.
A lot could be going on here from an information-theoretic standpoint: compressed sensing, error correction, and time-series prediction, all operating in concert to reconstruct a model of the world inside our minds.
Almost 15 years ago, Jeff Hawkins proposed a similar explanation for why biological neural networks are so much more effective than artificial neural networks. If this paper bears out, it will mean he was on the right track. His theory was based on 3 criteria that he believed to be essential to understanding the brain:
> The second criterion was the importance of feedback. Neuroanatomists have known for a long time that the brain is saturated with feedback connections. For example, in the circuit between the neocortex and a lower structure called the thalamus, connections going backward (toward the input) exceed the connections going forward by almost a factor of ten! That is, for every fiber feeding information forward into the neocortex, there are ten fibers feeding information back toward the senses. Feedback dominates most connections throughout the neocortex as well. No one understood the precise role of this feedback, but it was clear from published research that it existed everywhere. I figured it must be important.
This is not anything real scientists believe. In science, only experimental data can tell you how something works. Math models are just math models (not to imply they're not useful in understanding how things work, but they certainly are not sufficient on their own).
"Comprehensive population models, such as the one we have presented here, seek to link cellular properties and network structure to dynamics and function, the ultimate goal being to use these models to test hypotheses and to suggest future experiments. We propose that for areas of the brain about which there are sufficient data, such as the visual cortex, it is time to move to next-generation models that are more comprehensive, data driven, and dynamic. Such a move would constitute a paradigm shift in computational neuroscience, and the present model is a step in that direction."
Basically, they want more than just a simplified mathematical model: something that accounts for all the measured data. I'm not sure this is the right approach, and only an example application would show whether it is. I don't know whether they have applied their model yet, or whether it even can be applied; I only browsed the first paper.
What could they have possibly thought the key was previously? And is the journalist of preschool intellect, or simply an amazingly ignorant baccalaureate?