
A deep learning framework for neuroscience - wei_jok
https://www.nature.com/articles/s41593-019-0520-2
======
etbebl
I see a lot of criticism here saying things like "DNNs have nothing to do with
brains, they weren't designed to work like brains, and any resemblance is
surely just an artifact of training them to do brain-like things."

The fact is, there have been neuroscientists working with neural network
models with greater and lesser complexity than DNNs for decades. They've been
utilized to great profit outside of neuroscience lately, but that doesn't make
them not an abstraction of some aspects of cortical computation.

We don't quite understand how brains could perform or approximate backprop
yet, but it's the only training algorithm that has been remotely successful at
training networks deep enough to do human-like visual recognition. So many
people take that as a big clue as to what we should be looking for in the
brain to explain its great performance and ability to learn, rather than a
reason to disqualify DNNs entirely.
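To make concrete what training a deep network by backprop involves, here is a minimal sketch on a toy task (XOR). This is purely illustrative, not the paper's model; it is just the standard algorithm written out by hand:

```python
import numpy as np

# Minimal backprop sketch: a 2-layer sigmoid network learning XOR.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: propagate the error layer by layer (the chain rule).
    d_out = 2 * (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)
```

The backward pass is the part with no agreed-upon biological analogue: each weight update uses an error signal carried back from the output through every later layer.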

There's plenty of modeling work going on with more traditional biophysical
models, such as those that include spiking, interneuron compartments,
attractor dynamics, etc. This is just an attempt to also come at the problem
from the other direction, starting from something that we know works well (for
vision) and trying to figure out how to ground it in biophysical reality.

~~~
endswapper
I don't think anyone is trying to disqualify DNNs. I think the difference
might be an abstraction for a neuron vs an abstraction for the brain. Success
or value doesn't necessarily equate to "human-like." The paper seems either
naive to, or to ignore, prominent, long-standing related research that provides
a stronger foundation and, as far as I can tell, already includes what they
propose. So, at least for me, I'm not sure what the contribution is.

------
bra-ket
function optimization in the deep learning sense has nothing to do with
neuroscience; I hope they don't fit this model to brain processes just because
it's popular

~~~
neuronerd
In a range of domains, in particular higher-level brain areas, DL models
trained on images are already the best predictive models of brain function.
If they are better than all other models at describing the data, why would we
say they have nothing to do with neuroscience?

~~~
briga
As far as I know there is no evidence that the brain has any analogue to the
back-propagation used to train pretty much all modern neural networks. Back-
propagation is a good way to optimize neural networks, but it doesn't seem to
be the way brains optimize neural networks.
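One family of alternatives neuroscientists do study is local learning rules, where a synapse changes using only its own pre- and post-synaptic activity. A toy Hebbian sketch (my own illustration, not anything from the paper) shows the contrast with backprop's globally propagated error:

```python
import numpy as np

# Toy Hebbian update: each weight changes using only local information
# (its pre- and post-synaptic activity), unlike backprop, which needs
# an error signal carried back through the whole network.
rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (3, 3))   # small random synaptic weights
W0 = W.copy()
lr = 0.01

for _ in range(100):
    pre = rng.random(3)               # presynaptic activity
    post = W @ pre                    # postsynaptic activity (linear units)
    W += lr * np.outer(post, pre)     # Hebb: "fire together, wire together"

# Note: plain Hebbian growth is unbounded; stable models add normalization
# (e.g. Oja's rule).
```

Rules like this are biologically plausible but, on their own, have not matched backprop at training deep networks, which is part of the tension the thread is circling.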

~~~
neuronerd
well, the original paper has a pretty good summary of how the brain may
actually do backprop.

~~~
joycian
What do you mean by "original paper" here?

------
endswapper
I find it disappointing that the paper makes no mention of Numenta, TBToI or
HTM. How is what they are proposing not already included in Numenta's work
(informally, of course)? Plus, Numenta's work seems to go much further
confronting biological plausibility head-on.

~~~
ckrailo
I wish there was more content about that on HN in general.

The book On Intelligence by Jeff Hawkins was a fantastic read on HTM and
similar concepts. ([https://amzn.to/2JyQDF3](https://amzn.to/2JyQDF3))

------
dr_dshiv
Anyone up for a summary? I didn't get much from the abstract.

~~~
kristjankalm
they're arguing that artificial neural nets are useful models of brain
function and anatomy. a lot of people in the field of neuroscience strongly
disagree, hence their attempt to outline the utility of ANNs.

~~~
skohan
I also tend to be skeptical that ANNs are very useful as a model for brain
function. In vivo neural networks are _so complex_ and _so dynamic_ compared
to ANNs.

In my opinion, the fact that even such a massively simplified model of one
specific subtype of neural processing has been able to give results as
powerful as those we have seen from Deep Learning should give us an
appreciation for how much there still is for us to learn about this
staggeringly complex system.

I would guess that the next great advancements will come from using better
understandings of the brain to build better ANNs, not the other way around.

~~~
iciac
Learning's likely to be bidirectional. An ANN (as a mathematical analogue) is
independent of the biological system that was its original and key
inspiration. Advances in network architecture (e.g. the recent trend toward
skip connections and parallel processing) are likely to give insight into how
an underlying, more complex system operates. In particular, systematic errors
made by ANNs under given frameworks tend to have counterparts in psychology
and biology. Since conceptual thinking from each domain can feed directly into
the other, it's a rare bootstrap moment with the potential for rapid advances
in both directions.

~~~
skohan
> Advances in network architecture (e.g. the recent trend towards skip
> connections and parallel processes) is likely to give insight to how an
> underlying, more complex system is likely to operate.

 _Maybe_. The thing about these advances in ANNs is, do we have _any_ reason
to believe they have _anything_ to do with the way biological neural networks
work? It _might_ be the case that these kinds of advances correlate with a
more accurate understanding of how our brains process information, or it might
also be the case that these are just optimizations of a mathematical model
which is fundamentally different from biological intelligence.

To me, advances in the other direction are much more compelling. We actually
know quite a lot about how biological neural networks work. The way that
electrical and chemical signals are transmitted is quite well understood, and
can be accurately modeled through mathematical models derived from physics and
physical chemistry. At the moment, the problem seems to be more about how to
accurately model, at scale, a system we already have tons of data on.

It's not that I think these innovations in ANNs have no value; it's just that
ANNs seem quite tangential to neuroscience.

~~~
crawfordcomeaux
From studying ANNs, I've reconceived of how I view myself from a programmatic
perspective. I have used the resulting models to change myself in useful ways.
If they're not perfectly accurate, they may still be accurate enough to be
useful.

The trick, to me, is to avoid falling into the trap of thinking imperfect
models aren't useful. Then the accuracy matters less.

An example of a useful intuition was realizing choosing to believe something
is a skill and I can choose to believe the opposite of anxious thoughts to
safely defuse anxiety as long as I'm meeting my needs.

I know people who've been in therapy for a long time before learning that one,
so I'm gonna keep using ANNs as a guide for self-hacking. It's way too useful
to me.

~~~
skohan
It's fine and good that ANNs can serve as a metaphor for your own mind. That's
something very different from saying they're going to be useful in unraveling
the scientific mysteries of the brain.

------
coward12345678
My 8-month-old daughter suffers from cortical visual impairment after
contracting bacterial meningitis caused by E. coli during the birthing
process. She had to have a bilateral craniotomy to have isolated areas of the
infection carved out of her brain tissue.

Looking at this article, I wonder if we'll ever be able to figure any of this
out. I feel pretty hopeless about the entire situation.

~~~
CodiePetersen
I don't think we need to have a full understanding of the brain to make
progress on those fronts. If you look at neuralink
([https://www.youtube.com/watch?v=r-vbh3t7WVI](https://www.youtube.com/watch?v=r-vbh3t7WVI))
there is some pretty amazing brain computer interfacing that is already
happening. If we assume Moore's law holds up for neural sensor resolution,
then within 10-15 years there will be as many input sensors as there are cones
in the human eye. There are also plenty of other existing technologies that
can help make her life easier.

I'm not saying "don't worry about it"; as an uncle with two nieces, I know how
difficult it would be just to ignore something like this. You want your family
and dependents to have happy, full lives, and every little struggle makes you
worry and think. But what I am saying is: have a little hope. When you see
articles like this, they are normally talking about the theoretical and
philosophical meaning of intelligence and consciousness; there is plenty of
solid, practical, applied science and progress that relates to your daughter's
situation.

Don't get yourself worked up about the ponderings of math and computer nerds.
The real life-changing stuff is not being done in AI right now; it's being
done in universities, hospitals, and laboratories by scientists, doctors, and
professors.

------
endswapper
This might be a little late, but it may be helpful:

[https://singularityhub.com/2019/10/03/deep-learning-networks...](https://singularityhub.com/2019/10/03/deep-learning-networks-cant-generalize-but-theyre-learning-from-the-brain/)

------
tudorw
I'm not sure, but I think it's trying to say that deep learning as it stands
is modeled on one aspect of a model of the brain, and that developing out the
3 aspects they identify and having them act in unison would potentially be a
good thing. Disclaimer: I am neither a neuroscientist nor a deep learning
expert!

~~~
skohan
What I understood is that they're saying deep learning relies on understanding
neural processing in 3 parts: objective functions (activation functions
maybe?), learning rules (I guess like back-prop/gradient descent?) and
architecture (I assume network structure)?

So it sounds like they want to use this componentization of neural processing
to try to understand biological neural networks better.

~~~
etbebl
The objective function is what the entire network is trained to optimize; in a
classification task, for example, it measures how well the network maps images
to the correct labels. The idea here is that real brains also optimize their
weights to compute certain useful objective functions.
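As a rough sketch of how those three components separate in an ordinary deep learning setup (toy code; the task and variable names are my own, not the paper's):

```python
import numpy as np

# Mapping the three components onto a toy regression problem.
rng = np.random.default_rng(2)

# 1. Architecture: the network's structure -- here, a single linear unit.
W = rng.normal(0, 0.1, (1, 1))

# 2. Objective function: what the network is optimized toward -- here,
#    mean squared error against the target mapping y = 3x.
X = rng.random((32, 1))
y = 3.0 * X
def objective(W):
    return float(np.mean((X @ W - y) ** 2))

# 3. Learning rule: how the weights change -- here, gradient descent.
for _ in range(200):
    grad = 2 * X.T @ (X @ W - y) / len(X)
    W -= 0.5 * grad

print(objective(W))  # the learned weight approaches 3.0, so the loss is tiny
```

The paper's framing is that each of the three can be asked about the brain separately: what objectives it optimizes, what local rules implement the optimization, and what architectural constraints it operates under.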

