
An Existential Crisis in Neuroscience - seek3r00
http://m.nautil.us/issue/81/maps/an-existential-crisis-in-neuroscience
======
ImaTigger
I am a computational cognitive neuroscientist, and have worked at many levels.
I find each kind of data and model useful to some extent, but I have to admit
that the least useful, to my mind, are those at the detailed neural network
level, like the ones discussed in this article. Somewhat more useful are
higher-level dynamic architecture models, and, at the highest level, cognitive
models, which constrain the behavioral target we are trying to explain. I
personally (as one can tell from my other posts here) find the dynamic
brain-development models to be the most compelling as overall models, but they
are not particularly explanatory at the detailed level. Brain science is
trying to do the hardest thing you can imagine: explain the most complex
machine in the known universe. We persist, but no one entering this field
should have very high expectations of near-term grand successes.

~~~
lambdaloop
As a counterpoint, I am a computational neuroscientist who transitioned from
working in human cognition to fruit fly motor control. Fruit fly neuroscience
in the past decade has advanced tremendously. With the latest tools, we can
record activity from specific genetically labeled neurons while stimulating
others. We have identified specific groups of neurons to stimulate to get the
fly to groom, walk, turn, and even walk backwards. The full fly brain has been
scanned with similar techniques and the connectome is beginning to be mapped
out (e.g. see this very recent post from Google AI research
[https://ai.googleblog.com/2020/01/releasing-drosophila-hemibrain.html](https://ai.googleblog.com/2020/01/releasing-drosophila-hemibrain.html)).

I find that as we gain new tools to study the nervous system more
specifically, both data and models of how neurons are organized at the circuit
level become more important. To advance on an analogy in the article, it's
like trying to explore the dynamics of NYC without a map. For instance, it's
hard to tell how or why people interact with Central Park if you don't even
know where they live. The more specifically you can pin down individual
people, the more it matters exactly where they live.

Granted, the fly is much simpler than humans or even mice, and it will likely
take decades and new tools for us to study humans in this way. However, when
we get there, mapping out the brain connections will be crucial to make sense
of it all.

~~~
gedy
> We have identified specific groups of neurons to stimulate to get the fly to
> groom, walk, turn, and even walk backwards

This made me realize we may one day willingly allow human brain controls to
get us to do things we don't want to do: work, exercise, etc.

~~~
nine_k
Maybe making the human brain actually crave such activities would be even
more... valuable.

~~~
asdff
Far easier to dump meth in the water cooler

------
dr_dshiv
What we need is a Newtonian model of the brain: a model that is incomplete and
"wrong", but useful and generative. While Newtonian physics may be "wrong", it
is much easier to learn than quantum physics or relativity, etc.

Neuroscience usually focuses on precision details, but doesn't aim to tell big
picture stories. There are a few exceptions, however, like Karl Friston's free
energy reduction model.

~~~
Waterluvian
Don't we have that in the model that describes what the parts of the brain do?

It's proven to be wrong when you watch people with serious brain injuries
relearn skills, but shown to have value by how well it predicts what tumors or
injuries will do to someone's abilities.

~~~
conistonwater
Those models and explanations are not nearly as helpful as one might think. A
lot of "neuroscience" explanations of everyday behaviour are mostly nonsense
that sounds plausible and appealing
([https://www.nature.com/articles/nrn3817](https://www.nature.com/articles/nrn3817),
[https://www.mitpressjournals.org/doi/abs/10.1162/jocn_a_00750](https://www.mitpressjournals.org/doi/abs/10.1162/jocn_a_00750)).
In this context plausible doesn't even mean plausible to a neuroscientist, it
means more like consistent with the nonsense a layperson has heard before.

What would be really helpful is a model that can _add_ to things we already
know. Saying "studying for an exam engages the X part of the brain and uses
the Y neurotransmitter" adds literally nothing to your understanding of
studying (you can find out much more about studying by talking to people that
are good at it and who have done a lot of studying themselves), it's just
taking an everyday activity and identifying the small but still vastly complex
portion of the brain that is activated more than others. Imagine being told
that a particular bug in a 1-billion-lines-of-code codebase is due to some
code within a 10-million-lines-of-code portion of it: that's great but how
helpful is it really?

~~~
JackFr
In general I'm very skeptical of much neuroscience. (If an article ever says
'emerging neuroscience shows...' you can be assured what follows is almost
certainly BS.) Yet I found the senior researcher in this article quite
refreshing about the limitations of their research.

------
_xerxes_
The article touched upon the C. elegans connectome. There are a few
interesting projects attempting to simulate the creature.

[https://en.wikipedia.org/wiki/OpenWorm](https://en.wikipedia.org/wiki/OpenWorm)

[https://en.wikipedia.org/wiki/WormBase](https://en.wikipedia.org/wiki/WormBase)

~~~
buboard
tbf it's debatable whether there's a lot to learn from C. elegans. Simple
animals, from aplysia to the mouse, have been studied for decades, but their
behaviours are not the ones that are interesting when attempting to learn more
about the human brain. The Allen Institute's connectome project is more
relevant to mammals, even if it covers only a tiny volume of the mouse cortex,
in order to mildly constrain models of brain function. Even if we had the
whole brain, it's too large to simulate. These data help our understanding,
and we're lucky to have amazing tools for probing brains at this moment. But
we need more and better theories to put them to good use.

~~~
monadic2
How do you validate your models if you can't validate a simpler one first?

~~~
buboard
C. elegans has very primitive, simple behaviours. It's not really possible to
get something useful out of it about either our cognitive functions or our
brain disorders. The things that involve single-cell pathologies (e.g.
plasticity) are already studied in vitro in mammalian cells. There are
probably many cognitive phenomena that only become apparent at large brain
sizes, so I'm not sure this method scales up.

~~~
monadic2
In what situation WOULD you expect to extract "something useful" about "our
cognitive functions or our brain disorders"? It seems silly to think we could
learn anything about such complex things without understanding something
simpler first, hence the approach of validating models of simpler structures.

What would you recommend?

------
SubiculumCode
From the article, an enlightening quote: “[I]f I asked, ‘Do you understand
New York City?’ you would probably respond, ‘What do you mean?’ There’s all
this complexity. If you can’t understand New York City, it’s not because you
can’t get access to the data. It’s just there’s so much going on at the same
time. That’s what a human brain is. It’s millions of things happening
simultaneously among different types of cells, neuromodulators, genetic
components, things from the outside. There’s no point when you can suddenly
say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New
York City.’”

~~~
randcraw
True, but the equating of a brain with a city is largely specious, especially
when trying to understand cognition. Yes a brain regulates many low level
parasympathetic processes. But it's the decision making and memory systems
that we most want to understand and replicate in silico. And the coordination
of a city of mostly independent self-interested humans is a poor analogue for
the biological bases for a coherent thought process, unless it's only the
medulla or pons that we hope to model.

------
brian_spiering
There is a good chance that complete brain mapping will be similar to whole
genome sequencing. The result will be interesting but only answer a limited
subset of questions.

------
stevebmark
I'm confused. This article doesn't say anything. It makes no points and has no
insight. "There's a lot of data in neuroscience?" Is that the message? An
unusual number of Nautilus articles frontpage HN like this one, where there
doesn't seem to be any value in the article itself. What is going on?

~~~
Balgair
It's that the author is undergoing a 'crisis', not that the field is. The
title is clever, not descriptive.

More broadly, there is a crisis. Our statistical methods/understandings are
not working out in these large N-dimensional data sets, at least for the
researchers that were raised on excel and not numpy.

Aside: I'm surprised that the FAANGs haven't revolutionized statistics yet.
When you have 'phase changes' where a LOT more data becomes available, you get
to see very low probability events. It's happened in psych, in bio, in physics
most famously, in politics, in economics, etc. We have a LOT more data
available for statistical use now, but it's still just Poisson distributions
and t-tests. What gives?
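A quick numpy sketch of one failure mode at issue here: at modern sample
sizes, a classic t-test will flag a negligible effect as wildly significant
(the 0.01-standard-deviation effect size and the group size below are
arbitrary illustrations, not figures from any study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups drawn from nearly identical distributions: the true
# difference in means is a negligible 0.01 standard deviations.
n = 1_000_000
a = rng.normal(0.00, 1.0, size=n)
b = rng.normal(0.01, 1.0, size=n)

# Classic two-sample t statistic (equal group sizes, unpooled variances).
se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
t = (b.mean() - a.mean()) / se
print(f"t = {t:.1f}")  # far beyond any conventional significance cutoff
```

The test "works" exactly as designed; it's the habit of equating statistical
significance with importance that breaks down at this scale.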

~~~
johnc1231
It might vary from field to field, but in genetics Excel is a complete
nonstarter, and even numpy is not going to work on sufficiently large
datasets. The project I work on, [http://hail.is](http://hail.is), was started
to help deal with this.

~~~
dang
It doesn't look like that project has been discussed on HN before. One of you
should definitely post it! If you email hn@ycombinator.com we can give you
some tips on how to do that, and possibly also put the submission in the
second-chance pool (described at
[https://news.ycombinator.com/item?id=11662380](https://news.ycombinator.com/item?id=11662380)).

------
mjfl
To identify what the author feels is missing in neuroscience: in order to
understand something, you need to figure out how to describe two things about
it (1) what its state is at any point in time, and (2) how that state evolves
in time. Connectomics gives you the beginnings to solve (1), but it doesn't go
the whole way. There's a fundamental misunderstanding that you can collect
exabytes of data and glean understanding from it just because of how much you
had to sweat building the storage and collecting it. That's not how it works;
the data needs to have structure too. I wish more biologists / neuroscientists
understood this.

~~~
meowface
I think they understand this. This is demonstrated by how little real
understanding we have of even the ~300-neuron worm connectome.

The point of the article is discussing how mapping the full human connectome
is only going to be a small next step towards understanding what's actually
going on. That doesn't mean it's not worth doing.

------
martythemaniak
I highly recommend this lecture by Jeff Lichtman, where he describes the
machine they've built to slice the brain and the software they have written to
visualize and make sense of this vast amount of data:

[https://www.youtube.com/watch?v=2QVy0n_rdBI](https://www.youtube.com/watch?v=2QVy0n_rdBI)

------
gbjw
Tangentially related: Could a Neuroscientist Understand a Microprocessor? [1]
(short answer: not with current analytic tools)

[1]
[https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268)

~~~
SubiculumCode
This article is over-rated by those outside the field of neuroscience.
While it brings up a few good points, it fails to acknowledge the
sophistication of our neuroscience and behavioral techniques, and the value of
converging evidence across levels of description. Also, its relatedness to the
present article is tenuous at best.

~~~
SubiculumCode
I don't care how often I get down voted for making the above comment in
response to posts about this article. I am a neuroscientist and I will defend
my field from overrated, simplistic criticisms that happen to appeal to the HN
crowd's sensibilities.

~~~
hsitz
Do you have any suggestions for what the average layperson could read to get a
better understanding of contemporary neuroscience? Any articles or books you'd
recommend?

~~~
SubiculumCode
That is a bit difficult because the field is very broad, and I don't tend to
read popular science books. Mostly, I'd look up recent review articles on
Google Scholar...

------
htfu
Generally too purple for its own good, but a very interesting read! The Borges
reference (and the C. elegans conundrum) makes me, as a lay reader, really
appreciate how little we actually know about the endgame for all this data.

But there is such elegance in "rudimentary" DNNs giving us the ability to
assemble this stuff at all.

------
jedberg
The author should go talk to some astrophysicists. They have a similar problem
-- humans are unlikely to ever understand how the entirety of the cosmos
works, but it's still interesting to learn about the small bits.

------
chiefalchemist
"We don’t understand how their interactions contribute to behavior,
perception, or memory. Technology has made it easy for us to gather behemoth
datasets, but I’m not sure understanding the brain has kept pace with the size
of the datasets."

Exemplifies:

\- Data is not information.
\- Information is not knowledge.
\- Knowledge is not understanding.

We've not even left the gate of the first tier. Both exciting and
intimidating, but mostly humbling. Or should be.

------
m3kw9
I believe we are at the stage where we think we can understand how the city
works just by mapping it, but you've got sewers, pipelines, everything
underneath that you haven't really dug into. There is a lot more inside a
neuron that can be mapped. Let's just say your map isn't detailed enough.

------
soup10
If you've seen some of the high-resolution videos of neural activity captured
from even simple fish, the slightest motor movements activate hundreds of
thousands of cells in a chaotic pattern. Neural circuitry is not neatly laid
out like a silicon chip; it's a forest of interconnectivity that resists
analysis even with extremely detailed visualization and data capture.

~~~
erikpukinskis
I think it’s interesting that people think we’ll be able to make something
that does more in a smaller space.

As if there were something other than the laws of physics preventing natural
selection from testing smaller structures.

Or that there’s something (other than the demands of the computation itself)
constraining the architectures that were tested through natural selection.

~~~
jayd16
We out-design nature all the time. Evolution doesn't find perfection; it finds
what's good enough.

~~~
randcraw
Right. Every tool invented by humans is superior to our inborn equivalent at
achieving a certain end. We don't have to faithfully model nature to surpass
it.

Likewise, man-made models based in mathematics and statistics have long proved
more accurate at predicting outcomes than the human mind, even though we know
the mind doesn't employ math.

Human-made machines have made it possible for elephants to fly. Nature never
will.

------
pfdietz
I think they need to fully figure out how individual neurons work before they
move up to larger circuits.

------
m3kw9
> connectomics and whether he thinks we’ll ever have a holistic understanding
> of the brain. His answer—“No”

For a scientist to declare something they don't fully understand impossible
really pisses me off.

------
elinchrome
Maybe it doesn't really matter.

------
erikerikson
We'll simplify and create models, as we do in maps and all else, identifying
principles and prioritizing.

------
HenryKissinger
"If the human brain were so simple that we could understand it, we would be so
simple that we couldn't."

~~~
rusk
A mind understanding itself may violate the second law, but a few minds
understanding one mind should be theoretically possible!

~~~
notahacker
The organization of human thought to achieve that level of understanding might
be an issue. From the original article: 'the data footprint of all books ever
written come out to less than 100 terabytes, or 0.005 percent of a mouse
brain'

~~~
rusk
That formulation makes the goal seem remote but not implausible. We certainly
have the technology to index and analyse 100 terabytes, and even Gmail is
apparently around an exabyte ... [0]. A mouse brain, given your premise, is
about 2 exabytes (100 TB / 0.005%). Given even linear progress, this class of
problem is not insurmountable.

[0]
[https://en.m.wikipedia.org/wiki/Exa-](https://en.m.wikipedia.org/wiki/Exa-)
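For what it's worth, the scaling can be sanity-checked with back-of-envelope
arithmetic, taking the article's "0.005 percent" figure literally:

```python
# Back-of-envelope check of the quoted figures: if ~100 TB of books is
# 0.005 percent of a mouse brain's data footprint, the brain works out to
# about 2 exabytes.
books_tb = 100               # ~100 TB: all books ever written
fraction = 0.005 / 100       # 0.005 percent, as a fraction
mouse_brain_tb = books_tb / fraction
print(mouse_brain_tb)        # 2000000.0 TB, i.e. ~2 exabytes
```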

------
tus88
Physicists don't understand gravity... neuroscientists don't understand the
mind... maybe the universe is a giant brain and the stars are neurons, the big
bang was conception, and we are a bacterial growth. Better than all current
theories.

------
RocketSyntax
I am planning to solve this =) just cracked the inner workings of ANNs (future
SHOW HN) and am going to read a book on computational neuroscience tomorrow.

~~~
allovernow
Not to be discouraging, but if you're just starting to learn about artificial
neural networks, you have a long way to go...

That said, in the few years of experience I've had with ANNs, they really do
seem like an intuitive analog for human learning, at a high level. And
thinking about training problems in this way, roughly "what kind of training
data would I need to train a small child to do this?", can be more helpful
than one might otherwise expect.

I think we're just a handful of major breakthroughs away from true AI,
assuming compute and memory continue to scale. Certainly within 100 years.

