
Could a neuroscientist understand a microprocessor? - bigdatabigdata
http://biorxiv.org/content/early/2016/05/26/055624
======
gumby
This title sounds like an homage to Lazebnik's famous essay "Can a biologist
fix a radio?"
([http://www.ncbi.nlm.nih.gov/pubmed/12242150](http://www.ncbi.nlm.nih.gov/pubmed/12242150))

~~~
LionessLover
For the lazy (saves some searching on the page and two clicks), the direct
link to the full text:

[http://www.cell.com/cancer-cell/fulltext/S1535-6108(02)00133-2](http://www.cell.com/cancer-cell/fulltext/S1535-6108\(02\)00133-2)

I'll bookmark both those papers - and I rarely bookmark anything (instead
relying on Google and the eternal onslaught of ever newer links and issues).

That paper is real fun to read, an excerpt:

> How would we begin? First, we would secure funds to obtain a large supply of
> identical functioning radios in order to dissect and compare them to the one
> that is broken. We would eventually find how to open the radios and will
> find objects of various shape, color, and size (Figure 2). We would describe
> and classify them into families according to their appearance. We would
> describe a family of square metal objects, a family of round brightly
> colored objects with two legs, round-shaped objects with three legs and so
> on. Because the objects would vary in color, we would investigate whether
> changing the colors affects the radio's performance. Although changing the
> colors would have only attenuating effects (the music is still playing but a
> trained ear of some can discern some distortion) this approach will produce
> many publications and result in a lively debate.

If you need a real good laugh, _do_ read it now, it gets even more funny after
the quoted paragraph! It's almost as good as that (much shorter) paper on
PubMed where they examined the sterility of farts in operating rooms
([http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1121900/](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1121900/)).

~~~
agumonkey
One night I had a dream that a biologist, not knowing what a computer is, would
spend hours describing that mysterious non-uniform squarelet matter engraved
with Latin letters. What a curious organ.

~~~
TeMPOraL
Laugh all you want, but if a UFO ever crashes on Earth and we find a
mysterious alien computer, we will need people with that mindset to reverse-
engineer it... ;).

~~~
agumonkey
We need finer reverse engineering than that mapping, IMO.

------
return0
What a fun approach. It's true that traditional neuroscience methods are too
crude to give insight into brain function, and results are often over-
interpreted. Neuroscientists know that, however [1], and that's why attempts to
simulate large circuits, like the Blue Brain Project or the simulated cat
model, produced nothing more than "oscillation statistics". Despite their
crudeness, there are cases where circuits (like the amygdala) were causally
linked to behavior by early electrophysiology.

There is hope, however, as there are now optical and molecular methods that
make it possible to observe, activate, and inactivate individual neurons,
which allows making causal inferences [2].

1:
[http://compneuro.uwaterloo.ca/files/publications/eliasmith.2...](http://compneuro.uwaterloo.ca/files/publications/eliasmith.2013a.pdf)

2:
[https://www.sciencedaily.com/releases/2015/05/150528142815.h...](https://www.sciencedaily.com/releases/2015/05/150528142815.htm)

~~~
nickledave
This paper doesn't describe "traditional" methods. The authors test analyses
used _today_. The first simulation specifically tests whether inactivating
"cells" (components in the microprocessor) can let us make causal inferences.
They conclude that it can't. Whether that's a fair conclusion is up for
debate, but they are definitely critiquing the most cutting-edge techniques in
the field.

~~~
return0
Couple that, however, with the ability to tag the cells participating in an
experience, the ability to specifically reactivate them, and the ability to
correlate them with other experiences. Their analogy in general has a big
flaw, in that we already know the brain does not have the serial logic of a
circuit. The output of the brain is also not predetermined behavior, but
rather dynamics between large, sparse populations, which somehow combine in
interesting ways. This problem may be interesting to study in a more rigorous
theoretical manner, but I don't think their approach can be taken seriously as
an indicator of anything.

------
ramblenode
+1 for a cool and thought-provoking idea but -1 for more sloppy/faulty
reasoning and arrogance than could be overlooked by a sincerely interested
reader.

I am sympathetic to the angle this is coming from. Yes, the field has a
serious problem with shoddy statistics and post-hoc theories optimized for
sexiness. But this paper vastly oversimplifies the problem domain to the point
of trivializing it.

The approach of this paper is to take techniques from a largely mysterious
domain, apply them to a vastly different domain that we understand much
better, and conclude that because we can't replicate our prior understanding
of the known domain there must be a problem with the tools. My guess is that
this approach looks less ridiculous on the surface because of how entrenched
the brain-computer metaphor is--but don't forget it is just an analogy: a
knowingly incorrect abstraction. As the authors admit, lesioning a circuit
board is totally different from lesioning a brain. So why on earth should we
be talking about how much we can learn from brain lesions based on our
understanding of microprocessor lesions? (not a rhetorical question--I am
genuinely curious if I am missing something here because this absolutely
confounds me)

------
dclowd9901
I assume there are some neuroscientists here. I was trying to imagine how the
brain thinks, and how it comes to conclusions and taps into a breadth of
information so quickly.

One way I tried to conceive of it was that when the neurons in your brain
fire, they compose patterns. These patterns -- the orders and timing of
neurons firing -- might be likened against something like a hash table,
wherein you represent data as a serialized pattern.

For instance, when I think of a dog, my brain fires some base neurons that are
associated with the size of a normal dog, and some of its most basic
attributes: 4 legs, fur. These could also be the same regions of the brain
that would fire when I think of a cat, or a raccoon.
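
The overlap intuition above can be sketched in a few lines of Python. The
feature names and sets here are invented for illustration, not actual neural
data; the point is just that related concepts share active units:

```python
# Concepts as sets of active "neuron" IDs; related concepts share
# units, so set overlap measures similarity between patterns.
dog = {"legs_4", "fur", "tail", "barks", "medium_size"}
cat = {"legs_4", "fur", "tail", "meows", "small_size"}
raccoon = {"legs_4", "fur", "tail", "nocturnal", "small_size"}

def overlap(a, b):
    """Jaccard similarity: shared active units / all active units."""
    return len(a & b) / len(a | b)

print(overlap(dog, cat))      # dog and cat share 3 of their 7 units
print(overlap(dog, raccoon))
```

In this picture, "thinking of a dog" and "thinking of a cat" would reuse the
same `legs_4`/`fur`/`tail` units, which is closer to a distributed code than a
hash table (where similar keys deliberately map to unrelated slots).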

Does this in any way represent how the brain actually works?

~~~
pking
I'm a computational neuroscientist (and data scientist and software engineer
and startup founder).

The general principle in the brain is that the individual neurons (there are
25 billion in the cerebral cortex) represent statistically frequent
occurrences of experienced phenomena. So if you know a lot about dogs and
interact with dogs a lot, you will have a lot of neurons that "represent"
different things about dogs (dog categories, dog behaviors, aspects of dog
appearances, etc.).

The neurons "represent" perceptual experiences in a collective way called a
population code. In one study on humans, a neuron was found that fired when
viewing pictures of Jennifer Aniston but only when Brad Pitt was not in the
photo. This does not mean the neuron had the sole job of representing Jennifer
Aniston, but only that it was "tuned" to this perceptual occurrence. The
tunings of neurons are distributed to "cover" in a statistical fashion the
range and components of experiences. This particular human subject had perhaps
watched many episodes of Friends.

What is still unknown is how the structure of a visual scene is represented.
Neurons have been found for edges, contours, shapes, motion, depth, and
objects. The mystery is how they all work together to parse and compose the
scene. This is hard to determine because it is usually only possible to
"listen" in on a few neurons at a time with electrodes, whereas it takes
hundreds of millions to represent the visual world.
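
A minimal sketch of a population code, assuming invented Gaussian tuning
curves and a population-vector readout (one classic decoding scheme, not a
claim about the brain's actual mechanism):

```python
import numpy as np

# Toy population code: 50 neurons with Gaussian tuning curves tiling a
# 1-D stimulus dimension (e.g., edge orientation in degrees).
rng = np.random.default_rng(0)
preferred = np.linspace(0, 180, 50)   # each neuron's preferred stimulus
width = 15.0                          # tuning-curve width

def responses(stimulus):
    """Noisy firing rates of the whole population to one stimulus."""
    rates = np.exp(-0.5 * ((preferred - stimulus) / width) ** 2)
    return rates + rng.normal(0, 0.05, rates.shape)

def decode(rates):
    """Population-vector readout: response-weighted mean of preferences."""
    w = np.clip(rates, 0, None)
    return (w * preferred).sum() / w.sum()

est = decode(responses(72.0))
print(est)  # close to 72, even though no single neuron encodes "72"
```

The "Jennifer Aniston neuron" point maps onto this: one unit with a peak near
the stimulus is informative, but the estimate comes from the whole population.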

Regarding how visual mental imagery works, here is a post I made on Quora
about that in case it is of interest: [https://www.quora.com/How-can-we-see-
images-in-our-minds](https://www.quora.com/How-can-we-see-images-in-our-minds)

~~~
niels_olson
Can I float a conceptual conversion by you? I'm a pathologist (I look at human
brain regularly and interact with neurologists, neurosurgeons, and
neuropathologists) and my undergrad is in physics. I like your general
representation. Here's my conversion:

Let's start with a large matrix operation, something a deep neural net neuron
would do. Let's imagine that matrix is a sheet with colored dots instead of
numbers.

We don't care so much about the order of the rows, but the direction of the
columns holds some meaning. So we can connect the top of the matrix to the
bottom, like a tube. Now the left side isn't exactly a beginning, and the
right side isn't exactly the end, but there's this sort of polarity.

These ends of our tube (which was a sheet) are rings, and those rings can be
reduced to points, something like Grothendieck's point with a different
perspective at every direction (or in this case, many directions, one
direction per row). But the left point and the right point are still
different.

Now I have a polarized bag.

Like a neuron.

I could be silly and imagine the gates and channels on the neuron surfaces
could be mapped to elements in the matrix like those colored dots, but I
rather doubt the analogy extends quite that far...

And neurons don't absorb many giant tensors and emit one giant tensor. But
they do receive their signals at many different points on the surface. So
there is this spatial element to it. And there are many different kinds of
signals (electrical, neurochemical, all the way to simple biochemical, like
glucose). So there's this complexity that an inbound tensor would represent
nicely. And they do in fact emit a single signal sent to many neighbors.

Anyway, that's my personal matrix-to-neuron conversion.

Is that sensical?

~~~
PeterisP
It feels like a wrong analogy: the large matrix operations are not really
what a deep neural network is doing; they are an implementation detail, an
artifact of how it's efficient to represent a large number of neurons and
their connections.

The results of those tensor operations (not in their total, but each
particular output number) may have some very rough analogy to the changes in a
particular single synapse "connection strength" caused by various biochemical
factors as a result of neuron operation, but the _whole_ tensor operation
doesn't map to any biological concept smaller than e.g. "this is a bit similar
to how a layer of brain cells changes its behavior over weeks/months of
learning". A machine learning iteration updating all of the NN weights is a
rough correspondence to the changes that, over time, appear in our brains as a
result of experience and learning (and "normal operation" as well).

I have seen an interesting hypothesis on how the particular layout of a
dendritic tree and its synapses can encode a complicated de facto "boolean-
like" formula over all the "input" synapses (e.g., a particular chain of
and/or/not operations on 100+ inputs), instead of essentially adding all the
inputs together as most artificial neural networks assume, but I'm not sure
how such hypothetical "calculations" are actually used in our brains.
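
A toy contrast between the two models described above; the boolean formula,
weights, and threshold are invented purely for illustration:

```python
# Classic point-neuron model vs. a hypothetical "dendritic" neuron
# whose output is a fixed boolean formula over its inputs.

def point_neuron(inputs, weights, threshold=1.0):
    """Artificial neuron: fire iff the weighted sum crosses threshold."""
    return sum(w * x for w, x in zip(weights, inputs)) >= threshold

def dendritic_neuron(a, b, c, d):
    """Hypothetical dendritic computation: each branch computes a
    conjunction, the soma ORs the branches, and input d is inhibitory."""
    return bool((a and b) or (c and not d))

# The weighted sum fires for *any* inputs adding up past threshold...
print(point_neuron([1, 0, 1, 1], [0.5, 0.5, 0.5, 0.5]))  # True
# ...while the boolean neuron cares about *which* inputs co-occur:
print(dendritic_neuron(1, 0, 1, 1))  # False: neither branch satisfied
```

The difference matters because the boolean neuron can express patterns (like
XOR-style conditions) that no single weighted sum with a threshold can.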

------
Phemist
Interesting paper, thanks! I'll need to work through it more at a later time,
because I got stuck at the lesion studies argument. I'm sorry if this seems a
bit ranty...

As for lesion studies, I wonder why there was no mention of single and double
dissociation? Only single dissociation was discussed, and although it is
tempting to say lesioned area X is uniquely responsible (a combination of
necessary and sufficient) for behaviour Y, it only tells you something about
its necessity, and nothing about its sufficiency. This is generally known in
Psychology & Neuroscience, which is why almost all studies base themselves on
double dissociation. It allows you to say sensible stuff about differences
between behaviours at the same level of description (Space invaders vs Donkey
Kong being different, for example), due to differences in a lower level of
description.
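
The single- vs. double-dissociation logic can be sketched with invented
performance numbers (the game names echo the paper's Space Invaders / Donkey
Kong example; the data and threshold are made up for illustration):

```python
# Performance per (lesion, task); numbers are invented.
performance = {
    ("lesion_A", "space_invaders"): 0.2,  # impaired
    ("lesion_A", "donkey_kong"):    0.9,  # intact
    ("lesion_B", "space_invaders"): 0.9,  # intact
    ("lesion_B", "donkey_kong"):    0.2,  # impaired
}
IMPAIRED = 0.5  # performance below this counts as impaired

def single_dissociation(lesion, hurt, spared):
    """One lesion impairs one task but not another: necessity only."""
    return (performance[(lesion, hurt)] < IMPAIRED
            and performance[(lesion, spared)] >= IMPAIRED)

def double_dissociation(l1, l2, t1, t2):
    """Two lesions with crossed effects: much stronger evidence that
    the two tasks rely on genuinely different substrates."""
    return single_dissociation(l1, t1, t2) and single_dissociation(l2, t2, t1)

print(double_dissociation("lesion_A", "lesion_B",
                          "space_invaders", "donkey_kong"))  # True
```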

Also, I feel like the importance of topology is somewhat disregarded in
discussion of these lesion studies? As we go lower down in levels of
description - systems, to clusters, to individual neurons or even molecules,
topology of the units of description becomes more and more important. Imagine
enumerating every transistor on the chip and switching their connectedness
around to the point where behaviour Y can still be performed, but lesioning
transistor X no longer makes behaviour Y impossible. So we don't actually
obtain a "large number of identical radios", we obtain a large number of
functionally identical, but topologically different, radios, and then shoot
metal particles at them to understand under which conditions area X does
produce behaviour Y. This, to me, seems a lot less futile and a lot more
informative than what the section on lesion studies leads me to believe.

~~~
nickledave
I also noticed the lack of a "double dissociation" study. FWIW, there are a
ton of studies right now using optogenetics to transiently "lesion" areas
where the authors have not done double-dissociation-type experiments to
determine the specificity of their effects. My guess is the authors were in
part critiquing that.

------
internaut
This really is a great question.

My guess would be that they might make progress in the direction of physics
but not in the direction of higher order abstractions further up the stack.
Those would just be interpreted as rules of the universe or background noise.

------
JackFr
"The fundamental unit of biological information processing is the molecule,
rather than any higher level structure like a neuron or a synapse; molecular
level information processing evolved very early in the history of life."
[http://www.softmachines.org/wordpress/?p=1558#more-1558](http://www.softmachines.org/wordpress/?p=1558#more-1558)

------
raphman_
Somehow related: [https://aeon.co/essays/your-brain-does-not-process-
informati...](https://aeon.co/essays/your-brain-does-not-process-information-
and-it-is-not-a-computer)

------
hyperion2010
One of the major outstanding questions I have as a neuroscientist is whether
the classic experimental approaches used here can ever get us to the
'understanding' that is needed to build something that looks like a brain. I
have been leaning toward the thought that we are likely to make faster
progress by letting the synthetic biologists have a shot at it, even if it
means we will be stumbling around in the dark if we stray even slightly from
the steps they learn to take.

------
danielam
Related, in a general sense (and with some of the comments here in mind):
[http://edwardfeser.blogspot.com/2012/03/scruton-on-
neuroenvy...](http://edwardfeser.blogspot.com/2012/03/scruton-on-
neuroenvy.html)

------
kensai
Very insightful article. I will try to "cell it" to our next Journal Club when
we will be talking about the latest thinking on Brain-Computer Interfaces. 8-)

------
simonster
This is an interesting idea, and the paper is pretty well thought out. But I
think one source of information not sufficiently explored is anatomy, which
would help a great deal with the microprocessor, although it seems to help
less with the brain. If you have the connectivity of the entire microprocessor
(as the authors have determined using microscopy), then you can probably
determine that there are recurring motifs. If you can figure out how a single
transistor works, then you can figure out the truth tables for the recurring
motifs. That takes care of most of Fig. 2. The only question that remains is
if you could figure out the larger scale organization.
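
The motif idea can be illustrated with a simplified static model of a
two-input CMOS NAND gate, one plausible recurring motif on such a chip; the
model ignores all analog behavior and is only a sketch:

```python
# Once you know how a single transistor switches, a recurring motif's
# truth table falls out by simulation. Static model of a CMOS NAND:
# two series NMOS transistors pull the output low only when both gate
# inputs are high; two parallel PMOS transistors pull it high otherwise.

def cmos_nand(a, b):
    pull_down = bool(a and b)              # series NMOS path conducts
    pull_up = (not a) or (not b)           # either parallel PMOS conducts
    assert pull_up != pull_down            # exactly one network drives the node
    return int(pull_up)

# Enumerate the motif's truth table:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nand(a, b))
```

Given NAND (which is functionally complete), the larger logic blocks in Fig. 2
are in principle recoverable by composing identified motifs, which is exactly
the "larger scale organization" question the comment raises.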

Anatomy also helps with the brain, but not nearly as much. People are still
trying to figure out how the nematode C. elegans does its relatively simple
thing even though the full connectome of its 302 neurons was published 30
years ago. But in larger brains, the fact that neurons are clustered according
to what they do and projections are clustered according to organization in the
places that they project from provides at least some level of knowledge. We
are not merely applying Granger causality willy-nilly to random neural
activity. We know what's connected to what (at a gross level) and in 6-layer
cortex we even have an idea of the hierarchy based on the layers in which
projections terminate (which is how Felleman and Van Essen got Fig. 13b in the
paper).

OTOH, I think our failure to understand many neural networks at a conceptual
level is quite disturbing, and perhaps a sign that the kind of conceptual
understanding we seek will be forever beyond our reach. The authors mention
this toward the end of the paper, although I think they overstate our
understanding of image classification networks; I've never seen a satisfying
high-level conceptual description of how ImageNet classification networks
actually _see_. One possibility is that we simply don't have the right
concepts or tools to form this kind of high-level description. Another
possibility is that there simply is no satisfying high-level way to describe
how these networks work; there are only the unit activation functions,
connectivity, weights, training data, and learning rule. We can find some
insights, e.g., we can map units' receptive fields and determine the degree to
which different layers of the network are invariant to various
transformations, but something with the explanatory power of the processor
architecture diagram in Fig. 2a may very well not exist.

I hope that the brain's anatomical constraints provide some escape from this
fate. Unlike most convolutional neural networks, the brain has no true fully
connected layers, and this may serve to enforce more structure. We know that
there is meaningful organization at many different scales well beyond early
visual areas. At the highest levels of the visual system, we know that patches
of cortex can be individuated by their preferences for different types of
objects, and similar organization seems to be present even into the
"cognitive" areas in the frontal lobe. It remains to be seen whether it's
possible to put together a coherent description of the function of these
different modules and how they work together to produce behavior, or whether
these modules don't turn out to be modules at all.

~~~
stochastician
Author here -- so actually we've done a fair bit of anatomical work recovering
motifs, even while looking at the processor [1] -- we took that out of this
paper based on several readers' recommendation that the content was "too new"
and that people wouldn't understand it. That said, without the ability to then
directly probe those specific circuits, it's quite hard to figure out what the
motifs are doing, and motif finding in general is an incredibly challenging
problem, especially at scale.

[1]
[http://ericjonas.com/pages/connectomics.html](http://ericjonas.com/pages/connectomics.html)

------
lettergram
My wife (a neuroscientist) does...

~~~
khedoros
Your wife performs (or, can perform) arbitrary experiments on classic
microprocessors to see if popular data analysis methods from neuroscience can
elucidate the way they process information?

