
Could a Neuroscientist Understand a Microprocessor? (2017) - KenanSulayman
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268
======
tbenst
I wrote a lay summary of this paper that compares and contrasts it with the
whole-brain imaging and stimulation literature, for those interested in a
lighter read!

[http://www.neuwritewest.org/blog/can-we-reverse-engineer-the-brain-like-a-computer](http://www.neuwritewest.org/blog/can-we-reverse-engineer-the-brain-like-a-computer)

~~~
darsnack
Great summary! At the end you mention the difficulty of extrapolating beyond
an HH neuron model. I think curious readers will find the work of Jim Smith
([https://fcrc.acm.org/plenary-speakers/james-e-smith-plenary](https://fcrc.acm.org/plenary-speakers/james-e-smith-plenary))
interesting in this regard. His work starts with a possible information
representation scheme (temporal coding <=> binary coding) and a compute unit
(SRM0 neuron <=> transistor) and builds up the equivalent of Boolean
logic/algebra from there.

As opposed to a neuroscientist understanding a processor, Jim is a computer
architect using his techniques to understand the brain.
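
For a flavor of that temporal-coding <=> Boolean-logic correspondence, here is
a toy race-logic sketch (my own illustration under a first-spike-time code,
not Smith's actual SRM0 construction): a unit that fires on its first input
spike computes MIN over arrival times, which behaves like OR, while a unit
that must receive every input before crossing threshold computes MAX, which
behaves like AND.

```python
# Toy race-logic sketch: values are encoded as spike arrival times, where
# an earlier spike means "true". These helpers are illustrative stand-ins.

def or_gate(t_a, t_b):
    # A low-threshold unit fires as soon as the FIRST input spike arrives.
    return min(t_a, t_b)

def and_gate(t_a, t_b):
    # A unit needing both inputs crosses threshold only at the LATER spike.
    return max(t_a, t_b)

early, late = 1.0, 9.0        # e.g., "true" spikes at t=1, "false" at t=9
print(or_gate(early, late))   # 1.0 -> fires early (true OR false = true)
print(and_gate(early, late))  # 9.0 -> fires late (true AND false = false)
```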

------
jonnycomputer
In contrast, I was underwhelmed. Neuroscientists typically try to develop
tasks that create specific contrasts, highlighting particular aspects of
specific computations. They didn't do anything like that here.

For example, in the field I work in, we very, very reliably get signals that
track reward prediction errors in the striatum (e.g., the BOLD response in
fMRI) during reward-learning tasks.

~~~
eli_gottlieb
>For example, in the field I work in, we very, very reliably get signals that
track reward prediction errors in the striatum (e.g., the BOLD response in
fMRI) during reward-learning tasks.

Yeah, but that's a property of how the BOLD response correlates with the task
structure, not a map of what computations the brain actually does.

Yeah, ok, I'm shortening things by a lot, but neuroimaging really is quite
fraught in terms of what sorts of computational-level inferences we can draw
from task-based experiments.

~~~
jonnycomputer
In these experiments, the BOLD signal reliably scales with reward prediction
error, which is a computation: the difference between the reward expected and
the reward received. In short, we fit reinforcement learning models to in-task
behavior, then correlate quantities from those models with neuroimaging data
acquired during that behavior.
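
As a concrete (if oversimplified) sketch of that pipeline, assuming a basic
Rescorla-Wagner update (the value of alpha is illustrative and would be fit to
each subject's choice behavior):

```python
# Minimal Rescorla-Wagner sketch of the model-fitting step described above.
alpha = 0.1                          # learning rate (illustrative value)

def update(value, reward):
    rpe = reward - value             # prediction error: received - expected
    return value + alpha * rpe, rpe  # expectation moves toward the outcome

# The trial-by-trial rpe series is then regressed against the BOLD time
# course to find regions (e.g., striatum) whose signal scales with it.
value = 0.0
for reward in [1, 1, 0, 1]:
    value, rpe = update(value, reward)
    print(f"expected={value:.3f}  rpe={rpe:+.3f}")
```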

fMRI does have quite a few limitations; the BOLD signal is believed to reflect
the amount of work being done, loosely speaking, but it is not a computational
signal in and of itself; for reinforcement learning models we'd like to measure
dopamine itself. Two decades ago, the pioneering work of Schultz, Dayan, and
Montague established that dopamine neurons report a prediction error signal
(see:
[http://static.vtc.vt.edu/media/documents/SchultzDayanMontague1997Science.pdf](http://static.vtc.vt.edu/media/documents/SchultzDayanMontague1997Science.pdf)),
but recording neurotransmitter release in humans is hard and, frankly, was long
thought impossible (given ethical constraints). However, recent work by Read
Montague has done just that. See:
[https://www.pnas.org/content/113/1/200](https://www.pnas.org/content/113/1/200)
Stay tuned.

~~~
mattkrause
Do you have any insight into why it took so long to do this in humans? Fast-
scan cyclic voltammetry was done in animals ~15 years ago by Wightman's group.

~~~
jonnycomputer
Ethics, mainly. Unlike with macaques (etc.), we don't just stick probes in
people. However, Parkinson's and epilepsy patients are getting implants for
deep brain stimulation; Montague's group is working with hospitals performing
these procedures.

[https://psychcentral.com/news/2015/11/29/new-insights-into-dopamine-function-in-parkinsons-patients/95495.html](https://psychcentral.com/news/2015/11/29/new-insights-into-dopamine-function-in-parkinsons-patients/95495.html)

~~~
mattkrause
I wasn't sure if there was some advance in voltammetry techniques, or if it
just took forever to find a willing patient and convince ethics to green-light
it.

DBS has been FDA-approved since the late 1990s (and even longer under IRB/IND
stuff), so there have been lots of opportunities to stick probes in humans'
heads.

~~~
jonnycomputer
There actually is a technical advance here; I just don't know enough about it
to discuss it intelligently (sorry). I would reach out to the researchers
involved if you're interested.

------
oceliker
Previous discussion:
[https://news.ycombinator.com/item?id=16817832](https://news.ycombinator.com/item?id=16817832)

This is one of my favorite papers of all time and I'm happy to see it on the
front page again.

------
mywittyname
I think their lesion approach is a little awkward and probably doesn't map
that well between the fields. Removing a transistor, IMHO, is more akin to
removing an eye or some other body part. Yeah, it allows one to determine
whether that transistor was important, but its impact on behavior is due to
damaged hardware. Plus, the software probably can't be written to compensate
for missing transistors.

Instead of removing transistors from the CPU to determine whether they were
important to the "behavior" of the game, I think it would be more analogous to
delete portions of memory and see whether that alters game behavior. Even
then, I think it's much more likely to cause a crash if done without some a
priori information.
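
A hypothetical version of that experiment might look like the sketch below;
`Emulator`, `ram`, and `step_frame` are stand-ins for whatever 6502 emulator
interface you have, not a real API.

```python
import random

# Hypothetical memory-lesion experiment: zero out a few bytes of emulated
# system RAM, then run the game and watch for crashes or altered behavior.

def lesion_memory(emulator, n_bytes=16, frames=600, seed=0):
    rng = random.Random(seed)
    for addr in rng.sample(range(0x0000, 0x0800), n_bytes):
        emulator.ram[addr] = 0x00  # the "lesion": clobber one byte of RAM
    for _ in range(frames):
        emulator.step_frame()      # run ~10 seconds of gameplay
    return emulator.screen()       # inspect output for changed behavior
```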

------
adenadel
This is sort of a spiritual sequel to

Can a biologist fix a radio?--Or, what I learned while studying apoptosis.
[https://www.ncbi.nlm.nih.gov/pubmed/12242150](https://www.ncbi.nlm.nih.gov/pubmed/12242150)

------
SubiculumCode
This article keeps coming up over and over; I suspect it appeals to the HN
crowd's background, but really, this article and others like it underestimate
the sophisticated behavioral methods we cognitive neuroscientists employ, as
well as the value of convergent lines of evidence, from molecular to systems
neuroscience.

------
6gvONxR4sf7o
Lately I've been wondering about an analogous question in NLP. Could modern
NLP techniques learn what software source code does?

~~~
The_rationalist
A neural network doesn't understand anything, but sure, it could predict a
program's output sequence based on its input sequence. Would there be any
useful use case for such a technology?

~~~
jakeinspace
Maybe for simple programs. Of course, an arbitrarily deep neural net can
emulate any compiler (a function from plain text to machine instruction
sequences). But predicting output is limited by the halting problem and
complexity.
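
As a toy illustration of the halting obstruction (my example, not from the
paper): no model that only reads the source text can, in general, predict this
program's output without effectively deciding whether the loop terminates.

```python
# Whether this loop terminates for every n is the (open) Collatz problem;
# predicting the printed value from the source alone amounts to running it.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111, but you only know that by executing it
```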

~~~
fnrslvr
Whether a neural net could be _trained_ to emulate a compiler seems like the
more pertinent question.

I've had the displeasure of trying to explain classical learning hardness
results to someone who kept repeating back to me "but RNNs are Turing
complete!" If anything, the expressive power of the model you're trying to
train pushes back against your efforts to train it to do something useful.

------
jugg1es
I would be interested to see any follow-up studies that look into the things
they discuss at the end about isolating computational processes.

------
AbrahamParangi
"What I cannot create, I do not understand"

~~~
brmgb
Pretty much everyone on this planet could, at some point, create a brain (with
assistance). I don't think any of them would claim to understand it.

~~~
brianhorakh
You mean create a brain by convention, i.e., having intercourse, of course.

Given raw cellular material and GTCA biological code samples, could you design
an organic chemical engine "brain" capable of self-replication, repair, and
inference?

It's sorting through 4.5 billion years of leftover legacy DNA code that makes
it tedious to understand.

~~~
bitwiser
So what you're saying is... reproductive organs are a layer of abstraction.
_brain explodes_

~~~
bronzeage
Literally a mind fuck.

------
mantoto
A computer chip works fundamentally differently. This paper doesn't make sense
at all.

------
BadThink6655321
Of course a neuroscientist can understand a microprocessor. That’s easy. Let
me know when a microprocessor can understand a neuroscientist.

~~~
microcolonel
Indeed, the title leads you in an unsatisfying direction.

~~~
QuesnayJr
I think the title is extremely witty, since you expect to see the sentence in
a different order.

