
Could a Neuroscientist Understand a Microprocessor? (2017) - eggspurt
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268
======
zkms
Testing analytical methods of a field against engineered artefacts is a good
idea but there is a fatal flaw here; devices that do a fetch-decode-execute-
retire loop against a register file and a memory bus have perversely little in
common with what neurobiology is concerned with. A more appropriate artefact
would be a CPU _and its memory_ (where NOP'ing out code or flipping flags
corresponds to "lesioning"), or even better, an FPGA design (where different
functions work in parallel in different locations on the silicon, much like
brains).
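
To make the lesioning analogy concrete, here is a minimal sketch assuming a
6502-style address space (the memory image, addresses, and the tiny program
are illustrative):

    NOP = 0xEA  # 6502 no-operation opcode

    memory = bytearray(64 * 1024)                      # 64 KiB address space
    memory[0x8000:0x8003] = bytes([0xA9, 0x01, 0x60])  # LDA #$01 : RTS

    def lesion(mem, addr, size=1):
        """'Lesion' a region of code by overwriting it with NOPs."""
        mem[addr:addr + size] = bytes([NOP]) * size

    lesion(memory, 0x8000, 2)  # knock out the LDA, keep the RTS

In a CPU-plus-memory artefact every such overwrite is a repeatable,
addressable lesion, which is much closer to what lesioning means in
neurobiology.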

That the tools of neuroscience choke on a 6502 is as much of an indictment of
the former as my inability to fly helicopters is an indictment of my fixed-
wing airmanship; not coping well with notoriously perverse edge cases outside
your domain of expertise isn't inherently a sign of failure (it's not a
licence to stop improving, of course). Brains and 6502s are quite literally
entirely different kinds of computing, much like designing for FPGA is weird
and different from writing x86 assembly or C.

A far more interesting question is "could a neuroscientist understand an
FPGA?".

~~~
foldr
>devices that do a fetch-decode-execute-retire loop against a register file
and a memory bus have perversely little in common with what neurobiology is
concerned with.

A key point of the article is that we can't really be sure this is the case,
since the analytical tools used by neuroscience arguably wouldn't reveal this
kind of structure even if it did exist.

~~~
gus_massa
The nice part of evolution is that most of the time you can see some leftover
of the intermediate steps. For example, for the eye you can find animals with
eyes of different complexity, from a simple flat photosensitive area to the
full eye of vertebrates (or cephalopods, which have a different but similar
eye design).

So in most cases you can get some people to specialize in and understand the
simple models, and create concepts and tools to understand the more complex
models.

In electronics circles you still have floating around a lot of mini integrated
circuits with 20-50 transistors that are easy to understand. And you can learn
to group individual transistors into small groups that do something useful
(for example limiting the output current, simulating a resistor,
amplification, ...)

Then you can learn to decode the intermediate models with 100-1000
transistors, and then the models with a few thousand transistors, and then
...

So it's very suspicious that there are no animals with a minibrain that is a
finite automaton of 3 states.

There are also some cases where all the intermediate steps disappeared, for
example the transition from prokaryotes (bacteria) to eukaryotes (animals,
plants, protozoa, ...). And IIRC nobody understands the intermediate steps.
But there are some clues: many structures are shared between prokaryotes and
eukaryotes, and mitochondria are probably trapped bacteria (they have their
own DNA and too many outer membranes, ...)

~~~
foldr
>So it's very suspicious that there are no animals with a minibrain that is a
finite automaton of 3 states.

What is your basis for saying that no such animals exist? Exactly how many
states there are depends a great deal on the level of analysis. How many
states does a 6502 have? At the physical level, an enormously large (possibly
even infinite) number. At the level of analysis appropriate for programming
one, considerably fewer.
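
For a rough sense of the programmer-level count (the register widths are real
6502 facts; counting only the architectural registers as "state" is the
simplifying assumption):

    # 6502 architectural state, ignoring memory: A, X, Y, S and P are
    # 8 bits each; the program counter is 16 bits.
    bits = 8 * 5 + 16
    print(2 ** bits)  # 2**56 distinct register states, about 7.2e16

Include memory and the count explodes; go below the architectural level and
the state space is effectively continuous.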

------
isoprophlex
Great work.

"In other words, we asked if removed each transistor, if the processor would
then still boot the game. Indeed, we found a subset of transistors that makes
one of the behaviors (games) impossible. We can thus conclude they are
uniquely necessary for the game—perhaps there is a Donkey Kong transistor or a
Space Invaders transistor. "

A fantastic comment to show that describing a system is not the same as
understanding the system!
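
The procedure the quote describes is mechanically simple. A schematic sketch,
where `transistors`, `games` and `simulate_boot` are assumed stand-ins for the
paper's transistor-level simulation:

    def lesion_map(transistors, games, simulate_boot):
        """For each transistor, record which games fail to boot without it."""
        breaks = {t: {g for g in games
                      if not simulate_boot(lesioned=t, game=g)}
                  for t in transistors}
        # The seductive reading: transistors that break exactly one game
        # look like "Donkey Kong transistors" or "Space Invaders transistors".
        unique = {t for t, broken in breaks.items() if len(broken) == 1}
        # Transistors that break every game are simply essential in general.
        essential = {t for t, broken in breaks.items() if broken == set(games)}
        return unique, essential

The bookkeeping is trivial; the misstep is reading "uniquely necessary" as
"uniquely responsible".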

~~~
SubiculumCode
It might start that way, but then they begin to ask, how do Space Invaders and
Donkey Kong differ? Ah yes, in Space Invaders you cannot move up and down, but
in Donkey Kong you can. OK, let's create a Space Invaders task where you can
also move up and down. Does adding that function make the "Donkey Kong"
transistor break the modified Space Invaders behavior? Yes?! So that
transistor might not be about Donkey Kong, but about moving up and down. And
so on...

An example: in early research that lesioned the hippocampus, lesions brought
about impairments on memory tasks...but not all memory tasks. Memory for
individual items, or the feeling of familiarity without recollection, seemed
to be relatively preserved. Particularly affected, however, were memories
involving relations between pairs of items...but not always...those item pairs
could be remembered by constructing a story about them, or making one item a
feature of another item...so it seemed that item relations that were arbitrary
were particularly affected by lesions, and showed more "activity" in
neuroimaging studies. And so on. The hippocampus seems to fulfill a role of
binding high-level percepts into memory traces for which there is no
lawful/generalizable relation. This was a broad overview...but it goes beyond
characterizing a brain region as a "Donkey Kong" region.

~~~
ajuc
> Does adding that function make the "Donkey Kong" transistor break the
> modified Space Invaders behavior? Yes?! So that transistor might not be
> about Donkey Kong, but for moving up and down. and so on...

Only there's no transistor uniquely responsible for moving stuff up and down.
There are transistors responsible for particular bits of the output of
particular machine code commands, like ADD or MOV. But these are commonly used
by almost all the code, so the most probable difference between code
triggering the error and code that works correctly would be "how big are the
values we're working on": if the values are in the "correct" range, that
transistor will make a difference. It very well might be that x coordinates
are big and y coordinates are small, so it's working for y but not for x. In
that particular level of that particular game, assuming the memory was in a
particular state before you started that game.
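
A toy illustration of that value-range effect: a ripple-carry adder with one
carry line "lesioned" (the fault model and the coordinate values are
illustrative assumptions):

    def add8(a, b, broken_bit=None):
        """8-bit ripple-carry add; one carry line can be stuck at 0."""
        carry, result = 0, 0
        for bit in range(8):
            x, y = (a >> bit) & 1, (b >> bit) & 1
            result |= (x ^ y ^ carry) << bit
            carry = (x & y) | (x & carry) | (y & carry)
            if bit == broken_bit:
                carry = 0  # the lesion: this carry never propagates
        return result & 0xFF

    # Small operands never exercise the broken carry line...
    assert add8(30, 10, broken_bit=6) == 40
    # ...but large ones do, so only "big-value" code appears affected.
    assert add8(100, 100, broken_bit=6) == 72  # should be 200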

The problem lies in trying to assign too high-level a meaning to stuff that
works at a much lower level of abstraction. It's very similar to alchemy or
astrology: searching for correlations between unrelated events and basing
elaborate theories on them.

~~~
SubiculumCode
If that transistor was not about moving up and down, you'd have to explain why
lesioning that transistor seems to break games with up and down motion. If
what you say is correct, scientists would soon find other tasks it breaks that
do not appear to have up and down motion. You might move to questions about
the complexity of objects, coding of directions, etc. Theories would emerge
and begin to be related to an emerging understanding of architectures,
transistor function, controllers. It evolves.

Aside from that, I object to the viewpoint more generally. Can one not
understand something about how cultures work without knowing how brains work?
Can one not know something about how an ape behaves without understanding its
gut biome? I just disagree with the view that the only worthwhile
understanding of phenomena is from the bottom...that there is some true level
of description at which something must be understood.

~~~
ajuc
> Can one not understand something about how cultures work without knowing how
> brains work?

Yes, you can, but that's psychology or sociology, not neuroscience. The goal
was to understand the brain, and to be able to replicate it or modify it.
Understanding just the behaviour is like understanding that pressing "fire"
will shoot in Space Invaders. It's something, but you don't need to dissect a
CPU to know that, and it doesn't move you closer to creating your own CPU or
modifying this one.

> the only worthwhile understanding of phenomena is from the bottom

That's a misrepresentation of my point. I wrote that in the CPU example the
neuroscience tools failed because they missed the abstraction levels between
transistors and behaviours.

> there is some true level of description at which something must be
> understood

There is an optimal level of description, yes. And notation makes a huge
difference.

~~~
SubiculumCode
Neuroscience works at a number of levels of description or size: neuroscience
that focuses on synapses, neuroscience that focuses on dendritic trees or
whole neurons, neuroscience that asks questions of small groups of neurons,
and neuroscience that focuses on large-scale networks. Most try to relate
function at that level to behavior in some way, and to bridge the gap between
levels of description.

When we try to understand deep learning, do we try to understand it at the
level of individual transistors, at the level of a 'neuron', at the level of
layers, or at the level of the overall design? Well, it depends, right?

~~~
ajuc
I'm not a scientist, but it seems the various groupings of neurons you
enumerated are still the hardware level, not the software level.

Maybe there really is no software level between the hardware level and
behaviours, but that seems unlikely to me, because I can think about abstract
ideas using that brain.

------
chrisfosterelli
It's an interesting idea. The paper argues that current brain analysis
methods don't work well in an alternative environment with a lot of data, so
maybe the methods are the problem rather than our lack of data in
neuroscience.

However, I think this misses part of the point. We use these methods _because_
we have very little data available. There are tons of interesting new ways to
analyze brain data that I think computational neuroscientists are dying to
explore, but don't have enough data to do so. If we had a lot more data, we
might not be using these approaches.

~~~
bordercases
This explains the diversity of methods and their emphasis on statistical
inference. But it isn't necessarily true that the current best choice of
methods implies that the outcomes from using those methods are good. In this
case, if we expect that these methods are underpowered in the ecological
context, then we should expect them to perform better in a more controlled
one, or else they would be useless. But given that the author is correct...
then, well, there's obviously room for more thought.

------
yoz-y
Older article in a similar vein: "Can a biologist fix a radio?"
[http://math.arizona.edu/~jwatkins/canabiologistfixaradio.pdf](http://math.arizona.edu/~jwatkins/canabiologistfixaradio.pdf)

~~~
Sniffnoy
Possibly worth noting that this article explicitly credits "Can a biologist
fix a radio?" as inspiration.

~~~
no_identd
Also available here: [http://www.cell.com/cancer-
cell/fulltext/S1535-6108(02)00133...](http://www.cell.com/cancer-
cell/fulltext/S1535-6108\(02\)00133-2)

------
nicodjimenez
Really happy to see this article. This viewpoint is not new, but it is still
far from being mainstream.

A big issue touched upon in this article is that the space of possible
dynamical systems represented in the brain is large, and trying to collect
data is not a practical way of trimming this search space. It's more useful to
look at types of dynamical systems that have certain stability properties that
are desirable for computation.

But the issue then becomes that these dynamical systems become mathematically
intractable past a few simplified neurons. So it's really hard to make
progress either by looking at data, or by studying simplified dynamical
systems mathematically.

There is a third option. Evolve smart dynamical systems by large scale brute
force computation. Start with guesses about neuron-like subsystems with
desirable information processing properties (at the single neuron level, such
properties are mathematically tractable). Play with the configurations, the
rules of evolution, the reward functions, the environment, everything. This
may sound a lot like witchcraft but look at how far witchcraft has taken
machine learning in recent years (deep learning is just principled
witchcraft). This is IMO the only way we will learn how biological
intelligence works.
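
A minimal sketch of that third option, with entirely illustrative choices for
the dynamics (a tanh recurrence), the mutation rule, and the reward; the point
is the search loop, not the specifics:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(W, x0, steps=50):
        """Run a tiny neuron-like dynamical system: x <- tanh(W @ x)."""
        x = x0
        for _ in range(steps):
            x = np.tanh(W @ x)
        return x

    def fitness(W):
        """Toy reward: activity that neither dies out nor saturates."""
        x = simulate(W, rng.standard_normal(W.shape[0]))
        return -abs(np.abs(x).mean() - 0.5)

    # Evolution by mutation and selection over the connection matrix.
    n = 8
    best = rng.standard_normal((n, n)) * 0.5
    for generation in range(200):
        child = best + rng.standard_normal((n, n)) * 0.1  # mutate
        if fitness(child) > fitness(best):                # select
            best = child

All the witchcraft lives in the choice of dynamics, mutation and reward, which
is exactly the "play with everything" step.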

~~~
xpuente
I think the question is really close to how we design a modern processor. It
is certainly important to look at the data (i.e. how a real machine behaves
under actual workloads). That data might suggest design improvements (through
some form of experience, art and intuition). You use simulation to test those
"potentially" good ideas. Most of them are discarded...

Perhaps neuroscience should move in the same direction: how do I think the
cortex works? Test it under (simple) working conditions. See if it makes
sense. Move to more complex working conditions. Rinse and repeat.

In actual processors math is rather useless too... beyond some niches. Even in
amenable problems such as formal verification of coherence protocol designs,
mathematical tools are rather limited (due to state explosion).

------
SubiculumCode
This paper offers some good points, but exhibits a number of flaws that limit
what it can say about the utility of current neuroscience methods. For a
generally thoughtful conversation, see
[http://www.brainyblog.net/2016/08/30/could-a-
neuroscientist-...](http://www.brainyblog.net/2016/08/30/could-a-
neuroscientist-understand-a-microprocessor-2/) from 2016. One comment that I'd
like to highlight from this conversation is pasted below:

 __ _" But no attempt is made to analyze the similarities and differences in
those behaviors. All three game behaviors rely on similar functions. Depending
on the level of similarity between the behaviors, you might think of it as
trying to find a lesion that only knocks out your ability to read words that
start with “k” versus words that start with “s.” That’s an experiment that’s
unlikely to succeed. But if the behaviors are more like “speaking” vs
“understanding spoken words” vs “understanding written words” then it’s a more
reasonable experiment.

The authors argue that neuroscientists make the same mistake all the time;
that we are operating at the wrong level of granularity for our behavioral
measures and don’t know it. That argument denies the degree to which we
characterize behaviors in neuroscience, and how stringent we are about
controls.

The authors point to the fact that transistors that eliminate only one
behavior are not meaningfully clustered on the chip. But what they ignore are
the transistors that eliminate all three behaviors. Those structures are key
to the functioning of the device in general. To me, those 1560 transistors
that eliminated all three behaviors are more worthy of study than the lesions
that affect only one behavior, because they allow us to determine what is
essential to the behavior of the system. You can think of those transistors as
leading to the death of the organism, just as damage to certain parts of the
brain causes death in animals."

------
aavaas
People do reverse engineer chips by photographing them. [https://youtu.be/aHx-
XUA6f9g](https://youtu.be/aHx-XUA6f9g) (Reading Silicon: How to Reverse
Engineer Integrated Circuits). But as far as I know, the same cannot be done
with brains even if we can photograph them. I guess the 3D structure of the
brain, compounded with the high interconnection between neurons, does not make
it easy.

~~~
otabdeveloper2
Or perhaps the brain works nothing like a CPU.

(The very idea that it does is actually laughable, akin to medieval fantasists
imagining that flying machines must have huge white wings with fleshy
feathers.)

~~~
aavaas
Yes, a brain does not work like a CPU. As noted by another user in this
thread, understanding the brain is a considerably harder problem: "CPUs are
deliberately engineered so that their different functions are nicely decoupled
and easy to reason about but brains are the result of a very messy
evolutionary process" -SilasX

~~~
amelius
Engineers are also the result of a very messy evolutionary process ;)

~~~
Sharlin
Being the result of a messy evolutionary process doesn't give you any special
insight into other messy evolutionary processes.

------
websterisk
I had the odd, but unique, experience of taking "Computer Engineering" and
"Formal Logic" (a neurology/history-of-thought course) during the same
semester. One observation from that experience is that there is a great deal
of cognitive overlap in our representation and communication of those fields
of study. Typically, I would see that overlap as being indicative of broad
similarity.

Reading this and the comments makes me question the similarity of the fields
somewhat. Perhaps it is just our tools for comprehension that are shared
between the two rather than any deeply tactical, functional commonality.

To that end, I think that experts in these fields could communicate very
effectively with each other once some vocabulary had been sorted out. How
effective one expert would be in the other's field is less clear to me.

------
jonnycomputer
What I don't like about this sort of thing is: the only guaranteed way to
succeed in the (apparent, revealed) objectives of the paper is to not try very
hard.

The obvious problem here is the clear mismatch between the behaviors and their
research objectives and methods.

If they wanted to understand transistors, they'd do what cellular
neuroscientists do, and isolate individual transistors, manipulate their
inputs, and measure the outputs.

If they wanted to understand clusters of transistors whose activities are
tightly coupled (as you'd expect them to be in a logic gate), they'd isolate
those, manipulate the inputs, and measure the outputs.
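
That isolate-and-characterize step is simple to sketch; the NAND gate below is
a toy stand-in for an isolated, tightly coupled cluster:

    from itertools import product

    def characterize(unit, n_inputs):
        """Drive an isolated unit with every input pattern; record outputs."""
        return {bits: unit(*bits) for bits in product((0, 1), repeat=n_inputs)}

    nand = lambda a, b: int(not (a and b))
    print(characterize(nand, 2))
    # {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}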

If you wanted to understand higher levels of organization using a lesion
approach, you'd need to decide how much to lesion. In the brain, function is
localized in clusters of related activity, and there is usually a lot of
redundancy. Single neuron lesions are not usually enough to have noticeable
effects. But even then, a lesion approach is more interesting when you couple
it with real experiments. Consider this paper by Sheth et al.
[https://www.nature.com/articles/nature11239](https://www.nature.com/articles/nature11239),
which had subjects perform a cognitive control task before a surgical lesion
to the dorsal anterior cingulate, coupled with single unit recordings, and
then had them perform the same task after the lesion. The experiment yielded
pre-lesion behavioral and neural evidence of a signal related to predicted
demand for control, and post-lesion, the behavioral signal was abolished.

Of course, the Sheth paper would not have been possible without the iterative
improvements in understanding made by prior work, including Botvinick's neural
models of conflict monitoring and control. That is, it's iterative; and this
CPU paper was never intended to be iterative.

------
stochastician
Author here, happy to answer any questions! Always a pleasant surprise to find
stuff you do making it to HN.

------
e12e
> An optimized C++ simulator was constructed to enable simulation at the rate
> of 1000 processor clock cycles per wallclock second.

Following links through "code and data":

[http://ericmjonas.github.io/neuroproc/pages/data.html](http://ericmjonas.github.io/neuroproc/pages/data.html)

I found:

[https://github.com/ericmjonas/neuroprocdata](https://github.com/ericmjonas/neuroprocdata)

But I couldn't find any link to the C++ code. Surely the emulator is also
needed in order to be able to reproduce the research?

A bit of a shame they used closed source games - I'm not sure how one would go
about obtaining copies (legally). But it would be interesting to try
replication via other places/demos - as they only model booting anyway.

------
mhneu
"This example nicely highlights the importance of isolating individual
behaviors to understand the contribution of parts to the overall function. If
we had been able to isolate a single function, maybe by having the processor
produce the same math operation every single step, then the lesioning
experiments could have produced more meaningful results. "

I submit that this direction is an important one to pursue.

~~~
SilasX
That also highlights why understanding the brain is a much harder problem:
CPUs are deliberately engineered so that their different functions are nicely
decoupled and easy to reason about. Brains are the result of a very messy
evolutionary process that never came close to optimizing for "easy reasoning
for refactoring".

~~~
rz2k
Yet if a particular CPU is produced in enough quantity and is cheap enough,
people will engineer uses out of it that the designers never thought of, and
had no intention of making possible.

It reminds me of talking to another player in an online Risk game who didn't
understand what an AI player was trying to do. The code was open source and
only something like three functions, but in practice it did something
completely different.

------
Animats
It would be interesting to repeat this for a GPU.

A CPU has so much hardware common to most instructions that any failure will
take it down completely. That's less true of a GPU, where a failure of one of
the massively parallel units is likely to manifest as some alteration of the
output image.

~~~
jonhendry18
It would be interesting to repeat it with a microprocessor connected to two
motor control boards, two ADC boards connected to sensors, and an LCD. Present
them as black boxes to the neuroscientist. Run code on the microprocessor that
exercises the peripherals, instead of games.

Creating a "lesion" in the motor control boards would effect behavior of the
attached devices, as well as, perhaps the output on the LCD. Similarly for the
sensor boards.

Once the "lesions" have let the neuroscientist determine the function of the
peripherals, they could look at the effect of lesions in the microprocessor on
the functions of the peripherals and system as a whole when running various
programs.

Maybe a program that exercises the "left side" motors, a program that
exercises both sides, etc.

Maybe a microprocessor alone is too small of a unit of functionality, akin to
studying an amygdala in a petri dish.

------
sqln00b
I didn't read it but the abstract's first sentences sound as if it's rather
about "Do the issues neuroscientists face when examining the human brain
persist when they examine a microprocessor instead?"

------
tim333
Of course a neuroscientist could understand a microprocessor by other methods;
the point of the article is that the usual methods of neuroscience would have
limited results. Though I think in general in science people use whatever
methods they can think of to figure out what's going on, and the methods of
neuroscience are probably the best people can come up with for figuring out
brains. There are also interesting results from AI researchers mucking about
with artificial neural networks.

------
aaavl2821
From some conversations with neuroscientists, it seems that one issue that
limits investment in new tools to measure the brain is that it's easier to get
a publication by analyzing an existing data set in a new way, or even
generating and analyzing a bigger / different data set with fMRI or EEG or
clinical data, than it is to develop a novel tool to measure the brain (like
optogenetics). But there are a lot of advances being made in new tools to get
better data on how the brain works.

~~~
jonhendry18
I would guess doing research with optogenetics in the brain is probably an
order of magnitude more expensive than non-surgical methods. And developing
new tools that involve surgery is probably an order of magnitude more
expensive still. At least.

~~~
aaavl2821
It probably is, but it is also at least an order of magnitude more beneficial
to the field than some of the computational work. There are some groups
working on non-invasive optogenetics methods, which are still early stage but
really cool

------
x_istor
It's definitely easier to understand a microprocessor than chemical-oriented
protein systems that have mostly evolved into an operable state by chance.

A CPU is founded on a limited set of basic components that possess reasonable
qualities, behave consistently, and only scale to large quantities with
identical repetition.

Just leave out the deeper materials science and solid state quantum physics
behind the "why" of how transistors operate.

------
kazinator
This feels like a silly strawman being made out of the methods used in
neuroscience. In a microprocessor, a single bit being flipped the wrong way
potentially stops the whole show; your Donkey Kong game from 1981 doesn't run
at all. By contrast, lesioning a single neuron will not come anywhere near
fully incapacitating a brain, if it has any noticeable effect at all.

------
acchow
I wonder, if an alien species stumbled upon a DVD, would they be able to
decode the video contained within it?

------
lallysingh
> This suggests current analytic approaches in neuroscience may fall short of
> producing meaningful understanding of neural systems, regardless of the
> amount of data.

Using the same analytical techniques against a CPU whose design is unknown at
the time of analysis. Nice meta-analysis, clickbait title.

Edit: reformatted, thanks.

~~~
WalterGR
Prefixing lines with spaces breaks formatting on mobile. I've reformatted your
excerpt here:

> This suggests current analytic approaches in neuroscience may fall short of
> producing meaningful understanding of neural systems, regardless of the
> amount of data.

------
aj7
This isn’t even wrong. It follows Drexler’s method. Just keep writing and
writing and writing.

------
bakhy
is this something like this XKCD? :)

[https://xkcd.com/1588/](https://xkcd.com/1588/)

------
sigi45
It is garbage :(.

Seriously, a CPU has nothing to do with a brain at all. It doesn't make sense
to use techniques from one for the other.

I have no idea how anyone comes up with such an idea and even publishes it.

A brain itself is everything: RAM and CPU.

A CPU is just a CPU; there is no state in physical form.

A CPU is a Turing machine; a brain isn't.

------
dschuetz
That question is weird. "Sure, why not?" would be my reply to it. I thought
one should avoid yes/no questions for a paper?

I'm not a neuroscientist (I cannot afford a medical education), nor am I a
microprocessor engineer (yet). But I understand how systems work, so I might
have a chance to understand how neural networks work (as models and their real
counterparts), and I might already have an understanding of how
microprocessors are designed in principle. So, yes, a neuroscientist who
decides to visit some lectures on digital logic circuits and microprocessor
design might have a chance to understand one! I'm really confused by this
question.

~~~
B-Con
No offense, but it seems like you didn't click through.

From the abstract:

> here we take a classical microprocessor as a model organism, and use our
> ability to perform arbitrary experiments on it to see if popular data
> analysis methods from neuroscience can elucidate the way it processes
> information. Microprocessors are among those artificial information
> processing systems that are both complex and that we understand at all
> levels, from the overall logical flow, via logical gates, to the dynamics of
> transistors. We show that the approaches reveal interesting structure in the
> data but do not meaningfully describe the hierarchy of information
> processing in the microprocessor.

The idea is to apply the modern neuroscience approach to a microprocessor to
see what level of understanding of the microprocessor is extracted. tl;dr: The
high-level "meaning" of the processor's design is not extracted.

The purpose seems to be to examine the inherent limitations of modern
neuroscience by applying it to a design that we do understand quite well apart
from neuroscience, something we ourselves designed.

------
menssen
“Could a computer scientist understand a brain?”

~~~
lallysingh
Could a doctor understand a car?

~~~
bitexploder
Can a car understand a road?

~~~
kss238
Autopilot apparently can't

------
fizixer
Give me a neuroscientist who's willing to learn, and one week.

