
A neuroscience experiment failed to build a connectome for a 6502 chip - artsandsci
https://www.wired.com/2017/03/atari-chip-set-off-bitter-war-among-neuroscientists/
======
stochastician
Original author of the paper here, happy to answer questions. It's a little
strange refreshing HN and discovering new popular press about your paper.

~~~
stochastician
Also, in the time-honored tradition of HN, the title isn't quite right. 1. We
didn't build the connectome; the Visual 6502 team did. 2. It's such a good
connectome that we can simulate everything perfectly -- a Hansonian "Em", if
you will. 3. Even with total visibility, it's still hard to achieve
understanding with contemporary techniques.
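
For concreteness, "total visibility" means something like the sketch below:
the on/off state of every transistor at every half-clock, noise-free. The
`TransistorSim` interface and ROM name are made-up stand-ins for a
netlist-level simulator like the Visual 6502 team's, with random states in
place of real ones:

```python
import numpy as np

class TransistorSim:
    """Fabricated stand-in for a netlist-level 6502 simulator."""
    NUM_TRANSISTORS = 3510  # the 6502 has roughly 3,510 transistors

    def __init__(self, rom_path):
        self.rom_path = rom_path  # a real simulator would decode this
        self._rng = np.random.default_rng(0)

    def half_step(self):
        pass  # a real simulator would propagate the netlist one half-clock

    def states(self):
        # stand-in: random booleans instead of actual netlist state
        return self._rng.random(self.NUM_TRANSISTORS) < 0.5

sim = TransistorSim("donkey_kong.bin")  # one of the games from the paper
ticks = 1000
trace = np.zeros((ticks, TransistorSim.NUM_TRANSISTORS), dtype=bool)
for t in range(ticks):
    sim.half_step()
    trace[t] = sim.states()
# `trace` is the kind of perfect whole-system recording no experimental
# neuroscientist has ever had -- and it still doesn't hand you understanding.
```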

~~~
eli_gottlieb
What's the basic point of a connectome? What understanding did anyone _expect_
to achieve from making one? Sure, maybe you can emulate the device, but unless
you're a very ardent philosophical connectionist, you're not understanding
much. And if you are a very ardent philosophical connectionist, you're
claiming that knowing the connections gives us all there is to understand,
with no deeper principles than synaptic weights being available.

It's as silly as expecting to get a good theory of artificial intelligence by
studying artificial neural networks.

(I'm aware of the irony in the above statement, but stand by it earnestly. An
excellent engineering artifact whose functioning we can't explain is _not_
scientific understanding. An excellent engineering artifact whose functioning
we _refuse_ to explain is bad philosophy, too.)

~~~
stochastician
I think many in the neuroscience community would agree that there's a lot we
can learn from connectomes, and there's a lot of value from just having a
canonical, authoritative map of the underlying neuroanatomy. Every year there
are a few new neuroanatomy papers that say "surprise, area X projects to area
Y!" and that's sort of embarrassing given that many people have been studying
an area for 30 years and the sudden appearance of dopaminergic projections
upsets all their previous models.

Also, there exist brain areas and regions where we do in fact have a few good
models, and connectomics has the potential to help us decide between them --
see
[http://www.nature.com/nature/journal/v500/n7461/full/nature1...](http://www.nature.com/nature/journal/v500/n7461/full/nature12450.html)

~~~
apl
Even in these apparently simple feedforward sensory networks, connectomics
hasn't been the anticipated panacea. There's been a flurry of follow-up
papers to Takemura et al., essentially refuting the suggested model.

Turns out, even where they should, connections don't constrain circuits to a
sufficient degree.

~~~
stochastician
Awesome, do you have a good cite for that? The last time I paid attention in
this space was a year+ ago, when the vision people were trumpeting these
results, so I'd love to know more about the current thinking. It's been my go-
to example for "connectomics will help with some things", but I'm not a
sensory physiologist.

~~~
apl
Try this:
[https://www.ncbi.nlm.nih.gov/pubmed/26234212](https://www.ncbi.nlm.nih.gov/pubmed/26234212)

I guess the key lesson is -- don't rely on a single approach, because its
limitations may well lead you astray. Applies to connectomics, physiology,
modelling, etc.

------
taneq
I don't think the title's quite accurate. It sounds like they had no trouble
_building_ a connectome for the chip, they just couldn't then reverse engineer
the chip's functionality by applying standard neuroscience techniques to said
connectome while it ran in a simulator.

It's basically saying that even IF we had a full connectome of the brain that
we could run in a simulator (and presumably produce a simulated personality),
we'd be no closer to actually understanding how the damn thing works.
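
Concretely, the lesion experiment amounts to something like the sketch below.
`run_chip` is a fabricated stand-in for a netlist-level simulator, and its
pass/fail rule is arbitrary, purely for illustration:

```python
NUM_TRANSISTORS = 3510  # approximate transistor count of the 6502

def run_chip(rom, lesioned=None):
    """Stand-in: pretend to boot `rom` with transistor `lesioned` forced
    off, and report whether the game reaches its attract screen. The
    outcome rule below is fake; a real version would step the netlist."""
    if lesioned is None:
        return True
    return (lesioned + len(rom)) % 7 != 0  # fabricated outcome

def essential_transistors(rom):
    """Single-lesion sweep: which transistors does this game 'need'?"""
    return [t for t in range(NUM_TRANSISTORS)
            if not run_chip(rom, lesioned=t)]

broken = essential_transistors("donkey_kong.bin")
print(len(broken), "single lesions break this game (toy numbers)")
```

A sweep like this labels transistors "essential to Donkey Kong" without
explaining anything about how either the chip or the game actually works.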

~~~
ivraatiems
Unfortunately, producing a working simulation of the brain doesn't mean the
simulated brain would actually... work. Humans aren't created fully formed
with personality and behavior ready to go; they learn over years. A brain in
a vacuum would have none of that input.

~~~
ccrush
I think there are a few instances of medical science bringing back persons
from comas that may help with identifying what would happen to a fully formed
brain that was completely shut down for a decent period of time. Presumably,
reproducing the physical structure of such a brain and giving it similar
inputs would lead to an experimentally similar result.

The question I would like to see answered is what happens if we do this to
someone with a grown brain and wake them up in a simulation of a spaceship
that traveled to a faraway planet. Again, presumably the person would need
nutrition and care and education for some time, but then would they be able
to carry out significant advances on behalf of humanity? Would they even want
to? What if they feel betrayed by their fate and creator? Then again, who
doesn't? Someone write a book about this.

~~~
drzaiusapelord
That's not the criticism he's making. He's saying that to get to a functional
adult brain you must have a few cells that work together, then a fetal brain,
then a baby brain, then an infant brain, then a toddler brain, etc., all the
way to an adult brain, plus all the care and learning needed in those stages
to make it all work.

You need the morphology and structures that develop in nature to get the
structures and functionality needed to properly simulate a brain. We simply
don't know enough about learning, development, brains, and consciousness
to just plug into a simulated adult brain and make it all work. We can only
follow the path nature has laid out.

I'm sure you can skip some parts like fetal stages, but we develop language at
a very specific range of ages using specific techniques. We can't skip the
language acquisition stage and then expect our simulated brain to magically
know language. We can't skip the angry "me" stage of a toddler and expect the
mind to have self-assertiveness. We can't skip the sexual and social awakening
stage of teenhood and expect the mind to understand complex social concepts
and sexuality, etc, etc.

The only way out of this problem I can imagine is a per-neuron copy of a
working brain at the cellular level. I imagine this isn't, or won't ever be,
in the cards for practical reasons. Alternatively, if you could somehow
divine the algorithms that create consciousness, assuming this is even
possible on a transistor-based machine, that would also be a workaround.
Right now we can't do either, so following nature's approach is probably the
sanest way forward.

I wouldn't be surprised if the first AI ends up being more a biological
'computer' than anything having to do with x86 instruction sets and
transistors, but that's just a guess on my part.

------
glial
A microprocessor is admittedly a poor metaphor for a brain. However, I think
the article still makes a strong point, which is that neuroscientists use many
experimental and signal-processing techniques whose utility hasn't even been
demonstrated _in principle_, much less in practice. To me that's the largest
take-away from this article.

To use a set of techniques to investigate the operation of a device whose
function we fully understand, and still fail to draw meaningful conclusions,
raises serious questions about the usefulness of those same techniques in a
much more complicated and less-well-understood domain.
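
As a toy illustration of that worry, here is one family of techniques used in
the paper (dimensionality reduction; the paper used fancier variants) applied
to a "perfect" whole-chip recording. The data below is synthetic noise
standing in for real simulator output:

```python
import numpy as np

rng = np.random.default_rng(0)
ticks, n_transistors = 1000, 500        # toy sizes, not the real chip's
trace = (rng.random((ticks, n_transistors)) < 0.3).astype(float)

X = trace - trace.mean(axis=0)          # center each transistor's trace
_, S, _ = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)         # PCA variance spectrum
print("variance explained by first 5 components:",
      round(explained[:5].sum(), 3))
# On the real chip, the paper's point was that such analyses produce
# statistically tidy components that don't map onto the processor's
# actual logical organization (registers, ALU, decoder, ...).
```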

~~~
SubiculumCode
And yet we know a lot. Hippocampal damage causes profound amnesia; prefrontal
damage leads to poor inhibition, impaired introspection, etc. I could go on.

~~~
glial
We do know a lot, but I would argue that much of what we know is _not_ a
result of the techniques used in this article. The things you mention are
results of lesion experiments or observations, which are decades (or more)
old.

Most of what you mention is sort of negative information - what happens when
the brain breaks. How it works is a fundamentally different question, and I
suspect we'll only get real understanding by building computational models
that are able to do similar computations to those the brain does.

------
supergarfield
I don't think the IC/brain comparison is particularly revealing. One of the
key differences I see is that an IC will stop working (or at least very
severely malfunction) if you disable a transistor. A brain, on the other
hand, will probably keep working even if a sizable number of neurons die. So
little
to no useful info is gained by suppressing a transistor, while lesioning a
part of the brain could still produce insight if you observe behavioral
changes.

That being said, disabling neurons sounds a little crude and may not help
much in gaining a finer understanding.

Then again I know nothing about neuroscience :)

~~~
comex
A processor also operates with a high level of indirection. The 6502 doesn't
have transistors dedicated to jumping or shooting; it has transistors
dedicated to 'add value to accumulator', an instruction that might appear
hundreds of times in the game code even on a 6502, with a different purpose
each time. If they instead tried the lesioning approach on _software_, i.e.
nopping out one instruction at a time, I think they could relatively easily
identify the function of various sections of the code. Alternatively, if they
analyzed a video game implemented directly in hardware rather than using a CPU
(say, the original Breakout), that would probably work better too.
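
A rough sketch of that software-lesioning sweep is below. `behavior_score` is
a fabricated stand-in for running the ROM in a 6502 emulator and measuring
something observable:

```python
NOP = 0xEA  # the 6502's NOP opcode

def behavior_score(rom):
    """Stand-in: run `rom` for N frames and return a behavioral measure
    (score reached, frames survived, pixels differing from baseline)."""
    return float(sum(rom) % 251)  # fake, deterministic placeholder

def nop_lesion_sweep(rom, instr_offsets):
    """Map instruction offset -> behavioral effect of NOPping it out.
    Caveat: 6502 instructions are 1-3 bytes, so real code must NOP the
    operand bytes too, or the CPU will execute them as opcodes."""
    baseline = behavior_score(rom)
    effects = {}
    for off in instr_offsets:
        mutant = bytearray(rom)
        mutant[off] = NOP
        effects[off] = abs(behavior_score(bytes(mutant)) - baseline)
    return effects

rom = bytes([0xA9, 0x01, 0x8D, 0x00, 0x02])  # LDA #$01; STA $0200
print(nop_lesion_sweep(rom, instr_offsets=[0, 2]))
```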

Of course, nobody really knows how much indirection is used by the brain, but
it's probably closer to a direct hardware implementation than a CPU. One
respect in which it differs from both, of course, is plasticity; neither
hardware nor software created by humans is given to constant self-
modification.

~~~
posterboy
> neither hardware nor software created by humans is given to constant self-
> modification.

Of course it is. A VM running on a CPU is basically just constantly self-
modifying data with the help of modular hardware. I'd hypothesize that the
brain has inert structures at its core, too (which is why we are all alike),
and the rest is the data of a state machine (an IC being part of _the rest_,
from a phenomenological point of view). Of course, the complexity might be
beyond a state machine or linear bounded automaton, i.e. deterministic (think
quantum).

------
nacc
One question to ask, then, is: what methods are useful for figuring out the
function of an unknown microprocessor? Engineers sometimes reverse engineer
chips with electron microscopes, armed with a perfect understanding of
electronic principles, a rudimentary idea of how microprocessors work, some
notion of what the high-level function of the chip is, etc. Even equipped
with this knowledge, cracking a chip takes a lot of effort, and may not
succeed at all.

The tools of a neuroscientist are laughable in comparison, yet neuroscientists
have cracked the auditory code (you can have an artificial cochlea now), the
lower visual system is close to being cracked, we have a pretty good
understanding of the hippocampus and how space is represented in the brain,
and many other accomplishments. These are all results from the laughable
tools we have for investigating the brain.

Having a full connectome of the brain, and running the brain in a simulator,
would be a huge step toward understanding it. We don't know how to use the
data now simply because we don't have the data yet, and therefore no effort
is being put into interpreting it. However, the usefulness can be glimpsed
from neural structures where the connections are clear: the peripheral
nervous system, spinal cord, midbrain, and lower sensory and motor regions.
We understand them far better than regions where we don't even know what
they connect to: the claustrum comes to mind.

A simulator of the brain, I imagine, would be similar to the Human Genome
Project: nobody will understand the whole thing quickly, but it will hugely
propel neuroscience forward, sometimes in ways we cannot imagine now.

------
dontreact
Title needs to be changed. As mentioned by the author elsewhere in this
thread, the connectome (the map of connections) was so good that they managed
to build a perfect simulation. The point of the article is that in this
simulation, typical non-connectomics techniques used in neuroscience today
(highly specific lesions, which in the real brain are created by things such
as optogenetics) were not able to elucidate how the simulated chip worked.

------
zw123456
I get it: the point they were trying to make is that you cannot understand
the software by looking at the hardware. But I also agree that the analogy is
very weak.

------
medymed
There's a big input/output issue here, which they address, but which
nonetheless makes the project's failure unsurprising.

The inputs and outputs for a brain/neural network, e.g. pictures as inputs
and recognized objects as outputs, are clear. They are not clear from a chip-
restricted view, where your input binary stream is a post-transformed signal
from an Atari joystick, and the output is a pre-transformed signal to a
display where the Pong paddle moves. These heavily transformed inputs and
outputs at the chip-restricted perspective tremendously reduce the likelihood
of making sense of anything.

If we know the adaptor functions for the signal to the first layer of neurons
in a simple convolutional neural network, for example, and the output layer is
identifiable, it might be easier to track which neurons track which features,
etc.
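
A toy numpy sketch of that idea, with random weights standing in for a
trained network: ablate one feature channel at a time and watch the
(identifiable) output change.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((16, 16))               # toy input "picture"
K = 8                                    # number of feature channels
filters = rng.standard_normal((K, 3, 3))
readout = rng.standard_normal(K)         # identifiable output layer

def conv_valid(x, f):
    """3x3 valid convolution, written out explicitly for clarity."""
    h, w = x.shape[0] - 2, x.shape[1] - 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * f)
    return out

def score(image, ablate=None):
    """Scalar output score; `ablate` lesions one feature channel."""
    feats = []
    for k in range(K):
        fmap = np.maximum(conv_valid(image, filters[k]), 0)  # ReLU
        if k == ablate:
            fmap = np.zeros_like(fmap)   # lesion this channel
        feats.append(fmap.mean())
    return float(np.dot(readout, feats))

base = score(img)
for k in range(K):
    print(f"channel {k}: delta = {score(img, ablate=k) - base:+.3f}")
```

Because the input and output layers are pinned down, each delta is directly
interpretable; the brain offers no such pinned-down layers.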

But there might be no clear intermediate output layers in a brain, so you
might have to look at inputs and outputs with respect to various subnetworks
to see which is most likely to be the true functional subset corresponding
to some definitely-occurring function (another obstacle to define). Or assume
a function is occurring and see which neurons classify it under the most
circumstances. But with so many connections, that sounds like a big rough
game to play too.

I can see how this project, if successful, would be another step toward the
creation of our sentient robot overlords, but I'm curious what people see as
the more immediate medical applications. Some of the biggest breakthroughs
in brain health in the last 20 years have been about getting rid of clots
(current best is arguably manual labor), immune modulation, and new
biochemical drugs whose effects would require excruciatingly sophisticated
and accurate biomolecular connectomic models to capture -- and those may be
a long time away.

------
mrob
Previous discussion:
[https://news.ycombinator.com/item?id=11780565](https://news.ycombinator.com/item?id=11780565)

