
Single Artificial Neuron Taught to Recognize Hundreds of Patterns - sprucely
http://www.technologyreview.com/view/543486/single-artificial-neuron-taught-to-recognize-hundreds-of-patterns/
======
otoburb
Direct link to the arXiv paper for those who want to dive in right away:
[http://arxiv.org/abs/1511.00083](http://arxiv.org/abs/1511.00083)

~~~
yid
That preprint seems to be missing every single figure. From the text,

> The key feature of the model neuron is its use of active dendrites and
> thousands of synapses, allowing the neuron to recognize hundreds of unique
> patterns in large populations of cells.

"active dendrites" and "thousands of synapses" sounds an awful like
abstracting away a complex mathematical model to fit a particular definition
of "single neuron".

~~~
al-king
They're researching the behaviour of real neurons by making computational
models, rather than creating a "neural network" model for AI purposes, so the
definition of "single neuron" they're using is meant to reflect the original
meaning of the term.

~~~
yid
But if the hypothesis is that thousands of little local functions can learn to
recognize feature vectors, that's been a well-tested assumption in
discriminative machine learning models for decades. Is there a biological
twist to this finding?
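
To be concrete about what I mean by "well-tested": a bank of simple threshold
units, each matching one stored binary pattern, can jointly recognize many
patterns. This is a toy sketch of that old idea, not code from the paper; the
names and threshold value are mine:

```python
THRESHOLD = 2  # minimum overlapping active bits to count as a match

def matches(stored, observed):
    """One 'local function': fires if enough of its stored bits are present."""
    return len(stored & observed) >= THRESHOLD

def recognized(patterns, observed):
    """Return the indices of stored patterns the input activates."""
    return [i for i, p in enumerate(patterns) if matches(p, observed)]

patterns = [{0, 3, 7}, {1, 2, 9}, {3, 7, 8}]
print(recognized(patterns, {0, 3, 7, 8}))  # -> [0, 2]
```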

~~~
al-king
It's mostly interesting biologically - it relates specifically to how a
synapse's proximity to the main cell body affects its action, or more
generally how the spatial configuration of a neuron's connections affects its
computational function.

The gist seems to be that more distant synapses can't initiate an action
potential, or firing of the neuron, but can prime the neuron to fire in
reaction to synapses closer to the cell body. This means that outer synapses
can provide 'context', e.g. indicate that prior steps in a sequence have been
recognised, while inner synapses can cause firing if the context is fulfilled,
i.e. context + necessary condition (recognition of the most recent step in the
sequence) = action potential.

It's not computationally surprising, but it is a specific cellular mechanism.

------
dkural
This MIT Tech Review article is based on only a single, non-peer-reviewed
preprint, with missing figures, written by non-experts. The preprint presents
a "theory" with no data to support it. I guess it would take real work to do
real reporting.

~~~
subutaia
The figures are at the end of the document (this is pretty common for journal
submissions). The supporting data (in addition to the simulations) is in the
form of references to existing experimental literature. Again, a pretty common
format.

------
jjtheblunt
Seems like the MIT Technology Review has been, for years, sensationalizing
results; it comes across as desperate.

------
rdlecler1
If we want real AI we need to spend more time reverse engineering the salient
ingredients used by biology. I'm biased to take a functional approach to all
of this and because these systems have to be robust to noise, I don't think
you need to model every minute detail, but rather you just need to capture the
core computational elements. This looks like it could be an interesting
discovery toward that end.

------
Gravityloss
Maybe I'm over-interpreting, but how is it possible we didn't know something
so basic? I would have expected neurons to have been thoroughly instrumented
and tested.

~~~
djoshea
It's a great question, and I think this is a pretty good insight into the
general state of the field of neuroscience at the cellular level. We know a
lot of details about things going on inside neurons, and a lot of details
about synapses. And these details are often specific to one of the hundreds of
different types of neurons that are found in different parts of the brain.
(e.g.
[http://www.neuroelectro.org/neuron/index/](http://www.neuroelectro.org/neuron/index/)).
And we do have methods to measure and manipulate various things in neurons in
a dish. But the dynamics of a neuron's voltage are complicated, non-linear,
and time-varying, and there are many parameters (e.g. the concentrations of
numerous ionic species and other small molecules, many of which we probably
don't even know about yet).

Even then, going from these messy biological details (e.g. these 20 proteins
assemble into a particular form and release this neurotransmitter from this
synapse when X happens) to an explanation for how the neuron works at a more
algorithmic level is hard, and the field isn't there yet. Assembling and
abstracting the details is hard and it's one of the goals of theoretical
neuroscience. The complexity is probably a symptom of our lack of
understanding, rather than the cause of it, i.e. there probably are a lot of
details that we can abstract away in a simpler functional model.

I haven't read the paper, and I'm only vaguely familiar with Hawkins et al.'s
HTM work. But I disagree with the claim at the end of the TR piece that these
predictions are imminently testable. Thinking up a specific experiment to try
and disprove theoretical ideas is often the hardest part of experimental
neuroscience.

------
dharma1
is anyone using the HTM/Numenta stuff? how well does it work?

~~~
oxtopus
See [https://github.com/numenta/nab](https://github.com/numenta/nab) for
anomaly detection results on real-world data, with comparisons to Twitter and
Etsy Skyline.

------
mjpuser
These models are based on cortical columns, and therefore only pertain to
cortical activity. It would be interesting to see how other parts of the brain
could be modeled, like the thalamus and hippocampus.

Anyhow, it would also be great to see if this could reproduce the ocular
dominance column patterns.

------
logicallee
imagine what you could do with a hundred billion of them. wait, for obvious
reasons I can't ask you to imagine that.

~~~
YeGoblynQueenne
I'm sad that your comment has been downvoted. I was looking forward to a turn
or two of the old "one thing you could do with a hundred billion of them is
imagine what you could do with a hundred billion of them" etc.

~~~
logicallee
I think people didn't get it - I'll spell it out next time :)

