
Learning through ferroelectric domain dynamics in solid-state synapses - ltrls23
http://www.nature.com/articles/ncomms14736
======
UhUhUhUh
It appears that we might have jumped on the transistor metaphor too quickly and
too intensely. I believe not everything is binary in the brain, even if the
resulting executive action is. Patterns, for example, might not be.

~~~
jamessb
Who are "we"?

For example, "Almost everybody accepts that the brain does a tremendous amount
of analog processing. The controversy lies in whether there is anything
digital about it." [1]

[1]:
[http://www.rle.mit.edu/acbs/pdfpublications/journal_papers/a...](http://www.rle.mit.edu/acbs/pdfpublications/journal_papers/analog_vs_digital.pdf)
(page 30 of the PDF, labelled 1630)

------
mschuster91
Correct me if I'm dead wrong here, but isn't "software machine learning"
taking advantage of all the neurons being "interconnected", similar to a
brain? How does that work with physical (discrete?) components as in this
case?

~~~
smaddox
There's good reason to believe that different parts of the brain are quasi-
specialized for particular applications. In a sense, this applies to particular
software artificial neural networks (ANNs) as well, particularly if the various
hyperparameters are fixed (number of neural units per layer, etc.). One of the
primary advantages of software ANNs over hardware ANNs (which don't really
exist yet) would be the ability to easily change the hyperparameters.

Hardware implementations of ANNs, such as might be designed based on these
FTJ-based artificial synapses, would have some fixed hyperparameters, and
would thus be quasi-specialized. This disadvantage could potentially be more
than compensated for by a dramatic learning speedup and power-usage reduction.
Transistors are highly scaled and low power, but it takes a lot of them, and a
lot of time, to simulate each neural unit.

On a separate note, the best-performing software ANNs don't emulate spike-
timing-dependent plasticity (STDP), which is believed to be the primary
learning mechanism of the human brain. Instead, they use variations of
backpropagation and gradient descent, which is almost certainly not how the
human brain learns. It remains to be fully understood how the two compare at
various tasks. Most likely, they will have different strengths and weaknesses,
making each useful in its own right.
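To make the contrast concrete, here is a toy sketch of the two kinds of weight
update (plain Python; all function names and parameter values are illustrative,
not taken from any paper):

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP (toy version): strengthen the synapse when the
    presynaptic spike precedes the postsynaptic one, weaken it otherwise.
    Only local spike times are needed -- no global error signal."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> potentiation ("causal" pairing)
        return a_plus * math.exp(-dt / tau)
    else:        # post before pre -> depression ("anti-causal" pairing)
        return -a_minus * math.exp(dt / tau)

def sgd_delta_w(gradient, lr=0.1):
    """Gradient descent: step against the loss gradient, which requires a
    globally backpropagated error rather than local spike timing."""
    return -lr * gradient
```

The point of the sketch is the locality difference: the STDP rule sees only the
two spike times at one synapse, while the SGD rule presupposes that some other
machinery has already computed a gradient of a global loss.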

~~~
cbennett
It's far from clear that STDP is sufficient to explain the brain's learning
mechanisms, though it is certainly necessary at some scales and stages.

The possibility space between relatively simple, insufficiently general
unsupervised/clustering approaches and rigid SGD schemes is large, and probably
contains the brain's true inference engine. Personally, I am excited by some
of the ideas brought forward in this Bengio paper:
[https://arxiv.org/pdf/1602.05179.pdf](https://arxiv.org/pdf/1602.05179.pdf)

------
smaddox
I find it remarkable that their simulations exhibit sparse coding. Is this a
known property of artificial neural networks based on spike-timing-dependent
plasticity?

I can imagine how it might emerge in this particular implementation from the
electric current following the path of least resistance through the circuit,
thereby preventing adjacent neurons from reaching criticality. This mechanism
never occurred to me before reading this article, though. Is anyone aware of
any prior art on this topic?

~~~
cbennett
Check the methods section -- the simulations exhibit sparse coding because the
model was built that way: in particular, it assumes leaky integrate-and-fire
(LIF) output neurons that can inhibit each other. In fact, this assumed
inhibition is the only reason it works as is; otherwise, many neurons would
probably fire simultaneously in an unstructured crossbar without a set
learning rule.
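For intuition, a minimal sketch of that winner-take-all mechanism (LIF units
with hard lateral inhibition; the function name and all parameter values are
hypothetical, not from the paper's model):

```python
def lif_wta_step(currents, v, v_thresh=1.0, leak=0.9):
    """One time step of leaky integrate-and-fire neurons with hard lateral
    inhibition: the neuron furthest past threshold fires and resets every
    membrane potential, so at most one neuron spikes per step and the
    population activity stays sparse. Parameters are illustrative."""
    for i in range(len(v)):
        v[i] = leak * v[i] + currents[i]   # leaky integration of input current
    spikes = [0] * len(v)
    winner = max(range(len(v)), key=lambda i: v[i])
    if v[winner] >= v_thresh:
        spikes[winner] = 1                 # the winner fires...
        v[:] = [0.0] * len(v)              # ...and inhibition resets everyone
    return spikes
```

Without the reset (the inhibition), every neuron whose potential crosses
threshold would fire, which is the unstructured simultaneous firing described
above.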

Nevertheless, an STDP-type learning rule can inspire interesting applications.
One of my co-advisors authored an article [1] which demonstrates completely
unsupervised classification on the MNIST challenge in a crossbar environment,
achieving 93% accuracy. Nothing like state-of-the-art CNNs, etc., but
considering this was done without labels, that's pretty impressive.

[1][http://www.ief.u-psud.fr/~querlioz/PDF/Querlioz_PIEEE2015.pd...](http://www.ief.u-psud.fr/~querlioz/PDF/Querlioz_PIEEE2015.pdf)

