
Reservoir computing using dynamic memristors for temporal information processing - lainon
https://www.nature.com/articles/s41467-017-02337-y
======
cs702
For deep learning and ML practitioners who are unfamiliar with this, the big
deal about memristors is that they have _memory_ and therefore produce
different readouts for different _input sequences over time_ \-- out of the
box, without requiring any training.
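
To make the "memory" point concrete, here's a toy sketch (my own
simplification, not the device model from the paper): the state decays
between inputs, so the readout depends on _when_ voltages arrive, not just
their values.

```python
# Toy illustration (NOT the paper's device model): a dynamic memristor as a
# leaky state variable whose conductance depends on the *history* of applied
# voltages. Two sequences with the same values in different orders therefore
# produce different readouts, with no training involved.

def memristor_readout(voltages, w=0.1, leak=0.7, gain=0.3):
    """Return the final conductance after applying a voltage sequence.
    The leak/gain constants are made up for illustration."""
    for v in voltages:
        # State decays over time (short-term memory) and is pushed up
        # by the applied voltage.
        w = leak * w + gain * v
    return w

print(memristor_readout([1, 0, 0]))  # pulse early -> state mostly decayed
print(memristor_readout([0, 0, 1]))  # pulse late  -> larger final readout
```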

These researchers took a network of memristors, called the "reservoir," added
a linear layer with a softmax on top of the reservoir, trained this hybrid
network on a lower-resolution variant of MNIST (feeding pixel values over
time, as varying voltages), and achieved classification accuracy superior to
that of a tiny neural net despite using only 1/90th as many neurons.[1]

Note that they only trained the added layer; they did NOT have to train the
reservoir. Panel (a) of this figure shows a simplified diagram of the
reservoir + added-layer architecture:
[https://www.nature.com/articles/s41467-017-02337-y/figures/1](https://www.nature.com/articles/s41467-017-02337-y/figures/1)
\-- only the matrix Θ had to be learned.
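
As a software analogue (a minimal sketch, assuming a standard echo-state-style
reservoir in place of the memristor array, with toy dimensions I made up): the
reservoir weights below are fixed and random, and only the linear softmax
readout playing the role of Θ is fit.

```python
# Minimal sketch of the architecture: a fixed random recurrent reservoir
# turns each input sequence into a state vector; only the linear softmax
# readout (the role of Θ in the paper's Fig. 1a) is trained.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_IN, N_RES = 20, 100                               # toy dimensions
W_in = rng.normal(scale=0.5, size=(N_RES, N_IN))    # fixed, never trained
W_res = rng.normal(size=(N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # scale for stable dynamics

def reservoir_state(sequence):
    """Feed a (T, N_IN) sequence through the fixed reservoir; return final state."""
    x = np.zeros(N_RES)
    for u in sequence:
        x = np.tanh(W_in @ u + W_res @ x)
    return x

def train_readout(sequences, labels):
    """Fit ONLY the softmax readout; the reservoir weights above stay fixed."""
    states = np.stack([reservoir_state(s) for s in sequences])
    return LogisticRegression(max_iter=1000).fit(states, labels)
```

The point of the hardware version is that the memristor array does the
`reservoir_state` part physically, leaving only the cheap linear fit to be
done in training.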

The potential here, over time, is for highly scalable hardware components
that can be plugged into neural net architectures as and where needed, giving
them the ability to recognize and work with sequences out of the box.

PS. For clarity's sake, I'm ignoring a lot of important details and playing
fast and loose with language. If you're really curious about this, I highly
recommend you read at least the abstract and introduction of the Nature paper,
which is well-written and straightforward to follow.

[1] [https://news.engin.umich.edu/2017/12/new-quick-learning-neur...](https://news.engin.umich.edu/2017/12/new-quick-learning-neural-network-powered-by-memristors/)

------
Seanny123
Whelp. I was super wrong. I had predicted I wouldn't see memristor-based
neuromorphic hardware in the next 10 years.

That being said, I feel like they're over-selling the capabilities of
reservoir computing. Yeah, [you can stack
them](https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2016-175.pdf)
and get pretty high accuracy on a few tasks, but it's still not competitive
with traditional Deep Learning.
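
(For anyone unfamiliar, "stacking" here means deep reservoir computing, as in
the linked ESANN paper: each fixed reservoir processes the state trajectory of
the one below it, and only the top readout is trained. A rough self-contained
sketch of the idea, my own toy version rather than that paper's exact setup:)

```python
# Rough sketch of "stacking" reservoirs: layer 2 reads layer 1's state
# trajectory as its input sequence; all reservoir weights stay fixed.

import numpy as np

rng = np.random.default_rng(1)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    W_in = rng.normal(scale=0.5, size=(n_res, n_in))
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run(reservoir, sequence):
    """Return the full state trajectory (T, n_res) for an input sequence."""
    W_in, W = reservoir
    x = np.zeros(W.shape[0])
    states = []
    for u in sequence:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x)
    return np.array(states)

layer1 = make_reservoir(n_in=10, n_res=50)
layer2 = make_reservoir(n_in=50, n_res=50)

seq = rng.normal(size=(30, 10))               # toy (T=30, 10-dim) sequence
features = run(layer2, run(layer1, seq))[-1]  # top-layer state -> trained readout
```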

~~~
p1esk
[https://www.nature.com/articles/nature14441?WT.ec_id=NATURE-...](https://www.nature.com/articles/nature14441?WT.ec_id=NATURE-20150507)

------
dnewms
For those without a technical background:
[https://news.engin.umich.edu/2017/12/new-quick-learning-neur...](https://news.engin.umich.edu/2017/12/new-quick-learning-neural-network-powered-by-memristors/)

