
Memristor – The missing circuit element (1971) [pdf] - dayve
http://www.cpmt.org/scv/meetings/chua.pdf
======
smalley
Oh man, if you like that paper do I have the list for you. We used this list
for literature review while researching HP's memristors:
[http://webee.technion.ac.il/people/skva/new_full_reference_l...](http://webee.technion.ac.il/people/skva/new_full_reference_list.htm).
It covers a pretty wide range of memristor-related topics.

~~~
deepnotderp
Beautiful!

I have a question for you: people have been claiming that memristors are
right around the corner for almost half a century at this point. What do you
think about the viability of the memristor going forward?

~~~
smalley
In the enterprise storage market I think they're probably quite close to being
a volume product. The versions I was working with were from a few years back
and admittedly used a different chemistry than today's devices. The issue I
think they'll have in the short run is that they will be quite expensive
relative to NAND. I don't think you're going to have a memristor-based disk in
your home PC for quite a while. I do know that there are already some space
systems that use memristors and ReRAM. The really novel applications (as in
not just big hunks of fast memory) are probably much further out, if they even
go anywhere.

~~~
kovrik
So do memristors exist or not? Wiki claims they are still 'hypothetical'.

~~~
smalley
They 100% exist. We had a wafer's worth of cut-up die of the TiO_2-x type
devices all the way back in 2007. HP officially published results of a physical
device as well
([http://www.hpl.hp.com/news/2008/apr-jun/memristor.html](http://www.hpl.hp.com/news/2008/apr-jun/memristor.html))

~~~
aidenn0
My understanding of the TiO2-x devices is that they are strongly non-linear,
which makes them very useful for binary storage, but less so for analogue
memristor-y uses.

~~~
elcritch
Wouldn't non-linear memristors make pretty ideal neural nets? Most activation
functions are modeled as non-linear functions anyway.

Edit: It appears some HP-associated labs had success with just such a chip!
Surprised there hasn't been more success in that field.

[https://www.technologyreview.com/s/537211/a-better-way-to-
bu...](https://www.technologyreview.com/s/537211/a-better-way-to-build-brain-
inspired-chips/)
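
For what it's worth, here's a toy sketch (all sizes and values invented, not
tied to any particular device or to the chip in that article) of how memristor
crossbars are usually modeled for neural nets: the conductances store the
weights, Ohm's and Kirchhoff's laws do the multiply-accumulate in the analog
domain, and a non-linearity is applied to the summed currents.

    import numpy as np

    # Toy model of a memristor crossbar acting as one neural-net layer.
    # All sizes and values are made up for illustration; real devices have
    # limited conductance ranges, noise, and wiring parasitics.
    rng = np.random.default_rng(0)

    n_in, n_out = 8, 4
    G = rng.uniform(1e-6, 1e-4, size=(n_out, n_in))  # conductances = "weights" (siemens)
    v = rng.uniform(0.0, 0.2, size=n_in)             # input voltages on the rows

    # Ohm's law + Kirchhoff's current law: each output current is a weighted
    # sum of the input voltages, so the multiply-accumulate happens in the
    # analog domain.
    i_out = G @ v

    # A non-linear activation is then applied to the summed currents (here in
    # software; in hardware it would be a separate circuit stage).
    activations = np.tanh(i_out / i_out.max())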

~~~
deepnotderp
Because no one wants neuromorphic hardware; we want 16-bit hardware that lets
us do math. Contrary to popular opinion, neural networks are more about math
than biology. Having 1-bit or analog weights/activations is a royal pain.

~~~
elcritch
That's a good point. Most of our current CS/hardware tech and experience relies
on digital computing. Makes me wonder if, instead of using neuromorphic
hardware as the deployment vehicle, it'd make sense to use it as a specialized
high-speed trainer. Most of these research projects, by contrast, seem to
assume they'd be deployed as endpoint/client devices.

The alternative would be to use the annoying/tedious analog hardware to
(possibly) significantly reduce the time/energy cost of training a neural net,
then have special equipment to measure the learned weights and convert them to
16-bit weights that'd be easier to deploy.
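
Roughly, that read-out step might look something like the sketch below. The
linear conductance-to-weight mapping is just an assumption for illustration;
real devices would need calibration.

    import numpy as np

    # Hypothetical conductances read back from the analog trainer (siemens).
    # Both the values and the linear mapping below are made up.
    g = np.random.uniform(1e-6, 1e-4, size=(256, 128))

    w_lo, w_hi = -1.0, 1.0          # assumed target weight range
    g_lo, g_hi = g.min(), g.max()

    # Rescale conductances into the weight range, then store as 16-bit floats
    # for deployment on conventional digital hardware.
    weights_fp16 = (((g - g_lo) / (g_hi - g_lo)) * (w_hi - w_lo) + w_lo).astype(np.float16)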

Facebook/Google are showing that custom chips for deep learning make economic
sense. That market force could tip the scales toward specialized neuromorphic
training chips that take on the costs & complexities of analog circuitry but
offset them with gains in efficiency and/or speed, while deployment still
happens via traditional digital chips.

Of course, this path relies on the assumption that an analog circuit could be
much faster or more energy efficient. That's not too far-fetched, though, given
current training times and energy costs compared with biological systems.

[edit: grammar]

~~~
deepnotderp
Therein lies your problem. I've been telling anyone who'll listen that _data
movement_, even _on chip_, is our energy hog, not computation or even reading
from the memory banks. How does analog hardware deal with that?

~~~
elcritch
In the case I described above, the data (i.e. the trained weights) only needs
to be read out once, at the end of training. It would require more expensive
instrumentation of the memristors, which would be a poor fit for a
general-purpose computation unit.

However, comparing the energy needed to modify the network's weights in situ
via analog signals against the energy needed to shuffle that same data back and
forth in digital form to simulate the analog process suggests a viable use
case. A quick mental check leans toward back-propagation being "cheaper" in the
analog process, since the data doesn't need to be moved: the calculation
happens as part of the same signal propagation, via the properties of the
analog circuit.

In other words, for this particular case it's cheaper to move the computation
"units" to the data than to move the data to the computation engine. Performing
the training digitally requires shuffling all the weights for _each_ training
iteration, and that process is expensive. Luckily, the inherent nature of the
back-propagation algorithm adapts to the "sloppiness" of analog circuitry.
Transferring the final weights to digital form might require a final, light
round of post-processing training to remove particulars of the underlying
analog circuits.
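
Back-of-envelope, with completely made-up numbers, the asymmetry looks
something like this:

    # Rough comparison of weight movement, all numbers hypothetical.
    n_weights  = 10_000_000      # model size
    bytes_each = 2               # 16-bit weights
    iterations = 100_000         # training iterations

    # Digital training: the weights are shuttled between memory and the
    # compute units on every iteration.
    digital_bytes_moved = n_weights * bytes_each * iterations

    # Analog in-situ training: the weights stay put in the memristor array
    # and are only read out once, at the end.
    analog_bytes_moved = n_weights * bytes_each

    print(digital_bytes_moved / analog_bytes_moved)  # ratio is just `iterations`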

Replicating and distributing the final trained model would then be efficient,
since applying the neural net to get an answer only requires shuffling the
weight data once. Running the trained model by standard digital means should be
cheaper/easier for all the reasons you mentioned previously.

[edit: grammar & clarity]

------
cr0sh
There's another such element, with a confusingly similar name:

[https://en.wikipedia.org/wiki/Memistor](https://en.wikipedia.org/wiki/Memistor)

It was developed by Bernard Widrow in 1960, whereas the memristor was
envisioned and named in 1971 - but not found to actually exist until 2008.

The memistor was most "famously" used to develop a couple of artificial
intelligence (neural network) systems, ADALINE and MADALINE:

[https://en.wikipedia.org/wiki/ADALINE](https://en.wikipedia.org/wiki/ADALINE)

Note that if you try to google "memistor", you'll maddeningly get suggestions
and results for "memristors" instead, even when you tell Google to show results
for memistors only; this makes finding information about them online difficult.
It's best to add "adaline", "madaline", and/or "widrow" to your search.

The main difference between the two devices is that the memistor is a
three-terminal device, whereas the memristor is a two-terminal device. You can
think of the memistor as a "memory transistor" and the memristor as a "memory
resistor". This is a gross simplification, of course.

One other interesting thing about ADALINE is how simple the memistors are to
construct; you can effectively re-create ADALINE at home, as Widrow's paper
shows:

[http://www-isl.stanford.edu/~widrow/papers/t1960anadaptive.p...](http://www-
isl.stanford.edu/~widrow/papers/t1960anadaptive.pdf)

On the topic of memristors - they can be DIY'd as well:

[http://sparkbangbuzz.com/memristor/memristor.htm](http://sparkbangbuzz.com/memristor/memristor.htm)

Finally - if you're interested in this kind of thing (that is, implementing
hardware analogs of brain functionality), google "neuromorphic computing"...

------
throw_away
Fun fact: Leon Chua is Amy Chua's (of Tiger Mom fame) father.

------
danmaz74
Anybody know what happened to HP's "memristor"? Was it only vaporware?

~~~
indolering
Still in the lab; "the machine" switched to using DRAM until memristors get
cheap enough to scale. It's really mind-boggling how much we demand of new
technologies; we've refined magnetic storage over many decades. These things
take at least a decade to go from laboratory proof-of-existence to commercially
competitive product.

~~~
Quequau
As far as I can tell, HP's memristor isn't the only storage-level,
random-access, non-volatile memory technology stuck in an R&D pipeline right
now. Eventually (or perhaps hopefully) they'll make it to market, and I think
they will make for a more interesting state of computer hardware than there has
been in some time.

~~~
indolering
We are asking a brand-new technology to catch up with stuff that has had an
entire industry optimizing it for decades. It's going to take decades for a
single company to get things up to speed.

------
acd
Brains have the equivalent of memristors. Changing a memristor's value gets
harder the more charge it has accumulated; that is why spaced repetition works
as a learning and memory technique.

When we sleep, the brain organizes memories and decides what to keep or forget.
Artificial intelligence probably needs to emulate sleep.

------
exabrial
Ok, I get that capacitors and inductors are opposites, but I've yet to
understand the significance of why the memristor is the opposite of a
resistor... Chalk it up to my primitive knowledge of analog design.

Couldn't there theoretically be a mempacitor and a memductor as well?

~~~
cycrutchfield
This diagram might help you understand:
[https://en.wikipedia.org/wiki/Memristor#/media/File:Two-
term...](https://en.wikipedia.org/wiki/Memristor#/media/File:Two-terminal_non-
linear_circuit_elements.svg)
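
The short version, in symbols (this is just the symmetry argument from Chua's
paper, nothing device-specific): each element ties together two of the four
basic circuit variables (voltage v, current i, charge q, flux linkage φ), and
the memristor fills in the one pairing the other three leave open:

    resistor:   dv = R di    (voltage and current)
    capacitor:  dq = C dv    (charge and voltage)
    inductor:   dφ = L di    (flux and current)
    memristor:  dφ = M dq    (flux and charge)

And yes, "memcapacitive" and "meminductive" elements have since been proposed
along the same lines, though I'd defer to the literature on how far those have
gotten in practice.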

------
Black-Plaid
Here's a nice talk about memristors:

[https://www.youtube.com/watch?v=bKGhvKyjgLY](https://www.youtube.com/watch?v=bKGhvKyjgLY)

