
How will memristors change everything?  - zackham
http://highscalability.com/blog/2010/5/5/how-will-memristors-change-everything.html
======
btilly
My main concern with the hype around memristors is how quickly the technology
will scale. Yes, they have great promise. But if technical problems make them
4 times bigger and 8 times slower than existing technology, nobody will adopt
them. And without volume adoption, they won't have enough investment dollars
to be able to exceed Moore's Law. Which means that we won't care about them
until well after Moore's Law runs out of steam.

This is not a theoretical failure mode. It has happened before in the computer
industry. Multiple times.

For example, it is why Transmeta died. Their goal was to have a simple chip
that was so fast that they could emulate the x86 faster than the x86 could
run. They failed. However, one of the design goals was less heat (because heat
was a major scaling barrier), which translated into an emulated x86 chip with
much lower power draw. Given that they had a simpler architecture
and had already solved heat problems that were killing Intel and AMD they
hoped to iterate faster, and eventually win. But the investment asymmetry was
so huge that they couldn't execute on the plan. And Intel was able to reduce
their power enough to undercut Transmeta on the niche they had found, and
Transmeta couldn't survive. (Intel was aggressive because they understood the
strategy and the potential. Transmeta was always going to be something that
either wiped out the existing industry or died with a whimper, and Intel knew
which outcome they'd prefer.)

~~~
modeless
HP has already fabricated chips with memristors that are smaller, faster, and
lower power than Flash. The problem I see is write endurance. They match Flash
in this area as well, but that's not good enough. If memristors are to be
applied to anything but storage (e.g. DRAM, SRAM, logic, neural nets) they
will need to be many orders of magnitude more durable.

~~~
hristov
If that were the case, then HP should already be selling memristor memories to
compete with flash memories. Flash is already a huge market, HP does not have
to wait until memristors can compete with RAM.

~~~
modeless
Give them a little time; the paper was just published last month! Commercial
availability is expected in 2013 according to
<http://www.technologyreview.com/computing/25018/>

------
DanielBMarkham
This is an excellent article. I agree that memristors will change everything.

I'm a bit perplexed about the timeframe, though. I'd like to think 5 years,
but my gut says it's more like 12-15 years before it's all different. And it
will be _very_ different.

~~~
swernli
I agree about the timeframe. I think the issue is not when the technology will
be ready, but when the money will be ready to embrace the disruption. So the
tech will be ready in 5 years, but the products and paradigms won't make their
way into the wild (outside of academia) until 6 or 7 years after that.

~~~
DanielBMarkham
Yes, it's not a tech issue; it goes much deeper than that. It will mean
a major change in architectures and languages, and that's going to require
some killer applications to gain adoption. I imagine the tech will show up and
languish -- while the rest of us try to figure out how to get our heads around
it.

Even after adoption, there will be major friction from the corporate and
governmental sectors -- lots of money has been invested in doing things the
old way. Early adopter consumers in 12-15 years. Rest of the world? Perhaps a
good bit longer.

------
pohl
There's a presentation by R. Stanley Williams linked to in the article, and it
is well worth watching.

<http://www.youtube.com/watch?v=bKGhvKyjgLY#>

It has some tidbits I didn't expect. Teaser: "one of the guys in my group
has...built a compiler to compile C code using implication logic rather than
NAND logic and, interestingly enough, when we play with that the compiled code
we get is always more condensed...by about a factor of 3"
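The trick behind implication logic is easy to verify for yourself. Material implication (IMP), which pairs of memristors compute natively, together with a constant FALSE is functionally complete; here's a toy Python truth-table check (my own illustration, nothing to do with HP's actual compiler):

```python
# Material implication: a IMP b == (not a) or b.
# Showing that IMP plus a constant 0 reproduces NAND proves
# functional completeness, since NAND alone can build any circuit.

def imp(a, b):
    """Material implication on 0/1 values."""
    return int((not a) or b)

def nand_from_imp(a, b):
    # b IMP 0        == NOT b
    # a IMP (NOT b)  == (not a) or (not b)  == NAND(a, b)
    return imp(a, imp(b, 0))

# Verify against the ordinary NAND truth table.
for a in (0, 1):
    for b in (0, 1):
        assert nand_from_imp(a, b) == int(not (a and b))
print("IMP + FALSE reproduces NAND on all inputs")
```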

~~~
plesn
The basic circuit components are R, L, C, and now M. I wonder: how can one
build a transistor out of R, L, C?

------
javanix
_You can't just drop a memristor chip or RAM module into an existing system
and have it work. It will take a system redesign._

If these truly are viable storage devices, building an interface that adapts
reasonably easily to the current computing paradigm shouldn't be too
difficult. Traditional HDDs and SSDs both play nice with SATA, for instance,
despite using wildly disparate methods of storing bits.

~~~
randito
Better yet, configure a small portion of the memristors as IMP CPU compute
clusters to handle the work needed to adapt it.

------
paulbaumgart
_With memristors you can decide if you want some block to be memory, a
switching network, or logic. Williams claims that dynamically changing
memristors between memory and logic operations constitutes a new computing
paradigm enabling calculations to be performed in the same chips where data is
stored, rather than in a specialized central processing unit. Quite a
different picture than the Tower of Babel memory hierarchy that exists today._

That part is mind-blowing.

And I'm wondering, if this all works out, will the whole multi-core thing and
all the trouble that comes with it (from a software standpoint) be pushed back
for another decade?

And what implications would that have for programming languages? Seems like it
would mean JavaScript wouldn't be so much worse than Erlang after all (cf.
<http://news.ycombinator.com/item?id=1304599> ).

(Yeah, I know, it's a little bit of a stretch.)

~~~
jerf
Actually, it's the other way around; it sounds like they are talking about
having lots of processors literally physically spread amongst the data. That's
sort of more Erlang-y, although it isn't actually Erlang-y either. Erlang
pretty deeply assumes homogeneity of the computational units it is accessing.
(It does allow you to start a process on a given node, and that's a start, but
the VM's strategies for memory management and copying on a given node would
have to change quite substantially to cope better. Or map one node to each
processor-in-the-memory in which case I'd expect too much copying between
nodes to occur without a _lot_ of care taken by the programmer, sort of
fighting the language the whole time. Erlang lets you control where processes
run, but not directly control where data lives beyond that.) Basically this
future seems to be NUMA writ large, and while an "Erlang-inspired" language
may be the way forward, Erlang itself would not make this transition without
substantial changes. (Might be feasible, though.)

But Javascript as it stands today is even worse off, as are most languages. We
don't really have much that could cope with this right now in a clean way.
(Anyone know a language that really handles NUMA well? And I do mean a
language, not a library for C or something. Something slick, not something
that merely "permits" working with it.)

~~~
btilly
_Anyone know a language that really handles NUMA well?_

Depending on what you mean by well, any language that offers fork() as a
primitive does work. The trick is that this allows you to create 2 copies of a
process that share little enough that a scheduler can safely move them from
one machine to another. By contrast, with threading you have the problem that
the scheduler cannot know when it is safe to schedule two threads on distant
CPUs.

Of course this only lets you scale embarrassingly parallel problems.
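A minimal sketch of the fork() point, in Python (os.fork is POSIX-only, and the half-and-half split of work is my own toy example):

```python
import os
import sys

# After fork(), parent and child share no mutable state, so a
# scheduler is free to place them on distant CPUs or NUMA nodes
# without worrying about cache-line ping-pong between them.

def work(chunk):
    return sum(x * x for x in chunk)

data = list(range(100))
pid = os.fork()
if pid == 0:
    # Child: process the second half independently, then exit.
    print("child:", work(data[50:]))    # child: 287925
    sys.exit(0)
else:
    # Parent: process the first half, then reap the child.
    print("parent:", work(data[:50]))   # parent: 40425
    os.waitpid(pid, 0)
```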

For another approach that could be made to work reasonably well, try Go. Its
central idea is that you pass messages to the processing job, and not vice
versa. Figuring out how to schedule threads is a complex task, but run-time
scheduling heuristics should do a reasonable job of that for most problems.
(Writing algorithms that reliably avoid having bottlenecks will be an
interesting challenge.)
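The "pass messages to the processing job" idea can be sketched, by analogy, with a task queue in Python (this is my analogy, not Go code; in Go the queue's role is played by channels and the workers by goroutines):

```python
import queue
import threading

# The inversion: instead of moving a thread to the data, you send
# messages (tasks) to whichever worker is free, and let the runtime
# decide where each worker actually executes.

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        item = tasks.get()
        if item is None:        # sentinel: shut this worker down
            break
        results.put(item * item)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

for n in range(10):             # submit work as messages
    tasks.put(n)
for _ in threads:               # one sentinel per worker
    tasks.put(None)
for t in threads:
    t.join()

total = sum(results.get() for _ in range(10))
print(total)                    # sum of squares 0..9 -> 285
```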

And a third approach to watch is Parallel Haskell. Because of the guarantees
it offers, the opportunities to rewrite the program at run-time based on what
is appropriate are extremely interesting.

------
10ren
Positronic? (The brains of Asimov's robots.)

CPU with data is a bit like objects - except asynchronous, with true message
passing. Like Smalltalk or Erlang (or web services, for that matter).

Brain images, with parts lighting up depending on how active they are, suggest
that much of our brains aren't being used most of the time, but come online as
needed. It's as if one part calls another part, except the "call" doesn't
block. I'm not being very coherent here.

------
Nwallins
I hereby suggest that we pronounce these things as memri-stors rather than
mem-ristors.

~~~
proee
It may sound better, but you're slaughtering the root of the word, which is
"resistor". Memristors are useful as a storage element, but that's only one
possible use, not the fundamental nature of the device itself.

~~~
Nwallins
Absolutely. The comment was very much tongue-in-cheek. :)

------
sown
Wow. I wish I had been as thorough and critical in my thinking as this guy
when I asked this question.

It's so hard to guess what this means, but I wonder if I should start writing
a memristor VM just to see what could be done.

Even if they are here in 5 years, I bet it'll be longer than that before we
really, truly know what to do with them.

~~~
mortenjorck
Yes! A VM is exactly what we need right now for hackers to start wrapping
their heads around the memristor paradigm. Even if it's only practical for
very lightweight proofs-of-concept, it would be a major step for mindshare.

------
tlb
The theoretical density they give neglects some overhead. If you divide the
area of a chip by the size of a transistor, you should get 100 billion
transistors on a chip, but in fact you can only get about 1 billion. The rest
is overhead: wiring, power, isolation, etc. Probably a similar overhead will
apply to memristors.
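The overhead factor is easy to reproduce with a back-of-the-envelope calculation (the 1 cm² die and 32 nm feature size are my assumptions for a current-generation chip):

```python
# Back-of-the-envelope: theoretical vs. achieved transistor counts.
chip_area_nm2 = 1e7 * 1e7            # a 1 cm x 1 cm die, in nm^2
feature_nm = 32                      # assumed current process node
transistor_area_nm2 = feature_nm ** 2

theoretical = chip_area_nm2 / transistor_area_nm2
print(f"{theoretical:.1e}")          # ~1e11, i.e. ~100 billion

# Shipping chips hold ~1 billion transistors, so wiring, power,
# and isolation eat roughly a factor of 100.
overhead_factor = theoretical / 1e9
print(round(overhead_factor))        # ~98
```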

Also, in the 5+ years it takes to make them practical and reliable,
transistors will make progress too. So it's unfair to compare the theoretical
density of a new technology to the achieved density of an existing technology.

------
stcredzero
For one thing, orthogonal persistence will become the norm. A computer that
needs to boot up will become a quaint thing of the past. Capability systems
might become widespread because of this, resulting in finer grained security
throughout the entire computing world. Small, cheap, but voluminous and low
power memory stores will allow for greatly increased capabilities for
computerized tags of physical objects. Vernor Vinge's localizers or Bruce
Schneier's Internet of Things could come about because of this technology.

------
jacquesm
I've lost the link, but someone here posted a PDF on propagation networks a
while ago. I can see that being married to this technology somehow and
becoming the ideal computing fabric.

I'll dig around a bit to see if I can find the link.

~~~
DanielBMarkham
Yeah, this is true functional programming writ large.

The data, which is usually huge in comparison to the code, moves around slowly
and transparently. The programming goes to the data. I'm willing to bet the
next step would be embedded memristor blocks in a human body, providing
multiple petabytes of both storage and processing capability (and perhaps with
some sort of primitive neural interface).

~~~
jacquesm
> I'm willing to bet the next step would be embedded memristor blocks in a
> human body,

I think that's more than just a little while off, that sort of enhancement, if
it ever comes to pass.

Ok, finally found it, posted it here:
<http://news.ycombinator.com/item?id=1322135>

~~~
DanielBMarkham
Well we're already embedding RFIDs in people, so chips with limited radio
potential and some rudimentary processing power are a done deal.

The question becomes how these embedded processors will change over time. I
don't think it's too unreasonable to imagine the memristor tech being used and
adoption rates going up. The neural interface, of course, was extremely
speculative. But the rest of it seems trivial. At least to me.

~~~
pyre
My thought would be that heat becomes an issue. RFID chips are much lower
power.

~~~
DanielBMarkham
Don't know.

Should be easy enough to do the math.

I would assume some sort of bio-friendly heat sink, perhaps a silver lace
spread through the abdomen or something.

Fortunately getting rid of heat is already well understood.

------
tocomment
Would it be worth buying HP stock? They could own the next generation of
computers.

------
Groxx
_This allows multiple petabits of memory (1 petabit = 178TB) to be addressed
in one square centimeter of space._

Given the sugar-cube reference earlier, and that nothing's 2D, much less tons
of stacked circuits, I assume they mean one _cubic_ centimeter. Taking that
assumption, I point out this problem with that storage claim:

Heat. Good luck dissipating that little cube. Heat's one of the biggest
reasons we don't have _way_ higher power machines now, you just can't
continually stack things together.

_goes back to reading_ ... very interesting article, though. I'll have to watch the video too. Homework first, though :|

~~~
Tuna-Fish
Memristors are completely powered off when not accessed, making heat scale
entirely with access speed, not chip size.

Also, memristors are not stacked, they are deposited on a surface a layer at a
time. This is a meaningful difference -- stacked chips are much harder to
manufacture, and have much worse thermal conductivity than what is essentially
a solid chunk of TiO2.

------
izendejas
Even if things don't pan out as Stanley Williams says, there will be enough
innovation atop the technology to make many things a reality.

I have known about the memristor from the beginning--before all the "hype"--as
I've interned at HP Labs, and I drool at the thought of their vision for
intelligent systems with in-memory processing. I mentioned this in another
post, but I predict that memristors will make computer vision possible. This
means autonomous vehicles, better airport security, etc. Similarly, anything
involving sensors and machine learning will see unimaginable progress.

Nanotech is real, folks. The question isn't "if?", it's "when?" It will go
through many iterations, but it's now real.

------
pohl
Could someone explain the author's conversion of petabits to terabytes?

 _(1 petabit = 178TB)_

<http://www.wolframalpha.com/input/?i=petabits+to+terabytes>

~~~
reitzensteinm
Wild guess, 1 petabit = 0.125 petabytes * 1024 = 128 TB, with a typo making
the 2 a 7.
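For what it's worth, the arithmetic under both prefix conventions (a quick sketch; neither gives 178):

```python
# 1 petabit to terabytes, under both prefix conventions.
bits = 10 ** 15                          # decimal petabit
decimal_tb = bits / 8 / 10 ** 12         # -> 125.0 TB

bits_binary = 2 ** 50                    # pebibit, if "peta" meant 2^50
binary_tib = bits_binary / 8 / 2 ** 40   # -> 128.0 TiB

print(decimal_tb, binary_tib)            # 125.0 128.0 -- not 178 either way
```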

------
sparky
Typo in title (memristors)

~~~
zackham
Oops, thanks for the heads up, posted in a hurry this morning.

Nice recap of details I had read a bit of a few years ago... also,
highscalability is a great blog that is likely of interest to many HNers.

------
anonymousDan
Can anyone explain at a high level how these things work? Even the wikipedia
article loses me fairly quickly. From what I understand, they were predicted
to exist based on a theory about the relationship between capacitors,
inductors, and resistors. What variables would be of interest in this
relationship?

------
windsurfer
Imagine the security rift this will create between old-style CPUs and new.
Memristors would tear through current encryption like a knife through butter.

