
HP plans to release first memristor, alternative to flash and SSDs in 18 months - peritpatrio
http://nextbigfuture.com/2011/10/hp-plans-to-release-first-memristor.html
======
uniclaude
This article is mostly made of quotes from an EEtimes article[1], adds no
information or value to it, and is (to me at least) less readable.

[1]: <http://eetimes.com/electronics-news/4229171/HP-Hynix-to-launch-memristor-memory-2013?pageNumber=0>

~~~
hornokplease
EE Times article submitted here: <http://news.ycombinator.com/item?id=3080963>

~~~
modeless
And here: <http://news.ycombinator.com/item?id=3081273>

------
pjscott
The really exciting part here, to me, is the idea of fabricating large amounts
of nonvolatile memory on top of a CPU. Modern processors already spend a huge
amount of their time waiting on memory, and a great amount of power trying to
hide that memory latency. If these guys can lower memory latency dramatically
-- and it looks like they can -- computers would get a lot faster.

~~~
TheEzEzz
This would be a huge boon for scientific computing. The data is often huge,
but all of it must be constantly pushed through the CPU every iteration.

~~~
montecarl
As the number of cores per socket steadily increases, it is becoming more
difficult to keep all of the cores on a single computer operating efficiently.
A higher clock rate per core, or more cores per socket, is not very useful if
they are all waiting on memory.

~~~
pjscott
Now imagine a large amount of memory connected to the cores through tiny metal
wires on the chip itself. Now imagine that it's split into a bunch of small
independent memories with enormous bandwidth, with data automatically
migrating between them to cut down on wire delays. Give it a few years, and
this could be reality.

------
plasma
Watch HP research's Stanley Williams describe the memristor and what they are
working towards in more detail on YouTube:

[http://www.youtube.com/watch?v=bKGhvKyjgLY&sns=em](http://www.youtube.com/watch?v=bKGhvKyjgLY&sns=em)

~~~
bluehavana
The end of the hard drive and the move to persistent RAM should be really
interesting. It would completely change how we do things.

Also, application specific processing... writing instruction sets for each
application will be really interesting. Cell processing et al will be a thing
of the past.

~~~
georgemcbay
All of those years spent trying to educate my family and non-technical friends
on the difference between RAM and HDD and explaining how it is confusing when
they tell me they couldn't install a program because their laptop was "out of
memory".... wasted!

------
lordmatty
Great to see some positive press on HP. Hope HP can innovate their way out of
the current slump.

------
calebmpeterson
So how would this affect what languages we use? How different data structures
perform? Will this amplify the end of the free lunch?

I'm especially curious how Rich Hickey's approach to state, time, and identity
in a functional language fits in.

------
rbanffy
Here we have HP on the verge of profoundly transforming our industry and a CEO
(who should be aware of this) wanting to turn the company into an SAP, because
that's the future...

I hope Meg Whitman does better than Leo Apotheker. Shouldn't be hard.

~~~
vnchr
But will Meg take it in the right direction either? Her experience is
consumer-facing... I hope there is an empowered VP or 2 who will make that
initiative a success.

~~~
rbanffy
I think the greatest impact of the memristor is in the consumer space. When
you have a 1000-fold (or million-fold) quantitative change you are bound to
have a qualitative change as well...

What would a computer with petabytes of non-volatile RAM look like? How would
you package software for it? Will it reboot from time to time? What does a
reboot mean when your memory is non-volatile? I have my hunches, but the
effort to find answers to my few questions is dwarfed by the effort to find
out what the relevant questions will be in this brave new world and if we can
answer them. Or even understand them.

------
bane
Enlarge the data bus and... are we finally finding a way out of the von
Neumann architecture?

~~~
palish
Is the Von Neumann architecture related to the data bus?

~~~
PurplePanda
It operates by pushing data back and forth across it. In "Can Programming Be
Liberated from the von Neumann Style?", Backus argued that the "von Neumann
bottleneck" of not being able to access program and data at the same time
ought to be done away with somehow:
<http://www.stanford.edu/class/cs242/readings/backus.pdf>

~~~
palish
I... wish I understood. I'm just a lowly Blub programmer.

An executable is often less than 1MB, and certainly always less than 100MB. In
contrast, a 14GB video game still loads pretty quickly, and the data that goes
across the bus per frame is often a couple orders of magnitude larger than the
program executable itself.

I know I'm missing something obvious...

~~~
InclinedPlane
Modern computers are extremely fast, and can still do tremendous amounts of
computation despite the limitations of the Von Neumann architecture. But make
no mistake, the Von Neumann bottleneck is a serious and fundamental problem.
The CPU has to spend a lot of effort shuttling data back and forth. Worse yet,
it has to spend a lot of time _waiting_ on data (swapping to/from disk, for
example). Even when your CPU is at 100% utilization, the vast majority of
cycles are spent doing nothing but waiting. That has huge ramifications,
affecting everything from performance to power efficiency.

Consider a typical snippet of a CPU's life. The next instruction is read from
memory; it tells the CPU to move a value from memory into a register. The next
instruction after that is read from memory; it tells the CPU to move a
different value from memory into a different register. The next instruction is
read from memory; it tells the CPU to do some operation with the values in
those two registers. The next instruction is read from memory; it tells the
CPU to test whether the result of the previous instruction is 0, and if so to
jump to a specific address. Since it was, the CPU fetches the next
instruction from that location in memory. And so on. It only takes following
this process for a little while to see how tedious it is. We've managed to
improve it significantly by adding fast local memory caches to the CPU, but
even if the memory operated at the speed of the CPU it would still be
inefficient.
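The fetch-execute loop above can be sketched as a toy simulation. This is a
hedged illustration only: the opcode names, the 100-cycle `MEM_LATENCY`
figure, and the 1-cycle register-op cost are assumptions, not real hardware
numbers.

```python
# Toy model of the fetch/execute loop described above. Every instruction
# fetch and every operand load is a round trip across the memory bus.
MEM_LATENCY = 100  # assumed: ~100 CPU cycles per uncached memory access

program = [
    ("LOAD", "r1", 0x10),          # fetch instruction, then read operand from memory
    ("LOAD", "r2", 0x14),          # another fetch + another memory read
    ("ADD",  "r3", ("r1", "r2")),  # fetch instruction; operands already in registers
    ("JZ",   "r3", 0x00),          # fetch instruction; test register, maybe jump
]

cycles = 0
for op, _, _src in program:
    cycles += MEM_LATENCY          # fetching the instruction itself
    if op == "LOAD":
        cycles += MEM_LATENCY      # fetching the operand from memory
    else:
        cycles += 1                # register-only work is ~1 cycle

print(cycles)  # 602 -- memory traffic accounts for 600 of the 602 cycles
```

Even in this crude model, the arithmetic itself is noise; nearly every cycle
is spent moving instructions and data over the bus.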

Now, imagine if instead of megabytes of low latency cache you have gigabytes.
Now, imagine if instead of having a low latency cache at all the processor is
directly wired to the RAM as if the RAM was just a large collection of
registers. Instead of "fetch me X, fetch me Y, add X + Y, put the result back
to Z" all of that could be a single CPU instruction. Moreover, it would be
far, far rarer for the CPU to be waiting for data merely due to local latency.
This would improve the effective computing power and power efficiency of CPUs
by several orders of magnitude. The impact it would have on computing is truly
mind boggling.

Let me express it in a different way. Imagine if your cell phone had the same
raw computing power as a top of the line GPU does today, with the same battery
life and with the same transistor count and clock speed on the CPU, just with
a different architecture and different RAM.

~~~
palish
I think... Maybe... I'm getting it. Kind of. Probably not.

By wiring the CPU directly to the RAM, to use your metaphor, then we can
entirely bypass the ASM stage of "a program" (but then what is a program if
not a sequence of instructions?) and therefore we may better predict which
data our program needs at runtime? Thereby caching that data more effectively
than the random access patterns of Von Neumann?

Basically, instead of "accessing a pointer causes its data to be cached into
L1", it would be... Well, I have no idea. Something else?

Here are my points of confusion, sorry:

1) in this non-Neumann paradigm, there will still be "data", in the
traditional sense, right? (Or is "everything a program"?)

2) then... There will surely still be "caches" for that data, yeah? (Or is
that what I'm missing? But without caches, I don't understand how it could be
faster.)

But yeah, I don't want to waste anyone's time... certainly not anyone of your
guys' caliber. Don't feel compelled/obligated to reply or anything. :)

~~~
InclinedPlane
Nope, still missing it.

When you wire the RAM to the CPU you don't need a cache. Imagine you have a
billion or even a trillion registers, or more. That's a non-Von Neumann
architecture. You're not shuffling data around on buses, the data is directly
connected to the CPU.

Look at the example I gave again. Consider a simple addition command. The
first CPU instruction says "take the word at this memory address, and move it
to a register", the second does the same with a different address, the third
adds the two values in the registers, the fourth then puts the result back in
some other memory location. But what if there's no difference between the
memory and registers? Instead you just have one instruction that says: add the
values at these two locations, put the result at this other location. Now
you've replaced 4 clock ticks with one clock tick. More than that, you save
however many clock ticks it would have taken on average for the data to get to
/ from main memory (sometimes cached, sometimes not). Such an architecture
would mean that you only have to wait on things you really have to wait on,
like network and device latency, etc.

The structure of programs need not be terribly different per se; it can still
be a sequence of instructions in memory. There are other non-Von Neumann
architectures that would work differently (such as neural networks), but
those are even more complicated.
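The contrast described above can be sketched in a few lines. This is a
hedged toy model with an assumed cost of one tick per instruction, ignoring
fetch latency: a register machine needs four instructions for `Z = X + Y`,
while a hypothetical memory-to-memory machine fuses them into one.

```python
def register_machine(mem):
    """Von Neumann style: load, load, add, store -- four ticks for Z = X + Y."""
    regs, ticks = {}, 0
    regs["r1"] = mem["X"]; ticks += 1              # LOAD r1, X
    regs["r2"] = mem["Y"]; ticks += 1              # LOAD r2, Y
    regs["r3"] = regs["r1"] + regs["r2"]; ticks += 1  # ADD r3, r1, r2
    mem["Z"] = regs["r3"]; ticks += 1              # STORE Z, r3
    return ticks

def memory_machine(mem):
    """Hypothetical ISA where memory cells act as registers: one instruction."""
    mem["Z"] = mem["X"] + mem["Y"]                 # ADD Z, X, Y
    return 1

mem = {"X": 2, "Y": 3, "Z": 0}
print(register_machine(dict(mem)))  # 4
print(memory_machine(dict(mem)))    # 1
```

And that 4:1 ratio understates the gap, since in the register machine each of
the four instructions also pays its own fetch and operand latency.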

~~~
hetman
Except addressing that amount of memory is still going to need a bus, it
doesn't matter if the memory is sitting right on top of the CPU core or in the
next room. It simply isn't going to be possible to provide direct access to
every single memory cell when there are billions of them. This is still going
to be a von Neumann (actually Modified Harvard) architecture, it's just going
to be blazingly fast.

Now, once we start applying memristor implication logic to data processing we
will have truly left the confines of the von Neumann architecture.

------
TheEzEzz
What are the implications for back end development? Will this greatly reduce
server complexity (need for redundancy)? Could AWS just give you a machine in
the cloud, and you wouldn't need to worry about databases and database backups
and so on, you could just keep all your data locally in natural data
structures?

~~~
ericfrenkiel
This would not necessarily affect back end development as you would still want
high availability for the application by using an active-active setup across
two availability zones in EC2.

With regard to your question on data structures, yes, they would have to
change. MySQL and most other databases use b-trees since they were meant to
live on disk. Just running MySQL on memristor-backed storage would result in a
considerable waste of CPU and storage capacity (b-trees are not very compact).

Running a database like MemSQL on memristors would make the most sense, since
it uses data structures meant for DRAM.
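As a rough illustration of the compactness point: a disk-oriented b-tree
allocates fixed-size pages and leaves them partially empty. The page size,
entry size, and fill factor below are assumptions for the sketch, not
MySQL/InnoDB internals.

```python
# Disk-oriented B-tree: storage is allocated in fixed pages sized for the
# disk, and after random inserts pages are typically only ~70% full.
PAGE_SIZE = 4096     # assumed 4 KB page
ENTRY_SIZE = 16      # assumed 8-byte key + 8-byte pointer per entry
FILL_FACTOR = 0.69   # commonly cited average B-tree page occupancy

entries = int(PAGE_SIZE // ENTRY_SIZE * FILL_FACTOR)
slack = PAGE_SIZE - entries * ENTRY_SIZE
print(f"B-tree page: {entries} entries, {slack} bytes ({slack / PAGE_SIZE:.0%}) slack")

# A DRAM-oriented structure can pack entries densely (e.g. a sorted array
# inside a row store), paying no per-page slack at all.
per_entry_btree = PAGE_SIZE / entries
per_entry_dense = ENTRY_SIZE
print(f"bytes/entry: {per_entry_btree:.1f} (B-tree page) vs {per_entry_dense} (dense array)")
```

When the storage is byte-addressable and as fast as DRAM, the page structure
that made b-trees efficient on disk becomes pure overhead.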

------
sliverstorm
Not if they continue cutting their hardware divisions, it won't.

~~~
dmix
> Asked about the competition, Williams said: "Samsung has an even larger
> group working on this than we do."

As a consumer, I'm indifferent to who does it. As long as it gets out.

------
aheilbut
Whatever happened with Nantero?

------
rsanchez1
A memristor TouchPad would've been really hot.

I wonder how these theoretical projects will survive in an HP that is only
concerned with how much profit each project makes to appease shareholders...

------
nwatson
The bad side-effect of this memory technology: you can't just power off your
computer to hide your current activity; decrypted passwords in memory will
still be readable after shutoff.

~~~
jasonwatkinspdx
This is silly. Just because memory is non volatile storage doesn't mean the OS
can't do reasonable things like clearing out some state as it goes to sleep.

~~~
nwatson
Ok. So your computer is in lock-screen mode requiring you to enter a password
before resuming your interactive session. Someone with physical access to your
computer certainly can find a way to divorce the memory from the rest of the
system without letting the OS do its thing ... your live program memory is
compromised. This memory often will have more sensitive info than your
(possibly encrypted) mass storage.

~~~
jasonwatkinspdx
I disagree.

Firstly, the fabrication process everyone is excited about puts the memristor
cells directly atop the CMOS logic gates as just another layer of the die. No
external memory bus to tap into. So you'd need the sort of equipment used to
remove layers from dies to expose the cells for reverse engineering, testing,
etc. If someone with those resources is after you, you're already fucked
regardless of the exact technological vector.

Secondly, a system designer could trivially add some amount of volatile
storage for holding security-sensitive data. Various schemes of encrypted
pages, decrypted using a key stored in volatile SRAM within the CMOS, could
be used. In other words, we could do the same things we currently do with
hard drives, between the memristor array and SRAM that acts as a decrypted
cache.

Thirdly, you're applying an expectation to all memristors that is not applied
to existing storage technologies. It might be a fair criticism of a concrete
product that operates in an insecure way, but it's absurd to apply this
expectation to an entire technology.

------
jeroen
The article mentions both 18 months and summer 2013. That's not entirely
consistent.

~~~
corin_
It's an estimate, not an exact length of time. 18 months takes us to April
2013, so maybe they rounded April into "summer" or maybe they rounded 20
months to the nearest half year. Or maybe they class April as summer anyway.

It's not like those two times are way different.

~~~
Nicknameless
Well summer is December to February, so April is a fair way off...

Perhaps they could do away with the stupid idea altogether and say 2nd quarter
2013 if they can't be more specific.

------
dhughes
I doubt HP itself will exist in 18 months.

~~~
bvi
Hyperbole much?

~~~
michaelcampbell
All day, every day!

~~~
leoc
Eight days a wee-eeek!

