
IBM scientists say radical new ‘in-memory’ architecture will speed up computers - breck
http://www.kurzweilai.net/ibm-scientists-say-radical-new-in-memory-computing-architecture-will-speed-up-computers-by-200-times
======
otakucode
This is the promise of memristors. Despite innumerable articles being written
about neuromorphic architectures as if they'll be something miraculous, this
ability to change from functioning as a bit of memory to being a bunch of
functional logic on the fly, at the speed of a memory read? That's going to be
crazy. It will open up possibilities that we probably can't even imagine right
now.

I've never understood why people don't get more excited about memristors. They
could replace basically everything. Assuming someone can master their
manufacture, they should be more successful than transistors. Of course, I'm
still waiting to be able to buy a 2000 ppi display like IBM's R&D announced
creating back in the late 1990s or so... so I guess I'd best not hold my
breath.

~~~
nyolfen
> I've never understood why people don't get more excited about memristors.

personally, mostly because i've been seeing articles about how they're _just
about_ to totally upend computing for the last ten years

~~~
cududa
Pretty sure I first read about them in a PC World magazine I got at the
airport on a family vacation in like 2003

~~~
Alex3917
As far as I know I submitted the first ever article about memristors to HN,
which was this in 2008:

[https://news.ycombinator.com/item?id=177865](https://news.ycombinator.com/item?id=177865)

~~~
squeaky-clean
HN has only been around since 2007

~~~
kbenson
Ambiguous language. I think what was being implied is that it was the first
occurrence of that type of article _on HN_, not that the article was the
first ever about memristors _and_ was posted to HN.

~~~
squeaky-clean
Not really that relevant to the ongoing point of the age of memristor articles
then.

------
blennon
"The result of the computation is also stored in the memory devices, and in
this sense the concept is loosely inspired by how the brain computes."

For anyone who is interested in a simple model of how the brain does this,
check out "associative memories". The basic idea is that networks of neurons
both store memory (in their synapses) and perform the computations to retrieve
or recall those memories.

A simple example is the Hopfield network, a single layer of neurons that are
recurrently connected with a specific update function:
[https://en.wikipedia.org/wiki/Hopfield_network](https://en.wikipedia.org/wiki/Hopfield_network)

Another is two layers of neurons that are reciprocally connected called a
Bidirectional Associative Memory (BAM):
[https://en.wikipedia.org/wiki/Bidirectional_associative_memo...](https://en.wikipedia.org/wiki/Bidirectional_associative_memory)
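
To make the "synapses both store and compute" idea concrete, here's a minimal Hopfield network sketch in numpy (my own illustration, not from the linked pages). The same weight matrix that stores the pattern also performs the recall:

```python
# Minimal Hopfield network: patterns are stored in the weight matrix via
# the Hebbian outer-product rule, and recall is just repeated thresholded
# matrix multiplication -- the same "synapses" hold the memory and do the
# computation to retrieve it.
import numpy as np

def train(patterns):
    # Hebbian learning: sum of outer products, with the diagonal zeroed
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=10):
    # synchronous update: each neuron takes the sign of its weighted input
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]          # corrupt one bit
print(recall(W, noisy))       # converges back to the stored pattern
```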

edit: grammar

~~~
ngold
Really fascinating, the low energy aspect is intriguing.

------
shmerl
Wasn't HP working on a similar idea? What happened to it?

UPDATE: Looks like memristor production didn't work out for them:
[https://www.extremetech.com/extreme/207897-hp-kills-the-mach...](https://www.extremetech.com/extreme/207897-hp-kills-the-machine-repurposes-design-around-conventional-technologies)

------
moh_maya
[https://arstechnica.com/science/2017/10/who-needs-a-cpu-phas...](https://arstechnica.com/science/2017/10/who-needs-a-cpu-phase-change-memory-acts-as-an-analog-computer/)

I found this article very useful in understanding what the work was about.

They are using phase change memory to store data and perform computations.
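
The analog-compute part can be sketched numerically (my own toy illustration, not from the article): in a phase-change crossbar, matrix entries are stored as cell conductances G, the input vector is applied as read voltages V, and by Ohm's and Kirchhoff's laws each output line sums currents, giving I = G·V essentially for free.

```python
# Toy model of an analog crossbar matrix-vector multiply: conductances
# store the matrix, voltages carry the input, summed currents are the
# output. Real PCM cells are noisy and drift, so the result is approximate.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 4))   # cell conductances (stored matrix)
V = rng.uniform(0.0, 0.2, size=4)        # applied read voltages (input vector)

I_exact = G @ V                           # ideal analog result (Kirchhoff sum)

# model device non-ideality as ~2% multiplicative conductance noise
G_noisy = G * (1 + rng.normal(0, 0.02, size=G.shape))
I_analog = G_noisy @ V

print(np.max(np.abs(I_analog - I_exact)))  # small but nonzero error
```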

------
MikeDoesCode
Didn't HP design and I think prototype something like this with memristors,
calling it 'The Machine'?

 _edit_ So HP built one with 160 TB of memory. I remember it being proposed
with memristors, but I haven't been able to check whether the prototype used
them... Does anyone know what's different about IBM's approach that lets them
claim this as a first?

~~~
cmiles74
As shmerl noted, The Machine turned out to be vaporware and they released
something significantly different under the same name. HP couldn't mass-produce
memristors. :-(

[https://www.extremetech.com/extreme/207897-hp-kills-the-mach...](https://www.extremetech.com/extreme/207897-hp-kills-the-machine-repurposes-design-around-conventional-technologies)

------
PeachPlum
Sounds very much like Content Addressable Parallel Processors such as the one
that powered the Staran air traffic control system.

Caxton Foster's book is the major text I know on the subject.

[https://en.wikipedia.org/wiki/Content_Addressable_Parallel_P...](https://en.wikipedia.org/wiki/Content_Addressable_Parallel_Processor)

------
amelius
What they describe sounds like a pipeline architecture or a systolic array, or
a network of interconnected computers. None of these are new concepts from an
architectural point of view, but the actual dimensioning could be new.

~~~
naasking
> None of these are new concepts from an architectural point of view, but the
> actual dimensioning could be new.

Depends what you mean by "architecture". I disagree that a network of
computers is comparable to colocating a computing unit with its memory. There
are orders-of-magnitude differences in communication costs and failure
modes, so at some point you just have to acknowledge that the models are
fundamentally different and should be treated as such.

Certainly they are Turing equivalent, so they aren't more "powerful" in a
computability sense, but what's more interesting is the tradeoffs in
computational complexity.

------
tboyd47
Do these benchmarks seem unusual to anyone else? Things like changing the
color of all the black pixels in a bitmap simultaneously, or performing
correlations on historical rainfall data. Is it because this technology is
more suitable for certain types of computations?

~~~
scientistem
I think this is most analogous to SIMD, in terms of issuing a single
computation over multiple words of data, which would work well for both
zeroing data and operating on scalar arrays.
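
The parent's bitmap example maps naturally onto this SIMD view: one logical operation applied across many words of data at once. A rough sketch (mine, using numpy vectorization as a stand-in for the hardware):

```python
# One vectorized expression touches every matching pixel at once --
# the software analogue of issuing a single operation over many words.
import numpy as np

img = np.array([[0, 255, 0],
                [128, 0, 64]], dtype=np.uint8)

img[img == 0] = 200   # "change the color of all black pixels" in one op
print(img)
```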

------
tim333
I got a little confused by the

>scientists have developed the first “in-memory computing”

as normal GPUs already have register and cache memory mixed in with the
processors. I think the novel feature is that they are mixing the processors
with non-volatile, flash-like memory rather than with RAM. Which I guess is
interesting, but the "will speed up computers by 200 times" presumably refers
to an old-school architecture rather than something like the 15/125 TFLOP
Volta GPU, which I'd imagine is faster than their thing.
([https://news.ycombinator.com/item?id=14309756](https://news.ycombinator.com/item?id=14309756)).

------
YeGoblynQueenne
Regarding the application of memristors to AI, here's a bit of a dissenting
opinion:

    
    
      Unfortunately for neuromorphics, just about everyone else in the semiconductor
      industry—including big players like Intel and Nvidia—also wants in on the
      deep-learning market. And that market might turn out to be one of the rare cases
      in which the incumbents, rather than the innovators, have the strategic
      advantage. That’s because deep learning, arguably the most advanced software on
      the planet, generally runs on extremely simple hardware.
    
      Karl Freund, an analyst with Moor Insights & Strategy who specializes in deep
      learning, said the key bit of computation involved in running a deep-learning
      system—known as matrix multiplication—can easily be handled with 16-bit and even
      8-bit CPU components, as opposed to the 32- and 64-bit circuits of an advanced
      desktop processor. In fact, most deep-learning systems use traditional silicon,
      especially the graphics coprocessors found in the video cards best known for
      powering video games. Graphics coprocessors can have thousands of cores, all
      working in tandem, and the more cores there are, the more efficient the
      deep-learning network.
    
    

From:

[https://spectrum.ieee.org/semiconductors/design/neuromorphic...](https://spectrum.ieee.org/semiconductors/design/neuromorphic-chips-are-destined-for-deep-learningor-obscurity)
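
The quoted point about 8-bit components can be illustrated with a toy quantized matrix multiply (my own sketch, not from the article): quantize weights and activations to int8, accumulate in a wider integer type, and rescale at the end. The result stays close to the float answer, which is why incumbents can serve deep learning with simple low-precision hardware.

```python
# Toy int8 quantized matmul: deep-learning inference tolerates low
# precision, so the multiply-accumulate can run on narrow integer units.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8)).astype(np.float32)   # float weights
x = rng.normal(size=8).astype(np.float32)        # float activations

sW = np.abs(W).max() / 127.0                     # per-tensor scale factors
sx = np.abs(x).max() / 127.0
W_q = np.round(W / sW).astype(np.int8)           # 8-bit weights
x_q = np.round(x / sx).astype(np.int8)           # 8-bit activations

# accumulate in int32 to avoid overflow, then rescale back to float
y_q = (W_q.astype(np.int32) @ x_q.astype(np.int32)) * (sW * sx)
y = W @ x                                        # full-precision reference

print(np.max(np.abs(y_q - y)))                   # small quantization error
```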

------
rasz
Micron has had eval RAM modules with built-in cellular automata for what, 10
years now?

[http://www.micronautomata.com/research](http://www.micronautomata.com/research)

------
agumonkey
For god's sake. I've been pitching this for years .. I can't sigh hard enough.

~~~
zackmorris
That's how I feel. I used FPGAs in the late 90s and wanted to try making a
parallel chip with say 1024 cores and a few K of RAM per core and then program
it with something like Erlang. Then the dot bomb happened, then the housing
bomb, the Great Recession, and so on and so forth. The big players got more
entrenched so everything was evolutionary instead of revolutionary and I'd say
computer engineering (my major, never used) got set back 10-15 years.

But that said, I'm excited that 90s technology is finally being adopted by the
AI world. I'm also hopeful that all these languages like Elixir, Go and
MATLAB/Octave will let us do concurrent programming in ways that are less
painful than say OpenCL/CUDA.

~~~
agumonkey
TBH, seeing Chuck Moore talk about his GreenArrays Forth silicon gave me the
inspiration. Since all cores are ALU+RAM and can send data/code to neighbors,
you get a "smart buffer". I found it so damn exciting I can't stop dreaming
about this.

Even a subset of the idea, having blit in RAM (swap, row/col zeroing, etc.),
could reduce pressure on the memory bus.

~~~
abecedarius
Have you read about the J-machine? An older research machine that was sort of
like that. Trouble is that we need to change software and hardware together to
really reap the advantage.

------
samueloph
There was a talk about this at this year's DebConf:

Delivering Software for Memory Driven Computing

[https://debconf17.debconf.org/talks/206/](https://debconf17.debconf.org/talks/206/)

------
em3rgent0rdr
The concept of "Smart Memory" has been around for a while...from 2000 at
least:
[https://dl.acm.org/citation.cfm?id=339673](https://dl.acm.org/citation.cfm?id=339673)

------
cmrdporcupine
Watched the video, read the article, but I'm not entirely clear on how
'in-memory' components differ in principle from just having a CPU with a very
large register set.

------
m3kw9
This will catch up to Moore’s law when it does come out one day

------
partycoder
Would this be classified as MIMD?

[https://en.wikipedia.org/wiki/MIMD](https://en.wikipedia.org/wiki/MIMD)

------
digi_owl
Didn't early Palm Pilots have something they called run in place memory?

------
yigitdemirag
So sorry to find this thread on HN while intoxicated. I need to write a
blog post about it in a couple of days. In short, these devices are more
capable and realizable than most can imagine.

Disclaimer: Neuromorphic computing with PCRAM devices is my MSc and future PhD
thesis topic.

~~~
mhb
If only there was room in the margin.

------
unixhero
Serverless applications

Computerless computing!

------
make3
feels like IBM comes up with a new architecture every week and a half or so,
and then you never hear from them again. It's like Reddit and its bi-monthly
cures for AIDS and cancer

