
Researchers break “memory wall” conundrum, create fastest optical RAM cell - conse_lad
https://sparkonit.com/2019/06/16/researchers-break-memory-wall-conundrum-create-worlds-fastest-ram/
======
peter_d_sherman
This article is _why_ I read Hacker News!

To surpass limitations, and to see limitations surpassed, in all areas of
technology.

This article exemplifies that ethos.

This is the future, or rather a glimpse of it, today.

Well done!

------
p1esk
How dense is it?

~~~
dvh
5x10x1cm

~~~
p1esk
:) ok, how dense can it be?

~~~
Etheryte
As covered in the article [1, paywalled], the binary cell itself consists of
two flip-flops, each at 6x2 mm², and an optical XOR (no dimensions given). So
for the time being, most of the space is in the control logic, as would be
expected for a first proof of concept. Currently, the answer to your question
seems to be "not very", but if the tech proves viable it will certainly
attract a lot of further research & funding to miniaturize it. Compare the
first proof-of-concept memory cells with what followed in the years after.

[1]
[https://www.osapublishing.org/ol/abstract.cfm?uri=ol-44-7-18...](https://www.osapublishing.org/ol/abstract.cfm?uri=ol-44-7-1821)

~~~
p1esk
Haven’t read the article yet, but I wonder if this technology can ever compete
with SRAM on any of speed/density/power? Or is it intended strictly for an
“all optical” processor?

------
Dylan16807
So what is this memory wall?

> this concept revolves around the idea that computer processing units (CPUs)
> are advancing at a fast enough pace that will leave memory (RAM) stagnant

> According to Moore’s Law, which states that the number of transistors in a
> circuit doubles every two years, CPUs will eventually become too fast to
> yield any noticeable difference in computing speed. Once we reach this so-
> called memory wall, program/app execution time will depend almost entirely
> on the speed at which RAM can send data to the CPU. So even if you have an
> incredibly fast processor in your computer, its function may be limited to
> the speed of your RAM.

Oh, it's utter nonsense.

For one, it's ignoring that the cache speeds scale right along with the CPU.
You could get rid of DRAM entirely to prove there is no 'wall'.

But what really kills the argument is a proper analysis of how memory is set
up. Internal to the RAM chip, there's a ridiculous amount of bandwidth. The
bottleneck is using a 64-bit-wide bus to communicate with the CPU. But that
bottleneck is only a cost saving measure _on top_ of Moore's Law. If we back
off to pure Moore's Law, we can quite easily add more and more memory
width/channels and not hit any limits.

~~~
gumby
Cache is not a solution, it's a work-around. We didn't use caches in the days
when memory was faster than the CPU.

Multicore machines have a lot of circuitry and performance costs around access
to memory that could potentially go away. That would be fantastic.

~~~
Dylan16807
If you get rid of DRAM, the cache is no longer cache, it's main memory.

I'm suggesting it not as a practical matter, but as a very simple thought
experiment to disprove the idea of a "wall". There's no point where cores
outpace the ability to feed them with no way to fix it, because you can put
the memory that feeds the core on the same die, and it will scale 1:1 with
the core.
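
That 1:1 scaling is trivial arithmetic; a sketch, with made-up starting
values (4 cores, 32 MB on-die) chosen purely for illustration:

```python
# Thought-experiment sketch: if cores and on-die memory ride the same
# transistor-doubling curve, the memory-per-core ratio never degrades.
# The starting values (4 cores, 32 MB on-die) are made up for illustration.
def scale(generations, cores=4, on_die_mb=32):
    for _ in range(generations):
        cores *= 2      # a transistor doubling doubles the core budget...
        on_die_mb *= 2  # ...and the on-die memory budget alike.
    return cores, on_die_mb, on_die_mb / cores

print(scale(5))  # -> (128, 1024, 8.0): the MB-per-core ratio is unchanged
```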

If you then say "well what about feeding the memory?" then I'll point to how
SSDs have been increasing in speed at a blistering pace, far outperforming
Moore's Law.

But that's just the quick dismissal. The more detailed dismissal is based on
adding more memory channels.

