A Case for Self-Managing DRAM Chips (arxiv.org)
24 points by rbanffy 3 days ago | hide | past | favorite | 18 comments





I always wondered why DRAM didn't just, after some Moore's law point, have refresh control built in. There's probably a reason I've forgotten about. Nice to see someone at least thinking about it.

You'd have to introduce a handshake between the DRAM and the memory controller because some internal operation might already be in progress when the memory controller sends a command. You'd probably also want to sync all the DRAM chips on a DIMM so the memory controller doesn't have to deal with commands being partially blocked.
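A toy sketch of that handshake (all class names, commands, and timings here are made up for illustration, not from any spec): the controller can no longer assume a command is accepted immediately, because the chip may have started maintenance on its own.

```python
class SelfManagedDram:
    """Toy model of a self-managing DRAM chip (illustrative only)."""
    tRFC = 350  # cycles an internal refresh keeps the chip busy (assumed round number)

    def __init__(self):
        self.busy_until = 0

    def start_internal_refresh(self, cycle):
        # The chip decides to refresh itself; the controller isn't told ahead of time.
        self.busy_until = cycle + self.tRFC

    def try_command(self, cycle, cmd):
        # The handshake: the controller must be prepared for a RETRY,
        # since it no longer knows the chip's internal state.
        return "ACK" if cycle >= self.busy_until else "RETRY"

dram = SelfManagedDram()
dram.start_internal_refresh(0)
print(dram.try_command(100, "RD"))  # RETRY: refresh still in flight
print(dram.try_command(400, "RD"))  # ACK
```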

But you have that handshake anyway now, between the CPU and the memory controller, effectively at least, don't you?

No, you don't. AFAIK DRAM chips just sit idle until the memory controller gives them a command and then they perform the command immediately.

But the memory controller still has to refresh them. So at some point you have to negotiate a wait state, don't you?

Yes - everything 'upstream' of the memory controller has variable latency, partly because of the chance that a read or write request has to wait for a refresh operation.
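To put a rough number on that variability, here's a back-of-envelope sketch using typical DDR4-class timings (these are assumptions; actual values vary by density and speed bin):

```python
# Back-of-envelope refresh overhead, using typical DDR4-class numbers
# (assumed values, not from a specific datasheet).
tREFI_ns = 7800  # average interval between refresh commands (~7.8 us)
tRFC_ns = 350    # time one refresh keeps the rank busy (~8 Gb part)

busy_fraction = tRFC_ns / tREFI_ns
print(f"rank busy refreshing ~{busy_fraction:.1%} of the time")
print(f"worst-case extra latency for an unlucky read: {tRFC_ns} ns")
```

So a few percent of bandwidth goes to refresh, but the unlucky request that arrives just after a refresh starts eats the whole tRFC as extra latency.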

They design for a very, very small margin on the signal state transition, so yes, technically, but it's meant to be negligible, I believe. (Not a hardware designer, but I've looked over the shoulder a few times.)

They can sleep, but waking up has a large latency.

It has been a while since I wrote a DRAM controller, but it's all about managing latencies. To achieve maximum throughput, the controller needs to issue operations in a specific order, different for each workload.
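A much-simplified illustration of that ordering problem (row numbers and the policy here are made up; real controllers use variants of FR-FCFS with fairness constraints): requests that hit the already-open row can be served cheaply, so reordering them first raises throughput.

```python
# Toy illustration of why command order matters for throughput:
# reads that hit the currently open row skip the precharge+activate cost.
# Timings are made-up round numbers, not from any datasheet.
tCAS_ns = 15         # cost of a column access to an open row
tRP_tRCD_ns = 30     # extra cost to close one row and open another

def schedule(requested_rows, open_row):
    """First-ready ordering: serve row hits before row misses."""
    hits = [r for r in requested_rows if r == open_row]
    misses = [r for r in requested_rows if r != open_row]
    return hits + misses

reqs = [7, 3, 7, 9, 7]                 # row addresses of queued reads
print(schedule(reqs, open_row=7))      # [7, 7, 7, 3, 9]
```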

DRAMs have a self-refresh mode.


PLEASE UPDATE YOUR DRAM FIRMWARE TO REMAIN SECURE.

IBM has some serial RAM they’ve developed that has controllers requiring firmware. It’s coming, but one can argue modern DRAM is storage-like given all the caching going on.

The "On The Metal" podcast had an interview with someone who had to deal with debugging DRAM drivers. But, I don't remember which one it was...

> new low-cost DRAM architecture

Provides no evidence that putting more logic into the DRAM will somehow be cheaper


The DRAM chips don't actually have to be cheaper. They just need to enable the system as a whole to be more cost effective. If putting a bit more logic in the DRAM makes it significantly cheaper and quicker to design and validate a processor's memory controller, and enables better performance and power efficiency from the memory, then it's very likely a net win even if the marginal cost of DRAM chips went up slightly.

While you're correct that overall system cost is what matters, most functionality is offloaded to the controller because a single controller often drives multiple DRAM chips (e.g., the chips all share a CA bus [1]), so it's cheaper to put the functionality in the controller.

With that being said, it gets a bit nuanced, because as we make faster and faster DRAMs, we need more complicated input/output electronics (PHY and ECC). GDDR actually has self-refresh as part of the spec, and even allows DRAM vendors to put a PLL inside so you can run a half-rate clock, which technically makes it "Quadruple Data Rate" instead of "Double Data Rate." But usually with GDDR, you're running one controller per DRAM; clamshell mode is not super common in my experience.
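The clock arithmetic behind that "quadruple data rate" point, as a sketch (the 875 MHz figure is purely illustrative, not a specific GDDR speed bin):

```python
# With an internal PLL, the DRAM can recover a full-rate clock from the
# half-rate clock on the bus, so each pin transfers 4 bits per external
# clock cycle ("quad data rate"). Numbers are illustrative assumptions.
ck_mhz = 875                      # half-rate clock the controller drives
bits_per_pin_mbps = 4 * ck_mhz    # 4 transfers per external clock cycle
print(f"{bits_per_pin_mbps / 1000:.1f} Gb/s per pin")
```

Compare plain DDR at the same bus clock, which would move only 2 bits per pin per cycle.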

[1] I'm not super well versed on DDR4 and all of its configurations. So someone who is can elaborate if they would like.


Putting a controller in the DRAM allows that controller to detect bad memory cells and repair data (ECC-like) or re-map data to other addresses.

Both of those things would be possible while implementing the regular 'dumb' DRAM interface, but any error detection or remapping would have to operate within the module's fixed latency budget. Typically that means not much remapping can be done, and RAM yields have to be really good.

With this change, RAM becomes robust to far more silicon defects, like NAND flash is. That's what makes it cheaper.
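A minimal sketch of the remapping idea (the structure and constants are hypothetical, not any real DRAM's repair logic): rows found defective get redirected to spare rows, which is exactly the kind of extra lookup a fixed-latency interface leaves little room for.

```python
SPARE_BASE = 0x10000  # hypothetical address range holding spare rows

def remap(row, bad_rows):
    """Redirect a defective row to its assigned spare; pass others through."""
    if row in bad_rows:
        return SPARE_BASE + bad_rows[row]  # index of the assigned spare row
    return row

bad = {0x1A2: 0, 0x3F0: 1}       # defective row -> spare-row index
print(hex(remap(0x1A2, bad)))    # remapped to a spare
print(hex(remap(0x2B0, bad)))    # healthy row, untouched
```

With the controller inside the chip, this table can grow (and be rebuilt over the part's lifetime) without the external interface ever seeing the extra lookup.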


Pseudo-SRAM is a thing. A very expensive thing compared to unmanaged DRAM.

For what it's worth, it's still way cheaper than true SRAM. And the architecture suggested is still way closer to unmanaged DRAM than to PSRAM.




