Anandtech have an excellent analysis: https://www.anandtech.com/show/14750/hot-chips-31-analysis-i...
> The operation itself [inside the dram module] actually requires 2x the power (20 pJ compared to 10 pJ [inside the main CPU]), but the overall gain in power efficiency is 170 pJ vs 3010 pJ, or just under 20x
> One thing that this slide states that might be confusing is the server power consumption – the regular server is listed as only 300W, but the PIM solution is up to 700W. This is because the power-per-DRAM module would increase under UPMEM’s solution.
I'm assuming that those numbers represent "peak" power, and that when idle the compute parts of the DRAM can be power/clock gated. The implication being that if you are doing lots of in-memory analytics, you'll get a power saving, but if you switch to "something else" and don't get good utilisation of the in-memory compute, you'll probably ruin your power/energy efficiency.
I guess that means the future improvements will involve bringing the power consumption of the modified DRAM modules back in line with their "normal" cousins.
Yes, on-chip SRAM can take single picojoules per bit versus hundreds of picojoules per bit for external DRAM, but using external DRAM is a perfectly valid design decision that does not dramatically increase power consumption, because CPUs have large caches and very smart prefetch logic. Some algorithms may seem to demand tens of gigabits of memory bandwidth on paper, but in reality smart prefetchers can reduce that to hundreds of megabits.
Where things like this matter is scalability. Parallel computing performance scales not so much with memory bandwidth per CPU as with the number of individual CPUs, threads, and their prefetch units. Here, the decision to have hundreds of wimpy CPUs, each with its own memory access, looks valid.
Edit: typo fix
How does a prefetcher reduce unneeded cache evictions? It fetches data that may not be needed, increasing cache evictions.
The only thing a prefetcher can do to improve power efficiency, IMO, is by increasing utilization-- if the processor stays 99% busy instead of 97% busy, the fixed portions of the power budget are amortized over more operations.
I'm struggling with that. To avoid a cache miss, the prefetcher would have to fetch the data from memory so early, and memory is so slow, that an entire train of in-flight instructions could complete, and the CPU would then likely stall, before the fetch-from-RAM completed.
I may be misunderstanding you though.
Memory is very latent, and moderately slow. Hardware prefetching mostly works by making memory accesses longer: you don't pay the RAS/CAS penalties again, and instead just stream more memory words back to fill an additional cache line beyond the one demanded.
e.g. One key part of Intel L2 hardware prefetching is working upon cache line pairs. If you miss on an even numbered cache line, and counters support it, it decides to also retrieve the next odd-numbered cache line at the same time.
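To make the pairing concrete: the partner line is just the 64-byte line address with bit 6 flipped. A sketch in C (my illustration of the idea, not anything from Intel's actual RTL):

    #include <stdint.h>

    /* Cache lines are paired into aligned 128-byte chunks; a miss on one
       line of the pair may trigger a prefetch of its partner. */
    static inline uint64_t buddy_line(uint64_t miss_addr) {
        uint64_t line = miss_addr & ~63ULL;  /* 64-byte line address */
        return line ^ 64;                    /* other line of the 128 B pair */
    }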
The downsides of a hardware prefetcher are three things, all stemming from it perhaps retrieving memory that isn't needed: 1) it can cause useful information to be evicted from cache if a prefetch is errant; 2) it can tie up the memory bus with an unnecessary access and make a necessary one more latent; 3) it can consume power for a prefetch that isn't needed.
We may be talking past each other. My understanding is that a prefetcher's job is to get the data ASAP. It typically prefetches from cache, not from memory (if it has to prefetch from memory it's so slow it's useless).
> Memory is very latent, and moderately slow
Those two terms seem synonymous to me, but memory access is slooow - from the doc in front of me, for Haswell:
L1 hit: 4 or 5 cycles latency;
L2 hit: 12 cycles latency;
L3 hit: 36 to 66 cycles latency;
RAM latency 36 cycles + 57 ns / 62 cycles + 100 ns depending
(found ref, it's https://www.7-cpu.com/cpu/Haswell.html)
For the RAM latencies, that's for different setups (single/dual CPU, I'm not sure) running at ~3.5 GHz. So for 36 cycles + 57 ns, that's 36 + (3.5 * 57) ≈ 235 cycles of latency. And it can get worse. Much worse.
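Spelling out both figures in cycles at the assumed 3.5 GHz clock (trivial arithmetic, but it makes "much worse" concrete):

    #include <stdio.h>

    int main(void) {
        double ghz = 3.5;  /* assumed core clock */
        printf("36 cycles + 57 ns  -> %.1f cycles\n", 36 + 57.0  * ghz); /* ~235.5 */
        printf("62 cycles + 100 ns -> %.1f cycles\n", 62 + 100.0 * ghz); /* 412.0  */
        return 0;
    }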
I'm sorry, I can't find it now, but I recall Hennessy & Patterson saying that prefetchers can hide most of the latency of an L1 or L2 miss if it hits in L3, but if it needs main memory then it's stuffed. Sorry, that is from memory and may be wrong! But I'm pretty sure a prefetcher is wasted if it has to hit RAM.
> If you miss on an even numbered cache line, and counters support it, it decides to also retrieve the next odd-numbered cache line at the same time.
I see, but I think it does better these days, it'll remember a stride and prefetch by the next stride. Even striding backwards through mem (cache)! But it won't cross a page boundary.
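Something like this, as a toy model in C (purely illustrative; the real thing is a hardware table, typically indexed by the load instruction's address):

    #include <stdint.h>

    #define PAGE_SIZE 4096ULL

    struct stride_entry {
        uint64_t last_addr;   /* previous access address */
        int64_t  last_stride; /* previous delta */
        int      confidence;  /* how many times the delta repeated */
    };

    /* Feed in each access address; returns an address to prefetch,
       or 0 if there's no confident prediction yet. */
    static uint64_t on_access(struct stride_entry *e, uint64_t addr) {
        int64_t stride = (int64_t)(addr - e->last_addr);
        if (stride != 0 && stride == e->last_stride)
            e->confidence++;
        else
            e->confidence = 0;
        e->last_stride = stride;
        e->last_addr = addr;
        if (e->confidence >= 2) {
            uint64_t next = addr + (uint64_t)stride;  /* works backwards too */
            if (next / PAGE_SIZE == addr / PAGE_SIZE) /* don't cross a page */
                return next;
        }
        return 0;
    }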
Really need an expert to chime in here, anyone?
This is a DDR4 pipelined read timing diagram. The bank select and row select latencies are significant, "off the left of this diagram," and contribute to the latency of random reads.
Further sequential reads -- until we hit a boundary -- can continue with no interruption of output, and do not pay the latency penalty (because there's no further bank select or row select, and because the prefetch and memory-controller logic strobed the new column address before the previous read completed).
The prefetcher runs at whatever level it's at (L1/L2) and fetches from whatever level has the memory. So the L2 prefetcher may be grabbing from L3, and may be grabbing from SDRAM.
> RAM latency 36 cycles + 57 ns / 62 cycles + 100 ns depending
That's the RAM latency ON RANDOM ACCESS. If you extend an access to fetch the next sequential line because you think it will be used, you don't pay any of that latency penalty -- you might need to strobe a column access, but words keep streaming out. For this reason sequential prefetch is particularly powerful. Even if we're just retrieving from a single SDRAM channel, it's just another 3.3 ns to continue on and retrieve the next line (DDR4-2400 is 2400 MT/s, 8 transfers to fill a 64-byte line, 8 / 2.4e9 ≈ 3.3 ns).
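Spelling that arithmetic out (nominal DDR4-2400 numbers, ignoring real controller scheduling):

    #include <stdio.h>

    int main(void) {
        double transfers_per_s = 2400e6; /* DDR4-2400: 2400 MT/s */
        double line_bytes = 64.0;        /* one cache line */
        double bus_bytes  = 8.0;         /* 64-bit channel: 8 B per transfer */
        double burst_ns = (line_bytes / bus_bytes) / transfers_per_s * 1e9;
        /* the next sequential line from an already-open row: */
        printf("extra line: %.1f ns\n", burst_ns);  /* ~3.3 ns */
        /* vs. a random access that first pays row activate + CAS latency,
           i.e. tens of ns before the first word even appears */
        return 0;
    }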
> I see, but I think it does better these days, it'll remember a stride and prefetch by the next stride. Even striding backwards through mem (cache)! But it won't cross a page boundary.
Sure, the even/odd access extender is just one very simple prefetcher that is part of modern Intel processors, which I included for illustration. And we're completely ignoring software prefetch.
Go ahead, do the experiment. Run a memory-heavy workload and look at cache miss rates. Then turn off prefetch and see what you get. Most workloads, you'll get a lot more misses. ;)
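If you can't toggle the prefetchers on your machine, you can still see their effect indirectly. A rough pointer-chasing benchmark in C (assumes Linux/gcc, compile with -O2): chase a ring through a buffer much bigger than LLC, first laid out sequentially (prefetcher-friendly), then shuffled into one random cycle (prefetcher-hostile), and compare times, or run each case under perf stat -e cache-misses.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 24)  /* 16M slots * 8 B = 128 MB, well past any LLC */

    static double chase(const size_t *buf) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t p = 0;
        for (size_t i = 0; i < N; i++)
            p = buf[p];  /* dependent loads: each one waits on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (p == (size_t)-1) puts("");  /* keep p live */
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        size_t *buf = malloc(N * sizeof *buf);
        for (size_t i = 0; i < N; i++)
            buf[i] = (i + 1) % N;                /* sequential ring */
        printf("sequential: %.3f s\n", chase(buf));
        for (size_t i = 0; i < N; i++)
            buf[i] = i;
        for (size_t i = N - 1; i > 0; i--) {     /* Sattolo's shuffle: one big cycle */
            size_t j = rand() % i;
            size_t t = buf[i]; buf[i] = buf[j]; buf[j] = t;
        }
        printf("shuffled:   %.3f s\n", chase(buf));
        free(buf);
        return 0;
    }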
Please read this thread on exactly this subject from last year https://news.ycombinator.com/item?id=16172686
A second, successive streamed fetch is basically free from a latency perspective. If you're missing, and have to go to memory, there's a very high chance that L2 is going to prefetch the next line into a stream buffer, and you won't miss to SDRAM next time.
It's reached the point where the stream prefetchers now hint to the memory controller that a queued access is a prefetch, so the memory controller can choose, based on contention, whether to service the prefetch or not.
Most of what you seem to be talking about is L1 prefetch; I agree that if an L1 prefetch misses all the way to RAM you are probably screwed. The fancy strategies you mention are mostly L1 prefetch strategies. But L2 has its own prefetcher, and it's there to get rid of memory latency and increase effective use of memory bandwidth...
While we're talking about it... even the SDRAM itself has a prefetcher for burst access ;) Though it's kind of an abuse of the term to call it that.
I meant to say that a smarter prefetcher -- or, better put, the whole ensemble of on-chip logic that works to minimise cache misses -- will lower the rate of unneeded evictions.
Are tenths of a gigabit and hundreds of megabits not the same thing?
Tens of gigabits -> 10, 20, 30 gigabits
Hundreds of megabits -> 0.2, 0.3, 0.4 gigabits
Also, it seems like concurrency and data-consistency issues could arise pretty quickly when each memory unit is potentially making changes independently of the processor...
And what about CPU cache coherency?
> Internally the DPU uses an optimized 32-bit ISA with triadic instructions, with non-destructive operand compute. As mentioned, the optimized ISA contains a range of typical instructions that can easily be farmed out to in-memory compute, such as SHIFT+ADD/SHIFT+SUB, basic logic (NAND, NOR, ORN, ANDN, NXOR), shift and rotate instructions
Re. CPU cache coherency, they have a software library that automagically hides that: https://images.anandtech.com/doci/14750/HC31.UPMEM.FabriceDe...
Not exactly sure how it works, I'm not super familiar with the internals of CPUs.
But let's hope they've figured it out.
"A SLAM consists of a conventional dense semiconductor dynamic memory augmented with highly parallel, but simple, on-chip processors designed specifically for fast computer graphics rasterization. "
Then there are NVIDIA GPUs and Intel Xeon Phi (i.e. Knights Landing)... So they are trying to sell us on the idea that neither AMD, NVIDIA, nor Intel has thought of this "ingenious" way to increase efficiency by co-locating CPU and memory? The fact that they deliberately avoid addressing this elephant in the room makes me super skeptical.
Don't know if this is similar, but if it is, it's going to be a hard sell, especially in an era when 90% of programmers can't even understand what I wrote above.
They didn't get huge, as far as I can tell. Might be a tough market.
Convincing DRAM manufacturers to change things for something at that small scale is a huge challenge.
I don't think Venray managed to do that.
But UPMEM probably did.
I guess the other comment wasn't too far off.
I remember reading about it at the time: you needed support from the firmware/BIOS/UEFI when you wanted to plug this into a system.
Seems it was too complicated, and it got moved onto a PCIe card with an FPGA.
Makes me wonder if the automata RAMs as such are even produced any more, or whether they just bought Micron's remaining stock and will go out of business if that runs out?
Wayback Machine capture of Micron's statement, and where development continues:
"A blitter is a circuit, sometimes as a coprocessor or a logic block on a microprocessor, dedicated to the rapid movement and modification of data within a computer's memory."
Which makes me wonder: when will somebody do an ISA dedicated to ML/AI, and will we ever see old 8-bit CPUs reborn as on-memory CPUs?
But we have seen this approach of processing in memory before, and whilst it never gained traction as a product, this might. With chip design moving towards chiplets, RAM can and may well become just another chiplet in the evolution of that design process.
It must feel like a Cell processor with thousands of SPUs to play with.
Sidenote: I think ML people are more interested in 16-bit floats than in 8-bit integers. Though neither is mainstream yet; most ML code still uses 32-bit floating point at the moment.
https://www.upmem.com/developer/ or more directly https://sdk.upmem.com/2019.2.0/
It seems you have to compile a separate executable for the in-memory processor, and they have some sort of daemon/infrastructure to communicate between the main CPU and the PIM.
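From their docs, the host side looks roughly like this. Caveat: I've lifted the function names from UPMEM's published examples (dpu.h), but I haven't run any of it, so treat the exact signatures and the "mram_input"/"mram_output" symbol names as assumptions:

    #include <dpu.h>
    #include <stdint.h>

    int main(void) {
        struct dpu_set_t set;
        /* grab one DPU and load the separately compiled DPU binary */
        DPU_ASSERT(dpu_alloc(1, NULL, &set));
        DPU_ASSERT(dpu_load(set, "./dpu_kernel", NULL));

        /* no cache coherency protocol here: data moves by explicit copies */
        uint32_t input[256] = {0};
        DPU_ASSERT(dpu_copy_to(set, "mram_input", 0, input, sizeof input));

        DPU_ASSERT(dpu_launch(set, DPU_SYNCHRONOUS));  /* run to completion */

        uint32_t output[256];
        DPU_ASSERT(dpu_copy_from(set, "mram_output", 0, output, sizeof output));

        DPU_ASSERT(dpu_free(set));
        return 0;
    }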
One would really need to trust the manufacturer.. but how?
An L2 cache is just a huge chunk of SRAM; it takes more die space and uses a lot more power, with the benefit of much lower latency.
A 20x speedup compared to a CPU looks worse than just using a stock GPU, and we haven't even talked about the AI inference chips that are optimized for NN inferencing.
This fails when data needs to be read and written randomly. But even if the entire dataset were in VRAM, random reads and writes would still be slow, because of the way a stream processor works. Reads and writes need to be laid out contiguously in memory for a GPU to do its best work.
If transferring data is the bottleneck, then companies start to integrate. But that has the downside of not being able to grow resources at different rates, so once transferring data is no longer the bottleneck, everyone runs to disaggregate. And then it repeats.
A lot of current ideas were actually used in the past, but the cycle phased them out, and they're making a comeback right now.
Does increased heat decrease the life of RAM?