EDRAM was very different, and IBM still uses it today. It is DRAM that is on the same die as the cores, which makes it far slower than AMD's V-Cache (which is SRAM). The concept is the same - putting a huge amount of cache on-chip - but it's a very different solution.
The Intel solution was also not 3D stacked. It's a little like having an HBM stack next to the chip as a cache.
> It is DRAM that is on the same die as the cores,
From the article, it was actually a separate die/chiplet:
> Broadwell implemented its L4 cache on a separate 77mm² die, creating a chiplet configuration. This cache die was codenamed “Crystal Well”, and was fabricated using the older 22nm process.
Lots of interesting details in the article about how wildly different this DRAM is, made to go fast, fast, fast. Fun read.
I'd really wanted a system with Crystal Well; it seemed so cool. A lot of Macs seemed to have the Intel Iris Pro models that included it, but general adoption in the PC market was - I feel - quite poor.
On mainframes, the z14's drawer controller (one per drawer, each controlling four CPU sockets) had a huge amount of eDRAM acting as an L4 cache for all cores in that drawer.
The e in eDRAM doesn't really mean DRAM embedded in the CPU; it means DRAM embedded in a logic process.
Modern DRAM chips are manufactured in an entirely different way, on a very different process than logic chips, and the two manufacturing processes are incompatible. eDRAM was a very different implementation of DRAM that could be manufactured on a logic process.
The difference between eDRAM and DRAM is not just that eDRAM is closer; it was also typically dramatically faster, but it had a shorter retention period, requiring more frequent refreshes.
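To get a feel for what that shorter retention period costs, here's a back-of-the-envelope sketch in Python. All the retention times, row counts, and refresh latencies are illustrative assumptions, not datasheet values for any real part:

```python
# Back-of-the-envelope refresh overhead: the fraction of time a memory
# array is busy refreshing instead of serving requests.
# All numbers below are illustrative assumptions, not datasheet values.

def refresh_overhead(retention_s, rows_per_bank, row_refresh_s):
    """Every row must be rewritten once per retention period, and each
    row refresh occupies the bank for row_refresh_s seconds."""
    return (rows_per_bank * row_refresh_s) / retention_s

# Commodity DRAM: ~64 ms retention is the common JEDEC figure;
# the 8192 rows and 50 ns per-row refresh are assumed for illustration.
dram = refresh_overhead(retention_s=64e-3, rows_per_bank=8192,
                        row_refresh_s=50e-9)

# eDRAM: assume retention orders of magnitude shorter (~100 us), but a
# much faster per-row refresh and smaller banks on the logic process.
edram = refresh_overhead(retention_s=100e-6, rows_per_bank=128,
                         row_refresh_s=2e-9)

print(f"DRAM refresh overhead:  {dram:.3%}")   # ~0.640%
print(f"eDRAM refresh overhead: {edram:.3%}")  # ~0.256%
```

As the sketch suggests, a much shorter retention period can still be workable when each refresh is correspondingly faster.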
I was confused by the use of "EDRAM" vs. "eDRAM" here and by the HN capitalization of the original article.
The EDRAM I'm familiar with, by a company called Ramtron and later Enhanced Memory Systems, seems to be largely lost to history. It's discussed in this relatively recent presentation, see slide 16 onward: https://site.ieee.org/pikespeak/files/2020/08/Silcon-Mountai...
I remember reading about IBM's usage of eDRAM for cache when they first used it. Their analysis showed that for their server workloads the number one thing was to keep the cores busy, and that a lot of slower eDRAM worked better than a smaller amount of faster SRAM.
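That trade-off is easy to sanity-check with an average-memory-access-time calculation. The hit rates and latencies below are made-up illustrative numbers, not IBM's figures; the sketch just shows how a bigger-but-slower cache can win once the alternative is missing all the way to main memory:

```python
# Average memory access time (AMAT) sketch: a large, slower eDRAM cache
# vs. a small, faster SRAM cache in front of the same main memory.
# All latencies and hit rates are illustrative assumptions.

def amat(hit_time_ns, hit_rate, miss_penalty_ns):
    """AMAT = hit time + miss rate * miss penalty."""
    return hit_time_ns + (1.0 - hit_rate) * miss_penalty_ns

MAIN_MEMORY_NS = 90.0  # assumed penalty for going out to main memory

# Small SRAM cache: fast hits, but a server-sized working set misses often.
small_sram = amat(hit_time_ns=5.0, hit_rate=0.70,
                  miss_penalty_ns=MAIN_MEMORY_NS)

# Large eDRAM cache: slower hits, but the extra capacity catches far more.
large_edram = amat(hit_time_ns=15.0, hit_rate=0.95,
                   miss_penalty_ns=MAIN_MEMORY_NS)

print(f"small fast SRAM cache:  {small_sram:.1f} ns")   # 5 + 0.30*90 = 32.0
print(f"large slow eDRAM cache: {large_edram:.1f} ns")  # 15 + 0.05*90 = 19.5
```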
It was, at least, pretty good for video gaming: https://web.archive.org/web/20181025222235/https://techrepor... though, alas, it doesn't look like the Wayback Machine properly sucked up the whole article before the site got sold to particularly nasty link/content farmers.
My previous laptop, which I bought in 2016, had an Intel Skylake processor with 64MB of eDRAM. Specifically, it was based on the Intel Core i3-6157U: https://ark.intel.com/content/www/us/en/ark/products/96484/i... The laptop used DDR3L memory; I installed 16GB. It was pretty fast despite being small, light, and having only 2 CPU cores.
I think the main reason chips like that didn't take off was marketing. Laptop OEMs tell consumers that if they want performant graphics they must buy laptops with discrete GPUs, despite those being expensive, heavy, and quicker to discharge batteries.