Hey Jeff, I loved the video. I know it was more about "current state of the tech" and less about what we should actually be buying, but it would be very cool to hear more about how each of these setups is priced on the scale of "dollars per unit of performance" or something like that. (Or maybe that's not fair to do, since most consumer software can't handle all those cores yet?)

I'm also curious whether you think Apple's memory-architecture decisions (despite being non-upgradeable) will give it a leg up in the long run. You mentioned that memory bandwidth tops out around 174 GB/s. Although you handily beat the Mac Pro in a multicore benchmark thanks to core count, one of the Mac Pro's claims to fame is its 800 GB/s of memory bandwidth, along with its unified memory architecture.
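
For what it's worth, numbers like that usually come from a STREAM-style sustained-bandwidth test. Here's a minimal triad sketch of that kind of measurement (not the benchmark from the video; the array size and OpenMP threading are assumptions for illustration):

    /*
     * Minimal STREAM-style triad: estimates sustained memory bandwidth.
     * Build with something like: gcc -O3 -fopenmp stream_triad.c
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define N (1UL << 26)   /* 64M doubles per array, ~512 MiB each */

    int main(void)
    {
        double *a = malloc(N * sizeof(double));
        double *b = malloc(N * sizeof(double));
        double *c = malloc(N * sizeof(double));
        if (!a || !b || !c) return 1;

        /* Touch every page first so allocation isn't part of the timing. */
        #pragma omp parallel for
        for (size_t i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

        double t0 = omp_get_wtime();
        #pragma omp parallel for
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];   /* triad: two reads, one write per element */
        double t1 = omp_get_wtime();

        double bytes = 3.0 * N * sizeof(double);   /* total traffic moved */
        printf("~%.1f GB/s sustained\n", bytes / (t1 - t0) / 1e9);

        free(a); free(b); free(c);
        return 0;
    }

The measured figure typically lands below the theoretical peak in the spec sheet, but it's the kind of number that makes a 174 GB/s vs. 800 GB/s gap visible in practice.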

As with all things, it's a tradeoff. HBM on servers is similar to Apple's choice, and Xeon, EPYC, Nvidia H100, and some other designs incorporate it. There are good things (performance) and bad things (price, non-upgradeability) about it. The best of both worlds would be chip-on-module memory plus expansion slots, so the fast RAM acts like an L4 cache.
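
To make the "L4 cache" framing concrete: on machines that expose the on-package memory and the expansion DIMMs (or CXL memory) as separate NUMA nodes, software can already steer hot data to the fast tier and let bulk data live on the big one. A rough libnuma sketch, where the node numbers are assumptions (check numactl --hardware on a real box):

    /* Tiered placement sketch using libnuma; link with -lnuma. */
    #include <stdio.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }

        const int fast_node = 0;   /* assumed: HBM / on-package tier */
        const int big_node  = 1;   /* assumed: DIMM or CXL capacity tier */

        size_t hot_sz  = 64UL << 20;   /* 64 MiB of latency-sensitive data */
        size_t cold_sz = 1UL  << 30;   /* 1 GiB of bulk data */

        double *hot  = numa_alloc_onnode(hot_sz,  fast_node);
        double *cold = numa_alloc_onnode(cold_sz, big_node);
        if (!hot || !cold) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }

        printf("hot set on node %d, bulk data on node %d (%d nodes total)\n",
               fast_node, big_node, numa_max_node() + 1);

        numa_free(hot, hot_sz);
        numa_free(cold, cold_sz);
        return 0;
    }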

That's essentially how things are likely to go with CXL, though the latency isn't likely to be quite as good as on a dedicated DIMM connection, or even as "good" as it was with IBM's OMI. The future (imminent in the enterprise, and arriving on consumer machines sometime around when PCIe 6.0 does) looks to be mostly a combination of HBM and CXL memory.
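
On Linux today, CXL-attached memory generally shows up as a CPU-less NUMA node with a larger SLIT distance than local DRAM, so you can already see roughly where each tier sits from userspace. A small libnuma sketch along those lines (how to interpret the specific distance values is an assumption):

    /* Walk the NUMA nodes and flag memory-only ones; link with -lnuma. */
    #include <stdio.h>
    #include <numa.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support\n");
            return 1;
        }

        int max = numa_max_node();
        struct bitmask *cpus = numa_allocate_cpumask();

        for (int node = 0; node <= max; node++) {
            long long free_b;
            long long size_b = numa_node_size64(node, &free_b);
            if (size_b < 0)
                continue;   /* node not present */

            numa_node_to_cpus(node, cpus);
            int memory_only = (numa_bitmask_weight(cpus) == 0);

            printf("node %d: %lld MiB, %s, distance from node 0 = %d\n",
                   node, size_b >> 20,
                   memory_only ? "memory-only (possibly CXL or HBM)" : "has CPUs",
                   numa_distance(0, node));
        }

        numa_free_cpumask(cpus);
        return 0;
    }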
