Is unified memory practical for a "normal" desktop/server configuration, though? Apple has been doing unified memory, but they also have the GPU on the same die as the CPU. I would be interested to know whether a discrete GPU plugged into a PCIe slot would introduce too much latency for unified memory to be practical.
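For what it's worth, CUDA already exposes something like this for discrete PCIe GPUs: cudaMallocManaged returns a single pointer valid on both host and device, and the driver migrates pages on demand, over the PCIe bus in the discrete case, which is exactly where the latency question bites. A minimal sketch (the kernel, names, and sizes are just illustrative; error checking omitted):

    // Managed (unified) memory with a discrete GPU: one pointer, shared by
    // CPU and GPU, with the driver paging data across PCIe on demand.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, int n, float factor) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main() {
        const int n = 1 << 20;
        float *data = nullptr;

        // Single allocation visible to both CPU and GPU; no explicit cudaMemcpy.
        cudaMallocManaged(&data, n * sizeof(float));

        for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU touch: pages resident on host

        scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU touch: pages migrate over PCIe
        cudaDeviceSynchronize();

        printf("data[0] = %f\n", data[0]);              // CPU touch: pages migrate back
        cudaFree(data);
        return 0;
    }

So the programming model works today; the practicality question is really about the cost of those page migrations compared with on-package memory like Apple's.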

It’s clearly not practical now, but that doesn’t mean it won’t be at some point.