
And they will spend more time shuffling data through their horribly bottlenecked memory interfaces than the CPU would take to handle the matrix operation itself, especially if the GPU is already busy with graphics.
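A quick way to convince yourself is to time the host/device round trip against a plain CPU matmul. A minimal sketch below (C++ with the CUDA runtime; the matrix size and the naive single-threaded loop are illustrative assumptions, not a benchmark): for small enough matrices the copy plus synchronization can rival or exceed the CPU's compute time.

    // Minimal sketch: compare PCIe round-trip time against a naive
    // single-threaded CPU matmul. N is an illustrative assumption.
    #include <cuda_runtime.h>
    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 256;  // small matrix, where transfer overhead bites
        const size_t bytes = (size_t)N * N * sizeof(float);
        std::vector<float> a(N * N, 1.0f), b(N * N, 1.0f), c(N * N, 0.0f);

        // Time only the host<->device copies; no kernel runs at all.
        float *d_a;
        cudaMalloc(&d_a, bytes);
        auto t0 = std::chrono::steady_clock::now();
        cudaMemcpy(d_a, a.data(), bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(a.data(), d_a, bytes, cudaMemcpyDeviceToHost);
        cudaDeviceSynchronize();
        auto t1 = std::chrono::steady_clock::now();

        // Naive CPU matmul on the same data, for comparison.
        for (int i = 0; i < N; ++i)
            for (int k = 0; k < N; ++k)
                for (int j = 0; j < N; ++j)
                    c[i * N + j] += a[i * N + k] * b[k * N + j];
        auto t2 = std::chrono::steady_clock::now();

        using us = std::chrono::microseconds;
        printf("round-trip copy: %lld us, cpu matmul: %lld us\n",
               (long long)std::chrono::duration_cast<us>(t1 - t0).count(),
               (long long)std::chrono::duration_cast<us>(t2 - t1).count());
        cudaFree(d_a);
        return 0;
    }

Whether the copy or the compute wins depends on the matrix size and the bus, which is exactly the point: the crossover is not where GPU enthusiasts assume it is.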

Multi-core systems, and tools that can use them, are valuable and very much a reality right now.
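And exploiting them needs nothing more exotic than the standard library. A minimal sketch, assuming plain C++11 std::thread and a row-partitioned matrix-vector product (sizes are illustrative):

    // Minimal sketch: split a matrix-vector product across hardware
    // threads with std::thread; no GPU, no vendor toolkit.
    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        const int N = 4096;
        std::vector<float> m((size_t)N * N, 1.0f), x(N, 1.0f), y(N, 0.0f);

        unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> workers;
        for (unsigned t = 0; t < nthreads; ++t) {
            workers.emplace_back([&, t] {
                // Each worker owns a contiguous block of rows, so no locks
                // are needed: writes to y never overlap between threads.
                for (int i = (int)(t * N / nthreads);
                     i < (int)((t + 1) * N / nthreads); ++i) {
                    float acc = 0.0f;
                    for (int j = 0; j < N; ++j) acc += m[(size_t)i * N + j] * x[j];
                    y[i] = acc;
                }
            });
        }
        for (auto& w : workers) w.join();
        printf("y[0] = %f\n", y[0]);  // expect 4096 with all-ones inputs
        return 0;
    }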




Yep, CUDA and Vulkan Compute are such a waste in HPC; thankfully ML research is doubling down on threads.


You were the one who introduced GPUs on low-end embedded chips as an argument. HPC is a slightly different world.


Nah, you're the one who moved the goalposts after my first answer.


That was my first reply to you today...

As much as I often appreciate your no-nonsense perspective on things, and your willingness to post it, things like this don't really do you any favors.

EDIT: tweaked last line to be a bit less snippy


I don't always look at nicks.



