> I’m old enough to remember when people said Apple was going to miss Machine Learning, and narratives are always easy to build when something’s gone wrong.
This "I'm old enough to remember the naysayers" rhetoric is so boring. "People" were right - Apple kicked Nvidia to the curb before shipping an equivalent GPGPU feature set of its own, and abandoned Khronos/OpenCL before it could compete. Apple's inference hardware is considered second-class even next to AMD's offerings and has zero industry buy-in outside Apple's own labs. Its NPU tensor-acceleration hardware is not meaningfully distinct from competitors'. Its compute shaders and proprietary frameworks are not a replacement for OpenCL. Buy-in on Mac-native acceleration is in freefall compared to yesteryear.
CUDA's success is an excellent opportunity for Apple to meaningfully reflect on what it means to ship capable software and hardware in the modern age. They will almost certainly squander it so they can focus on the high-margin consumer tech market instead of the less profitable demands of industry.