Do I understand correctly that the three architectures of "AI-enabled hardware" (I couldn't come up with a better term) are the following?
1) a separate CPU and GPU,
2) a CPU plus neural cores,
3) a GPU-like CPU (as in this post).
In the long term, is any of these architectures potentially preferable for a) training, b) inference?
(I'm guessing CPU + GPU is not ideal for consumer-level inference because of GPU prices and their space requirements, but I don't know much about hardware.)