Do I understand correctly that the three architectures for "AI-enabled hardware" (I couldn't come up with a better term) are the following?

1) separate CPU and GPU,

2) CPU with dedicated neural cores (e.g., an NPU),

3) GPU-like CPU (as in this post).

In the long term, is any of these architectures potentially preferable for a) training, b) inference?

(I am guessing CPU + GPU is not ideal for consumer-level inference because of GPU prices and space requirements, but I don't know much about hardware.)
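For concreteness, here is a minimal sketch (assuming PyTorch is installed) of how inference code typically picks among these backends at runtime. Note that "mps" runs on Apple silicon's integrated GPU; the dedicated neural cores of case 2 are usually reached through vendor runtimes such as Core ML rather than from PyTorch directly.

    import torch

    # Pick whichever backend is available, roughly mirroring the three
    # cases above; everything else falls back to the plain CPU.
    if torch.cuda.is_available():
        device = torch.device("cuda")  # case 1: separate CPU + GPU
    elif torch.backends.mps.is_available():
        device = torch.device("mps")   # Apple silicon's integrated GPU
    else:
        device = torch.device("cpu")   # CPU-only fallback

    # Tiny model and one inference pass on the chosen device.
    model = torch.nn.Linear(4, 2).to(device)
    x = torch.randn(1, 4, device=device)
    with torch.no_grad():
        y = model(x)
    print(device, tuple(y.shape))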
