Hacker News

> aren't most people using packages built on top of CUDA like pytorch etc?

Yes, and in fact both AMD and Intel have their own libraries. You can run Stable Diffusion and similar models on AMD GPUs today, apparently. And you can export models from most ML frameworks to run in the browser, on phones, and so on.

> If Intel is 10% slower but 50% cheaper [...] would that not be an enticing product?

Sometimes, yes. Some of the largest models apparently cost $600,000 in compute time to train [1], so halving that would be pretty appealing.

However, part of the reason for Nvidia's dominance is that if you're paying an ML engineer $160,000/year, spending $1,600 to give them an RTX 4090 is chump change.

[1] https://twitter.com/emostaque/status/1563870674111832066


