Hacker News

People have been predicting companies like Intel and AMD overtaking Nvidia for a very long time now, and it's never panned out. This isn't to say that there can't be competition that can match or exceed Nvidia, but I don't think it's going to be any of the other old guard companies at this point. Especially not Intel. Every year I see articles trotted out claiming that Intel is making a comeback in general, and it never happens. Intel might buy some company thinking they can harvest the talent to compete with the likes of Nvidia or Arm, but their corporatism will ruin any talent they buy.



>People have been predicting companies like Intel and AMD overtaking Nvidia for a very long time now, and it's never panned out.

And I have been saying "drivers" for 10+ years. Anyone who has been through the 3Dfx Voodoo, S3, Matrox, ATI, PowerVR era should have known this, but somehow it keeps coming up. I still remember an Intel engineer once telling me they would be competitive no later than 2020/2021. We are now in 2023, and Intel's discrete GPU market share is still a single-digit rounding error. For additional context, Raja Koduri joined Intel in early 2018, and Intel has been working on discrete GPUs built on top of its IGP assets since 2015/2016.


Intel's grip on the CPU market was similarly one-sided not long ago. They had a three-year process lead over everyone else. Giants can falter.


Has there been any indication that nVidia will, or even can, falter in the foreseeable future?


Well, llama.cpp running on CPUs at decent speed, with fast development improvements, hints towards CPUs. There, the size of the model matters less because RAM is the limit. At least for inference, this is now a viable alternative.
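To make the "RAM is the limit" point concrete, here is a rough back-of-envelope sketch. The parameter counts, quantization level, and the 24 GiB/64 GiB capacities are illustrative assumptions, not measurements, and the formula ignores KV cache and runtime overhead:

```python
def weights_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB (ignores KV cache and overhead)."""
    return n_params * bits_per_weight / 8 / 2**30

# Hypothetical examples: a 7B and a 65B model at 4-bit quantization.
small = weights_gb(7e9, 4)    # roughly 3.3 GiB: fits almost anywhere
large = weights_gb(65e9, 4)   # roughly 30 GiB: exceeds a typical 24 GiB
                              # consumer GPU, but fits in 64 GiB of system RAM
print(f"7B @ 4-bit:  {small:.1f} GiB")
print(f"65B @ 4-bit: {large:.1f} GiB")
```

The takeaway is only that quantized weight footprints scale past consumer VRAM long before they scale past commodity system RAM, which is why CPU inference becomes attractive for large models.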


Outside of Macs, llama.cpp running fully on the CPU is more than 10x slower than on a GPU.


But having 32 real cores in a CPU is so much cheaper than having multiple GPUs. RAM is also much cheaper than VRAM.


For local use, yes, but at the data center level the parallelization is still often worth it.



