Ask HN: Replacing Nvidia GPU with AMD
19 points by textcortex 9 months ago | 10 comments
Dear community, I am currently exploring the feasibility of replacing Nvidia GPUs with AMD GPUs for LLM inference. Can anyone share their experience or point me to any relevant research on the performance differences between these GPUs for this task? Are there any particular hardware or software limitations that may affect the feasibility of such a switch? Thank you for your insights!



Since the amount of VRAM is low (24 GB max), one question to look into before investing might be whether there's support for chaining multiple cards together.


I would be interested in running inference on Instinct GPUs (MI250 with 128 GB), but I can't find any cloud provider to spin up a machine. It seems they are not yet available, or cloud providers are not interested in supporting AMD hardware.


Ollama just had AMD support merged. I haven't gotten it working with my 6700 XT eGPU yet, but I anticipate getting there soon.
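For consumer cards that ROCm doesn't officially support (the 6700 XT is gfx1031), a commonly reported workaround is overriding the GPU architecture the runtime detects before launching Ollama. A minimal sketch, assuming an RDNA2 card; the override value and model name are illustrative, so verify them against your own hardware:

```shell
# The 6700 XT (gfx1031) isn't on ROCm's official support list;
# telling the runtime to treat it as gfx1030 often works for RDNA2 cards.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Start the Ollama server in the background, then run a model for inference.
ollama serve &
ollama run llama2 "Explain GPU offloading in one sentence."
```

If the override doesn't take, Ollama falls back to CPU inference, so checking the server log for which device it picked up is a quick sanity test.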


Awesome! Looking into it. Thanks


You'll be fine for inference, but you'll probably struggle to run any large models requiring multi-GPU ops.



I've heard a lot of bad things about ROCm; I hope things have improved since then.


TGI has ROCm support.
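Text Generation Inference is typically deployed via Docker, and Hugging Face publishes a ROCm variant of the server image. A rough sketch of spinning it up on an AMD box; the image tag and model id below are examples, not a tested recipe, so check the TGI docs for the current ROCm tag:

```shell
# ROCm containers need the kernel driver and render devices passed through.
docker run --device=/dev/kfd --device=/dev/dri --group-add video \
    -p 8080:80 -v "$PWD/data:/data" \
    ghcr.io/huggingface/text-generation-inference:latest-rocm \
    --model-id mistralai/Mistral-7B-v0.1

# Once the server is up, inference goes through the usual HTTP API:
curl http://localhost:8080/generate \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is ROCm?", "parameters": {"max_new_tokens": 32}}'
```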


My understanding is that inference on the latest AMD cards (79*) is mostly fine, but training is still shaky.


Is there a clear business rationale?

Could that rationale be satisfied by increasing revenue instead, as an alternative to taking on the technical risks implicit in your question?

I mean, it is one thing if this is a hobby project and using AMD will provide interesting challenges to keep yourself occupied. How you spend your own time and money is entirely up to you.

But spending other people’s money is another thing entirely (unless they are mom and dad). And even more so spending other people’s time particularly when it comes to paychecks.

Finally, I am not saying there aren't business cases where AMD makes sense. Just that Nvidia is the default for good reasons and is the simplest thing that might work. Good luck.



