AMD sold $1B of Instinct GPUs in 2Q, driving 3-digit datacenter growth (theregister.com)
25 points by nabla9 11 months ago | 12 comments



Good news for AMD is somehow 3x better news for Nvidia.

AMD up +4.36%

NVDA up +12.81%

> Nvidia's datacenter business is on track to post bigger quarterly revenue than all of AMD for a full year.


Kinda expected I think. If people are buying up all the "number 2" product, they must be really snapping up number 1.


Mostly because Jensen has a really weird, creepy personality cult around him, instead of actually being a good leader who uses the technical strengths of his company.

I'll buy an AMD GPU over an Nvidia one any day of the week until they push him out.


This is an insane statement. There's a reason Nvidia has been able to coast from one boom (crypto) to another (AI), and has bullets in the magazine for several other moonshots that are coming to fruition: autonomous vehicles (finally rolling out in San Francisco this year), robotics (Omniverse and Isaac), and even biotechnology!

It's because the CEO has made it a priority to invest in software and platforms like CUDA.

When the LLM AI hype dies down, we might merely jump over to robotics AI hype, and the imminent mass manufacture of robotaxis that each require a $30,000 Nvidia inference card inside!


Jensen did what AMD, Intel and Apple all refused to do: sponsor a working GPGPU library. The industry had multiple chances to compete with them on any number of terms, but they didn't. Now Nvidia can ride the demand all the way to the bank while their competitors still play grudge-matches with Khronos and each other.


He did a 'little' more: Nvidia sponsored CUDA in university curricula with free hardware and course material ~15 years ago, resulting in most parallel programming courses turning into Nvidia courses.


This is an incredibly insane thing to say given the dominance Nvidia has in AI and just general GPU sales. Nvidia eclipses both AMD and especially Intel in performance as well.


I can use Nvidia GPUs for building AI applications quite easily. I can spin up A100s and handle quite a bit of load. I can develop locally and know it'll work in production, and there's a vast ecosystem of models, libraries, tools, and more.

AMD, not so much. They might as well be selling rocks.
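
For example, this is roughly what "develop locally, deploy on an A100" looks like in practice (a minimal PyTorch/transformers sketch; the model name is just an illustrative placeholder):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # The same script runs unchanged on a laptop GPU and on a rented A100;
    # only torch.cuda.is_available() decides where it lands.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

    inputs = tok("Hello, world", return_tensors="pt").to(device)
    print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0]))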


Care to explain? Nvidia has CUDA, which really accelerates stuff and is widely used. You can deploy an LLM with one command on an Nvidia GPU.
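
The "one command" part is basically true with something like vLLM installed (a hedged sketch; the model name is just an example and has to fit in GPU memory):

    from vllm import LLM, SamplingParams

    # vLLM handles weight loading, batching and the GPU kernels;
    # this snippet is essentially the whole "deployment".
    llm = LLM(model="facebook/opt-125m")
    out = llm.generate(["The capital of France is"],
                       SamplingParams(max_tokens=16))
    print(out[0].outputs[0].text)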


Doing large language model inference on AMD MI300 GPUs is a no-brainer due to the increased RAM and lower cost, while Nvidia A100/H100 still dominates for training.
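
Back-of-the-envelope on why the RAM matters (rough numbers, fp16 weights only, ignoring KV cache and activations):

    # ~70B parameters at 2 bytes each
    weight_gb = 70e9 * 2 / 1e9      # ~140 GB of weights

    mi300x_hbm = 192                # GB of HBM3 on one MI300X
    h100_hbm = 80                   # GB on one H100 (SXM)

    print(weight_gb / mi300x_hbm)   # ~0.73 -> fits on a single MI300X
    print(weight_gb / h100_hbm)     # ~1.75 -> needs 2+ H100s before you even batch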


Does AMD have a library like TensorRT that optimizes inference with operator fusions and kernel optimizations though?

TensorRT can halve the compute for certain Nvidia inference workloads.
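
For reference, the usual PyTorch route looks roughly like this (a sketch assuming torch-tensorrt is installed; model and input shape are placeholders):

    import torch
    import torch_tensorrt
    import torchvision.models as models

    # Compiling hands the graph to TensorRT, which fuses ops
    # (conv+bn+relu etc.) and picks tuned kernels for the target GPU.
    model = models.resnet50(weights="DEFAULT").eval().cuda()
    trt_model = torch_tensorrt.compile(
        model,
        inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
        enabled_precisions={torch.half},   # allow fp16 kernels internally
    )
    out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))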


Yeah, their equivalent is MIGraphX. There's also support for Triton for custom kernels.
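
The Triton side is nice because the same kernel source can target both vendors (a minimal sketch; note that ROCm builds of PyTorch also report the device string as "cuda"):

    import torch
    import triton
    import triton.language as tl

    # Trivial elementwise-add kernel; Triton compiles it for the GPU it finds.
    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        offs = tl.program_id(0) * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n
        x = tl.load(x_ptr + offs, mask=mask)
        y = tl.load(y_ptr + offs, mask=mask)
        tl.store(out_ptr + offs, x + y, mask=mask)

    x = torch.rand(4096, device="cuda")
    y = torch.rand(4096, device="cuda")
    out = torch.empty_like(x)
    add_kernel[(triton.cdiv(4096, 1024),)](x, y, out, 4096, BLOCK=1024)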



