Hacker News

Qualcomm and Apple are pushing forward, so it's probably not a big issue. I'd rather say we're in a very interesting situation for CPUs right now (compared with 5-10 years ago).

GPUs, on the other hand, seem much more dominated by Nvidia. It does seem there are some competitors (AMD aside) lurking in the background, though (again, Qualcomm and Apple).

Interesting that the new contenders seem to take a very typical disruptive-innovation approach - moving from the lower end up to higher-end products (with one specific edge: power efficiency). Even more interesting, this concept (coined/popularized by Clayton Christensen) stems from research on the hard drive industry! Full circle!




I haven't been following closely enough to know what the obstacles are, but I wonder why some of these vendors don't experiment with wider system memory buses to increase bandwidth, now that these iGPUs are gaining more interesting cache and IO topologies.

My memory is that we used to see more variation, with single-, dual-, triple-, and quad-channel memory configurations in essentially prosumer/pro desktops. But lately, it seems like everyone has stuck to dual channel and minor variations in memory timings within the same DDR generations.
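To see why channel count matters here, peak bandwidth scales linearly with the number of channels: channels x bus width x transfer rate. A rough sketch (the DDR5-5600 rate is just an illustrative example, not a claim about any specific product):

```python
# Back-of-the-envelope peak memory bandwidth: channels x bus width x transfer rate.

def peak_bandwidth_gbs(channels: int, bus_width_bits: int, mega_transfers: int) -> float:
    """Theoretical peak bandwidth in GB/s (decimal gigabytes)."""
    return channels * (bus_width_bits / 8) * mega_transfers / 1000

# DDR5-5600 with 64-bit channels
dual = peak_bandwidth_gbs(2, 64, 5600)   # 89.6 GB/s
quad = peak_bandwidth_gbs(4, 64, 5600)   # 179.2 GB/s
print(f"dual channel: {dual:.1f} GB/s, quad channel: {quad:.1f} GB/s")
```

So going from dual to quad channel doubles the theoretical ceiling, which is exactly the kind of headroom a bandwidth-hungry iGPU would benefit from.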


iGPUs like the ones in PHX/MTL have to go into handhelds and ultrabooks, so they're going to be power- and thermally limited before the 2-4 MB of cache plus LPDDR5 becomes a major bottleneck.

Now if you can give the iGPU an 80 W power budget instead of maybe 15 W, that's a different story. But at that point you're competing with discrete GPUs that can show up with GDDR6, and maybe an iGPU doesn't make so much sense anymore.
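The bandwidth gap behind that comparison is large. A rough sketch, using illustrative configurations (a 128-bit LPDDR5-6400 laptop setup vs. a 256-bit GDDR6 card at 16 Gbps per pin; not tied to any specific SKU):

```python
# Peak bandwidth from bus width and per-pin data rate.

def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

lpddr5 = bandwidth_gbs(128, 6.4)   # 128-bit LPDDR5-6400 -> 102.4 GB/s
gddr6  = bandwidth_gbs(256, 16.0)  # 256-bit GDDR6 @ 16 Gbps -> 512.0 GB/s
print(f"LPDDR5: {lpddr5:.1f} GB/s, GDDR6: {gddr6:.1f} GB/s ({gddr6 / lpddr5:.0f}x)")
```

Roughly a 5x gap on these example numbers, which is why an 80 W iGPU fed by LPDDR5 would struggle against a discrete card in the same power class.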


Does the fact that the new GPU "chips" from Apple et al. are integrated, rather than PCIe expansion cards, give them more room for faster advancement than Nvidia, or is the fact that they started with a very mature GPU market as a guide the bigger boost? If integration is key, will Nvidia reach a point where they get beaten because they are external to the CPU?


Maybe, but is the interface a significant factor here? From what I understand, one of Nvidia's biggest advantages among GPU companies is their strong software stack (e.g. CUDA). That suggests Apple has a different kind of integration advantage (they probably need a much smaller software base because they control a lot of the other side of it).

I must say I don't see anyone outcompeting Nvidia anytime soon, though. I just don't see anyone having any significant edge. But, I didn't see Apple coming either, despite the iPhone showing clear signs of very exciting performance increases year after year. I guess I didn't think Apple was interested/thought it was worth it. After all, their market value (imo) stems not so much from a CPU advantage (very clearly it didn't just a few years ago, laptop-wise anyway). And it's risky - it could end up as it did before with PowerVR.


If Nvidia gets to a point where they have to integrate their GPUs to keep scaling, I don't see why they couldn't do it. They design their own SoC packages now, and have been integrating Nvidia IP into third-party chips for well over a decade. Once the need arises for "Nvidia Silicon" beyond what we see in Tegra and Grace, I wager we'll see it.

From a cynical perspective, Apple's focus on integrated graphics kinda kills any hope they had of scaling to Nvidia's datacenter market. I mean, just look at stuff like this: https://www.nvidia.com/en-us/data-center/gb200-nvl72/

  GB200 NVL72 connects 36 Grace CPUs and 72 Blackwell GPUs in a rack-scale design.
Tightly integrated packages do work wonders for low-power edge solutions, but they simply don't enable the sort of scaling Nvidia has pulled off, as fast as they've done it. CUDA is certainly one aspect of it, but Apple had the opportunity to build their own CUDA competitor with OpenCL through Khronos. The real reason Apple can't beat Nvidia is that Apple cannot write an attractive datacenter contract to save their life. Nvidia has been doing it for years, and is responding to market demand faster than Apple can realize their mistakes.



