The problem with choosing Nvidia is that they can't make an x86 processor with an integrated GPU. If you're looking to maintain backward compatibility with the Playstation 5, you're probably going to want to stick with an x86 chip. AMD has the rights to make x86 chips and it has the graphics chips to integrate.
Nvidia has graphics chips, but it doesn't have the CPUs. Yes, Nvidia can make ARM CPUs, but they haven't been putting out amazing custom cores.
AMD can simply repackage some Zen X cores with an RDNA X GPU and, with a little work, have something Sony can use. Nvidia would need to either grab off-the-shelf ARM Cortex cores (like most of their ARM CPUs use) or Sony would need to bet that Nvidia could and would give them leading-edge performance on custom designed cores. But would Nvidia come in at a price that Sony would pay? Probably not. AMD's costs are probably a lot lower since they're going to be doing all that CPU work anyway for the rest of their business.
For Nintendo, the calculus is a bit different. Nintendo is fine with off-the-shelf cores that are less powerful than smartphones and they're already on ARM so there's no backward incompatibility there. But for Sony whose business is different, it'd be a huge gamble.
I think changing from AMD GPUs to Nvidia GPUs by itself has a good chance of breaking backwards compatibility with how low level and custom Sony's GPU API apparently is, so the CPU core architecture would just be a secondary concern.
I was not saying Sony should switch to Nvidia, just pointing out that it is objectively incorrect to say that AMD is the only option for consoles when the most popular console today does not rely on AMD.
I also fully believe Intel could scale up an integrated Battlemage to meet Sony's needs, but is it worth the break in compatibility? Is it worth the added risk when Intel's 13th and 14th gen CPUs have had such publicly documented stability issues? I believe the answer to both questions is "probably not."
> incorrect to say that AMD is the only option for consoles
It's a bit of an apples to oranges comparison though, even if all 3 devices are technically consoles. The Switch is basically a tablet with controllers attached and a tablet/phone CPU, while the PS5/Xbox are just custom-built PCs.
The only reason I can see that it would matter that the Switch is a low-end console is if you think Nvidia is incapable of building something higher end. Are you saying that Nvidia couldn't make more powerful hardware for a high end console? Otherwise, the Switch just demonstrates to me that Nvidia is willing to form the right partnership, and reliably supply the same chips for long periods of time.
I'm certain Nvidia would have no trouble doing a high end console, customized to Microsoft and/or Sony's exacting specs... for the right price.
> Are you saying that Nvidia couldn't make more powerful hardware for a high end console?
Hard to say. It took Qualcomm years to make something that was superior to standard ARM designs. The GPU is of course another matter.
> I'm certain Nvidia would have no trouble doing a high end console,
The last mobile/consumer CPU (based on their own core) that they released came out in 2015, and they have been using off-the-shelf ARM core designs for their embedded and server stuff. Wouldn't they effectively be starting from scratch?
I'm sure they could achieve that in a few years, but do you think it would take them significantly less time than it did Apple or Qualcomm?
> Nvidia is incapable of building something higher end
I think it depends more on what Nintendo is willing to pay, I doubt they really want a "high-end" chip.
> I think it depends more on what Nintendo is willing to pay, I doubt they really want a "high-end" chip.
In this thread, we were talking about what Sony and Microsoft would want for successors to the PS5 and XSX, not Nintendo. Nintendo was just a convenient demonstration that Nvidia is clearly willing to partner with console makers like Sony and Microsoft.
> Hard to say. It took Qualcomm years to make something that was superior to standard ARM designs.
> The last mobile CPU
I wasn't talking about Nvidia custom designing an ARM core, although they have done that in the past, and again, this wouldn't be mobile hardware. Nvidia is using very powerful ARM cores in their Grace CPU today. They have plenty of experience with the off-the-shelf ARM cores, which are very likely good enough for modern consoles.
> Nvidia is using very powerful ARM cores in their Grace CPU today
I'm not sure Neoverse is particularly (or at all) suitable for gaming consoles. Having 60+ cores wouldn't be particularly useful and their single core performance is pretty horrible (by design).
> which are very likely good enough for modern consoles
Are they? The Cortex-X4 has barely caught up with Apple's M1 (from 2020). What other options are there? ARM just doesn't seem to care that much about the laptop/desktop market at all.
The Neoverse cores are substantially more powerful than something like Cortex-X4. Why would they not be suitable? It's hard to find benchmarks that are apples-to-apples in tests that would be relevant for gaming, but what little I've been able to find shows that the Neoverse V2 cores in Nvidia's Grace CPU are competitive against AMD's CPUs. I hate to draw specific comparisons, because it's very easy to attack when, as I already said, the numbers are hard to come by, but I'm seeing probably 20% better than Zen 3 on a clock-for-clock, single core basis. The current-generation PS5 and XSX are based on Zen 2. Zen 3 was already a 10% to 30% jump in IPC over Zen 2, depending on who you ask. Any hypothetical Nvidia-led SoC design for a next-gen console would be pulling in cores like the Neoverse V3 cores that have been announced, and are supposedly another 15% to 20% better than Neoverse V2, or even Neoverse V4 cores which might be available in time for the next-gen consoles.
These gains add up to be substantial over the current-gen consoles, and as an armchair console designer, I don't see how you can be so confident they wouldn't be good enough.
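To put rough numbers on that, here's the compounding of the figures I quoted above, taking midpoints of the ranges (purely illustrative back-of-the-envelope math, not benchmark data, and real IPC comparisons are workload-dependent):

```python
# Back-of-the-envelope compounding of the percentages quoted above
# (midpoints, purely illustrative -- not benchmark data).
zen2 = 1.00          # Zen 2 (current PS5/XSX) as the baseline
zen3 = zen2 * 1.20   # Zen 3: ~10-30% IPC over Zen 2, take ~20%
v2   = zen3 * 1.20   # Neoverse V2: ~20% over Zen 3 clock-for-clock (my rough reading)
v3   = v2 * 1.175    # Neoverse V3: claimed 15-20% over V2, take ~17.5%

print(f"Neoverse V2 vs Zen 2: ~{v2:.2f}x per clock")  # ~1.44x
print(f"Neoverse V3 vs Zen 2: ~{v3:.2f}x per clock")  # ~1.69x
```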
The CPU cores Nvidia has access to seem more than sufficient, and the GPU would be exceptional. AMD is clearly not the only one capable of providing hardware for consoles. Nvidia has done it, will do it again, and the evidence suggests Nvidia could certainly scale up to much bigger consoles if needed.

One problem is certainly that Nvidia is making bank off of AI at the moment, and doesn't need to vie for the attention of console makers right now, so they aren't offering any good deals to those OEMs. The other problem is that console makers also don't want any break in compatibility. I've already addressed these problems in previous comments.

It's just incorrect to say that the console makers have no other choices. They're just happy with what AMD is offering, and making the choice to stick with that. Nintendo will be happy using hardware made on a previous process node, so it won't interfere with Nvidia's plan to make insane money off of AI chips the way that next-gen console expectations from Sony or Microsoft would. I'm happy to admit that I'm being speculative in the reasons behind these things, but there seem to be enough facts to support the basic assertion that AMD is not the only option, which is what this sub-thread is about.
Since you seem so confident in your assertions, I assume you have good sources to back up the claim that Neoverse V2/V3/V4 wouldn't be suitable for gaming consoles?
> Nvidia's Grace CPU are competitive against AMD's CPUs
I don't think PS/Xbox are using AMD's 64+ core server chips like Milan, etc.
> I assume you have good sources to back up the claim that Neoverse V2/V3/V4
These are data center CPUs designed for very different purposes. Neoverse is only used in chips that target very specific, highly parallelized workloads. The point is having a very high number (64-128+) of relatively slow but power-efficient cores and extremely high bandwidth.
e.g. Grace has single-thread performance comparable to a Ryzen 7 3700X (a 5-year-old chip). Sure, MT performance is 10x better, but how does that matter for gaming workloads?
I assume you could boost the frequency and build an SoC with several times fewer cores than all recent Neoverse chips (if ARM lets you). Nobody has done that or publicly considered doing it. I can't prove that it's impossible, but can you provide any specific arguments for why you think that would be a practical approach?
> substantially more powerful than something like Cortex-X4.
Of course it's just rumors, but Nvidia seems to be going with the ARM Cortex-A78C, which is a tier below the X4. That's not particularly surprising, since Nintendo would rather spend money on other components / target a lower price point. As we've agreed, the GPU is the important part here; the CPU will probably be comparable to an off-the-shelf SoC you can get from Qualcomm or even MediaTek.
That might change in the future, but I don't see any evidence that Nvidia is somehow particularly good at building CPUs or is close to being in the same tier as AMD, Intel, or Qualcomm (maybe even Ampere, depending on whether they finally deliver what they have been promising in the near future).
The same applies to Grace: the whole selling point is integration with their datacenter GPUs. For CPU workloads it provides pretty atrocious price/performance, and it would make little sense to buy it for that.
Emulating x86 would be an option - though given Sony's history, I doubt they'd consider it seriously.
For context...
- PS1 BC on PS2 was mostly hardware but they (AFAIK?) had to write some code to translate PS1 GPU commands to the PS2 GS. That's why you could forcibly enable bilinear filtering on PS1 games. Later on they got rid of the PS1 CPU / "IO processor" and replaced it with a PPC chip ("Deckard") running a MIPS emulator.
- PS1 BC on PS3 was entirely software; though the Deckard PS2s make this not entirely unprecedented. Sony had already written POPS for PS1 downloads on PS2 BBN[0] and PSP's PS1 Classics, so they knew how to emulate a PS1.
- PS2 BC on PS3 was a nightmare. Originally it was all hardware[1], but then they dropped the EE+GS combo chip and went to GPU emulation, then they dropped the PS2 CPU entirely and all backwards compatibility with it. Then they actually wrote a PS2 emulator anyway, which is part of the firmware, but only allowed to be used with PS2 Classics and not as BC. I guess they consider the purchase price of the games on the shop to also pay for the emulator?
- No BC was attempted on PS4 at all, AFAIK. The PS3 is a weird basket case of an architecture, but even PS1 or PS2 BC isn't supported.
At some point Sony gave up on software emulation and decided it's only worth it for retro re-releases where they can carefully control what games run on the emulator and, more importantly, charge you for each re-release. At least the PS4 versions will still play on a PS5... and PS6... right?
[0] A Japan-only PS2 application that served as a replacement for the built-in OSD and let you connect to and download software demos, game trailers, and so on. Also has an e-mail client.
[1] Or at least as "all hardware" as the Deckard PS2s are
> Then they actually wrote a PS2 emulator anyway, which is part of the firmware, but only allowed to be used with PS2 Classics and not as BC.
To be fair, IMO that was only 80-90% of a money grab; "you can now run old physical PS2 games, but only these 30% of our catalog" being a weird selling point was probably also a consideration.
> Sony had already written POPS for PS1 downloads on PS2 BBN[0] and PSP's PS1 Classics, so they knew how to emulate a PS1.
POPS on the PSP runs large parts of the code directly on the R4000 without translation/interpretation, right? I'd call this one closer to what they did for PS1 games on the (early/non-Deckard) PS2s.
> No BC was attempted on PS4 at all, AFAIK. PS3 is a weird basketcase of an architecture, but even PS1 or PS2 aren't BC supported.
To Be Faiiiirrrrrr, that whole generation was a basket case. Nintendo with the motion controls. Microsoft with a console that internally was more PC than "traditional" console (and HD-DVD). Sony with the Cell processor and OtherOS™.
I do have fond memories of playing around with Linux on the PS3. Two simultaneous threads! 6 more almost cores!! That's practically a supercomputer!!!
I remember the hype around cell processors being so high around the release of the PlayStation 3. It was novel for the application, but still fizzled out even with the backing it had.
I'll try to answer in the parent commenter's place.
Prior generations of consoles were true-blue, capital-E "embedded". Whatever CPU they could get, graphics hardware that was custom built for that particular machine, and all sorts of weird coprocessors and quirks. For example, in the last generation, we had...
- The PlayStation 2, sporting a CPU with an almost[0] MIPS-compatible core with "vertex units", one of which is exposed to software as a custom MIPS coprocessor, a completely custom GPU architecture, a separate I/O processor that's also a PS1, custom sound mixing hardware, etc.
- The GameCube, sporting a PPC 750 with custom cache management and vector instructions[1], which you might know as the PowerPC G3 that you had in your iMac. The GPU is "ATI technology", but that's because ATI bought out the other company Nintendo contracted to make it, ArtX. And it also has custom audio hardware that runs on another chip with its own memory.
- The Xbox, sporting... an Intel Celeron and an Nvidia GPU. Oh, wait, that's "just a PC".
Original Xbox is actually a good way to draw some red lines here, because while it is in some respects "just a PC", it's built a lot more like consoles are. All games run in Ring 0, and are very tightly coupled to the individual quirks of the system software. The "Nvidia GPU" is an NV2A, a custom design that Nvidia built specifically for the Xbox. Which itself has custom audio mixing and security hardware you would never find in a PC.
In contrast, while Xbox 360 and PS3 both were stuck with PPC[2], they also both had real operating system software that commercial games were expected to coexist with. On Xbox 360, there's a hypervisor that enforces strict code signing; on PS3 games additionally run in user mode. The existence of these OSes meant that system software could be updated in nontrivial ways, and the system software could do some amount of multitasking, like playing music alongside a game without degrading performance or crashing it. Y'know, like you can on a PC.
Contrast this again to the Nintendo Wii, which stuck with the PPC 750 and ArtX GPU, adding on a security processor designed by BroadOn[3] to do very rudimentary DRM. About the only thing Nintendo could sanely update without bricking systems was the Wii Menu, which is why we were able to get the little clock at the bottom of the screen. They couldn't, say, run disc games off the SD card or update the HOME Menu to have a music player or friends list or whatever, because the former runs in a security processor that exposes the SD card as a block device and the latter is a library Nintendo embedded into every game binary rather than a separate process with dedicated CPU time budgets.
And then the generation after that, Xbox One and PS4 both moved to AMD semicustom designs that had x86 CPUs and Radeon GPUs behind familiar APIs. They're so PC like that the first thing demoed on a hacked PS4 was running Steam and Portal. The Wii U was still kind of "console-like", but even that had an OS running on the actual application processor (albeit one of those weird designs with fixed process partitions like something written for a mainframe). And that got replaced with the Switch which has a proper microkernel operating system running on an Nvidia Tegra SoC that might have even wound up in an Android phone at some point!
Ok, that's "phone-like", not "PC-like", but the differences in systems design philosophy between the two is far smaller than the huge gulf between either of those and oldschool console / embedded systems.
[0] PS2 floating-point is NOWHERE NEAR IEEE standard, and games targeting PS2 tended to have lots of fun physics bugs on other hardware. Case in point: the Dolphin wiki article for True Crime: New York City, which is just a list of bugs the emulator isn't causing. https://wiki.dolphin-emu.org/index.php?title=True_Crime:_New...
[1] The PPC 750 doesn't have vector instructions normally; IBM added a set of "paired single" instructions that let it do math on two 32-bit floats packed into a 64-bit float register.
[2] Right after Apple ditched it for power reasons, which totally would not blow up in Microsoft's face
[3] Which coincidentally was founded by the same ex-SGI guy (Wei Yen) who founded ArtX, and ran DRM software ported from another Wei Yen founded company - iQue.
Considering how the winds are blowing, I'm going to guess the next consoles from Sony and Microsoft are the last ones to use x86. They'll be forced to switch to ARM for price/performance reasons, with all x86 vendors moving upmarket to try and maintain revenues.
> Nvidia has graphics chips, but it doesn't have the CPUs. Yes, Nvidia can make ARM CPUs, but they haven't been putting out amazing custom cores.
Ignorant question - do they have to? The last time I was up on gaming hardware it seemed as though most workloads were GPU-bound and that having a higher-end GPU was more important than having a blazing fast CPU. GPUs have also grown much more flexible rendering pipelines as game engines have gotten much more sophisticated and, presumably, parallelized. Would it not make sense for Nvidia to crank out a cost-optimized design comprising their last-gen GPU architecture and 12 ARM cores on an affordable process node?
The reason I ask is because I've been reading a lot about 90s console architectures recently. My understanding is that back then the CPU and specialized co-processors had to do a lot of heavy lifting on geometry calculations before telling the display hardware what to draw. In contrast, I think most contemporary GPU designs take care of all of the vertex calculations themselves and therefore free the CPU up a lot in this regard. If you have an entity-based game engine and are able to split that object graph into well-defined clusters, you can probably parallelize the simulation and scale horizontally decently well. Given these trends, I'd think a bunch of cheaper cores could work about as well as fewer higher-end ones, and for less money.
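To make the kind of parallelization I mean concrete, here's a toy sketch. It's purely illustrative: the entity/cluster names are made up, and a real engine would be doing this with a C++ job system rather than Python threads.

```python
# Toy sketch of the "split the entity graph into clusters and fan out" idea.
# Illustrative only -- names are invented, not from any real engine.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class Entity:
    x: float = 0.0
    vx: float = 1.0

def update_cluster(cluster, dt):
    # Each cluster only touches its own entities, so clusters can run on
    # separate (cheap) cores without locking.
    for e in cluster:
        e.x += e.vx * dt

def simulate_frame(clusters, dt, workers=12):  # 12 modest cores, per the question above
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda c: update_cluster(c, dt), clusters))

clusters = [[Entity() for _ in range(1000)] for _ in range(12)]
simulate_frame(clusters, dt=1 / 60)
```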
I think a PS6 needs to play PS5 games, or Sony will have a hard time selling them until the PS6 catalog is big; and they'll have a hard time getting 3rd party developers if they're going to have a hard time with console sales. I don't think you're going to play existing PS5 games on an ARM CPU unless it's an "amazing" core. Apple does pretty well at running x86 code on their CPUs, but they added special modes to make it work, and I don't know how timing-sensitive PS5 games are --- when there's only a handful of hardware variants, you can easily end up with tricky timing requirements.
The first year of PS4 was pretty dry because of the lack of BC; It really helped that the competition was the Xbox One, which was less appealing for a lot of reasons
At this point people have loved the PS5 and Xbox Series for having full backwards compatibility. The Xbox goes even further through software. People liked the Wii’s backwards compatibility and the Wii U (for those who had it).
And Nintendo’s long chain of BC from the GB to the 3DS (though eventually dropping GB/GBC) was legendary.
The Switch was such a leap over the 3DS and Wii U that Nintendo got away with it. It's had such a long life that the lack of BC could be a huge hit if the Switch 2 didn't have it.
I think all three intend to try to keep it going forward at this point.
Which is also the reason why many games on PS5 and Xbox Series are kind of lame, as studios want to keep PS4 and XBone gamers in the sales loop, and why the PS5 Pro is more of a scam kind of thing for hardcore fans who will buy anything that a console vendor puts out.
One data point: there was no chip shortage at the PS4 launch, but I still waited more than a year to get one because there was little to play on it.
While with the PS5 I got one as soon as I could (that still took more than a year since launch, but for chip shortage reasons) because I knew I could simply replace the PS4 with it under the TV and carry on.
We're not in 2012 anymore. Modern players don't just want a clean break to play new AAA games every month; they also want access to a large indie marketplace, the games they play every day, and better performance in the games they already have.
PS5 had Zen 2 which was fairly new at the time. If PS6 targets 120 fps they'll want a CPU that's double the performance of Zen 2 per thread. You could definitely achieve this with ARM but I'm not sure how new of an ARM core you would need.
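The "double the performance" figure falls out of the frame budget, roughly (back-of-the-envelope, ignoring that plenty of frames are GPU-bound):

```python
# Rough frame-budget arithmetic: halving the frame time halves the CPU time
# available per frame, so the same per-frame workload needs ~2x the
# per-thread throughput.
for fps in (30, 60, 120):
    print(f"{fps:>3} fps -> {1000 / fps:.1f} ms of CPU budget per frame")
# 60 fps -> 16.7 ms, 120 fps -> 8.3 ms
```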
You say that, but you can absolutely notice. Motion is smoother, the picture is clearer (higher temporal resolution), and input latency is half what it is at 60.
Does every game need it? Absolutely not. But high-speed action games and driving games can definitely benefit. Maybe others. There’s a reason the PC world has been going nuts with frame rates for years.
We have 120 fps on consoles today on a few games. They either have to significantly cut back (detail, down to 1080p, etc) or are simpler to begin with (Ori, Prince of Persia). But it’s a great experience.
My eyes are not best-of-best but the difference between 60 and 120hz in something first-person is dramatic and obvious. It depends on the content but there are many such games for consoles. Your claim that it's "slight" is one that only gets repeated by people who haven't seen the difference.
Honestly, I can't even tell the difference between 30 and 60. Maybe I'm not playing the right games or something but I never notice framerate at all unless it's less than 10-20 or so.
I don't think my TV can display 120 fps and I'm not buying a new one. But they promise 4K 60 (with upscaling) on the PS5 Pro, so they have to have something beyond that for PS6.
Nvidia has very little desire to make the kind of high-end, razor-thin-margin chip that consoles traditionally demand. This is what Jensen has said, and it makes sense when there are other areas the silicon can be directed to with much greater profit.