
Huang's first reason why they would take up ARM is really, really interesting (and serves as a big counterpoint for my pessimistic take on the announcement yesterday).

> Number one is this: as you know, we would love to take Nvidia’s IP through ARM’s network. Unless we were one company, I think the ability for us to do that and to do that with all of our might, is very challenging. I don’t take other people’s products through my channel! I don’t expose my ecosystem to other companies’ products. The ecosystem is hard-earned — it took 30 years for Arm to get here — and so we have an opportunity to offer that whole network, that vast ecosystem of partners and customers, Nvidia’s IP. You can do some simple math and the economics there should be very exciting.

If Nvidia truly sees ARM as an opportunity to kill off Mali and expand Geforce's install base, they might create an incentive for themselves to keep the ARM ecosystem alive. This is a pretty credible take I think - Samsung's next Exynos processor will have Radeon graphics, and Nvidia can quickly nip that in the bud this way (assuming Geforce is better and cheaper). If it plays out this way, that would simply be great for the ARM ecosystem. If Nvidia can sell Geforce like ARM sells Mali and leave the ARM ecosystem truly intact, I don't think many will lament the demise of Mali (although I expect some counterviews on this on HN :) ).

Having Geforce be a commanding presence in the ARM ecosystem might be a big problem for the future diversity of GPU vendors, but that's something I'm interested in seeing play out at least. I do hope AMD can take the battle to Nvidia on ARM too, and that Qualcomm and PowerVR find ways to stay relevant.




> If Nvidia truly sees ARM as an opportunity to kill off Mali and expand Geforce's install base,

Perfect replacement: from one unsupported, hard-to-use GPU with a closed-source Linux driver to another hard-to-use GPU with a closed-source Linux driver.

> I do hope AMD can take the battle to Nvidia on ARM too, and that Qualcomm and PowerVR find ways to stay relevant.

Qualcomm's Adreno, amusingly enough, came from AMD, as Imageon, in 2009. I definitely hope AMD can get back in the mobile game though. Good luck to PowerVR & anyone else too. You are probably in for some hard competition soon!!

https://en.m.wikipedia.org/wiki/Imageon


The first version of Adreno had some fixed-function blocks from AMD (and Bitboys), but the programmable shader core came from Qualcomm's never-commercialized Qshader architecture. The result was a buggy mess which took a ton of effort on our drivers to debug and correct.

In hindsight, I'm amazed at how well it worked given the schedule and the magnitude of the work involved in fusing those two architectures.


I really appreciate this kind of mention, thank you. I want to know so much more, but this history feels like it evaporates so quickly; only a few people have any idea what happened.


> non-open-source Linux-drivered gpu

I have seen this concern countless times, but never why it matters to the people raising it. I can understand why it matters from Linus' perspective as kernel maintainer, but from a user's perspective I can't really see the issue. Anyway, not all code that runs on your system is open source. Why not demand open source from your bootloader manufacturer with the same intensity? If, say, NVIDIA wants the driver to contain a malicious backdoor, open source is not going to stop them.


> from a user's perspective I can't really see the issue ... If, say, NVIDIA wants the driver to contain a malicious backdoor, open source is not going to stop them.

No, but if such a backdoor were discovered, it would be possible to do something about it. The quote from the article in the top comment here says it well: https://news.ycombinator.com/item?id=23944954

> Anyway, not all code that runs on your system is open source.

Not yet, but it is my goal. If/when that's achieved, I'd also like to run it exclusively on free/libre/open (FLO) hardware.

> Why not demand open source from your bootloader manufacturer with the same intensity?

My bootloader plays a much smaller role in my computing endeavors than my GPU. And less importantly, as a practical matter, there are many more major motherboard vendors and few FLO alternatives; whereas both Nvidia alternatives (AMD, integrated Intel) do have FLO drivers.


It means that when people try to experiment with, for example, making frame timing more useful so that specs like Vulkan can advance[1], we can't explore & figure out what might work, because closed proprietary software doesn't allow mankind to explore & progress.

We basically have to keep going back to Nvidia, relying on them to be authorities on their own system & to be acting in everyone's interest when we try to develop extensions like VK_EXT_present_timing. This greatly injures the development of good standards, preventing the kind of healthy, collaborative environment where people can work together to make standards that work well.

Another example is EGLStreams, which is a not-that-bad but very different approach to handling video buffers from what everyone else does, and which has been obstructing the use of the newer Wayland display server on Nvidia hardware for 6 years now[2]. Nvidia wants their thing, & closed drivers mean no one can play around & attempt to make their hardware work even if they wanted to. Ridiculously harsh limitations, no choice, no experimenting.

[1] https://www.phoronix.com/scan.php?page=news_item&px=VK_EXT_p...

[2] https://www.phoronix.com/scan.php?page=news_item&px=MTgxMDE

This creates a science-free vacuum where research & experimentation & progress wither, where peership dies.


Putting a driver into the Linux project presumably means better integration with the rest of the kernel?

Also: less package management work.


Oh yeah, users don't give a shit. But it further shrinks the playing field to corporate giants that'll only share details with other giants to develop products.


> Why not demand open source from your bootloader manufacturer with the same intensity?

Folks want that too, and lots of ARM platforms already use open source bootloaders, mainly U-Boot.


The problem isn't inherently that the drivers are closed source, it's that Nvidia is actively hostile towards the ecosystem. For instance, Mesa added a Generic Buffer Management API (GBM) allowing compositors like Weston to be hardware accelerated using OpenGL. Nvidia could have followed suit and supported GBM, but instead went their own route with EGLStreams. So now Wayland, XWayland, and every single Wayland compositor has to implement Nvidia-specific code to support their hardware.
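For context, the GBM side of this is a pretty small API. A minimal sketch of how a compositor might allocate a scanout-capable buffer (the device path and dimensions here are illustrative, and this obviously needs a DRM device and libgbm to actually run):

```c
#include <fcntl.h>
#include <stdio.h>
#include <gbm.h>

int main(void) {
    /* Open a DRM render node; the path varies per system. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct gbm_device *gbm = gbm_create_device(fd);

    /* Allocate a buffer the display controller can scan out
       and the GPU can render into. */
    struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080,
                                      GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);
    if (bo)
        printf("stride: %u\n", gbm_bo_get_stride(bo));

    gbm_bo_destroy(bo);
    gbm_device_destroy(gbm);
    return 0;
}
```

Every driver with Mesa's GBM support can hand such a buffer straight to the compositor; with EGLStreams the producer/consumer plumbing looks nothing like this, which is why every compositor needed a second code path.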


Fun little fact: Adreno is an anagram of Radeon.

Honestly, I really don't see the Geforce play. Nvidia tried it with Tegra and failed pretty miserably. Mali and Adreno pretty much cornered the market from that era (with PowerVR pivoting over to Apple). I just don't see their IP really hitting home with the type of workloads you see in SoCs.

The primary driver for most SoC GPUs since the qHD days is pushing pixels for the UI layers, which has a different set of requirements and features compared to modern GPUs used for gaming or ML. They're almost exclusively tiling-based and biased more towards low power consumption than raw horsepower.


> Honestly I really don't see the Geforce play. Nvidia tried it with Tegra and failed pretty miserably. Mali and Adreno pretty much cornered the market from that era

I want to be nice, but I don't know what rock you've been sleeping under. The TX2 is 3 years old, & it's not just Nvidia cornering the entire hapless AI/ML market with proprietary CUDA that keeps it & the Jetson platform the #1 most obvious go-to for robotics, in spite of a fairly mediocre ARM CPU: those couple of NV GPU cores are way better than the rest of the ARM offerings. Even outside ML, the NV GPU ARM offerings radically outstrip everyone else. No one else has the RAM bandwidth to begin to compete, much less the cores. 3 years have passed & the only part to beat the TX2's 60GB/s is NV's own top-end Xavier at 137GB/s. No one else is playing in the league NV has been playing in with ARM GPUs. I don't know how you could call this massive, roaring, colossal success a failure. To say nothing of the Nintendo Switch.


The Tegra thing is interesting, and I wrote about it in the other thread too.

It is a failure for NVidia in that they launched it as a mainstream mobile phone/tablet part, and it's not used in anything outside the NVidia Shield in that market that I'm aware of (and the Switch of course).

But it has seen success in robotics and self driving cars, because NVidia makes it easy to use and it has great performance.

So it's not obvious how to judge it. Commercially, compared to their initial goals it is probably a failure. But it has opened new markets that didn't exist so that's successful?


Horsepower isn't everything; usually cost and power consumption come first in SoC selection, followed by feature set (of which your GPU is one part of a larger picture).

If you want to really succeed in the SoC space (which is where Arm has), then what you need is volume, and I don't think Tegra ever really made any serious inroads there.

The switch is a game console and so it sits somewhat outside of the traditional high-volume SoC market.


Feels like you are arguing that boring mainstream success is the only thing we can judge by. Disagree.


Linux and Mesa have open source drivers for Mali GPUs now (lima & panfrost).


> from one unsupported hard to use non-open-source Linux-drivered gpu to another hard to use non open source Linux drivered gpu.

I thought panfrost was pretty good these days


Do any Android phones use Panfrost?


Bet you ChromeOS runs on Panfrost first. :)


Oh well look, some actual support from the owners.

https://www.phoronix.com/scan.php?page=news_item&px=Arm-Panf...


How is NVIDIA's Linux driver hard to use? Are you referring just to the fact that using the driver means installing and maintaining more packages?


You can't just upgrade your kernel whenever you want.

Sway refuses to support their nonstandard APIs.

Any kernel bugs you report come from a tainted kernel, so they tend to get dismissed.

I'm sure there's more.


It's a great line and one that I would expect Jensen to take - and it's almost certainly true - to an extent.

I'd expect Nvidia to keep the Arm ecosystem alive, but only where they don't see an opportunity to take control. So they keep Radeon off Exynos (and incidentally why couldn't they do that anyway?) by offering GeForce. But elsewhere they can deny Arm IP to other firms where they have a competitive SoC.

Take one example: Nvidia / Arm invest heavily in data-center-focused designs - are they really going to offer this IP to Ampere / Amazon on equal terms compared to an Nvidia CPU? As Ben says, 'color me skeptical'.

Essentially they will have a full overview of the Arm ecosystem - with lots of confidential information - and can pick and choose where they drive out competitors whilst farming license fees from the rest.


> (and incidentally why couldn't they do that anyway?)

Why try to compete on merit when you can swoop in and dictate the market to do your bidding? Coming at it from this perspective, it's looking like almost the same old Nvidia again :)

> Take one example. So Nvidia / Arm invest heavily in data center focused designs - are they really going to offer this IP to Ampere / Amazon on equal terms when compared to an Nvidia CPU? As Ben says 'color me skeptical'.

That I don't believe, indeed. They will crush their datacenter competition, but I think the server market might ironically be a bit more flexible in moving to different ISAs compared to mobile.


Absolutely! I think actually the biggest constraint on Nvidia will be 'bandwidth' (management not memory!). Where do they focus their energies and where do they leave the Arm ecosystem alone.

Incidentally Intel's stock seems to have risen over the last day or so which is a bit surprising given the datacenter story?


> Essentially they will have a full overview of the Arm ecosystem - with lots of confidential information

This is the real danger among everything.


I think my biggest concern is that Nvidia is looking to be another Qualcomm, with all of the issues there.


But Mali is an also-ran in a market where the one making all the money has their own solution. What does "sell Geforce like ARM sells Mali" mean? Make it an utter commodity, a bunch of transistors in one much bigger chip, like the GPU in a console but valued even less? Nobody knows the name of what you are selling and 90% of the market would rather have more runtime on their phone than ever care in the slightest for 3D game performance?

Mobile GPUs are like integrated graphics on CPUs, those didn't kill AMD or Nvidia because that entire market doesn't want to spend a dime to begin with.


Because it pushes the CUDA ecosystem into dominance of yet another platform. You can run your acceleration routine on anything from a smartphone/Raspberry Pi to an enterprise accelerator - one algorithm. And it will be everywhere, a de facto capability of most reference-implementation ARM devices.
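That "one algorithm everywhere" pitch is concrete: the same CUDA source compiles unchanged for a Jetson SoC or a datacenter card. A trivial sketch (a standard SAXPY kernel, not anything Nvidia-specific to this deal), using unified memory since that works on both integrated Tegra SoCs and discrete GPUs:

```cuda
#include <cstdio>

// SAXPY: y = a*x + y, one thread per element.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    // Unified memory: one allocation visible to CPU and GPU alike.
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);
    cudaFree(x);
    cudaFree(y);
}
```

The toolchain (nvcc) handles retargeting to whatever GPU generation is present; that portability across the whole product line is the platform effect being described.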

(And sure, OpenCL too, but that's too loose a standard to have any platform effect; it's just a spec that everyone implements a little differently and that needs to be ported to each vendor's compiler/hardware/etc., so there is no common codebase and toolchain that everyone can use like with CUDA.)

Everyone laughed at Huang saying that NVIDIA is a software company. He was right.


Nobody wants to run CUDA anymore. All the mobile SoCs jumped right over that one and have AI coprocessors now. CUDA is what runs on the developer workstation, not the "edge device" as they call it. Like there was a tiny window of software supremacy there, then everyone remembered how matrix multiplication works.


What are you even talking about? Jumped over? It was literally never an option (and still isn't). That said, compute shaders are indeed used on mobile SoCs, but even then the install base is pretty abysmal.

Maybe nVidia will decide to push this type of tech a lot harder.


>Mobile GPUs are like integrated graphics on CPUs

I really don't think so. For the high end PC consumer, integrated performance can be ignored because you'll get a dedicated card. Not so with mobile SOCs.

It makes sense that nVidia might want to squeeze out Qualcomm and get their GPUs in Samsung flagships, for example.


I think they need a Qualcomm modem more than they will ever miss a Nvidia GPU.


The majority of the handsets Samsung sells use their own Exynos. I believe they only use Qualcomm in the US anyway. Is this referring to some US-specific 5G band or something?


Can't Nvidia do this without buying ARM?


A comment yesterday in the other thread mentions that ARM currently sells their ARM CPU plus Mali GPU IP in a single package. Buying the IP together is much cheaper than buying just the CPU IP. This is why nearly every ARM CPU maker uses the Mali cores, and why PowerVR as a company is nearly dead.

Reading this comment from Huang, it sounds like Nvidia wants to sell this same package, but with Geforce IP instead of Mali IP.


Minor nit: architecture, not "package"; the latter has a specific meaning. ARM defines not just the instruction set but also the architecture - core, memory layout, peripheral buses, etc.


Ahh, the old Comcast "TV+Internet is cheaper than just Internet" trick...


They did, and failed:

https://www.notebookcheck.net/Nvidia-to-licence-GPU-technolo...

My guess is they failed because Geforce at the time was too much of a "desktop architecture", and my guess is that it still is. But now that they own Arm they can just tell all Mali customers "take it or leave it."


A less cynical take is that they can now ask Mali customers "how can we modify this better and more mature architecture to be appropriate for your use case"


I'm sad to say I'd call that take more naive than less cynical.


I hope this is true.


AMD is going to "own" x86 for the foreseeable future. AMD will have to either produce cut down x86 cores or start using RISC-V to compete with NVidia across their larger portfolio.


Hopefully they would be required to divest Mali to an independent spinoff rather than just squashing it in the acquisition.


Does Nvidia even have a product that would work in the power envelope that Mali targets?


Not all the way down to the <5W TDPs Mali hits, but Orin goes down to 5W.

They had tablets at one point, and they've continued to refresh their product line for automotive and set-top use.

https://www.anandtech.com/show/15800/nvidia-announces-new-dr...


The Jetson Nano has a predefined 5W mode (see page 33 in [1]), so presumably the SoC could go even lower.

[1] https://info.nvidia.com/rs/156-OFN-742/images/Jetson_Nano_We...



