Nvidia is reportedly in ‘advanced talks’ to buy ARM for more than $32B (bloomberg.com)
1093 points by caution 3 days ago | 670 comments





This is quite concerning honestly. I don't mind ARM being acquired, and I don't mind Nvidia acquiring things. But I'm concerned about this combination.

Nvidia is a pretty hostile company to others in the market. They have a track record of vigorously pushing their market dominance and their own way of doing things. They view making custom designs as beneath them. Their custom console GPU designs - in the original Xbox, in the PlayStation 3 - were considered a failure because of terrible cooperation from Nvidia [0]. Apple, probably more demanding than other PC builders, has completely fallen out with them. Nvidia has famously failed to cooperate with the Linux community on the standardized graphics stack supported by Intel and AMD and keeps pushing proprietary stuff. There are more examples.

It's hard to not make "hostile" too much of a value judgement. Nvidia has been an extremely successful company because of it too. It's alright if it's not in their corporate culture to work well with others. Clearly it's working, and Nvidia for all their faults is still innovating.

But this culture won't fly well if your core business is developing chip designs for others. It's also a problem if you are the gatekeeper of a CPU instruction set that a metric ton of other infrastructure increasingly depends on. I really, really hope ARM's current business will be allowed to run independently as ARM knows how to do this and Nvidia has time and time again shown not to understand this at all. But I'm pessimistic about that. I'm afraid Nvidia will gut ARM the company, the ARM architectures, and the ARM instruction set in the long run.

[0]: An interesting counterpoint would be the Nintendo Switch running on Nvidia Tegra hardware, but all the evidence points to this chip being a 100% vanilla Nvidia Tegra X1 that Nvidia was already selling themselves (to the point its bootloader could be unlocked like a standard Tegra, leading to the Switch Fusee-Gelee exploit).


You are not wrong, but the facts you have cherry picked fail to portray the whole picture.

For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (PowerPC), and many other companies in the past.

The claim that Nintendo is the only company nvidia successfully collaborates with is just wrong:

- nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards

- nvidia collaborates with IBM which ships Power8,9,10 processors all with nvidia technology

- nvidia collaborates with OS vendors like microsoft very successfully

- nvidia collaborated with mellanox successfully and acquired it

- nvidia collaborates with ARM today...

The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many, many hours of paid developer time to open source, has many open source products, donates money to many open source organizations, contributes paid manpower to many open source organizations as well...

I mean, this is not nvidia specific.

You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).

I mean, you even try to paint this as if Nvidia is the only company that Apple has parted ways with, yet Apple has a long track record of parting ways with other companies (IBM PowerPC processors, Intel, ...). I'm pretty sure that the moment Apple is able to produce a competitive GFX card, they will part ways with AMD as well.


> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong [...]

Hey! Wait a second, there. Nvidia isn't bad because it has a proprietary Linux driver. Nvidia is bad because it actively undermines open-source.

Quoting Linus Torvalds (2012) [0]:

> I'm also happy to very publicly point out that Nvidia has been one of the worst trouble spots we've had with hardware manufacturers, and that is really sad because then Nvidia tries to sell chips - a lot of chips - into the Android Market. Nvidia has been the single worst company we've ever dealt with.

> [Lifts middle finger] So Nvidia, fuck you.

Nvidia managed to push some PR blurbs about how it was improving the open-source driver in 2014, but six years later, Nouveau is still crap compared to their proprietary driver [1].

Drew DeVault, on Nvidia support in Sway [2]:

> Nvidia, on the other hand, have been fucking assholes and have treated Linux like utter shit for our entire relationship. About a year ago they announced “Wayland support” for their proprietary driver. This included KMS and DRM support (years late, I might add), but not GBM support. They shipped something called EGLStreams instead, a concept that had been discussed and shot down by the Linux graphics development community before. They did this because it makes it easier for them to keep their driver proprietary without having work with Linux developers on it. Without GBM, Nvidia does not support Wayland, and they were real pricks for making some announcement like they actually did.

[0]: https://www.youtube.com/watch?v=iYWzMvlj2RQ

[1]: https://www.phoronix.com/scan.php?page=article&item=nvidia-n...

[2]: https://drewdevault.com/2017/10/26/Fuck-you-nvidia.html


In recent years, Linux computers have evolved into a major revenue source for Nvidia thanks to deep learning. However, it's not desktop users who are behind this but servers, due to Nvidia's proprietary CUDA API. If they open sourced it or rebuilt it on top of mesa, it'd make it easier for AMD to implement CUDA, getting access to the deep learning ecosystem that's currently locked into CUDA. Nvidia's sales would take a huge drop. So I think it's even more likely that their drivers remain proprietary.

I don't have so much of a problem with CUDA staying closed, but rather with Nvidia sabotaging Nouveau through signed firmware which they don't release (and obfuscate in their blob). Nouveau would probably be decent by now (not as fast or feature complete, but usable for real workloads on newer cards) if it weren't for the fact that Nvidia has added features which have the direct effect of making it impossible to have a competitive open source driver.

Maybe something will change on this soon. There was speculation about this: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-O.... But I'm not holding my breath, and it would be nice if the solution wasn't "wait and hope until Nvidia releases the software necessary to control their GPUs".


> I don't have so much of a problem with CUDA staying closed, but rather Nvidia sabotaging Nouveau through signed firmware which they don't release (and obfuscate in their blob)

Do you have more info on this? There is a big difference between not supporting open source and actively sabotaging it. What are they doing, exactly?


Look up "nouveau signed firmware". Phoronix has a bunch of articles on it. The Nouveau developers also talk about it at FOSDEM 2018 (and probably a later conference). This comment is a good intro: https://www.phoronix.com/forums/forum/linux-graphics-x-org-d...

TL;DR starting with the 9xx series, Nvidia started making it so their GPUs would only run firmware signed by them (likely to prevent counterfeits, e.g. 2060s sold as 2080s). So it is impossible to control the fans and reclock the GPU. There's no workaround, so even as the person who owns the device, I can't run my own firmware. AMD has signed firmware too, but they actually release sufficient blobs to fully run the device.


Wow, that's insane, thanks for sharing this.

When I had nvidia hardware, I never tried nouveau, so I never ran into this.



Call me back when ROCm supports their flagship graphics card.

The 5000 series has been out for an entire year without ROCm support at this point.


That is because AMD split graphics vs. compute and you are looking at a non-compute card. The Vega-based chips work quite well and the Radeon VII is (or was, I'm afraid) an excellent value proposition.

So this is well outside my area of expertise; but that seems weird. I want AMD to shepherd the ecosystem to a point where I can run PyTorch with some support from a graphics card. Supporting some graphics cards and not others doesn't sound very promising.

It is amazing to watch how much of a struggle AMD is having with getting PyTorch to work with ROCm. It makes me appreciate what a good job Nvidia must have done with CUDA.


Nvidia has split lines too. But they understand the importance of getting developers on their platform.

(To be fair AMD has it planned, but it's behind several other priorities to them.)


> Vega-based chips work quite well

LOL what a fucking joke. Not even close.

Also that one VFIO bug makes one of the great advantages of Radeon disappear and there is no fix in sight.

Fuck AMD. Bunch of marketing hype and they ship garbage half baked crap to the market.


Yeah... I learned this the hard way.

I thought AMD had great Linux driver support, and open source drivers.

Turns out AMD only has the open source driver part of the things, and this driver does not support a large chunk of their products.

I'd take a binary driver over no driver any day (and in fact, I returned the amd card and am using intel now, which works great).


difference between AMD and nvidia:

AMD: some open source drivers, but many things are inop

Nvidia: all closed source, but everything works

Source: I have a Vega 56 and I am gonna give it the sledgehammer when 3000 series arrives. Fuck that shit to high heavens my dude. ROCm is buggy as fuck. Only the bare basics work on Linux. Drivers are buggy, and crash all the time. Even the most basic bullshit is not implemented right. Like there is no fan control. That's how bad it is.


ROCm exists but many frameworks don't support it.

https://ai.stackexchange.com/a/16541


PyTorch supports ROCm quite well if you can follow three-step build instructions (https://lernapparat.de/pytorch-rocm/). TVM.ai also supports ROCm well. TF I didn't check.

FP32 performance for the VII is comparable to the RTX 2080 Ti, e.g. on ResNet training. Tensor Cores are cool, but 90% of people don't use FP16 that much.

I'm not sure that is the best reply that stackoverflow has ever seen.
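
For what it's worth, a minimal sanity check on a ROCm build (just a sketch, assuming a build along the lines of the linked instructions; ROCm builds of PyTorch expose the HIP backend through the regular torch.cuda API, so the usual calls apply):

    import torch

    # On ROCm builds the HIP backend shows up through the torch.cuda API.
    print(torch.__version__)
    print(getattr(torch.version, "hip", None))  # HIP version string on ROCm builds, None otherwise
    print(torch.cuda.is_available())            # True if the runtime can see the GPU

    if torch.cuda.is_available():
        x = torch.randn(2048, 2048, device="cuda")
        y = x @ x                                # run an actual kernel, not just device discovery
        print(y.sum().item())

If that matmul runs, the framework side is fine and any remaining problems are in the ROCm stack underneath.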


The problem is that ROCm does not support many AMD compute cards, like their top of the line Navi-based one-year-old 5700 family.

Most frameworks do support ROCm though, so the ball is in AMD's court.


I can only speak for myself, but my understanding is that the Navi cards are not intended nor advertised for compute. Now, one might wish to get a consumer card with Arcturus after the Instinct MI100 is released or dislike that the Radeon VII is discontinued, or even that it wasn't split into render and compute lines, but basically, to me the complaint about Navi not being supported by ROCm sounds a bit like buying a boat and complaining that it can't fly. Driver support was bumpy for me with Linux 5.0 or so, but I haven't had trouble with just running whatever kernel Debian ships for quite a while now. But I must admit I don't use the GPU for anything but compute, so I don't know about graphics.

But your mileage may vary, I don't want to make people swear.


a GPU is a GPU, it's all just programs running on the shaders.

If AMD is unwilling to properly support its consumer GPUs in GPGPU workloads and NVIDIA is, then well... sounds like a good reason not to buy AMD.

It's really not a major ask, people have been running GPGPU programs on consumer cards since forever, even though they're "consumer" and sold as graphics cards and not compute accelerators. NVIDIA basically does this on purpose, people use their personal hardware to get a foot in the door of the ecosystem and end up writing programs that get run on big compute GPUs which NVIDIA makes big profits on, it's very intentional.

If AMD chooses not to do that, well... can't really blame people for avoiding their stuff when NVIDIA is willing to let you do this stuff on their cards and AMD isn't.


Yes, they can do only one good thing - go bankrupt. That would be the best news for open-source.

Should open-source perhaps drop NVidia, instead of the other way around?

This divestiture happened long ago; if you're running Linux anywhere other than a server, then Intel and AMD are the only players making chips that work right out of the box.

Even there though, only Intel has a buttery smooth experience. Ryzen for laptops is half baked (terrible USB C docking performance, occasional crashing on Windows and Linux with the 2xxxU series CPU/GPU chips) and AMD GPUs still require manual intervention to load proprietary firmware.

AMD does make some performant mobile GPUs though, they work well in Debian!


Funny, I stick with NVidia because they still offer more reliable, performant drivers for Linux (or these days FreeBSD), open source or no.

How so? Aren't you still trapped on Xorg with their proprietary drivers?

I switched to Wayland a few years back as vsync is quite nice to have, but whenever I go back to Xorg for AnyDesk or TeamViewer on AMD or Intel there is a fair bit of tearing.

Nvidia could have a competitive open source driver tomorrow if they released redistributable firmware that allowed reclocking the GPU.


> How so? Aren't you still trapped on Xorg with their proprietary drivers?

As a FreeBSD user I'm still "trapped" on Xorg anyway, and in any case I'd rather stick with what works than learn some whole new way of doing things for marginal benefits.


This is the sort of mentality I left the *BSD world for. Xorg is not some elegant codebase that's being unfairly targeted by an upstart.

> Xorg is not some elegant codebase that's being unfairly targeted by an upstart.

I never said it was. I said it's a whole new way of doing things for marginal benefits; is that not accurate?


You're not. I've been using Wayland exclusively on FreeBSD for a couple years now, with both AMD and Intel GPUs.

How do you go back to Xorg? Logout/login and/or reboot or is there a better way? I'm asking this because I often need screen sharing to work with my customers (demos, checking problems, etc) and I'm sticking with Xorg because Wayland doesn't do screen sharing.

Or even if they simply stopped signing their firmware. Nouveau has been perfectly capable of writing their own firmware.

Intel's drivers are far more reliable in my experience.

They might work, for some definition of working; getting the full capabilities of the GPU is another matter.

OpenGL 3.3 when the proprietary drivers do OpenGL 4.1 for example.

https://www.gpuzoo.com/GPU-AMD/Radeon_HD_6320_IGP.html


> then Intel and AMD are the only players making chips that work right out of the box.

I have an Intel GFX card. Had an AMD 5700. Linux support sucked, switched back to intel.

So AFAICT, only intel and nvidia have good Linux driver support, and only intel has good open source linux driver support.


Which is funny because AMD drivers on linux have been nothing short of trouble (both open source and proprietary) and Nvidia's blob just works.

Sadly I have to agree. Ryzen 3400G here, getting hardware transcoding on the iGPU is something I still haven't sorted out. There have been several recent issues in kernel, AGESA firmware (I suspect there might be newer versions with potential fixes that my mobo manufacturer hasn't released yet; this is 1.0.0.4 Patch B) and drivers. I've had several rounds of hunting down and compiling various versions of driver packages, modules and kernels from source, trying third-party PPAs, to no avail. The amdgpu/amdgpupro mess adds another layer of confusion.

I am not sure if I am missing some update, need to set some undocumented kernel flag and/or BIOS setting, if it's a software issue or I just made a mistake somewhere. Debian 10/11.

Meanwhile, as much as I wanted to get away from Intel, their drivers have never posed any issue at all.


I believe AGESA 1.0.0.4 Patch B broke something for the APUs; you should try either upgrading or downgrading your BIOS, but what worked for me was downgrading to AGESA 1.0.0.3 ABB. Both Windows and Linux have stopped crashing now, although I still get the occasional lockup when browsing with Firefox on Linux. I found out the culprit after stumbling into this thread: https://old.reddit.com/r/AMDHelp/comments/gj9kpz/bsod_new_pc...

[citation needed] My experience directly conflicts with this and IIUC most GNU/Linux users have exactly the opposite impression. Maybe you're thinking of some past situation?

Take any AMD 5000 series card. I had a top of the line 5700.

Still no driver for compute 1 year later. I'm so happy i decided to return it and switch to intel instead of waiting for AMD or some random joe on their free time to add support for it to their open source driver.

So yeah. I'd take a working proprietary driver over no driver any day.


Here, my card on the travel netbook that I use,

https://www.gpuzoo.com/GPU-AMD/Radeon_HD_6320_IGP.html

The open source driver is kind of ok if the only thing we expect from it is getting a working X session.

Now if one wants to do some complex OpenGL stuff, then it might work, or not.


That card is super old, before GCN. Have you tried with something a little bit more recent?

I used to have a HD 7950 and it always worked perfectly, same with my current Vega 56.


Sure, how do I replace the card on an otherwise perfectly working laptop?

The usual Linux answer to hardware problems keeps being to buy new hardware.


I would say that's a much more prevalent attitude in the Windows and Mac worlds. Linux tries to keep compatibility with really old software. It was only 4 years ago that major distros started to require at least a 686, aka Pentium Pro, released in November 1995!

But at some point you have to consider if it's really worth it to keep a 10 year old laptop around. It's painful to say goodbye to them, I know, I have been there, but for me it's just not worth it.


Asus sold the laptop with Windows 7 support as well, the drivers kept being updated up to Windows 8.1, and thanks to Windows driver ABI, those drivers work perfectly fine in Windows 10.

No need to throw away a perfectly working laptop to enjoy the DirectX 11 and OpenGL 4.1 capabilities that it was sold for.


No, I am thinking of the situation where the open source driver doesn't have OpenCL support and the AMD drivers (fglrx) don't compile or require dependencies so old that they were dropped by the packagers for Arch Linux, all this for a few years until they came out with something that actually works, when I've never ever had an issue with Nvidia. Also, AMD never ever figured out how to fix screen tearing, or do so in a sane way that doesn't involve trial-and-error editing of xorg.conf.

Even on Windows, AMD drivers are the most unstable, buggy software that's ever been shipped. It's been a long standing joke that AMD "has no drivers".


Yeah, the fglrx driver, with the issues you mention, offered OpenGL 4.1 support on a graphics card I have.

The open source driver replacement only does OpenGL 3.3.

I guess I should be happy it does any OpenGL at all.


Ryzen 3700X and Radeon 5700 XT here. Not a single problem.

I have recently used Radeon 550, 560, 570, 5500 on AMD 5050e (yes, that old!), Ryzen 1600 (non af), 3100, 3600 and all have worked fine, Ubuntu 16.04, 18.04 and 20.04. In fact on average I have found the various hardware configurations to be about 5% faster on Linux than Windows.

It's anecdotal of course, but my RX560 has been absolutely flawless on both Ubuntu and openSUSE, literally out of the box support on a standard install.

You don't even have to go to open source. You can see this hostile behaviour from their top-paying clients!

Microsoft's own previous-gen Xbox emulator on the next-gen Xbox (I think it was the original Xbox emulated on the 360, but I might be wrong) was impacted by the team having to reverse-engineer the GPU, because nvidia refused to let the emulator people have access to the documentation provided to the original team.


Stumbling around Google didn't find me much more info on this, do you have any citations or keywords I could follow up on?

> Quoting Linus Torvalds (2012) [0]:

Is this an ad hominem? Linus does not mention a single thing there that they are actually doing wrong.

> Drew DeVault, on Nvidia support in Sway [2]:

Nvidia has added wayland support to both KDE and GNOME. Drew just does not want to support the nvidia way in wl-roots, which is a super super niche WM toolkit whose "major" user is sway, another super super niche WM.

Drew is angry for two reasons. First, sway users complain to them that sway does not work with nvidia hardware, which as a user of a WM is a rightful thing to complain about. Second, Drew does not want to support the nvidia way, and is angry at nvidia because they do not support the way that wl-roots has chosen.

It is 100% ok for Drew to say that they don't want to maintain 2 code-paths, and for wl-roots and sway not to support nvidia. It is also 100% ok for nvidia to consider wl-roots too niche to be worth the effort.

What's IMO not ok is for Drew to feel entitled about getting nvidia to support wl-roots. Nvidia does not owe wl-roots anything.

---

IMO when it comes to drivers and open-source, a lot of the anger and conflict seems to stem from a sentiment of entitlement.

I read online comments _every day_ from people that have bought some hardware that's advertised as "does not support Linux" (or Macos, or whatever) being angry at the hardware manufacturer (why doesn't your hardware support the platform it says it does not support? I'm entitled to support!!!), at the dozens of volunteers that reverse engineer and develop open source drivers for free (why doesn't the open source driver that you develop in your free time work correctly? I'm entitled to you working for free for me so that I can watch netflix!), etc. etc. etc.

The truth of the matter is, that for people using nvidia hardware on linux for Machine Learning, CAD, rendering, visualization, games, etc. their hardware works just fine if you use the only driver that they support on the platforms they say they support.

The only complaints I hear is people buying nvidia to do something that they know is not supported and then lashing out at everybody else due to entitlement.


What exactly are you invested in in this discussion? We were originally discussing how Nvidia's business practices don't match ARM's business practices. But you seem to just want to take on people's personal views on Nvidia now.

You're now somehow arguing with people that they should stop complaining about Nvidia's business practices. I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market. But even if you don't agree with some/most of the complaints, surely you must agree that Nvidia's track record of pissing off both other companies and people is problematic when they take control of a company with an ecosystem-driven business model like ARM's?


> I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market.

I'd agree with you this is OP's argument; however, its main flaw is in explicitly omitting the fact that NVidia is not the only party that's "free" to do things.

We're not obliged to buy their cards and we aren't obliged to stay silent regarding its treatment of the open-source community and why we think it would be bad for them to acquire ARM.

I am always amazed at the amount of pro-corporate spin from (presumably) regular people who are little more than occasional customers.


> We were originally discussing Nvidia's business practices don't match ARM's business practices.

We still are. I asked "which specific business practices are these", and was only pointed to ad hominems, entitlement, and one-sided arguments.

Feel free to continue discussing that on the different parent thread. I'm interested on multiple views on this.

> You're now somehow arguing with people that they should stop complaining about Nvidia's business practices

No. I couldn't care less about nvidia, but when somebody acts like an entitled choosing beggar, I point that out. And there is a lot of entitlement in the arguments that people are making about why nvidia is bad at working with others.

Nvidia has some of the best drivers for Linux there are. This driver is not open source and distributed as a binary blob. Nvidia is very clear that this is the only driver that they support on Linux, and if you are not fine with that, they are fine with you not buying their products. This driver supports all of their products very well (as opposed to AMD's, for example), its development is made by people being paid full time to do it (as opposed to most of their competitors which also have people helping on their drivers on their free time - this is not necessarily bad, but it is what it is), and some of their developments are contributed back to open source, for free.

People are angry about this. Why? The only thing that comes to mind is entitlement. Somebody wants to use an nvidia card on Linux without using their proprietary driver. They know this is not supported. Yet they buy the card anyways, and then they complain. They do not only complain about nvidia. They also complain about, e.g., nouveau being bad, the Linux kernel being bad, and many other things. As if nvidia, or as if the people working on nouveau or the Linux kernel for free on their free time owes them anything.

I respect people not wanting to use closed source software. Don't use windows, don't use macosx, use alternatives. Want to use linux? don't use nvidia if you don't want to.


No, thank you. Can we do without the Reddit attitude here please? The choosing beggars, the needless quoting of debate fallacies in what is not a debate?

If you ask for people's impression of Nvidia's business practices, and they give you their opinion, you can't somehow invalidate that opinion by retorting with debate fallacies. That's the "fallacy fallacy" if you're sensitive to that. This is not a debate competition about who's right, this is people giving their opinions based on Nvidia's past and current actions. You asked a question, and they answered. This is not a competition. Please give them the basic respect of acknowledging their opinion.


This is so wrong.

We are discussing a topic, and people threw out multiple arguments that do not make sense.

You are claiming that I should just shut up and respect their feelings, but that is worthless.

Two examples:

---

Somebody's argument was: "Linus doesn't like them, therefore I don't like them".

The reason these are called logical fallacies is because these arguments are illogical. I told them that this was a logical fallacy (argument from authority - just because someone with authority makes an argument does not mean they are right), and asked them _why_, what is it that Linus and you do not like.

I am happy I did that, because many of them have raised multiple actually-valuable arguments in response. For example, because nvidia's hardware throttles down if the driver firmware is not signed, and this makes the open source drivers slower for no reason.

That's a valid and valuable argument. Linus doesn't like them is worthless.

The person who raised this argument learned something from somebody else who knew what Linus did not like, and so did I.

---

The same happened when I called out the entitled choosing beggars. "Why are you angry at nvidia for not providing an open source driver ? You knew before buying their product that only the binary driver was supported."

Read the responses. The reason they are angry, is because they don't have a choice but to use nvidia, because the competition products (AMD in those cases) are much worse. AMD does have open source drivers, but they are crap, and they don't support many of AMD's products, at least for compute, which is something that many (including myself) use for work.

These people have picked a platform that values open source code, but due to their job requiring them to actually get some work done, they must use nvidia for that, and they don't like having to compromise on a proprietary driver.

Honestly, I think this is still entitlement, but I definitely sympathize with the frustration of having to make compromises one does not like.

---

From the point of view of whether nvidia buying ARM is good or bad, I still have no idea. ARM does _a lot_ of open source work; its major markets are the Android and Linux communities.

I understand that people are afraid that Nvidia will turn ARM into a bad open source player. It can happen. But without Android, iOS, and Linux, ARM is worthless. So a different outcome of this could be that NVIDIA buying ARM ends up making NVIDIA more open source friendly, since at least the Linux market is important for nvidia as well (~50% of their revenue).

It definitely makes sense for regulatory authorities to only allow somebody to buy ARM that will preserve ARM deals with current vendors (apple, google, samsung, etc.), and that also will preserve ARM open platform efforts.

If nvidia does not agree to that, they should not be allowed to buy arm.


> their hardware works just fine if you use the only driver that they support on the platforms they say they support.

Except that they stop supporting older HW at some point. That, together with occasional crashes, taught me not to buy nVidia HW again.


> Nvidia has added wayland support to both KDE and GNOME.

NVIDIA insisted on pushing its own EGL streams even as the wider community was moving in a different direction.

They suffer from a major NIH syndrome and do not know how to work with others at all.


Speaking as a Linux graphics developer, I can confirm that NVidia indeed is a pretty terrible actor. There could be a viable Open Source driver for most of their GPUs tomorrow if they changed some licensing, NVidia knows this.

A NVidia purchase of ARM would also create a lot of conflicts of interest.


And selling for only 32 billion seems really low for a company that significant.

$18B less than Tiktok (rumored valuation at $50B). That's hard for me to grasp.

How many ARM chips will the average consumer buy in their life? How much profit will ARM make on each of those chips? How many Tiktok videos will the average consumer watch in their life? How much profit will Tiktok make on each of those views?

ARM doesn't manufacture chips, they license ARM Processor 'designs'.

ARM is more of a household name for their use in mobile phones but that is just the tip of the iceberg.

I think you underestimate how many ARM chips you have in a single car or delivery truck.

Add to that:

- farming machinery

- construction machinery

- factory line automation

- elevators, escalators

- EV chargers

- fridges, washing machines, ovens

- medical devices such as drug-infusion pumps, ventilators, surgical machinery, etc.

- Auxiliary modules in aeroplanes and shipping containers.

- Infrastructure for Road, Rail, Power Grid with ARM processors running headless embedded systems

- Anything the size of a pebble with bluetooth connectivity uses nordic's nRF chip (which is yet again an ARM chip)

ARM processors are hiding in plain sight in the world all around you.

I understand the point you make, from a business standpoint, on how Tiktok might scale. The thing that is bizarre for me, and I agree with the parent's sentiment, is how disconnected the valuation is from the real-world impact and objective _usefulness_ of ARM versus Tiktok.

Edit: bullet points


> The thing that is bizarre for me, and I agree with the parent's sentiment, is how disconnected the valuation is from the real-world impact and objective _usefulness_ of ARM versus Tiktok.

Also how easily the technology behind Tiktok can be duplicated compared to ARM.

Anyway, it's probably all about the network effect and brand value.


ARM has no exposure to any end consumers and takes the tiniest sliver from each sale—if anything at all.

Tiktok can deliver arbitrary content to hundreds of millions of people’s eyeballs around the planet.


ARM processors are also in pretty much any storage device like hard drives too.

Also Intel ME and (not sure, but probably) AMD PSP.

The ME runs on an ARC chip, not ARM.

It did run on ARC, but now it's just a 486.

Per-video profit isn't really meaningful; these kinds of companies report revenue (and thus profit) based on the number of active users.

Consider that in 2017, 21 billion ARM chips were manufactured (doubled in 4 years, from 10 billion in 2013), and that ARM's licensing fees are over 2% of chip cost for current high-end designs (and they're talking about raising that even more). They have 95%+ of the mobile phone market, are making inroads in the server market, and will soon be in every Apple laptop, which I expect will grow the market for ARM laptops even outside Apple. It wouldn't be a stretch to find them in common desktop computers after that. They're in smart TVs, washing machines, robot vacuum cleaners, and all other kinds of smart (and non-smart) appliances. Even SSDs have their own embedded ARM chip. And that's just home/consumer stuff; haven't even scratched industrial/commercial applications, of which there are a ton.

I could easily see yearly production at over 100 billion chips before 2030, probably even before 2025. While I'd love to see something like RISC-V take off commercially, I don't think that's realistic.

Meanwhile, social media users are incredibly fickle; platforms are subject to fad and fashion. Certainly Facebook and Instagram are still huge behemoths, but their growth is nothing like it once was, with people -- especially younger people, trying to distinguish themselves from their older, boring relatives -- flocking to TikTok. I fully expect TikTok will be in Instagram's boat in under 10 years, with some other platform taking its place.

ARM seems like an amazingly great short-, medium-, and long-term bet, while TikTok feels like a nice short- (maybe medium-, if they're lucky) term money-maker, and even that feels like a big maybe: I have no idea what their ad revenue per user looks like, but it's probably not great since their audience skews younger. Teenagers and college kids don't have much in the way of discretionary income. Then again, TikTok doesn't pay their creators like e.g. YouTube does, so they get to keep all that ad revenue.
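
A rough back-of-envelope, in case anyone wants the arithmetic spelled out (the ~10 cents average royalty per chip is just an estimate quoted elsewhere in this thread, not an official figure):

    # Back-of-envelope ARM royalty revenue; both inputs are rough estimates
    # quoted in this thread, not official ARM numbers.
    chips_2017 = 21e9            # chips shipped in 2017
    royalty_per_chip = 0.10      # ~10 cents average royalty per chip (assumption)

    print(chips_2017 * royalty_per_chip / 1e9)    # ~2.1  -> about $2.1B/year today

    chips_2030 = 100e9           # speculative yearly volume by ~2030 (see above)
    print(chips_2030 * royalty_per_chip / 1e9)    # ~10.0 -> about $10B/year if volume alone scales

Crude, and it ignores the higher rates on high-end designs, but it shows why $32B can look cheap against where the volume is heading.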


21 billion is 3 per person per year for everyone in the world, including rural goat herders etc. who don’t consume very many. I don’t doubt the number but it’s still wild.

TikTok is doubling every year, and is getting around one billion views per day. I'd imagine that TikTok's growth over the next five years is going to be both faster and larger than ARM's. ARM makes around the same per chip (10 cents) that Youtube is making per view.

Now, certainly TikTok might not be sustainable and might disappear off the face of the earth tomorrow. Or it might become a juggernaut that overtakes Facebook.


Sorry, this is wrong by an order of magnitude. Youtube is making more like 1-3 cents per view.

Do you have a source for the cents per view on YouTube?

> Google pays out 68% of their AdSense revenue, so for every $100 an advertiser pays, Google pays $68 to the publisher.

> The actual rates an advertiser pays varies, usually between $0.10 to $0.30 per view, but averages out at $0.18 per view.

https://influencermarketinghub.com/how-much-do-youtubers-mak...


$18 divided by a thousand is 1.8 cents, not 18 cents.

This also is only people that watch the whole ad.
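
Spelled out, assuming the source's figures are really CPM (cost per thousand views) rather than per view:

    # If the quoted $0.10-$0.30 "per view" is actually $10-$30 per 1000 views (CPM):
    cpm = 18.0                    # the quoted $0.18 average, read as $18 per 1000 monetized views
    per_view = cpm / 1000         # 0.018 -> 1.8 cents per view
    creator_share = 0.68          # AdSense pays 68% of that out to the publisher

    print(per_view * 100)                  # 1.8 cents per view paid by the advertiser
    print(per_view * creator_share * 100)  # ~1.2 cents of that reaches the creator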


Oh! That is quite the mistake. It does look to be more on the order of cents per view.

If you don’t see RISC-V taking off commercially, then why is ARM trying to sell? I ask because my understanding was that they were trying to exit because of RISC-V.

I think it's just trying to sell because Softbank is a venture capital company, they only care about short-term profits.

PS I applaud RISC-V but it won't take over the market for a long time, and it wouldn't drive ARM out completely, I'm sure. Intel's had many competitors and they're doing just fine (even despite screwing up repeatedly with their processes!)

Look at all the failed attempts to move away from x86(/64). Even Intel tried it with Itanium and failed; HP has to pay them to keep making it so they can fulfill their server contracts. I'm sure ARM has a similar hold on the mobile market.


> Intel's had many competitors and they're doing just fine (even despite screwing up repeatedly with their processes!)

With AWS offering ARM systems, all the Chromebooks, Apple, the complete loss of the phone market, Intel’s staying power is about to be tested to the extreme.


Most of those don't matter and didn't matter anyway: Chromebooks and Apple desktops are a tiny portion of a small market; Apple's volume chips are iPhones. Intel never really had a grip on any kind of mobile handset market and hasn't for years, despite that they still post record profits. That's because the margins on handsets are very slim, unless you're Apple.

The only actual major change here is AWS offering Graviton, which actually hints at their real cash cow: datacenter SKUs with absurd markup. Something like 80% of their profit margins are here. More accurately, the change is that there are now viable silicon competitors to Intel in the performance department. So it's now clear that ultra-integrated hyperscalers who can actually afford tape out costs (7nm CPUs are not cheap to produce in volume) have an option to vertically integrate with e.g. Neoverse. Smaller players will not do this still, because alternative options like Rome will be adequate. But the only reason any of them are changing anything is cost savings, because now there are actual viable competitors to Intel when there were zero of them for like, 15 years. Producing cutting edge silicon products isn't easy, but it's very profitable, it turns out.

To be clear, Intel isn't charging $10,000 for a Xeon Platinum because it costs $9500 to make and they make $500 in profit. (Likewise, AMD doesn't produce competitors at 1/5th the price because they made a revolutionary, scientific breakthrough in processor design.) They're charging what you'll pay, not what it takes to produce. Seeing as they currently still have a complete stranglehold on the datacenter industry and make more in a quarter than most of their competitors do in several years, I suspect they've got much more "staying power" than the watercooler chat on this website would lead you to believe.


Softbank had a bad year and needs cash in order not to collapse.

Softbank is selling ARM because they are broke.

The Softbank guy invested a ton of money in WeWork. He tried to sell WeWork for 60 billion, but before that could happen WeWork's valuation dropped to 2-5 billion (a huge loss). That was in 2019. Afterwards, Softbank invested another 10 billion to try to save it. WeWork owns, and also pays rent on, hundreds of office buildings in the most expensive zones of all the major capitals in the world. 2020 COVID now means these super expensive offices are empty, since WeWork customers pay a premium to be able to cancel their leases in <1 week. So essentially, WeWork is broke, worth 0, and Softbank has lost dozens of billions on it.

On top, Softbank owns a huge chunk of Uber, which is also worth close to zero now that people are not travelling due to COVID...

So... yeah... Softbank is selling ARM because they must. They are super broke, and investors are going to pull the money that remains out. Selling ARM and giving investors a tiny benefit so that they keep their money is better than them taking a huge loss this year.


If you can see that as one of the options, I don't think Nvidia failed to see that possibility as well. Yet they still plan to buy ARM.

Something like Tiktok can myspace in the blink of an eye. Established CPU architectures have enormous staying power.

MIPS sure didn't, same for SPARC. CPU architectures can effectively die in a handful of years just like MySpace, Delicious, Digg and such.

MySpace is dead, dead. You'll still find devices, especially in manufacturing, that are still running on a MIPS, and are essential to the process. You'll still find them in various cheap toys on the shelves.

MIPS _should_ be dead; half the manufacturers of the chip have stopped. But Imagination Technologies still sells a considerable number to Apple every year.


Even decades after starting to decline both are still less dead than the average former social media fad.

I have more Arm CPUs in consumer devices within arm's reach at this moment than the number of Tiktok videos I've seen in my entire life.

Other people watch more TikTok videos in a single day than the number of ARM consumer devices that you've owned in your entire life.

They watch those TikTok videos on devices powered by ARM SoCs.

But if we are doing nonsense comparisons let’s use the number of ARM clock cycles for devices you own.

The sheer number of TikTok videos that have been watched on an Arm device. Arm soon to rebrand as TikArmTok.

It's not nonsense - Arm gets paid per device, TikTok gets paid per ad impression (and views are a good proxy for that)

But the pay for each of those is orders of magnitude apart.

Here's an estimate of ARM's revenue per chip at 10 cents (profit maybe half that).

[1]https://qz.com/741078/a-company-that-doesnt-really-make-chip...

Here's an estimate of youtube's (not exactly tiktok, but not exactly not) revenue at 10 to 30 cents per view.

[2]https://influencermarketinghub.com/how-much-do-youtubers-mak...


That article makes me wonder if BeOS would have survived if they had switched from Hobbit to ARM chips and stuck with selling hardware+software, and later on added x86/x64/PPC/POWER. A tight integration of hw+sw has worked for Apple.

My guess is no.

I loved BeOS, but there was an even more fundamental problem that put a limit on its days: A failure to anticipate coming need for home computers to become more secure. At the same time that Microsoft and Apple were both working frantically to ditch their old single-user desktop operating systems and replace them with, in effect, spruced up versions of existing server/workstation operating systems, Be was trying to launch a brand new OS on the dying model. Had they survived even a couple years longer, they would have had to reckon with that, and they simply didn't have the resources to navigate such a fundamental transition.


Sorry, this is wrong by an order of magnitude. Youtube is making more like 1-3 cents per view.

Compared to the number of pixels * the number of video frames displayed by Tikki…

Same here, which makes neither of us an “average consumer”.

By this logic, TicTac should be worth more than all car manufacturers combined, since there are more TicTacs in a $1 pack than cars I will ever buy in my entire life. And I'm sure their margin is much higher, percentage-wise, at least compared to most car manufacturers.

Let's be real here: TikTok is a big social network, yes, but ARM owns most of the embedded market. Every smartphone your average consumer buys runs an ARM chip. And hardware is harder to replace than software.


Car manufacturers make about $2k per car. Do you buy 2k packs of TicTac every 10 years?

Maybe ARM should copy the business model of more valuable companies like TikTok by harvesting and selling its users' data. Surely the data flowing through an SoC is worth something.

> How many Tiktok videos will the average consumer watch in their life?

This number is not unbounded. It will converge to some asymptotic limit.

> How much profit will Tiktok make on each of those views?

Around a tenth of a cent per view.

These huge valuations are purely because people don't do the math and don't know how the market works.


Considering Apple is moving their desktops to ARM and pretty much all iOS devices are ARM...

What if giphy was worth 100x ARM, would it be interesting then?

It is quite simple really. You seem to be under the mistaken impression that valuations in tech are based in any way on logic. They're not. They're completely hype-driven.

That's how this year's myspace, which people will have trouble remembering 5 years from now, can get a higher "valuation" than a large semiconductor company with a 30 year track record.

Investors and the financial sector are proving time and time again that they're unable to learn from their mistakes, through no "fault" of their own, because apparently it's human nature to just be horribly bad at this.

It amazes me that people think investors somehow learned anything from the dot-com bubble, given they've been repeating all of their other major mistakes every odd year or so.


... Meanwhile Microsoft is busy buying Tik-Tok (eye roll)

Source? Didn’t hear this was happening

Edit: https://www.businessinsider.com/microsoft-tiktok-donald-trum...

This is some very fresh news.


Microsoft buying ARM would be interesting.

I don’t want no ads on me CPUs

The flawless execution of these instructions has been sponsored by RAID bug spray.

RAID-5 bug spray, presumably?

now with extra parody!

I am starting to see a lot of these similar comments around the internet.

I think it is because people are now so used to Apple's and Amazon's trillion-dollar valuations, with Apple closing in on 2 trillion, that they think $32B is low or (relatively) cheap.

Reality is, ARM was quite overvalued when it was purchased by Softbank.


Another context to add to this.

This might also be part of SoftBank's fire sale; they bought ARM for $32B just a few years ago (2016).


Company valuation usually involves more tangible things, like revenue and profits, or at least expected revenues. "Importance" can vanish really fast.

We don't really live in that world anymore – what is value investing when the Federal Reserve sets the price? More and more, "intangibles" like brand value are becoming more important on balance sheets than investors want to admit.

For me it is hard to grasp if this really is a paradigm shift or merely some valuation cycle aka https://twitter.com/chamath/status/1280531290635157505

For me, Tesla's stock value is where it became clear that stock valuations have diverged from any data points. I understand there is a potentially huge upside there, but no real data points would justify anything close to the current valuation.

> There could be a viable Open Source driver for most of their GPUs tomorrow if they changed some licensing

I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.

Do you have a source explaining what licensing changes they would have to make and what impact that would have for Linux and Nvidia? I'd like to read that.


Nvidia has a very different model for what they're trying to get out of their drivers. They spent something like 5x more on the number of driver developers than AMD, then would send engineers to work with AAA game studios to "optimize their games" for Nvidia. Good so far. But then these optimizations went so far as fixing (in the driver) broken game code. Like apparently it was so bad that games were being shipped without issuing BeginScene/EndScene on DirectX.

Hence AMD's push for Mantle and then Vulkan. The console-like API is the carrot to get people to use an API that has a verification layer, so that third parties can easily say "wow, what a broken game" rather than "wow, this new game runs on Nvidia and not AMD, what broken AMD drivers".

Nvidia open sourcing their drivers completely destroys a large chunk of their competitive advantage and is so intertwined with all the IP of the games they have hacks for that I'd be surprised if they ever would want to open source them, or even could if they wanted to.

More docs would be nice though.


The problem was never opening the existing driver.

It was:

- all kinds of problems wrt. the integration of the driver into the Linux ecosystem, including the proprietary driver having quality issues for anything but headless CUDA.

- nvidia getting in the way of the implementation of an open source alternative to their driver


But most of these games don't even exist on Linux, so they wouldn't have to fix all that stuff. As a Linux user I'd gladly do without that bloat anyway (it also explains why a "driver" has to be 500MB, lol).

Sure, but they can't open source their existing drivers since it's really the same driver source for Linux and Windows with all the IP problems.

And they don't see the benefit to create new drivers.

I agree they should help out more.


AMD supported Radeon & AMDGPU rather than releasing their own drivers; there is no reason Nvidia can't provide documentation and simple open source firmware for their GPUs.

We couldn't care less about the hacks and kludges baked into the proprietary Nvidia drivers and firmware focused on DirectX-powered gaming. The current path Nvidia has chosen with signed firmware locks open source developers out of much of the low-level operation of their GPUs.


> But then these optimizations went so far as fixing (in the driver) broken game code.

AMD does the exact same thing and always has. When you see shaders come down the wire you can replace them with better-optimized or more performant versions. It's almost always fixing driver "bugs" in the game rather than actual game bugs. And the distinction is important.

I do agree with you, but that element is something everyone has to do to remain competitive in games. Developers will only optimize for one platform (because they're crunching), and 9 times out of 10 that's a RTX2080Ti.


Nvidia did more than that, hooking their drivers in the same way they would for benchmarks: "oh, actually ignore this API call", "oh, actually issue these two calls when you see this one call with this signature", "yes, this game requested synchronization here, but just ignore it" kinds of things.

While yes AMD did similar things when they could, it was way less prevalent (if only because they didn't have the staff necessary to pull it off to the same degree).

Edit: Here's an example of some of the stuff I'm talking about that goes beyond shaders. https://devblogs.microsoft.com/oldnewthing/20040305-00/?p=40...


> I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.

This persistent bit of FUD really needs to die. Yes, you have to be careful, but at this point it's ridiculously well-known what is obviously correct and what is obviously incorrect when dealing with the GPL. I'm sure there are some grey areas that haven't been worked out, but avoiding those is fairly simple.

Nvidia is already in a weird grey area, releasing binary blobs with an "open source" shim that adapts its interfaces to the GPL kernel. As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.


I don't know if the statement about interfacing with the GPL is true or not, but your statement first calls it FUD, meaning you believe it is false, and then you say all the driver code should be considered derivative work and therefore be subject to the GPL, meaning that the original statement you called FUD is actually true. Seems to me that a lot of GPL advocates are actually responsible for a good part of the GPL FUD.

> This persistent bit of FUD really needs to die. [...] I'm sure there are some grey ares that haven't been worked out

Way to contradict yourself.

> but avoiding those is fairly simple.

[Citation needed].

> As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.

Maybe this is what needs to happen to force companies to change their mindset, but where I work, lawyers tell us to (1) never contribute to any GPL'ed based code, (2) never distribute GPL'ed code to anybody (e.g. not even a docker container), etc.

Their argument is: a single slip could require us to publish all of your code and make all of our IP open, and to make sure this doesn't happen, an army of lawyers and software engineers and managers would need to review every single code contribution that has something to do with the GPL. So the risks are very high, the cost of doing this right is very high as well, and the reward is... what exactly? So in practice this means that we can't touch GPL'ed code with a 10 foot pole; it is not worth the hassle. If I were to ask my manager, they would tell me that it is not worth it. If they ask their manager, they will be told the same. Etc.

BSD code ? No problem, we contribute to hundreds of BSD, MIT, Apache, ... licensed open source projects. Management tells us to just focus on those.


Nouveau is a highly capable open source driver for NVIDIA GPUs based on reverse-engineering.

For some older card generations (e.g. GTX 600 series) it was competitive with the official driver. But in every hardware generation since then, the GPU requires signed firmware in order to run at any decent clock speed.

The necessary signed firmware is present inside the proprietary driver, but nouveau can't load it because it's against the ToS to redistribute it.

Most GPU features are available but run at 0.1x speed or slower because of this single reason. Nvidia could absolutely fix this "tomorrow" if they were motivated.

https://www.phoronix.com/scan.php?page=news_item&px=Nouveau-...


Solution: Download the Nvidia blob, isolate the binary firmware, extract, load. Will it be fun? Absolutely not. Desperate times and unconscionable acts of legalism call for equally extreme levels of overly contrived legalism circumvention.

At this point I've gotten so bloody tired of the games people play with IP that I'm arriving at the point where I think I wouldn't even mind being part of the collateral damage of our industry being burned to the ground through the complete dissolution of any software-delivery or related contract. If you sell me hardware, and play shenanigans to keep me from being able to use it to its fullest capability, you're violating the intent of the right of First Sale.

To be honest, I think every graphics card should have to be sold bundled with enough information for a layperson (or, I'll throw out a bone, a reasonably adept engineer) to write their own hardware driver/firmware. Without that requirement, this industry will never change.


Wow, that does suck. Imagine them doing the same thing with ARM.

AMD and Intel have been providing open source drivers for their GPUs for a long time and have yet to have any legal problems with it.

AMD didn't directly open source their driver because of legal issues.

The point is it's not about open sourcing your proprietary driver but about not getting in the way of an alternative open source driver, maybe even giving it a bit of a hand, even if just unofficially.

I think if I were nvidia I might go in the direction of having a fully or at least partially open source driver for graphics stuff and a not-so-open-source driver for headless CUDA (potentially running alongside an Intel integrated graphics based head/GUI).

Though I don't know what they plan wrt. ARM desktops/servers, so this might conflict with their strategies there.


Perhaps Nvidia's ASICs and/or firmware contains more legally dubious components. ;-)

The GPL can't cause your code to automatically become GPL licensed. It can only prevent you from distributing the combination of your incompatibly licensed code and others' GPL code.

>I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.

The only tricky things involve blatantly betraying the spirit of the agreement while trying to pretend to follow the letter and hoping a judge supports your interesting reading of the law.

Even so there is no provision in law wherein someone can sue you and magically come into possession of your IP.

It would literally require magical thinking.


Out of interest, how much work does ARM put into the Linux kernel, OSS compilers, and OSS libraries?

Quite a lot.

Take a look at a recent snapshot of changesets and lines of code to the Linux kernel contributed by various employers: https://lwn.net/Articles/816162/

Arm themselves is listed at 1.8% by changesets; but Linaro is a software development shop funded by Arm and other Arm licensees to work on Arm support in various free software, and they contributed 4% of changesets and 8.8% by lines of code. And Code Aurora Forum is an effort to help various hardware vendors, many of whom are Arm vendors, get drivers upstreamed, and they contributed 1.8% by changesets and 10.1% by lines changed. A number of other top companies listed are also Arm licensees, though their support may be for random drivers or other CPU architectures as well.

However, Arm and companies in the Arm ecosystem do make up a fairly large amount of the code contributed to Linux, even if much of it is just drivers for random hardware.

And Arm and Linaro developers also contribute to GCC, LLVM, Rust, and more.


I think the main question is whether they will "just" own ARM or whether they will absorb it.

There’s no defending Nvidia’s approach to Linux and OSS. It is plain awful no matter how you try to twist the reality. And it is actively damaging because it forces extra work on OSS maintainers and frustrates users. You should not be required to install a binary blob in 2020 to get basic functionality (like fan control) to work. Optimus and Wayland support are painfully, purposely bad.

Also the Nvidia Wayland support is horrible.

> You should not be required to install a binary blob in 2020 to get basic functionality (like fan control) to work.

You are not required to do that. Use nouveau, buy an AMD or intel GFX card.

You are not entitled to it either. People developing nouveau on their free time don't owe you anything, and nvidia does not owe you an open source driver either.

I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.

I don't use nvidia GFX cards on linux anymore (intel suffices for my needs), but when I did, I was happy to have a working driver at all. That was a huge upgrade from my previous ATI card, which had no driver at all. Hell, I even tried using AMD's ROCm recently on Linux with a 5700 card, and it wasn't supported at all... I would have been very happy to hear that AMD had a binary driver that made it work, but unfortunately it doesn't.

And that was very disappointing because I thought AMD had good open source driver support. At least when buying Nvidia for Linux, you know beforehand that you are going to have to use a proprietary driver, and if that makes you uncomfortable, you can buy just something else.


> You are not required to do that. Use nouveau, buy an AMD or intel GFX card.

Has internet discussion really fallen so low that everything needs to be spelled out and no context can ever be implied?

We're in a thread about NVidia, so of course OP's talking about NVidia hardware here. Yeah, they can get AMD, but that does not change their (valid) criticisms of NVidia one bit.

> I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.

Aaand?

Windows and macOS have different standards for drivers than many Linux users do. Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too?

I find it really weird to assume that because something is happening somewhere, it's some kind of an "objective fact of reality" that has to be true for everyone, everywhere.

When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?

It's the same thing. NVidia is not some lonely developer on GitHub hacking during their lunch break on free software.

Do you also assume that the kind of music you find interesting is objectively interesting for everyone?

This has nothing to do with entitlement. It's listing reasons for why someone thinks NVidia buying ARM is a bad idea.


> Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too? When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?

It is to me. When I buy a car, I do not leave a 1 star review stating "This car is not a motorcycle; extremely disappointed.".

That's exactly how these comments being made sound to me. Nvidia is very clear that they only support their proprietary driver, and they deliver on that.

I have had many GFX cards from all vendors over the years, and I've had to send one back because the vendor wasn't honest about things like that.

Do I wish nvidia had good open source drivers? Sure. Do I blame nvidia for these not existing, not really. That would be like blaming microsoft or apple for not making all their software open source.

I do however blame vendors that do advertise good open source driver support that ends up being crap.

What does any of this have to do with nvidia buying or not buying arm? Probably nothing.

What nvidia does with their GFX driver can be as different from what ARM does, as what Microsoft does with Windows and Github.


> When I buy a car, I do not leave a 1 star review stating "This car is not a motorcycle; extremely disappointed."

That's a bad analogy. A better one would be: it's like you bought a car (an open-source operating system), and this one accessory supplier is selling you what are really motorcycle parts that only just barely fit the car (a less-than-great proprietary driver when you are explicitly on an open system). Additionally, they are extremely secretive and absolutely refuse to answer any sort of questions or to allow you to modify the parts you purchased from them to fit better, by implementing various forms of DRM.

You can just not buy those parts and indeed that's what many users are doing.

This is separate from raising concerns about this somewhat dodgy parts manufacturer potentially acquiring another manufacturer, specifically one that does require a lot of cooperation with others by its very nature.

> Nvidia is very clear that they only support their proprietary driver, and they deliver on that.

It's more complex than that. They seem to actively implement features to make it purposely more difficult to develop an independent open-source driver. This is rather different from just being passively indifferent to open source. Moreover, their proprietary driver can be less than stellar too, so I am not so sure they "deliver" even on that.

Therefore we, Linux users, can refuse to support a company that only supports their (lacking) proprietary driver and certainly we are within our rights to raise concerns about its purchase of ARM given its actively hostile approach to open-source.


I would probably agree with you if everything was modular and commodity and easily swappable. If I decide I won't buy hardware with nvidia in it, that chops out a chunk of the possible laptops I can have. It means I can't repurpose older hardware; sure, hindsight may be 20/20, but perhaps I didn't have the foresight 7 years ago to realize I'd want to run Linux on something today (yeah, older hardware is better supported, but it's by no means universal). It means that I can't run some things that require CUDA and don't support something like OpenCL.

And you can argue that that still is all fine, and that if you're making a choice to run Linux, then you have to accept trade offs. And I'm sympathetic to that argument.

But you're also trying to say that we're not allowed to be angry at a company that's been hostile to our interests. And that's not a fair thing to require of us. If nvidia perhaps simply didn't care about supporting Linux at all, and just said, with equanimity, "sorry, we're not interested; please use one of our competitors or rely on a possibly-unreliable community-supported, reverse-engineered solution", then maybe it would be sorta ok. But they don't do that. They foist binary blobs on us, provide poor support, promise big things, never deliver, and actively try to force their programming model on the community as a whole, or require that the community do twice the amount of work to support their hardware. That's an abusive relationship.

Open source graphics stack developers have tried their hardest to fit nvidia into the game not because they care about nvidia, but because they care about their users, who may have nvidia hardware for a vast variety of reasons not entirely under their control, and developers want their stuff to work for their users. Open source developers have been treated so poorly by nvidia that they're finally starting to take the extreme step of deciding not to support people with nvidia hardware. I don't think you appreciate what a big deal that is, to be so fed up that you make a conscious choice to leave a double-digit percentage of your users and potential users out in the cold.

> None of the drivers on my windows and macosx machines are open source. They are all binary blobs.

Not sure how that's relevant. Windows and macOS are proprietary platforms. Linux is not, and should not be required to conform to the practices and norms of other platforms.


> But you're also trying to say that we're not allowed to be angry at a company that's been hostile to our interests

This company in no way shape or form is obligated to cater to your interests. In this case it would likely be counter to their interests.


I don't know why this is downvoted. If anything, Nvidia has been providing quality drivers for Linux for decades, and it was the only way to have a decent GPU supported by Linux in the 2000s, as ATI/AMD cards were awful in Linux.

> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.

But for entirely different reasons. Apple switched from PowerPC to Intel because the PowerPC processors IBM was offering weren't competitive. They switched from Intel for some combination of the same reason (Intel's performance advantage has eroded) and to bring production in-house, not because Intel was quarrelsome to do business with.

Meanwhile Apple refused to do business with nVidia even at a time when they had the unambiguously most performant GPUs.


Apple didn't part ways with Intel and IBM because they were difficult to work with. They parted ways because Intel and IBM fell behind in performance. Nvidia has certainly not, and Apple has paid a price with worse graphics and machine learning support on Macs since their split. It's clearly different.

Correct. IBM didn't care to make a power-efficient processor, and Motorola didn't see the benefit of multimedia extensions in their processors because they needed them for their network devices.

Nvidia introduced a set of laptop GPUs that had a high rate of failure. Instead of working with their customers and eating some of the cost of repairing these laptops, they told their customers to deal with it. Apple, being one of those customers, got upset at being left holding the bag and hasn't worked with them since.

Intel and AMD have used their x86/AMD64 patents to block Nvidia from entering the x86 CPU market.

Nvidia purchasing ARM will hurt not the large ARM licensees like Apple and Samsung but the ones that need to use the CPU in a device that does not need any of the Multimedia extensions that NVidia will be pushing.


With Intel it's a bit more complicated than that. I think running Macs on their own processors is a bit of a cost saver for them, and allows more control. And Intel's CPU shortages have hurt their shipping schedules. I don't think this transition is about CPU performance.

But yeah I don't think it's about collaboration either.


> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many many hours of paid developer time open source, has many open source products, donates money to many open source organizations, contributes with paid manpower to many open source organizations as well...

Out of curiosity, is there any large open source product from NVidia? I can't think of any.


Only for their own hardware (RAPIDS, all the cuda libraries, etc.), which other companies like AMD have just forked and modified to work on their hardware keeping the algorithms intact.

NVIDIA contributes mostly to existing open source projects (LLVM, Linux kernel, Spark, etc.), see https://developer.nvidia.com/open-source


Not agreeing or disagreeing, just want to point out that LLVM was always open source and wasn't developed by Apple. Apple just happened to hire the dev who initially wrote it.

Most LLVM development has, to my knowledge, been funded by Apple. LLVM and Clang have seen the bulk of their work done on Apple's payroll.

It is a bit like WebKit. It was based on KHTML which was an open source HTML renderer. But Apple expanded that so greatly on their own payroll that it is hard to call WebKit anything but an Apple product.


This is not true anymore. Apple funded a significant part of LLVM/Clang work in the 2010s, and then again with the aarch64 backend, but nowadays Google and Intel contribute much more to LLVM than Apple.

Yep. I didn't mean that Apple created LLVM, only that Apple contributes _a lot_ to the open source LLVM project.

I think the common theme in your examples is that in these situations other parties bend to Nvidia's demands. Nvidia has no problem with other parties bending to their demands. But when another company or organization requires Nvidia to bend to their demands, things go awry almost without exception.

EDIT - for added detail:

> - nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards

Most (all?) of which bend to Nvidia's demands because Nvidia's been extremely successful in getting end users to want their chips, making the Nvidia chip a selling point.

> - nvidia collaborates with IBM which ships Power8,9,10 processors all with nvidia technology

IBM bends to Nvidia's demands so POWER can remain a relevant HPC platform.

> - nvidia collaborates with OS vendors like microsoft very successfully

Microsoft is the only significant OS vendor with which Nvidia collaborates successfully. It's true - but for the longest time Nvidia would have been out of business if they didn't. I will concede this point, but I don't find this is enough to paint a different picture.

> - nvidia collaborated with mellanox successfully and acquired it

Mellanox bent over to Nvidia's demands to such an extent that they were acquired.

> - nvidia collaborates with ARM today...

Collaboration in what sense? My impression is that Nvidia and ARM have a plain passive customer/supplier relationship today.

> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many many hours of paid developer time open source, has many open source products, donates money to many open source organizations, contributes with paid manpower to many open source organizations as well...

Nvidia is humongously behind their competitors Intel and AMD in open source contribution while having far more R&D in graphics. They are terrible at open source compared to the "industry standard" of their market, and only partake as far as it serves their short-term needs.

They are perfectly entitled to behave this way, by the way. But Nvidia's open source track record is only more evidence that they don't understand how to work in an open ecosystem, not less.

> You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).

The "whataboutism" is valid but completely irrelevant here. I would also not appreciate Apple buying ARM.

> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.

Apple has parted ways with Intel, IBM, Motorola, Samsung (SoCs) and PowerVR for technology strategy reasons, not relationship reasons. Apple had no reason to part ways with Nvidia for technical reasons (especially considering they went to AMD instead), but did so because of the terrible relationship they built.


> Apple had no reason to part ways with Nvidia for technical reasons (especially considering they went to AMD instead), but did so because of the terrible relationship they built.

I'm typing this on a MacBook with an Nvidia GPU that was created in 2012, many years after the failing laptop GPU debacle. AFAIK, Apple used that GPU until 2015?

I'd wager that Apple has been using AMD for something as mundane as offering better pricing, rather than disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)


> I'd wager that Apple has been using AMD for something as mundane as offering better pricing, rather than disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)

This used to be true, as Apple swapped between AMD and Nvidia chips several times in 2000-2015. Then Nvidia and Apple fell out, and Apple has not used Nvidia chips in new designs in 5 years - a timeframe in which Nvidia coincidentally achieved its largest technical advantages over AMD. Apple goes as far as to actively prevent Nvidia's own macOS eGPU driver from working on modern macOS. A simple pricing dispute does not appear to be a good explanation here.


Whatever the reason may be, the fact that Apple used Nvidia GPUs until 2015 at least debunks the endlessly repeated theory that it was because of the broken GPUs of 2008.

What hardware does apple use for all of its machine learning training (e.g. for Siri, etc. ) ?

What Apple is really worried about with NVIDIA is vendor lock-in. They own the walled garden, they absolutely are not going to accept an "external dependency" they can't control.

CUDA is such an "external dependency". It locks you in to something that's not an Apple product.


I never quite got how the Apple story was an indictment of NVIDIA. The first-gen RoHS-compliant solders sucked and would fracture... how exactly is that NVIDIA's fault? In fact, wouldn't Apple have been the ones who chose that particular solder?

It is the same issue that caused the Xbox 360 red-ring-of-death, and caused "baking your graphics card" to become a thing (including AMD cards). It basically affected everyone in the industry at the time, and Apple would not have gotten any different outcome from AMD had they been in the hotseat at the time. They were just throwing a tantrum because they're apple damn it and they can't have failures! Must be the supplier's fault.

That one has always struck me as a "bridezilla" story where Apple thinks they're big enough to push their problems onto their suppliers and NVIDIA said no.

And as far as the Xbox thing... Microsoft was demanding a discount and NVIDIA is free to say no. If they wanted a price break partway through the generation, it probably should have been negotiated in the purchase agreements in the first place. NVIDIA needs to turn a profit too and likely structured their financial expectations of the deal in a particular way based on the deal that was signed.

Those are always the go-to "OMG NVIDIA so terrible!" stories and neither of them really strike me as something where NVIDIA did anything particularly wrong.


> Microsoft is the only significant OS vendor with which Nvidia collaborates successfully

Canonical, which ships nvidia's proprietary driver with Ubuntu, is another quite major OS vendor that collaborates with nvidia successfully. Recently, Ubuntu's wayland-based desktop environment was also the first to work with nvidia's driver (the results of this work are open source).


Apple is parting with Intel over quality control issues. That is the same reason Apple parted with IBM and Motorola: that, and the fact that Intel chips were faster at the time.

You will find ARM Macs are cheaper than Intel Macs; even if they are not as fast, they consume less power thanks to mobile technology.

Microsoft had the Surface tablet with ARM chips and an ARM version of Windows, which didn't sell as well; but then, they are not Apple, who won't make the same mistakes as Microsoft.


I would much prefer it if we focused on Nvidia, since it's the subject matter, rather than go into a pointless discussion

Well, if Nvidia acquires ARM, just maybe it will push Apple to RISC-V

Not really. There is nothing any future owner of ARM can do to cut Apple out, which is why they are not interested in purchasing ARM themselves. They co-developed the ARM6 with Acorn and VLSI, and their license allows them to build on the ARM core. The most Nvidia can do is try to outperform Apple, but that would come at the loss of customers that don't need desktop features like GPUs. https://en.wikipedia.org/wiki/ARM_architecture#Architectural...

Whataboutism. All you did was show that Apple is also bad.

No. What I did show is that any FAANG with tens of thousands of employees has thousands of projects in flight, some of which are good, some of which are bad.

This contradicts the OP's suggestion that all the projects from one of these companies are bad.


I'm not normally one to defend Nvidia (particularly not from my Linux laptop), but at least the Xbox and PS3 issues never really seemed to be their fault from what I've heard on the grapevine.

Xbox: The Xbox's security was broken, and Nvidia apparently took the high road, claimed a loss on all existing chips in the supply chain (claiming a loss for the quarter out of nowhere and tanking their stock for a bit) and allowed Microsoft to ship a new initial boot ROM as quickly as possible for a minimum of cost to Microsoft. When that new mask ROM was cracked within a week of release, Microsoft went back to Nvidia looking for the same deal, and Nvidia apparently told them to pound sand and in fact said that they would be doing no additional work on these chips, not even die shrinks (hence why there was no OG Xbox Slim). There are, admittedly, other reasons why Microsoft felt like Nvidia still owed them, but it was a bit of a toxic relationship for everyone involved.

PS3: Nvidia was never supposed to provide the GPU until the eleventh hour. The Cell was originally supposed to crank up to 5GHz (one of the first casualties of the end of Dennard scaling, and of how it affected Moore's law as we conceived it), and there were supposed to be two Cell processors in the original design and no dedicated GPU. When that fell through and they could only crank them up to 3.2GHz, they made a deal with Nvidia at the last second to create a core with a new bus interconnect to attach to the Cell. And that chip was very close to the state of the art from Nvidia. Most of its problems were centered around duct-taping a discrete PC GPU into the console with time running out on the clock, and I don't think that anyone else would have been able to deliver a better solution under those circumstances.

Like I said, Nvidia is a scummy company in a lot of respects, but I don't think the Xbox/PS3 issues are necessarily their fault.


I would agree that general Nvidia troubles don't particularly stand out in the PS3 hardware design clusterfuck, but Microsoft's and Nvidia's falling out really is indicative of terrible relationship management even if it was from both sides. Again my point is that Nvidia just doesn't understand how to work together, how not to view everything as a zero sum game. That doesn't mean that Nvidia is the only bad actor in these situations, but Nvidia really does end up in these situations quite a lot.

That is just not true. And I am with the parent commenters; I don't normally come out and defend Nvidia.

If working with everyone meant stepping back and relenting in every possible way, then Nvidia would not be profitable. I am not sure why Microsoft felt they were entitled to anything from Nvidia. And Nvidia just said no. It was that simple.

Nvidia wants to protect their business interests, and that is what business is all about. And yet everyone on the internet seems to think companies should do open source or throw resources into it, etc.


I've already mentioned in the top parent comment that Nvidia is perfectly entitled to behave this way. They clearly know how to run a successful business in this way. I have bought Nvidia chips in the past and will continue to do so in the future when they are the best option for my use case - I don't really try to personify companies or products like this.

I am just pointing out that Nvidia's evident opinion on how to run a business (their corporate culture) is not in line with cultivating an open ecosystem like ARM is running. And the cultivation of this ecosystem is ARM's key to success here. Nvidia is entitled how to run a business how they want, but I'm very much hoping that that way of working does not translate to how they will run ARM.

People everywhere in this thread are having huge difficulty separating the point "Nvidia's way of doing business does not match ARM's" from "I have personal beef with Nvidia's way of doing business". I'm trying to make the former argument.

> That is just not true.

Out of curiosity - what isn't true here? Am I missing facts, or are you expressing disagreement with my reading of the business situation? If the latter is based on some understanding I have some personal beef with Nvidia, then please reconsider.


On the plus side, it will probably give RISC-V (and other platforms) a real boost among risk-averse businesses.

The ARM company is not just about the instruction set architecture. The ISA wouldn't be interesting at all if no good processors were built with the ISA [0]. For RISC-V to succeed, it requires a company that builds some good processor designs for it - for smartwatches, smartphones, tablets, laptops desktops - and licenses that to others. That company (one or multiple) does not (yet) exist, and is not easy to build.

[0]: Which is exactly why SPARC and, with one exception, Power are dying, and why RISC-V is yet to deliver. Nobody (bar IBM's POWER line) is building good processors with those ISAs that make them worth the effort to use. It's nothing to do with the ISA; you just need chips people are interested in using.


Yep. It's difficult to build a community: you need enough mass to draw further interest in. From a business view you end up with the question of "Why bother with RISC-V when ARM is doing what we need and has enough critical mass to keep things going forward?"

About the only thing that could force that to change would be another company buying up ARM and changing the licensing mechanisms (e.g. pricing or even removing some license options) going forward.. or just wrecking the product utterly.

I do think RISC-V has an opportunity here, but only if ARM sells out to NV and NV screws this up as hard as they're likely to in that situation.


> For RISC-V to succeed, it requires a company that builds some good processor designs for it - for smartwatches, smartphones, tablets, laptops desktops - and licenses that to others

The way I see it is that this may actually generate incentive for someone to do that. One of the reasons that that isn't happening yet is because there's no real need with ARM vendors supplying and no real chance with ARM vendors as competition. This could, in theory, clear the way.


This is assuming it would have to be a new company, rather than an existing company like Qualcomm or AMD which could produce a processor with a different ISA if nVidia/ARM became unreasonable to deal with.

This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.


> This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.

Google could easily afford to design their own RISC-V CPUs and port Android to it, if they thought it was in their strategic interests to do so.

I think it really depends on how nVidia-owned Arm behaves. If it behaves the same as Softbank-owned Arm, I don't think Google would bother. If it starts to behave differently, in a way which upsets the Android ecosystem, Google might do something like this. (I imagine they'll give it some time to see whether Arm's behaviour changes post-acquisition.)


Given the geopolitical/geoeconomic struggle between the US and China, I wouldn't be surprised if China pivots to the RISC-V arch.

And given that Nvidia is a US company, that makes them quite toxic for a Chinese company to source from.


China is pushing hard both on alternative ISAs like RISC-V and for control of ARM IP.

https://www.techradar.com/news/arm-sacks-china-boss-over-sec...


I’m not saying your claim is wrong but it’s not clear to me how that article backs the claim that China is pushing hard on RISC-V. It seems more like the old CEO of Arm China was doing no good very bad things, most likely for his own benefit.

“Arm revealed that an investigation had uncovered undisclosed conflicts of interest as well as violations of employee rules.”


That was intended to support the second part of my comment (although I can definitely see how that wasn’t clear - sorry about that). A lot of Chinese companies have come out with new RISC-V designs and it’s clear they’re prioritizing making it a possible alternative platform in the case that Arm can no longer be used. The RISC-V foundation also decided to move from the US to Switzerland to avoid the exact sorts of restrictions that have been placed on Arm.

https://www.reuters.com/article/us-usa-china-semiconductors-...


>Arm China CEO Allen Wu has refused to step down after being dismissed by Arm's board

The story you posted is incredible. Does this happen anywhere else in the world?


This can happen anywhere in the world. In order to remove a CEO, you have to follow the proper process. Allen Wu claims that the process wasn't followed and, therefore, his dismissal was illegal and void in effect.

It's not like he was dismissed and he just didn't leave his office. He's challenging the legality of his dismissal.


Ah, but what will it do for the RISC-adverse?

a•verse /əˈvɜrs/ (adj.): Having a strong feeling of opposition to; unwilling: Not averse to spending the night here.

a•verse (ə vûrs′), (adj.): Having a strong feeling of opposition, antipathy, repugnance, etc.; opposed: He is not averse to having a drink now and then.

source: https://www.wordreference.com/definition/averse


Fine, I mistyped. I don't precisely need or desire additional spelling classes from a mysterious online person.

No, I'm sorry, I didn't get your pun and thought you were alluding to the spelling in GP's post.

I didn't mean to bother you, I've been pedantic, thanks for pointing it out.


Well, besides ARM, there is also the MIPS core and ISA, which IC makers can also license and embed into their products in the same fashion as ARM.

They can also design their own ISA. An ISA is a document, they can write their own. Now, can you think of reasons why they wouldn't want to do it that don't also apply to MIPS?

I was thinking the same thing. Nvidia could start charging obscene fees for ARM licenses, but then RISC-V is poised to receive more investment and become increasingly mainstream. Not such a bad thing. We can switch architectures. The toolchain is maturing. Imagine if more companies started making high-performance RISC-V chips?

I was going to say MIPS but yeah.

Don't forget the GeForce Partner Program they pushed a while back which required partners to make their gaming brands exclusive to GeForce products. They ended up cancelling it and I bet the reason was due to all the anti-competitive violations the FTC would have slapped on them.

While Nvidia has a vastly superior product to AMD and Intel, they have less than 20% of the GPU market. Intel has held greater than 50% market share since 2010.

It is very hard to make an anti-competition case against someone who is consistently 2nd and 3rd in the market.


The “GPU market” is not an ideal market (E.g. with total fungibility of goods, low non-elastic demand, etc) - Intel has most of the share of GPUs sold because it’s impossible for NVIDIA - or anyone else - to compete with Intel in the spaces where Intel supplies without competition: CPU-integrated GPUs.

On a related note: with PCs now definitely heading towards ARM, this is a sensible move by NVIDIA: they could now sell GeForce-integrated ARM chips for future Windows and Linux boxes - and then they would be the ones with the dominant marketshare.


If only as a point of balance: Intel integrated GPUs are the safe choice on Linux. If they were not, there would be space for competitors: entry-level GPUs, AMD iGPUs.

The anti-competitive concern isn't the market for all GPUs, it's gaming hardware. The sole GPU providers for that space are just AMD and Nvidia.

Nvidia's GPP would require manufacturers such as ASUS, Gigabyte, MSI, HP, Dell, etc. to have their gaming brands only use GeForce GPUs. So all the well-known gaming brands such as Alienware, Voodoo, ROG, Aorus and Omen would only be allowed to carry GeForce. Nvidia already has aggressive marketing plastering their brand across every esports competition, which is fair game, but the GPP would be a contractual obligation not to use AMD products.


Which is a perfectly reasonable and legal thing to do, even if you don't personally like it.

Nike is the exclusive clothing brand of every NBA team. American Airlines is the exclusive airline of the Dallas Cowboys. UFC fighters can only wear Reebok apparel the entire week leading up to a fight.

Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.


> Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.

How did that work out? Did your company secure a good rate - and/or did the vendor become complacent once they realised they didn’t have to compete anymore? Did the contract require minimum levels of improvement in server reliability and performance with each future product generation?


He's not saying it's a good or bad idea, he's pointing out it isn't an illegal restraint of trade.

It's a tough spot to be in. AMD and Intel split the largest chunk of the cake because their products are cheap, so they make money in volume.

The only reason nvidia has 20% of the GPU market at all is because their products are better, but without volume, there is very little separating you from losing the market.

If NVIDIA slips over AMD and Intel perf wise during one generation, the competition will have cheaper and better products, so it's pretty much game over.


>If NVIDIA slips over AMD and Intel perf wise during one generation

It has happened many times.


In both graphics and compute ?

It's ok for nvidia to release an architecture that does compute very well but barely improves graphics, and vice-versa.

But I don't recall any compute generation where there was a better product from the competition.


>While Nvidia has a vastly superior product to AMD

Which product? It can't possibly be their GPUs you mean because that would be hilariously wrong. That is like saying a Lamborghini is a better car than a VW because it has a higher top speed.

To me, and to many, many buyers, AMD is the superior product. To most, Intel has the best product by far (business laptops, Chromebooks, etc.).


I'm curious what ways you think AMD GPUs are better? I can think of dozens of ways NVIDIA GPUs are better, struggling to think of any for AMD.

AMD has the best iGPUs available, which (unlike Intel's) are actually fast enough to play a lot of games. They're also significantly more power efficient as a result of 7nm. For any use where this is fast enough -- and this is a huge percentage of the PC market -- nVidia has no answer to this.

AMD is the only option for a performant GPU with reasonable open source drivers. Intel has the drivers but they don't currently offer discrete GPUs at all. nVidia doesn't have the drivers.

AMD makes it a lot easier to do GPU virtualization.

AMD GPUs are used in basically all modern game consoles, so games that run on both are often better optimized for them.

They also have the best price/performance in the ~$300 range, which is the sweet spot for discrete GPUs.


No. AMD's iGPUs are still not powerful enough to play games at good quality, mainly due to memory bandwidth.


Performance per dollar, and usually more energy efficient. NVIDIA coasts on their proprietary extensions imo.

Maybe it just dawned on them that calling upon the wrath of an army of angry gamers isn't such a good idea.

But it still killed Kaby Lake G. What could have been... sigh

Also, it must be hard to trust them as the creator of the designs you use if they also compete with you directly in the market.

At this point, do Apple and Qualcomm even depend on ARM's new designs? In the same way that AMD branched from Intel but are still mostly compatible, can the same thing happen in mobile chipsets?

Apple very much does not and is reported to have a perpetual license to the ISA. Apple will most likely be fine even in the worst case scenario.

Qualcomm, however, has been shipping rebranded/tweaked (which of the two is unclear) ARM standard CPU core designs since 2017. They very much depend on ARM doing a lot of heavy lifting.


A few years ago 15 companies, among which Qualcomm, Apple, Intel, Nvidia, Microsoft, Samsung, Huawei and more had an architecture license which is perpetual so that probably puts them all on safe ground. I'm sure that the specific licensing terms can vary but I doubt someone like Qualcomm didn't take any precautions for exactly such eventuality given how much they rely on being able to ship new ARM based SoCs. They probably gave up on designing custom cores because of the effort/costs involved and the fact that 99% of the Android market doesn't really require it. But they'd still have to ship ARM cores, standard or not.

Apple is probably the safest of the bunch given how they helped build ARM.

ARM announced it would cut ties with Huawei after the US ban but reconsidered the decision less than half a year later so I assume that the architecture license is either usually iron clad or simply too valuable to both sides to give up.


> more had an architecture license which is perpetual

Architectural license is not necessarily perpetual

> Drew declined to comment on whether the deal was multi-generational

https://www.eetimes.com/microsoft-takes-arm-architectural-li...

So they may have license for ARMv8 but not future ISAs like ARMv9.


Perpetual means they can indefinitely deliver as many designs as they want using the ISA they licensed. Not "perpetual for everything ARM present and future". It's similar to the perpetual multi-use license except the holder has more freedom with the modifications and customized designs. All other licenses are time limited.

And again, the terms of the license may vary. I have the impression that Apple has a far more permissive license than anyone else out there for example.


Ah, I was not aware of this agreement.

Qualcomm has shown in the past to be able to build great custom ARM CPUs not based on an ARM standard design. But it seems they decided the investment was not worth it after their custom Kryo design (which was not a complete failure but definitely not better than what ARM was producing at the time). But I think they'll need to go back to their own silicon at some point if this acquisition happens.

For sure Huawei and Samsung (and smaller manufacturers like Rockchip, Mediatek, Allwinner) don't have an impressive track record designing custom CPU IP and definitely not custom GPU IP. These guys should be terribly alarmed if this were to happen.


All the latest Snapdragons use bog standard ARM cores.

I imagine there are some ongoing license fees paid for ARM.

> This is quite concerning honestly. I don't mind ARM being acquired, and I don't mind Nvidia acquiring things.

You should mind both of these things. The more oligopolistic technology is, the worse.

Other than that - fully agree with your concern. As a GPU developer I'm often frustrated with NVIDIA's attitude towards OpenCL, for example.


if so, could this end up as good for RISC-V in the long run?

Definitely in the short run, because of the understandable fear from NVIDIA's competitors to use their (now) technology. Maybe in the mid run if those fears begin to crystallize. Unlikely in the long run, I'd assume NVIDIA would spin ARM off before killing it entirely, buying ARM would be a multi-billion investment.

Indeed. Nvidia shouldn't be allowed to buy it, given their status (not just a reputation) of an anti-competitive bully.

But anti-trust is so diluted and toothless these days, that the deal will probably be simply rubber stamped. If they aren't stopping existing anti-competitive behavior, why wouldn't they allow such bullies to gain even more power?


Yeah, I, too am worried about this. I would have rather had a conglomerate of companies with Apple being one buying them and keeping them private. But oh well. Hopefully Nvidia does right by all of ARM’s existing customers.

With this buy Nvidia has GPUs, CPUs, networking, what else do they need to be a vertically integrated shop?


RAM, manufacturing, storage, cooling, power supply, ... depends on how far you want to vertically integrate everything.

Let Nvidia have ARM; they'll run it into the ground and speed up RISC-V adoption.

If all 'closed' companies supported Linux as well as NVIDIA does, then I would throw a party. Keep in mind that they don't have to open up their stuff. Instead, they support it to the hilt, and as long as I've been using Linux and Nvidia together (2006 or so) they've never let me down.

> Nvidia is the only suitor in concrete discussions with SoftBank, according to the people.

Would Arm stakeholders (i.e. much of the computer industry) prefer an IPO?

In 2017, Softbank's Vision Fund owned 25% of Arm and 4.9% of Nvidia, i.e. these are not historically neutral parties, https://techcrunch.com/2017/08/07/softbank-nvidia-vision-fun...

After WeWork imploded, https://www.bloomberg.com/opinion/articles/2019-10-23/how-do...

> Neumann created a company that destroyed value at a blistering pace and nonetheless extracted a billion dollars for himself. He lit $10 billion of SoftBank’s money on fire and then went back to them and demanded a 10% commission. What an absolute legend.

Is the global industry (cloud, PC, peripheral, mobile, embedded, IoT, wearable, automotive, robotics, broadband, camera/VR/TV, energy, medical, aerospace and military) loss of Arm independence our only societal solution to a failed experiment in real-estate financial engineering?


ARM was public for a long time. In 2016 they were taken private by Softbank via a $32bn acquisition.

ARM's IPO value is under $10B; revenue/profits are too small, growth not strong enough.

Why would Arm be valued at $10B publicly and $32B+ privately? Nvidia shareholders would be paying a premium for ... what exactly? Did Softbank overpay for Arm?

Is Arm not profitable as a standalone business? They recently raised some license fees by 4X.


I don’t believe NVidia will pay $30B. But certainly they might believe ARM has value outside its current cash flow and mediocre growth. Like strategically combining technologies.

I’m skeptical that will work, but Son was dumb enough to pay $31B with no strategic value.


> I’m skeptical that will work, but Son was dumb enough to pay $31B with no strategic value.

At the time I thought Son had clever telco synergies in mind, but I gave him far too much credit


ARM is actually an okay choice and $31B probably wasn't even that out of whack.

The problem is that Son needs cash, so he's flogging off everything he can to get it.


Apparently Son had a stake in Nvidia too which was sold off last year at a slight loss if you believe BI. Crazy.

https://www.businessinsider.com/running-list-softbank-invest...


>Softbank overpay for Arm

Grossly so. They paid like 45 percent above what the stock was trading at the time.


That's a pretty normal premium for a buyout. You'll never be able to buy a company completely just by using the stock price * number of shares.

The company is currently valued by analysts as high as $40 billion. Most seem to believe it's worth more than the $32bn Softbank paid in 2016.

> Nvidia shareholders would be paying a premium for ... what exactly?

They'd be paying a premium for a path to an all-nvidia datacenter & supercomputer.

Consider HPC applications like Oak Ridge's Frontier supercomputer. They went with an all AMD approach in part due to AMD's CPUs & GPUs being able to talk directly over the high-speed Infinity Fabric bus. Nvidia's HPC GPUs can't really compete with that, since neither Intel nor AMD are exactly in a hurry to help integrate Nvidia GPUs into their CPUs.

This makes ARM potentially uniquely valuable to Nvidia - they can then do custom server CPUs to get that tight CPU & GPU integration for HPC applications.


Hmmmm.

There is [0] https://en.wikipedia.org/wiki/NVLink

which is supported by [1] https://en.wikipedia.org/wiki/POWER9

those two combined give you [2] https://en.wikipedia.org/wiki/Summit_(supercomputer)

currently the world's number 2 supercomputer (only very recently dethroned) according to the article.

Installed at Oak Ridge.

So they are already there, just needing some premium POWER?


Can't they make custom Arm server CPUs without buying Arm, as the Amazon/Annapurna team and others have done with their Arm licenses?

Amazon paid 350MM for Annapurna, ~ 1/100th of 32B.

For embedded devices, Nvidia already ship Jetson boards with Arm CPUs and Nvidia GPUs.


Sure, but they'd need to buy or build a CPU design team. Which they'd get as part of buying ARM, the teams that make the Cortex reference designs.

A $30B acquihire would be impressive, 100 times more than Amazon paid for Annapurna, the team who built AWS Graviton server CPU on top of Arm's reference design. If the HR department is having so much trouble hiring Arm engineers that Nvidia needs to pay 30 billion dollars to hire a CPU design team, something's wrong. Nvidia already has CPU design teams, e.g. they made a 2014 Transmeta-like design.

https://hothardware.com/news/nvidias-64bit-tegra-k1-the-ghos...

> .. this chip is fascinating. NVIDIA has taken the parts of Transmeta's initial approach that made sense and adopted them for the modern market and the ARM ecosystem -- while pairing them with the excellent GPU performance of Tegra K1's Kepler-based solution.

https://www.extremetech.com/computing/174023-tegra-k1-64-bit...

> there’s an interesting theory ... that Denver is actually a reincarnation of Nvidia’s plans to build an x86 CPU, which was ongoing in the mid-2000s but never made it to market. To get around x86 licensing issues, Nvidia’s chip would essentially use a software abstraction layer to catch incoming x86 machine code (from the operating system and your apps) and convert/morph it into instructions that can be understood by the underlying hardware.

Which other Arm licensee has been talking about x86/Arm instruction morphing in 2020?
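
(Aside, since "code morphing" may be unfamiliar: such a design runs a translate-and-cache loop in software/firmware, much like a JIT. Below is a toy Python sketch of that loop; the miniature guest "ISA" and every name in it are invented purely for illustration and bear no relation to Denver's or Transmeta's actual designs, which decode real machine code and emit native instructions.)

    # Toy translate-and-cache loop, the core idea behind code morphing.
    # A guest program is a dict of basic blocks; each instruction is a
    # (mnemonic, branch_target) pair on a made-up one-register machine.

    GUEST_TO_HOST = {
        "inc":  lambda s: s.update(acc=s["acc"] + 1),
        "dbl":  lambda s: s.update(acc=s["acc"] * 2),
        "halt": lambda s: s.update(next_pc=None),
    }

    translation_cache = {}   # guest block address -> translated routine

    def translate_block(block):
        ops = [(GUEST_TO_HOST[insn], target) for insn, target in block]
        def run(state):
            for op, target in ops:
                op(state)
                if target is not None:
                    state["next_pc"] = target
        return run

    def execute(program, entry):
        state = {"acc": 0, "next_pc": entry}
        while state["next_pc"] is not None:
            pc = state["next_pc"]
            if pc not in translation_cache:        # translate on first use,
                translation_cache[pc] = translate_block(program[pc])
            translation_cache[pc](state)           # then reuse the cached code
        return state["acc"]

    program = {0: [("inc", None), ("dbl", 4)],
               4: [("inc", None), ("halt", None)]}
    print(execute(program, 0))   # -> 3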

If the goal of acqui-billion-hiring the Arm reference design team is to prevent other companies from using those designs, that would endanger smaller vendors in the Arm supply chain, along with many of the devices that run modern society. Regulators may not like that.


Why would nvidia overpay by 22 billion dollars?

Because the sources for this story are investment bankers desperate for bidders?

An ARM owned and fully controlled by NVIDIA is probably worth more to them than an independent and reasonably neutral ARM who's willing to do business with NVIDIA's competitors. Maybe not $22B more, though.

That would imply the purchase is for the purpose of actions that could cause regulators in multiple countries to block the purchase.

There’s an opportunity for arbitrage here then...

Not really. You can't exactly cut in and buy arm from SoftBank for 16 and flip it to nvidia for 32. What's your pitch to SoftBank?

Get hired at arm with equity comp.

Doing an IPO would mean they will use the money raised meaningfully. Shareholders probably see more upside with Nvidia integration. I’m not really sure what ARM needs a bunch of money for in an IPO; they are pretty established.

> I’m not really sure what ARM need a bunch of money for

To buy themselves back from owner Softbank, who can return money to investor Saudi Arabia? https://www.cnbc.com/2018/10/23/softbank-faces-decision-on-w...


If that is the main idea of your S-1, you're not gonna get much interest unless you don't raise much.

The goal of independence is typically to execute on a vision.

According to some comments in this thread, the alternative is the slow destruction of the neutral Arm ecosystem. While some new baseline could be established in a few years, many Arm customers could face a material disruption in their supply chain.

With the US Fed supporting public markets, including corporate bond purchases of companies that include automakers with a supply chain dependent on Arm, there is no shortage of entities who have a vested interest in Arm's success.

If existing Arm management can't write a compelling S1 in the era of IoT, satellites, robots, edge compute, power-efficient clouds, self-driving cars and Arm-powered Apple computers, watches, and glasses, there will be no shortage of applicants.


IPO'ing a business that only makes revenue from licensing arrangements is a recipe for disaster

ARM was publicly traded between 1998 and 2016. In that period its value multiplied about 25x, not counting the premium of the acquisition. Could you elaborate, please? Where do you see the disaster? (Honest question).

Because of Apple. Not to mention their last 40% price increase was because of the Vision fund's nonsense.

Publicly traded companies that rely on income from "licensing" peak in revenue then stagnate because innovation becomes harder to come by.


Apple is a small, although significant, part of ARM's total market share. And that 25x is, as I said, without taking into account the premium. If you do, and there are good arguments to do so, the valuation growth is 35x, in almost 20 years.

Regarding innovation, ARM's been at it since 1990. I'm sure it's not the same now as it was 30 years ago, but we're well past the point where one can reasonably fear it to be an unsustainable business. Last time I heard numbers, they were talking about more than 50 billion devices shipped with ARM IP in them. That is a massive market.

You don't answer my question. Why wouldn't licensing businesses work as publicly traded companies? What's the fundamental difference, specially in an increasingly fabless market, between a company licensing IP to other companies and a company selling productized IP to consumers?


How so? It seems like lots of businesses run successfully on that model for indefinite periods.

If a business is viable as a private company (as ARM certainly is), why wouldn't they be viable as a public company?
