Nvidia is a pretty hostile company to others in the market. They have a track record of vigorously pushing their market dominance and their own way of doing things. They view making custom designs as beneath them. Their custom console GPU designs - in the original Xbox, in the PlayStation 3 - were considered failures because of terrible cooperation from Nvidia. Apple is probably more demanding than other PC builders and has completely fallen out with them. Nvidia has famously failed to cooperate with the Linux community on the standardized graphics stack supported by Intel and AMD and keeps pushing proprietary stuff. There are more examples.
It's hard to not make "hostile" too much of a value judgement. Nvidia has been an extremely successful company because of it too. It's alright if it's not in their corporate culture to work well with others. Clearly it's working, and Nvidia for all their faults is still innovating.
But this culture won't fly well if your core business is developing chip designs for others. It's also a problem if you are the gatekeeper of a CPU instruction set that a metric ton of other infrastructure increasingly depends on. I really, really hope ARM's current business will be allowed to run independently as ARM knows how to do this and Nvidia has time and time again shown not to understand this at all. But I'm pessimistic about that. I'm afraid Nvidia will gut ARM the company, the ARM architectures, and the ARM instruction set in the long run.
An interesting counterpoint would be the Nintendo Switch running on Nvidia Tegra hardware, but all the evidence points to that chip being a 100% vanilla Nvidia Tegra X1 that Nvidia was already selling themselves (to the point that its bootloader could be unlocked like a standard Tegra's, leading to the Switch Fusee-Gelee exploit).
For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (PowerPC), and many other companies in the past.
The claim that Nintendo is the only company nvidia successfully collaborates with is just wrong:
- nvidia manufactures GPU chips and collaborates with dozens of OEMs to ship graphics cards
- nvidia collaborates with IBM, which ships POWER8, 9, and 10 processors, all with nvidia technology
- nvidia collaborates with OS vendors like microsoft very successfully
- nvidia collaborated with mellanox successfully and acquired it
- nvidia collaborates with ARM today...
The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many, many hours of paid developer time to open source, has many open source products, donates money to many open source organizations, and contributes paid manpower to many open source organizations as well...
I mean, this is not nvidia specific.
You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).
I mean, you even try to paint this as if Nvidia is the only company that Apple has parted ways with, yet Apple has a long track record of parting ways with other companies (IBM PowerPC processors, Intel, ...). I'm pretty sure that the moment Apple is able to produce a competitive GFX card, they will part ways with AMD as well.
Hey! Wait a second, there. Nvidia isn't bad because it has a proprietary Linux driver. Nvidia is bad because it actively undermines open source.
Quoting Linus Torvalds (2012):
> I'm also happy to very publicly point out that Nvidia has been one of the worst trouble spots we've had with hardware manufacturers, and that is really sad because then Nvidia tries to sell chips - a lot of chips - into the Android market. Nvidia has been the single worst company we've ever dealt with.
> [Lifts middle finger] So Nvidia, fuck you.
Nvidia managed to push some PR blurbs about how it was improving the open-source driver in 2014, but six years later, Nouveau is still crap compared to their proprietary driver.
Drew DeVault, on Nvidia support in Sway:
> Nvidia, on the other hand, have been fucking assholes and have treated Linux like utter shit for our entire relationship. About a year ago they announced “Wayland support” for their proprietary driver. This included KMS and DRM support (years late, I might add), but not GBM support. They shipped something called EGLStreams instead, a concept that had been discussed and shot down by the Linux graphics development community before. They did this because it makes it easier for them to keep their driver proprietary without having to work with Linux developers on it. Without GBM, Nvidia does not support Wayland, and they were real pricks for making some announcement like they actually did.
Maybe something will change on this soon. There was speculation about this: https://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-O.... But I'm not holding my breath, and it would be nice if the solution wasn't "wait and hope until Nvidia releases the software necessary to control their GPUs".
Do you have more info on this? There is a big difference between not supporting open source and actively sabotaging it. What are they doing, exactly?
TL;DR: starting with the 9xx series, Nvidia started making it so their GPUs would only run firmware signed by them (likely to prevent counterfeits, i.e. 2060s sold as 2080s). So it is impossible for the open source driver to control the fans or reclock the GPU. There's no workaround, so even as the person who owns the device, I can't run my own firmware. AMD has signed firmware too, but they actually release sufficient blobs to fully run the device.
When I had nvidia hardware, I never tried nouveau, so I never ran into this.
CUDA just works.
The 5000 series has been out for an entire year without ROCm support at this point.
It is amazing to watch how much of a struggle AMD is having with getting PyTorch to work with ROCm. It makes me appreciate what a good job Nvidia must have done with CUDA.
(To be fair AMD has it planned, but it's behind several other priorities to them.)
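As a rough illustration of how ROCm support surfaces in practice: PyTorch's ROCm builds reuse the `torch.cuda` API and expose the HIP version via `torch.version.hip` (which is `None` on CUDA/CPU builds). The helper function name below is my own, and the sketch degrades gracefully when PyTorch isn't installed:

```python
import importlib.util

def torch_backend() -> str:
    """Best-effort guess at which GPU backend a local PyTorch was built for."""
    if importlib.util.find_spec("torch") is None:
        return "not installed"
    import torch
    # ROCm builds of PyTorch set torch.version.hip; CUDA/CPU builds leave it None.
    if getattr(torch.version, "hip", None):
        return "rocm"
    # ROCm builds also answer torch.cuda.is_available(), so check hip first.
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

print(torch_backend())
```

This is part of why ROCm adoption is confusing: even when it works, it hides behind CUDA-shaped interfaces.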
LOL what a fucking joke. Not even close.
Also that one VFIO bug makes one of the great advantages of Radeon disappear and there is no fix in sight.
Fuck AMD. Bunch of marketing hype and they ship garbage half baked crap to the market.
I thought AMD had great Linux driver support, and open source drivers.
Turns out AMD only has the open source driver part of the things, and this driver does not support a large chunk of their products.
I'd take a binary driver over no driver any day (and in fact, I returned the amd card and am using intel now, which works great).
AMD: some open source drivers, but many things are inop
Nvidia: all closed source, but everything works
Source: I have a Vega 56 and I am gonna give it the sledgehammer when 3000 series arrives. Fuck that shit to high heavens my dude. ROCm is buggy as fuck. Only the bare basics work on Linux. Drivers are buggy, and crash all the time. Even the most basic bullshit is not implemented right. Like there is no fan control. That's how bad it is.
FP32 performance for the VII is comparable to the RTX 2080 Ti, e.g. on ResNet training. Tensor Cores are cool, but 90% of people don't use FP16 that much.
I'm not sure that is the best reply that stackoverflow has ever seen.
Most frameworks do support ROCm though, so the ball is in AMD's court.
But your mileage may vary, I don't want to make people swear.
If AMD is unwilling to properly support its consumer GPUs in GPGPU workloads and NVIDIA is, then well... sounds like a good reason not to buy AMD.
It's really not a major ask; people have been running GPGPU programs on consumer cards since forever, even though they're "consumer" and sold as graphics cards rather than compute accelerators. NVIDIA basically does this on purpose: people use their personal hardware to get a foot in the door of the ecosystem and end up writing programs that get run on big compute GPUs, which NVIDIA makes big profits on. It's very intentional.
If AMD chooses not to do that, well... can't really blame people for avoiding their stuff when NVIDIA is willing to let you do this stuff on their cards and AMD isn't.
Even there though, only Intel has a buttery smooth experience. Ryzen for laptops is half baked (terrible USB C docking performance, occasional crashing on Windows and Linux with the 2xxxU series CPU/GPU chips) and AMD GPUs still require manual intervention to load proprietary firmware.
AMD does make some performant mobile GPUs though, they work well in Debian!
I switched to Wayland a few years back as vsync is quite nice to have, but whenever I go back to Xorg for AnyDesk or TeamViewer on AMD or Intel there is a fair bit of tearing.
Nvidia could have a competitive open source driver tomorrow if they released redistributable firmware that allowed reclocking the GPU.
As a FreeBSD user I'm still "trapped" on Xorg anyway, and in any case I'd rather stick with what works than learn some whole new way of doing things for marginal benefits.
I never said it was. I said it's a whole new way of doing things for marginal benefits; is that not accurate?
OpenGL 3.3 when the proprietary drivers do OpenGL 4.1 for example.
I have an Intel GFX card. Had an AMD 5700. Linux support sucked, switched back to intel.
So AFAICT, only intel and nvidia have good Linux driver support, and only intel has good open source linux driver support.
I am not sure if I am missing some update, need to set some undocumented kernel flag and/or BIOS setting, if it's a software issue, or if I just made a mistake somewhere. Debian 10/11.
Meanwhile, as much as I wanted to get away from Intel, their drivers have never posed any issue at all.
The open source driver is kind of ok if the only thing we expect from it is getting a working X session.
Now if one wants to do some complex OpenGL stuff, then it might work, or not.
I used to have a HD 7950 and it always worked perfectly, same with my current Vega 56.
The usual linux answer to hardware problems, keeps being to buy new hardware.
But at some point you have to consider if it's really worth keeping a 10 year old laptop around. It's painful to say goodbye to them, I know, I have been there, but for me it's just not worth it.
No need to throw away a perfectly working laptop to enjoy the DirectX 11 and OpenGL 4.1 capabilities that it was sold for.
Even on Windows, AMD drivers are some of the most unstable, bugged software that's ever been shipped. It's been a long-standing joke that AMD “has no drivers”.
The open source driver replacement only does OpenGL 3.3.
I guess I should be happy it does any OpenGL at all.
Still no driver for compute 1 year later. I'm so happy I decided to return it and switch to intel instead of waiting for AMD, or some random joe in their free time, to add support for it to their open source driver.
So yeah. I'd take a working proprietary driver over no driver any day.
Microsoft's own previous-gen Xbox emulator on the next-gen Xbox (I think it was the original Xbox emulated on the 360, but I might be wrong) was impacted by the team having to reverse-engineer the GPU, because Nvidia refused to let the emulator people have access to the documentation provided to the original team.
Is this an ad hominem? Linus does not mention a single thing that they are actually doing wrong.
> Drew DeVault, on Nvidia support in Sway:
Nvidia has added wayland support to both KDE and GNOME. Drew just does not want to support the nvidia way in wlroots, which is a super super niche WM toolkit whose "major" user is sway, another super super niche WM.
Drew is angry for two reasons. First, sway users complain to them that sway does not work with nvidia hardware, which, as a user of a WM, is a rightful thing to complain about. Second, Drew does not want to support the nvidia way, and is angry at nvidia because they do not support the way that wlroots has chosen.
It is 100% ok for Drew to say that they don't want to maintain 2 code paths, and that wlroots and sway therefore do not support nvidia. It is also 100% ok for nvidia to consider wlroots too niche to be worth the effort.
What's IMO not ok is for Drew to feel entitled to getting nvidia to support wlroots. Nvidia does not owe wlroots anything.
IMO when it comes to drivers and open source, a lot of the anger and conflict seems to stem from a sentiment of entitlement.
I read online comments _every day_ from people that have bought some hardware that's advertised as "does not support Linux" (or MacOS, or whatever) being angry at the hardware manufacturer (why doesn't your hardware support the platform it says it does not support? I'm entitled to support!!!), at the dozens of volunteers that reverse engineer and develop open source drivers for free (why doesn't the open source driver that you develop in your free time work correctly? I'm entitled to you working for free for me so that I can watch netflix!), etc. etc. etc.
The truth of the matter is, that for people using nvidia hardware on linux for Machine Learning, CAD, rendering, visualization, games, etc. their hardware works just fine if you use the only driver that they support on the platforms they say they support.
The only complaints I hear is people buying nvidia to do something that they know is not supported and then lashing out at everybody else due to entitlement.
You're now somehow arguing that people should stop complaining about Nvidia's business practices. I would agree with that in the sense that Nvidia can do whatever they want: nobody is obliged to buy Nvidia, and Nvidia is not obliged to cater to everyone's needs. It's a free enough market. But even if you don't agree with some/most of the complaints, surely you must agree that Nvidia's track record of pissing off both other companies and people is problematic for when they take control of a company with an ecosystem-driven business model like ARM's?
I'd agree with you that this is OP's argument; however, its main flaw is in explicitly omitting the fact that NVidia is not the only party that's "free" to do things.
We're not obliged to buy their cards and we aren't obliged to stay silent regarding its treatment of the open-source community and why we think it would be bad for them to acquire ARM.
I am always amazed at the amount of pro-corporate spin from (presumably) regular people who are little more than occasional customers.
We still are. I asked "which specific business practices are these", and was only pointed to ad hominems, entitlement, and one-sided arguments.
Feel free to continue discussing that on the different parent thread. I'm interested on multiple views on this.
> You're now somehow arguing with people that they should stop complaining about Nvidia's business practices
No. I couldn't care less about nvidia, but when somebody acts like an entitled choosing beggar, I point that out. And there is a lot of entitlement in the arguments that people are making about why nvidia is bad at working with others.
Nvidia has some of the best drivers for Linux there are. This driver is not open source and is distributed as a binary blob. Nvidia is very clear that this is the only driver that they support on Linux, and if you are not fine with that, they are fine with you not buying their products. This driver supports all of their products very well (as opposed to AMD's, for example), its development is done by people paid full time to do it (as opposed to most of their competitors, which also have people helping on their drivers in their free time - this is not necessarily bad, but it is what it is), and some of their developments are contributed back to open source, for free.
People are angry about this. Why? The only thing that comes to mind is entitlement. Somebody wants to use an nvidia card on Linux without using their proprietary driver. They know this is not supported. Yet they buy the card anyways, and then they complain. They do not only complain about nvidia. They also complain about, e.g., nouveau being bad, the Linux kernel being bad, and many other things. As if nvidia, or as if the people working on nouveau or the Linux kernel for free on their free time owes them anything.
I respect people not wanting to use closed source software. Don't use windows, don't use macosx, use alternatives. Want to use linux? don't use nvidia if you don't want to.
If you ask your impression of Nvidia's business practices, and they give you their opinion, you can't somehow invalidate that opinion by retorting with debate fallacies. That's the "fallacy fallacy" if you're sensitive to that. This is not a debate competition about who's right, this is people giving their opinions based on Nvidia's past and current actions. You asked a question, and they answered. This is not a competition. Please give them the basic respect of acknowledging their opinion.
We are discussing a topic, and people threw out multiple arguments that do not make sense.
You are claiming that I should just shut up and respect their feelings, but that is worthless.
Somebody's argument was: "Linus doesn't like them, therefore I don't like them".
The reason these are called logical fallacies is because these arguments are illogical. I told them that this was a logical fallacy (appeal to authority - just because someone with authority makes an argument does not mean they are right), and asked them _why_, what is it that Linus, and you, do not like.
I am happy I did that, because many of them raised multiple actually-valuable arguments in response. For example, that nvidia's hardware throttles down if the driver firmware is not signed, and this makes the open source drivers slower for no reason.
That's a valid and valuable argument. "Linus doesn't like them" is worthless.
The person who raised this argument learned something from somebody else who knew what Linus did not like, and so did I.
The same happened when I called out the entitled choosing beggars. "Why are you angry at nvidia for not providing an open source driver? You knew before buying their product that only the binary driver was supported."
Read the responses. The reason they are angry is because they don't have a choice but to use nvidia, because the competition's products (AMD in those cases) are much worse. AMD does have open source drivers, but they are crap, and they don't support many of AMD's products, at least for compute, which is something that many (including myself) use for work.
These people have picked a platform that values open source code, but due to their job requiring them to actually get some work done, they must use nvidia for that, and they don't like having to compromise on a proprietary driver.
Honestly, I think this is still entitlement, but I definitely sympathize with the frustration of having to make compromises one does not like.
From the point of view of whether nvidia buying ARM is good or bad, I still have no idea. ARM does _a lot_ of open source work; its major markets are the Android and Linux communities.
I understand that people are afraid that Nvidia will turn ARM into a bad open source player. It can happen. But without Android, iOS, and Linux, ARM is worthless. So a different outcome of this could be that NVIDIA buying ARM ends up making NVIDIA more open source friendly, since at least the Linux market is important for nvidia as well (~50% of their revenue).
It definitely makes sense for regulatory authorities to only allow somebody to buy ARM that will preserve ARM deals with current vendors (apple, google, samsung, etc.), and that also will preserve ARM open platform efforts.
If nvidia does not agree to that, they should not be allowed to buy arm.
Except that they stop supporting older HW at some point. That, together with occasional crashes, taught me not to buy nVidia HW again.
NVIDIA insisted on pushing its own EGL streams even as the wider community was moving in a different direction.
They suffer from a major NIH syndrome and do not know how to work with others at all.
An NVidia purchase of ARM would also create a lot of conflicts of interest.
ARM is more of a household name for their use in mobile phones but that is just the tip of the iceberg.
I think you underestimate how many ARM chips you have in a single car or delivery truck.
Add to that:
- farming machinery
- construction machinery
- factory line automation
- elevators, escalators
- EV chargers
- fridges, washing machines, ovens
- medical devices such as drug-infusion pumps, ventilators, surgical machinery, etc.
- Auxiliary modules in aeroplanes and shipping containers.
- Infrastructure for Road, Rail, Power Grid with ARM processors running headless embedded systems
- Anything the size of a pebble with bluetooth connectivity uses nordic's nRF chip (which is yet again an ARM chip)
ARM processors are hiding in plain sight in the world all around you.
I understand the point you make, from a business standpoint, on how Tiktok might scale. The thing that is bizarre for me, and where I agree with the parent's sentiment, is how disconnected the valuation is from the real-world impact and objective _usefulness_ of ARM versus Tiktok.
Edit: bullet points
Also how easily the technology behind Tiktok can be duplicated compared to ARM.
Anyway, it's probably all about the network effect and brand value.
Tiktok can deliver arbitrary content to hundreds of millions of people’s eyeballs around the planet.
Consider that in 2017, 21 billion ARM chips were manufactured (doubled in 4 years, from 10 billion in 2013), and that ARM's licensing fees are over 2% of chip cost for current high-end designs (and they're talking about raising that even more). They have 95%+ of the mobile phone market, are making inroads in the server market, and will soon be in every Apple laptop, which I expect will grow the market for ARM laptops even outside Apple. It wouldn't be a stretch to find them in common desktop computers after that. They're in smart TVs, washing machines, robot vacuum cleaners, and all other kinds of smart (and non-smart) appliances. Even SSDs have their own embedded ARM chip. And that's just home/consumer stuff; haven't even scratched industrial/commercial applications, of which there are a ton.
I could easily see yearly production at over 100 billion chips before 2030, probably even before 2025. While I'd love to see something like RISC-V take off commercially, I don't think that's realistic.
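Those shipment figures make for a quick back-of-envelope sketch. The per-chip royalty below is purely my own assumption (ARM's actual rates vary enormously between high-end SoCs and tiny microcontrollers), so treat the output as an order-of-magnitude illustration, not a financial estimate:

```python
# Back-of-envelope ARM royalty sketch. Only chips_2017 comes from the
# discussion above; the royalty figure is an assumed blended average.
chips_2017 = 21e9        # ARM chips shipped in 2017 (from the comment above)
avg_royalty = 0.05       # assumed average royalty per chip, in dollars

royalties = chips_2017 * avg_royalty
print(f"~${royalties / 1e9:.2f}B/year in royalties under these assumptions")

# Scaling to the 100 billion chips/year guessed at for 2030:
future = 100e9 * avg_royalty
print(f"~${future / 1e9:.2f}B/year at 100B chips")
```

Even with a conservative blended royalty, the recurring revenue scales linearly with a chip count that keeps doubling, which is the core of the "great long-term bet" argument.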
Meanwhile, social media users are incredibly fickle; platforms are subject to fad and fashion. Certainly Facebook and Instagram are still huge behemoths, but their growth is nothing like it once was, with people -- especially younger people, trying to distinguish themselves from their older, boring relatives -- flocking to TikTok. I fully expect TikTok will be in Instagram's boat in under 10 years, with some other platform taking its place.
ARM seems like an amazingly great short-, medium-, and long-term bet, while TikTok feels like a nice short- (maybe medium-, if they're lucky) term money-maker, and even that feels like a big maybe: I have no idea what their ad revenue per user looks like, but it's probably not great since their audience skews younger. Teenagers and college kids don't have much in the way of discretionary income. Then again, TikTok doesn't pay their creators like e.g. YouTube does, so they get to keep all that ad revenue.
Now, certainly TikTok might not be sustainable and might disappear off the face of the earth tomorrow. Or it might become a juggernaut that overtakes Facebook.
> The actual rates an advertiser pays varies, usually between $0.10 to $0.30 per view, but averages out at $0.18 per view.
This also is only people that watch the whole ad.
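To put the quoted rates in context, here is a hedged back-of-envelope: the $0.18 figure is per *completed ad view*, so per-content-view revenue depends on what fraction of views actually include a fully watched ad. That fraction below is entirely my assumption:

```python
# Per-content-view revenue sketch using the ad rate quoted above.
rate_per_ad_view = 0.18        # dollars per fully watched ad (from the quote)
completed_ad_fraction = 0.01   # assumption: 1 in 100 content views yields one

revenue_per_content_view = rate_per_ad_view * completed_ad_fraction
# Roughly two-tenths of a cent per content view under these assumptions.
print(f"${revenue_per_content_view:.4f} per content view")
```

That lands in the same ballpark as the "tenth of a cent per view" estimate elsewhere in this thread, which is why per-user revenue stays small even at enormous view counts.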
PS I applaud RISC-V but it won't take over the market for a long time, and it wouldn't drive ARM out completely, I'm sure. Intel's had many competitors and they're doing just fine (even despite screwing up repeatedly with their processes!)
Look at all the failed attempts to move away from x86(/64). Even Intel tried it with Itanium and failed; HP has to pay them to keep making it so they can fulfill their server contracts. I'm sure ARM has a similar hold on the mobile market.
With AWS offering ARM systems, all the Chromebooks, Apple, the complete loss of the phone market, Intel’s staying power is about to be tested to the extreme.
The only actual major change here is AWS offering Graviton, which actually hints at their real cash cow: datacenter SKUs with absurd markup. Something like 80% of their profit margins are here. More accurately, the change is that there are now viable silicon competitors to Intel in the performance department. So it's now clear that ultra-integrated hyperscalers who can actually afford tape out costs (7nm CPUs are not cheap to produce in volume) have an option to vertically integrate with e.g. Neoverse. Smaller players will not do this still, because alternative options like Rome will be adequate. But the only reason any of them are changing anything is cost savings, because now there are actual viable competitors to Intel when there were zero of them for like, 15 years. Producing cutting edge silicon products isn't easy, but it's very profitable, it turns out.
To be clear, Intel isn't charging $10,000 for a Xeon Platinum because it costs $9500 to make and they make $500 in profit. (Likewise, AMD doesn't produce competitors at 1/5th the price because they made a revolutionary, scientific breakthrough in processor design.) They're charging what you'll pay, not what it takes to produce. Seeing as they currently still have a complete stranglehold on the datacenter industry and make more in a quarter than most of their competitors do in several years, I suspect they've got much more "staying power" than the watercooler chat on this website would lead you to believe.
The Softbank guy invested a ton of money in WeWork. Tried to sell WeWork for 60 billion, but before that happened, WeWork's valuation dropped to 2-5 billion (a huge loss). That was in 2019. Afterwards, Softbank invested another 10 billion to try to save it. WeWork owns, and also pays rent on, hundreds of office buildings in the most expensive zones of all the major capitals in the world. COVID in 2020 now means these super expensive offices are empty, since WeWork customers pay a premium to be able to cancel their leases in <1 week. So essentially, WeWork is broke, worth 0, and Softbank has lost dozens of billions on it.
On top, Softbank owns a huge chunk of Uber, which is also worth close to zero now that people are not travelling due to COVID...
So... yeah... Softbank is selling ARM because they must. They are super broke, and investors are going to pull the money that remains out. Selling ARM and giving investors a tiny benefit so that they keep their money is better than them taking a huge loss this year.
MIPS _should_ be dead; half the manufacturers of the chips have stopped. But Imagination Technologies still sells a considerable number to Apple every year.
Here's an estimate of youtube revenue (not exactly tiktok, but not exactly not) at 10 to 30 cents per view.
I loved BeOS, but there was an even more fundamental problem that put a limit on its days: A failure to anticipate coming need for home computers to become more secure. At the same time that Microsoft and Apple were both working frantically to ditch their old single-user desktop operating systems and replace them with, in effect, spruced up versions of existing server/workstation operating systems, Be was trying to launch a brand new OS on the dying model. Had they survived even a couple years longer, they would have had to reckon with that, and they simply didn't have the resources to navigate such a fundamental transition.
Let's be real here: TikTok is a big social network, yes, but ARM owns most of the embedded market. Every smartphone your average consumer buys runs an ARM chip. And hardware is harder to replace than software.
This number is not unbounded. It will converge to some asymptotic limit.
> How much profit will Tiktok make on each of those views?
Around a tenth of a cent per view.
These huge valuations are purely because people don't do the math and don't know how the market works.
That's how this year's myspace, which people will have trouble remembering 5 years from now, can get a higher "valuation" than a large semiconductor company with a 30 year track record.
Investors and the financial sector are proving time and time again that they're unable to learn from their mistakes, through no "fault" of their own, because apparently it's human nature to just be horribly bad at this.
It amazes me that people think investors somehow learned anything from the dot-com bubble, given they've been repeating all of their other major mistakes every odd year or so.
This is some very fresh news.
I think it is because people are now so used to Apple and Amazon's trillion-dollar valuations, with Apple closing in on 2 trillion, that people think $32B is low or (relatively) cheap.
Reality is ARM was quite overvalued when it was purchased by Softbank.
This might also be part of SoftBank's fire sale; they bought ARM for $32B just a few years ago (2016).
I always just assumed that interfacing proprietary IP with the GPL is a tricky legal business. One slip, and all your IP becomes open source.
Do you have a source explaining what licensing changes they would have to make, and what impact that would have for Linux and Nvidia? I'd like to read that.
Hence AMD's push for Mantle, then Vulkan. The console-like API is the carrot to get people to use an API that has a verification layer, so that third parties can easily say "wow, what a broken game" rather than "wow, this new game runs on Nvidia and not AMD, what broken AMD drivers".
Nvidia open sourcing their drivers completely destroys a large chunk of their competitive advantage and is so intertwined with all the IP of the games they have hacks for that I'd be surprised if they ever would want to open source them, or even could if they wanted to.
More docs would be nice though.
- all kinds of problems wrt. the integration of the driver in the Linux ecosystem, including the proprietary driver having quality issues for anything but headless CUDA.
- nvidia getting in the way of the implementation of an open source alternative to their driver
And they don't see the benefit of creating new drivers.
I agree they should help out more.
We couldn't care less about the hacks and kludges baked into the proprietary Nvidia drivers and firmware focused on DirectX-powered gaming. The current path Nvidia has chosen with signed firmware locks open source developers out of much of the low-level operation of their GPUs.
AMD does the exact same thing and always has. When you see shaders come down the wire you can replace them with better-optimized or more performant versions. It's almost always fixing driver "bugs" in the game rather than actual game bugs. And the distinction is important.
I do agree with you, but that element is something everyone has to do to remain competitive in games. Developers will only optimize for one platform (because they're crunching), and 9 times out of 10 that's an RTX 2080 Ti.
While yes AMD did similar things when they could, it was way less prevalent (if only because they didn't have the staff necessary to pull it off to the same degree).
Edit: Here's an example of some of the stuff I'm talking about that goes beyond shaders. https://devblogs.microsoft.com/oldnewthing/20040305-00/?p=40...
This persistent bit of FUD really needs to die. Yes, you have to be careful, but at this point it's ridiculously well-known what is obviously correct and what is obviously incorrect when dealing with the GPL. I'm sure there are some grey areas that haven't been worked out, but avoiding those is fairly simple.
Nvidia is already in a weird grey area, releasing binary blobs with an "open source" shim that adapts its interfaces to the GPL kernel. As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.
Way to contradict yourself.
> but avoiding those is fairly simple.
> As much as the Linux kernel's pragmatic approach toward licensing helps make it easier on some hardware manufacturers, sometimes I wish they'd take a hard line and refuse to carve out exceptions for binary drivers, inasmuch as those can sometimes/always be considered derived works.
Maybe this is what needs to happen to force companies to change their mindset, but where I work, lawyers tell us to (1) never contribute to any GPL'ed code, (2) never distribute GPL'ed code to anybody (e.g. not even a docker container), etc.
Their argument is: a single slip could require us to publish all of our code and make all of our IP open, and to make sure this doesn't happen, an army of lawyers, software engineers, and managers would need to review every single code contribution that has anything to do with the GPL. So the risks are very high, the cost of doing this right is very high as well, and the reward is... what, exactly? So in practice this means we can't touch GPL'ed code with a ten-foot pole; it's not worth the hassle. If I were to ask my manager, they would tell me it's not worth it. If they asked their manager, they'd be told the same. Etc.
BSD code? No problem — we contribute to hundreds of BSD, MIT, Apache, etc. licensed open source projects. Management tells us to just focus on those.
For some older card generations (e.g. GTX 600 series) it was competitive with the official driver. But in every hardware generation since then, the GPU requires signed firmware in order to run at any decent clock speed.
The necessary signed firmware is present inside the proprietary driver, but nouveau can't load it because it's against the ToS to redistribute it.
Most GPU features are available but run at 0.1x speed or slower because of this single reason. Nvidia could absolutely fix this "tomorrow" if they were motivated.
At this point I've gotten so bloody tired of the games people play with IP that I'm arriving at the point where I wouldn't even mind being part of the collateral damage of our industry being burned to the ground through the complete dissolution of every software-related contract. If you sell me hardware and play shenanigans to keep me from using it to its fullest capability, you're violating the intent of first-sale rights.
To be honest, I think every graphics card should have to be sold bundled with enough information for a layperson (or, I'll throw out a bone, a reasonably adept engineer) to write their own hardware driver/firmware. Without that requirement, this industry will never change.
The point is it's not about open sourcing your proprietary driver, but about not getting in the way of an alternative open source driver — maybe even giving it a bit of a hand, even if just unofficially.
I think if I were nvidia I might go in the direction of having a fully or at least partially open source driver for graphics stuff, and a not-so-open-source driver for headless CUDA (potentially running alongside an Intel integrated graphics based head/GUI).
Though I don't know what they plan wrt. ARM desktops/servers, so this might conflict with their strategies there.
The only tricky things involve blatantly betraying the spirit of the agreement while trying to pretend to follow the letter and hoping a judge supports your interesting reading of the law.
Even so there is no provision in law wherein someone can sue you and magically come into possession of your IP.
It would literally require magical thinking.
Take a look at a recent snapshot of changesets and lines of code to the Linux kernel contributed by various employers: https://lwn.net/Articles/816162/
Arm themselves is listed at 1.8% by changesets; but Linaro is a software development shop funded by Arm and other Arm licensees to work on Arm support in various free software, and they contributed 4% of changesets and 8.8% by lines of code. And Code Aurora Forum is an effort to help various hardware vendors, many of whom are Arm vendors, get drivers upstreamed, and they contributed 1.8% by changesets and 10.1% by lines changed. A number of other top companies listed are also Arm licensees, though their support may be for random drivers or other CPU architectures as well.
However, Arm and companies in the Arm ecosystem do make up a fairly large amount of the code contributed to Linux, even if much of it is just drivers for random hardware.
And Arm and Linaro developers also contribute to GCC, LLVM, Rust, and more.
You are not required to do that. Use nouveau, buy an AMD or intel GFX card.
You are not entitled to it either. People developing nouveau on their free time don't owe you anything, and nvidia does not owe you an open source driver either.
I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
I don't use nvidia GFX cards on linux anymore (intel suffices for my needs), but when I did, I was happy to have a working driver at all. That was a huge upgrade from my previous ATI card, which had no driver at all. Hell, I even tried using AMD's ROCm recently on Linux with a 5700 card, and it wasn't supported at all... I would have been very happy to hear that AMD had a binary driver that made it work, but unfortunately it doesn't.
And that was very disappointing because I thought AMD had good open source driver support. At least when buying Nvidia for Linux, you know beforehand that you are going to have to use a proprietary driver, and if that makes you uncomfortable, you can buy just something else.
Has internet discussion really fallen this low that all needs to be spelled out and no context can ever be implied?
We're in a thread about NVidia, so of course OP's talking about NVidia hardware here. Yeah, they can get AMD, but that does not change their (valid) criticisms of NVidia one bit.
> I don't really understand the entitlement here. None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
Windows and macOS have different standard for drivers than many Linux users do. Is it really that surprising that users who went with an open-source operating system find open-source drivers desirable too?
I find it really weird to assume that because something is happening somewhere, it's some kind of an "objective fact of reality" that has to be true for everyone, everywhere.
When you shop for things, are you looking for certain features in a product? Would you perhaps suggest in a review that you'd be happier if a product had a certain feature or that you'd be more likely to recommend it?
It's the same thing. NVidia is not some lonely developer on GitHub hacking during their lunch break on free software.
Do you also assume that the kind of music you find interesting is objectively interesting for everyone?
This has nothing to do with entitlement. It's listing reasons for why someone thinks NVidia buying ARM is a bad idea.
It is to me. When I buy a car, I do not leave a 1 star review stating "This car is not a motorcycle; extremely disappointed.".
That's exactly how these comments sound to me. Nvidia is very clear that they only support their proprietary driver, and they deliver on that.
I've had many GFX cards from all vendors over the years, and I've had to send one back because the vendor wasn't honest about things like that.
Do I wish nvidia had good open source drivers? Sure. Do I blame nvidia for these not existing, not really. That would be like blaming microsoft or apple for not making all their software open source.
I do however blame vendors that do advertise good open source driver support that ends up being crap.
What does any of this have to do with nvidia buying or not buying arm? Probably nothing.
What nvidia does with their GFX driver can be as different from what ARM does, as what Microsoft does with Windows and Github.
That's a bad analogy. A better one: it's like you bought a car (an open-source operating system), and this one accessory supplier is selling you what are really motorcycle parts that only just barely fit the car (a less-than-great proprietary driver when you're explicitly on an open system).
Additionally, they are extremely secretive and absolutely refuse to answer any sort of questions or allow you to modify the parts you purchased from them to fit better by implementing various forms of DRM.
You can just not buy those parts and indeed that's what many users are doing.
This is separate from raising concerns about this somewhat dodgy parts manufacturer potentially acquiring another manufacturer, specifically one that does require a lot of cooperation with others by its very nature.
> Nvidia is very clear that they only support their proprietary driver, and they deliver on that.
It's more complex than that. They seem to actively implement features to make it purposely more difficult to develop an independent open-source driver. This is rather different from just being passively indifferent to open source. Moreover, their proprietary driver can be less than stellar too, so I'm not so sure they "deliver" even on that.
Therefore we, Linux users, can refuse to support a company that only supports their (lacking) proprietary driver and certainly we are within our rights to raise concerns about its purchase of ARM given its actively hostile approach to open-source.
And you can argue that that still is all fine, and that if you're making a choice to run Linux, then you have to accept trade offs. And I'm sympathetic to that argument.
But you're also trying to say that we're not allowed to be angry at a company that's been hostile to our interests. And that's not a fair thing to require of us. If nvidia perhaps simply didn't care about supporting Linux at all, and just said, with equanimity, "sorry, we're not interested; please use one of our competitors or rely on a possibly-unreliable community-supported, reverse-engineered solution", then maybe it would be sorta ok. But they don't do that. They foist binary blobs on us, provide poor support, promise big things, never deliver, and actively try to force their programming model on the community as a whole, or require that the community do twice the amount of work to support their hardware. That's an abusive relationship.
Open source graphics stack developers have tried their hardest to fit nvidia into the game not because they care about nvidia, but because they care about their users, who may have nvidia hardware for a vast variety of reasons not entirely under their control, and developers want their stuff to work for their users. Open source developers have been treated so poorly by nvidia that they're finally starting to take the extreme step of deciding not to support people with nvidia hardware. I don't think you appreciate what a big deal that is, to be so fed up that you make a conscious choice to leave a double-digit percentage of your users and potential users out in the cold.
> None of the drivers on my windows and macosx machines are open source. They are all binary blobs.
Not sure how that's relevant. Windows and macOS are proprietary platforms. Linux is not, and should not be required to conform to the practices and norms of other platforms.
This company in no way shape or form is obligated to cater to your interests. In this case it would likely be counter to their interests.
But for entirely different reasons. Apple switched from PowerPC to Intel because the PowerPC processors IBM was offering weren't competitive. They switched from Intel for some combination of the same reason (Intel's performance advantage has eroded) and to bring production in-house, not because Intel was quarrelsome to do business with.
Meanwhile Apple refused to do business with nVidia even at a time when they had the unambiguously most performant GPUs.
Nvidia introduced a set of laptop GPUs that had a high rate of failure. Instead of working with their customers and eating some of the cost of repairing these laptops, they told them to deal with it. Apple, being one of those customers, got upset at being left holding the bag and hasn't worked with them since.
Intel and AMD have used their x86/AMD64 patents to block Nvidia from entering the x86 CPU market.
Nvidia purchasing ARM will hurt not the large ARM licensees like Apple and Samsung, but the ones that need to use the CPU in a device that doesn't need any of the multimedia extensions NVidia will be pushing.
But yeah I don't think it's about collaboration either.
Out of curiosity, is there any large open source product from NVidia? I can't think of any.
NVIDIA contributes mostly to existing open source projects (LLVM, Linux kernel, Spark, etc.), see https://developer.nvidia.com/open-source
It is a bit like WebKit. It was based on KHTML which was an open source HTML renderer. But Apple expanded that so greatly on their own payroll that it is hard to call WebKit anything but an Apple product.
EDIT - for added detail:
> - nvidia manufactures GPU chips, collaborates with dozens of OEMs to ship graphics cards
Most (all?) of which bend to Nvidia's demands because Nvidia's been extremely successful in getting end users to want their chips, making the Nvidia chip a selling point.
> - nvidia collaborates with IBM which ships Power8,9,10 processors all with nvidia technology
IBM bends to Nvidia's demands so POWER can remain a relevant HPC platform.
> - nvidia collaborates with OS vendors like microsoft very successfully
Microsoft is the only significant OS vendor with which Nvidia collaborates successfully. That's true — but for the longest time Nvidia would have been out of business if they didn't. I'll concede this point, but I don't find it enough to paint a different picture.
> - nvidia collaborated with mellanox successfully and acquired it
Mellanox bent over to Nvidia's demands to such an extent that they were acquired.
> - nvidia collaborates with ARM today...
Collaboration in what sense? My impression is that Nvidia and ARM have a plain passive customer/supplier relationship today.
> The claim that nvidia is bad at open source because it does not open source its Linux driver is also quite wrong, since NVIDIA contributes many many hours of paid developer time open source, has many open source products, donates money to many open source organizations, contributes with paid manpower to many open source organizations as well...
Nvidia is humongously behind their competitors Intel and AMD in open source contribution, despite spending far more on graphics R&D. They are terrible at open source compared to the "industry standard" of their market, and only partake as far as it serves their short-term needs.
They are perfectly entitled to behave this way, by the way. But Nvidia's open source track record is only more evidence that they don't understand how to work in an open ecosystem, not less.
> You can take any big company, e.g., Apple, and paint a horrible case by cherry picking things (no Vulkan support on MacOSX forcing everyone to use Metal, they don't open source their C++ toolchain, etc.), yet Apple does many good things too (open sourced parts of their toolchain like LLVM, open source swift, etc.).
The "whataboutism" is valid but completely irrelevant here. I would also not appreciate Apple buying ARM.
> For example, you paint it as if Nvidia is the only company Apple has had problems with, yet Apple has parted ways with Intel, IBM (Power PCs), and many other companies in the past.
Apple has parted ways with Intel, IBM, Motorola, Samsung (SoCs) and PowerVR for technology strategy reasons, not relationship reasons. Apple had no reason to part ways with Nvidia for technical reasons (especially considering they went to AMD instead), but did so because of the terrible relationship they built.
I'm typing this on a MacBook with an Nvidia GPU that was released in 2012, many years after the failing-laptop-GPU debacle. AFAIK, Apple used that GPU until 2015?
I'd wager that Apple has been using AMD for something as mundane as offering better pricing, rather than disagreement 12 years ago. (Again: despite all the lawsuits, Apple is still a major Samsung customer.)
This used to be true, as Apple swapped between AMD and Nvidia chips several times in 2000-2015. Then Nvidia and Apple fell out, and Apple has not used Nvidia chips in new designs in 5 years - a timeframe in which Nvidia coincidentally achieved its largest technical advantages over AMD. Apple goes as far as to actively prevent Nvidia's own macOS eGPU driver from working on modern macOS. A simple pricing dispute does not appear to be a good explanation here.
CUDA is such an "external dependency". It locks you in to something that's not an Apple product.
It is the same issue that caused the Xbox 360 red-ring-of-death, and caused "baking your graphics card" to become a thing (including AMD cards). It basically affected everyone in the industry at the time, and Apple would not have gotten any different outcome from AMD had they been in the hot seat at the time. They were just throwing a tantrum because they're Apple, damn it, and they can't have failures! Must be the supplier's fault.
That one has always struck me as a "bridezilla" story where Apple thinks they're big enough to push their problems onto their suppliers and NVIDIA said no.
And as far as the Xbox thing... Microsoft was demanding a discount and NVIDIA is free to say no. If they wanted a price break partway through the generation, it probably should have been negotiated in the purchase agreements in the first place. NVIDIA needs to turn a profit too and likely structured their financial expectations of the deal in a particular way based on the deal that was signed.
Those are always the go-to "OMG NVIDIA so terrible!" stories and neither of them really strike me as something where NVIDIA did anything particularly wrong.
Canonical, which ships nvidia's proprietary driver with Ubuntu, is another quite major OS vendor that collaborates with nvidia successfully. Recently, Ubuntu's wayland-based desktop environment was also the first to work with nvidia's driver (the results of this work are open source).
You will find ARM Macs are cheaper than Intel Macs; even if they're not as fast, they consume less power thanks to mobile technology.
Microsoft had the Surface tablet with ARM chips and an ARM version of Windows, which didn't sell well — but then, they are not Apple, who won't make the same mistakes as Microsoft.
This contradicts the claim from the OP that suggests that all the projects from one of these companies are all bad.
Xbox: The Xbox's security was broken, and Nvidia apparently took the high road: they claimed a loss on all existing chips in the supply chain (taking a loss for the quarter out of nowhere and tanking their stock for a bit) and allowed Microsoft to ship a new initial boot ROM as quickly as possible for minimal cost to Microsoft. When that new mask ROM was cracked within a week of release, Microsoft went back to Nvidia looking for the same deal, and Nvidia apparently told them to pound sand — in fact saying they would do no additional work on these chips, not even die shrinks (hence why there was no OG Xbox Slim). There were other reasons why Microsoft felt Nvidia still owed them, admittedly, but it was a bit of a toxic relationship for everyone involved.
PS3: Nvidia was never supposed to supply the GPU until the eleventh hour. The Cell was originally supposed to crank up to 5GHz (one of the first casualties of the end of Dennard scaling, and how it affected Moore's law as we conceived it), and there were supposed to be two Cell processors in the original design with no dedicated GPU. When that fell through and they could only crank them up to 3.2GHz, Sony made a deal with Nvidia at the last second to create a core with a new bus interconnect to attach to the Cell. And that chip was very close to Nvidia's state of the art. Most of its problems were centered around duct-taping a discrete PC GPU into the console with time running out on the clock, and I don't think anyone else would have been able to deliver a better solution under those circumstances.
Like I said, Nvidia is a scummy company in a lot of respects, but I don't think the Xbox/PS3 issues are necessarily their fault.
If working with everyone meant stepping back and relenting in every possible way, then Nvidia would not be profitable. I am not sure why Microsoft felt entitled to concessions from Nvidia. And Nvidia just said no. It was that simple.
Nvidia wants to protect their business interests, and that is what business is all about. And yet everyone on the internet seems to think companies should do open source or throw resources into it, etc.
I am just pointing out that Nvidia's evident opinion on how to run a business (their corporate culture) is not in line with cultivating an open ecosystem like ARM is running. And the cultivation of this ecosystem is ARM's key to success here. Nvidia is entitled how to run a business how they want, but I'm very much hoping that that way of working does not translate to how they will run ARM.
People everywhere in this thread are having huge difficulty separating the point "Nvidia's way of doing business does not match ARM's" from "I have personal beef with Nvidia's way of doing business". I'm trying to make the former argument.
> That is just not true.
Out of curiosity - what isn't true here? Am I missing facts, or are you expressing disagreement with my reading of the business situation? If the latter is based on some understanding I have some personal beef with Nvidia, then please reconsider.
: Which is exactly why SPARC (and, with one exception, Power) is dying, and why RISC-V is yet to deliver. Nobody (bar IBM's POWER line) is building good processors with those ISAs that make it worth the effort to use them. Nothing to do with the ISA — you just need chips people are interested in using.
About the only thing that could force that to change would be another company buying up ARM and changing the licensing mechanisms (e.g. pricing or even removing some license options) going forward.. or just wrecking the product utterly.
I do think RISC-V has an opportunity here, but only if ARM sells out to NV and NV screws this up as hard as they're likely to in that situation.
The way I see it is that this may actually generate incentive for someone to do that. One of the reasons that that isn't happening yet is because there's no real need with ARM vendors supplying and no real chance with ARM vendors as competition. This could, in theory, clear the way.
This is particularly true for Android because basically the entire thing is written in portable languages and the apps even run on a bytecode VM already, so switching to another architecture or even supporting multiple architectures at the same time wouldn't be that hard.
Google could easily afford to design their own RISC-V CPUs and port Android to it, if they thought it was in their strategic interests to do so.
I think it really depends on how nVidia-owned Arm behaves. If it behaves the same as Softbank-owned Arm, I don't think Google would bother. If it starts to behave differently, in a way which upsets the Android ecosystem, Google might do something like this. (I imagine they'll give it some time to see whether Arm's behaviour changes post-acquisition.)
And given that Nvidia is a US company, that makes them quite toxic for a Chinese company to source from.
“Arm revealed that an investigation had uncovered undisclosed conflicts of interest as well as violations of employee rules.”
The story you posted is incredible. Does this happen anywhere else in the world?
It's not like he was dismissed and he just didn't leave his office. He's challenging the legality of his dismissal.
a•verse (ə vûrs′), (adj.): Having a strong feeling of opposition, antipathy, repugnance, etc.; opposed: He is not averse to having a drink now and then.
I didn't mean to bother you, I've been pedantic, thanks for pointing it out.
It is very hard to make an anti-competition case against someone who is consistently 2nd and 3rd in the market.
On a related note: with PCs now definitely heading towards ARM, this is a sensible move by NVIDIA: they could now sell GeForce-integrated ARM chips for future Windows and Linux boxes - and then they would be the ones with the dominant marketshare.
Nvidia's GPP would require manufacturers such as ASUS, Gigabyte, MSI, HP, Dell, etc. to have their gaming brands use only GeForce GPUs. So all the well-known gaming brands such as Alienware, Voodoo, ROG, Aorus, and Omen would only be allowed to carry GeForce. nVidia already has aggressive marketing plastering their brand across every esports competition, which is fair game, but the GPP would be a contractual obligation to not use AMD products.
Nike is the exclusive clothing brand of every NBA team. American Airlines is the exclusive airline of the Dallas Cowboys. UFC fighters can only wear Reebok apparel the entire week leading up to a fight.
Heck, I worked for a company that signed an exclusive deal to only use a single server vendor.
How did that work out? Did your company secure a good rate - and/or did the vendor become complacent once they realised they didn’t have to compete anymore? Did the contract require minimum levels of improvement in server reliability and performance with each future product generation?
The only reason nvidia has 20% of the GPU market at all is because their products are better, but without volume, there is very little separating you from losing the market.
If NVIDIA slips over AMD and Intel perf wise during one generation, the competition will have cheaper and better products, so it's pretty much game over.
It has happened many times.
It's ok for nvidia to release an architecture that does compute very well but barely improves graphics, and vice versa.
But I don't recall any compute generation where there was a better product from the competition.
Which product? It can't possibly be their GPUs you mean because that would be hilariously wrong. That is like saying a Lamborghini is a better car than a VW because it has a higher top speed.
To me -and to many many buyers- AMD is the superior product. To most Intel has the best product by far (business laptop, Chromebook, etc.)
AMD is the only option for a performant GPU with reasonable open source drivers. Intel has the drivers but they don't currently offer discrete GPUs at all. nVidia doesn't have the drivers.
AMD makes it a lot easier to do GPU virtualization.
AMD GPUs are used in basically all modern game consoles, so games that run on both are often better optimized for them.
They also have the best price/performance in the ~$300 range, which is the sweet spot for discrete GPUs.
Qualcomm, however, has been shipping rebranded/tweaked (which one is unclear) ARM standard CPU core designs since 2017. They very much depend on ARM doing a lot of the heavy lifting.
Apple is probably the safest of the bunch given how they helped build ARM.
ARM announced it would cut ties with Huawei after the US ban but reconsidered the decision less than half a year later so I assume that the architecture license is either usually iron clad or simply too valuable to both sides to give up.
Architectural license is not necessarily perpetual
> Drew declined to comment on whether the deal was multi-generational
So they may have license for ARMv8 but not future ISAs like ARMv9.
And again, the terms of the license may vary. I have the impression that Apple has a far more permissive license than anyone else out there for example.
Qualcomm has shown in the past to be able to build great custom ARM CPUs not based on an ARM standard design. But it seems they decided the investment was not worth it after their custom Kryo design (which was not a complete failure but definitely not better than what ARM was producing at the time). But I think they'll need to go back to their own silicon at some point if this acquisition happens.
For sure Huawei and Samsung (and smaller manufacturers like Rockchip, Mediatek, Allwinner) don't have an impressive track record designing custom CPU IP and definitely not custom GPU IP. These guys should be terribly alarmed if this were to happen.
You should mind both of these things. The more oligopolistic technology is, the worse.
Other than that - fully agree with your concern. As a GPU developer I'm often frustrated with NVIDIA's attitude towards OpenCL, for example.
But anti-trust is so diluted and toothless these days, that the deal will probably be simply rubber stamped. If they aren't stopping existing anti-competitive behavior, why wouldn't they allow such bullies to gain even more power?
With this buy Nvidia has GPUs, CPUs, networking, what else do they need to be a vertically integrated shop?
Would Arm stakeholders (i.e. much of the computer industry) prefer an IPO?
In 2017, Softbank's Vision Fund owned 25% of Arm and 4.9% of Nvidia, i.e. these are not historically neutral parties, https://techcrunch.com/2017/08/07/softbank-nvidia-vision-fun...
After WeWork imploded, https://www.bloomberg.com/opinion/articles/2019-10-23/how-do...
> Neumann created a company that destroyed value at a blistering pace and nonetheless extracted a billion dollars for himself. He lit $10 billion of SoftBank’s money on fire and then went back to them and demanded a 10% commission. What an absolute legend.
Is the global industry (cloud, PC, peripheral, mobile, embedded, IoT, wearable, automotive, robotics, broadband, camera/VR/TV, energy, medical, aerospace and military) loss of Arm independence our only societal solution to a failed experiment in real-estate financial engineering?
Is Arm not profitable as a standalone business? They recently raised some license fees by 4X.
I’m skeptical that will work, but Son was dumb enough to pay $31B with no strategic value.
At the time I thought Son had clever telco synergies in mind, but I gave him far too much credit
The problem is that Son needs cash, so he's flogging off everything he can to get it.
Grossly so. They paid like 45 percent above what the stock was trading at the time.
They'd be paying a premium for a path to an all-nvidia datacenter & supercomputer.
Consider HPC applications like Oak Ridge's Frontier supercomputer. They went with an all AMD approach in part due to AMD's CPUs & GPUs being able to talk directly over the high-speed Infinity Fabric bus. Nvidia's HPC GPUs can't really compete with that, since neither Intel nor AMD are exactly in a hurry to help integrate Nvidia GPUs into their CPUs.
This makes ARM potentially uniquely valuable to Nvidia - they can then do custom server CPUs to get that tight CPU & GPU integration for HPC applications.
There is  https://en.wikipedia.org/wiki/NVLink
which is supported by  https://en.wikipedia.org/wiki/POWER9
those two combined give you  https://en.wikipedia.org/wiki/Summit_(supercomputer)
currently the world's number 2 supercomputer (only very recently dethroned), according to the article.
Installed at Oak Ridge.
So they are already there, just needing some premium POWER?
Amazon paid 350MM for Annapurna, ~ 1/100th of 32B.
For embedded devices, Nvidia already ship Jetson boards with Arm CPUs and Nvidia GPUs.
> .. this chip is fascinating. NVIDIA has taken the parts of Transmeta's initial approach that made sense and adopted them for the modern market and the ARM ecosystem -- while pairing them with the excellent GPU performance of Tegra K1's Kepler-based solution.
> there’s an interesting theory ... that Denver is actually a reincarnation of Nvidia’s plans to build an x86 CPU, which was ongoing in the mid-2000s but never made it to market. To get around x86 licensing issues, Nvidia’s chip would essentially use a software abstraction layer to catch incoming x86 machine code (from the operating system and your apps) and convert/morph it into instructions that can be understood by the underlying hardware.
Which other Arm licensee has been talking about x86/Arm instruction morphing in 2020?
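The "code morphing" idea the quote alludes to boils down to dynamic binary translation: translate a block of guest (x86) instructions into host instructions once, cache the result, and reuse it on every later execution. The following toy sketch is purely illustrative — the names and the fake "host:" translation are assumptions of mine, and real translators (Transmeta's CMS, Denver's dynamic optimizer, or Apple's Rosetta 2) are vastly more sophisticated:

```python
# Toy sketch of dynamic binary translation with a translation cache.
# Guest code is modeled as a dict mapping a block's start address to a
# list of instruction strings; "translation" is a fake string rewrite.

translation_cache: dict[int, list[str]] = {}

def translate_block(guest_pc: int, guest_code: dict[int, list[str]]) -> list[str]:
    """Return host instructions for the guest block at guest_pc,
    translating it only on first encounter."""
    if guest_pc in translation_cache:
        return translation_cache[guest_pc]          # hot path: reuse prior work
    host_instrs = [f"host:{insn}" for insn in guest_code[guest_pc]]  # stand-in "morphing"
    translation_cache[guest_pc] = host_instrs
    return host_instrs
```

The cache is what makes the approach viable: the expensive translation cost is paid once per hot block, so frequently executed code runs at near-native speed while the hardware underneath never has to implement the guest ISA.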
If the goal of acqui-billion-hiring the Arm reference design team is to prevent other companies from using those designs, that would endanger smaller vendors in the Arm supply chain, along with many of the devices that run modern society. Regulators may not like that.
To buy themselves back from owner Softbank, who can return money to investor Saudi Arabia? https://www.cnbc.com/2018/10/23/softbank-faces-decision-on-w...
According to some comments in this thread, the alternative is the slow destruction of the neutral Arm ecosystem. While some new baseline could be established in a few years, many Arm customers could face a material disruption in their supply chain.
With the US Fed supporting public markets, including corporate bond purchases of companies that include automakers with a supply chain dependent on Arm, there is no shortage of entities who have a vested interest in Arm's success.
If existing Arm management can't write a compelling S1 in the era of IoT, satellites, robots, edge compute, power-efficient clouds, self-driving cars and Arm-powered Apple computers, watches, and glasses, there will be no shortage of applicants.
Publicly traded companies that rely on income from "licensing" peak in revenue then stagnate because innovation becomes harder to come by.
Regarding innovation, ARM's been at it since 1990. I'm sure it's not the same now as it was 30 years ago, but we're well past the point where one can reasonably fear it to be an unsustainable business. Last time I heard numbers, they were talking about more than 50 billion devices shipped with ARM IP in them. That is a massive market.
You didn't answer my question. Why wouldn't licensing businesses work as publicly traded companies? What's the fundamental difference, especially in an increasingly fabless market, between a company licensing IP to other companies and a company selling productized IP to consumers?