ARM is in such a great position currently. There's no reason to sell except that SoftBank is in desperate need of capital. On top of that, Nvidia is likely to be a terrible steward of the IP. Nvidia has a terrible track record of working with other companies, partners, and open source developers. ARM has become a de-facto standard in the mobile space, and Nvidia will likely use that position to strong-arm competition. This will push vendors out of ARM and into some alternative ISA. While long-term this might end up being great for RISC-V, it's going to cause a huge fracture in software stacks at the exact WORST time. Finally we're starting to see huge convergence on ARM in the Mobile/Desktop/Server space. One ISA to rule them all! Nope, now Nvidia is going to destroy that progress and set everything back another 5+ years.
Please, somebody tell me I'm wrong. I really don't want to be so pessimistic about this.
As for the UK only now realizing what SoftBank's expensive 'pass the parcel' game with ARM since 2016 has cost, I'd say you only appreciate something once it is gone. In terms of technology, the UK simply didn't know what they had.
In all fairness, consider the market from Nvidia's perspective: they've spent a few decades being a stone's throw from Intel eating them. They started as one of many add-on card manufacturers and have turned into a dominant player by innovating, iterating, and being ruthless.
Some here might not remember how cutthroat the early-GPU days were. Nvidia survived.
Now, an opportunity to become a technical peer to Intel at the CPU level? To control their own destiny?
They'd be insane not to take this opportunity.
The biggest thing, if you have been in prolonged vicious competition, is to Control Your Own Destiny and leave the basket of crabs behind.
For Nvidia this is a MASSIVE win
Softbank are @#$@#$ idiots for selling ARM
Right when ARM is finally taking over the world
So they can fund the next bunch of charlatans - add more disasters like WeWork and Uber?
3dfx stumbled right as nVidia dropped the GeForce and blew everything away.
ATi became nVidia's whipping boy and Matrox just added the ability to drive more monitors.
I mean aside from not knowing if Matrox is still around, not much has changed from what I can see.
They spent a lot of time and money first building motherboard chipsets with on-board GPUs for AMD CPUs, only to have AMD buy ATI.
Then they started a program to build their own x86 CPU, which Intel sued them over. There was a countersuit, and an EU settlement around the same time, and Intel ended up paying NVidia $1.5B, which was often read as a win for NVidia. Of course in retrospect it gave Intel another 10 years of CPU domination.
So then NVidia announced they were building a desktop ARM processor. That went pretty much nowhere. So then there was their mobile play (the Tegra). That was supposed to let them dominate mobile phones, and went just about as well as their launch partner phone (the Microsoft Kin). It has found use in robotics (the NVidia Jetson series) and cars (the Tesla Model 3) though.
So they've lost a bunch of fights, and outside GPUs it really has been cutthroat.
But it hasn't really seen the broad based success in mobiles they were hoping for, yet.
(By comparison, a midsize mobile company like Oppo sold 115M units in 2019)
Of course. The multi-billion-dollar market is just too small for more than one company. Nvidia is just biding its time on ~20% market share until it can put the finishing moves on the other 9 major vendors.
Perhaps the competition between 3DFX and ATi or Nvidia and AMD could be described as cutthroat, but the other "competitors" in the space couldn't even compete, and eventually they all just pivoted into other things without anyone noticing or caring.
It's not unlike the x86 processor market. Companies like Via and Transmeta existed but they never were any serious competition to Intel or AMD.
PowerVR still exists and seems successful in the mobile market (having provided Apple's GPUs until the A10).
When did that ever happen?
Qualcomm used to license AMD's handheld GPUs (IP), then acquired that whole business unit from AMD. They've never used either Mali or PowerVR in any products that I know of.
Source: worked at PowerVR 2006-2007, then worked at AMD's handheld division including during the acquisition by Qualcomm, and stayed there until 2017.
David, you're entirely too qualified to be speaking about these things.
As it was, it seems they made the Microsoft/Internet and Microsoft/Mobile mistake and saw market evolution only as a threat to their existing portfolio, rather than as an opportunity.
I'm also not sure if you are suggesting NVIDIA made the same mistake as Microsoft regarding mobile, but them buying ARM is anything but that.
Them buying ARM is a way for them to offer a fully vertically integrated solution for the enterprise: pushing their existing initiatives such as NVDLA (https://github.com/nvdla/), being able to exert more control over the future of ARM architectures and designs for specific fields (especially automotive), as well as potentially getting their graphics and compute IP into billions of devices.
Anyone who thinks NVIDIA is buying ARM simply to trash it or to mess with their competition is wrong. I'm not saying NVIDIA would necessarily be successful, but most of the so-called competitors that people are pointing at aren't their competitors at all.
NVIDIA is also buying a lot of talent with this acquisition, specifically the likes of ARM Austin which were responsible for the A76-78 cores.
NVIDIA + ARM + Mellanox has the potential to be a player on an unprecedented level for the hyperscaler and HPC markets, especially if the likes of Apple and Amazon (and to a lesser extent Cloudflare etc.) do a lot of the heavy lifting for them. Apple going with ARM is probably the best thing NVIDIA could've asked for.
> They'd be insane not to take this opportunity.
It made financial sense for ARM's board to sell to SoftBank; it makes financial sense for SoftBank to sell to NVIDIA; it makes financial and strategic sense for NVIDIA to buy ARM; it makes financial and strategic sense for NVIDIA to radically change the way the ARM ecosystem works to benefit themselves.
And yet, the world will lose out massively by having the ARM ecosystem radically changed to benefit NVIDIA.
Capitalism certainly does a lot to create value for society as a whole, but this is just one of many examples of where capitalism also destroys value for society as a whole.
If NVIDIA treats ARM poorly, that benefits alternative ISAs and may ultimately undermine NVIDIA's investment. If NVIDIA treats ARM well, then that benefits the platform as a whole. The reality is going to be somewhere in the middle.
Market equilibrium does not imply maximum benefit for any one interest group. "Consumers" or "end-users" are often conflated with "society".
And while people may grouse about many things, chiefly paying a perceived Nvidia tax, I don't think anyone would accuse Nvidia of being strategically incompetent.
The more likely negative outcome seems to be that Nvidia would steer future ARM development so that their CPU+GPU solutions are more performant than anything anyone else can afford to produce.
Which seems like a negative... but far from the worst negative.
TBH, I think ARM's non-GPU business isn't interesting enough to Nvidia to screw with.
if that's the case, then the people who stand to lose from this _should_ be pushing the gov't to standardize and prevent fracturing of ecosystems from happening.
At least that's the theory of capitalism..
It was a crown jewel of a company; hard to imagine the French government letting it slide, or the US government letting an Intel be sold to a foreign company.
Well that’s the theory; nobody has actually been able to reproduce Silicon Valley except arguably in SF (it’s more like the valley grew a tentacle up the peninsula, though the distance is far enough that the culture is slightly different).
As far as Cambridge goes, I don’t see that this purchase makes any difference for those factors, except bragging rights in Whitehall.
I live in Silicon Peach.
Yes. Largest technology hub in Europe sits in Eindhoven (Veldhoven), Netherlands where you have a technical university, but the real anchor was Philips who were headquartered there. Eindhoven is home to ASML and Signify (formerly Philips Lighting) among others.
Shenzhen and Israel have been pretty successful.
Being able to say "we control the architecture in seventy trillion and counting devices" gets you a seat at the tech industry grown-ups' table (next to the US, China, Japan, Korea).
You mean something like GE buying off Alstom Énergie using corruption and government pressures ? Or the Chinese getting a piece of all our ports and nuclear tech ?
NVIDIA does design their own CPU cores.
You're blaming Nvidia. You should be blaming SoftBank and Masayoshi Son: bunch of fucking hucksters misleading people and making bad investments.
Now they need to sell ARM to fill some of the gaping void that has resulted. They couldn't care less who they sell to and what the outcome of that sale will be. I have nothing but contempt for them.
An absolutely miserable turn of events.
You shouldn't be blaming bad investors. They pay for their decisions with their own, or more often somebody else's, money.
Blame the collective ownership of means of production, which in form of public companies have dominated the Western world for the last 30+ years.
When people think of selling their business, which is their life's work, as a raison d'être, something is really messed up with the business culture in that person's country.
> Blame the collective ownership of means of production, which in form of public companies have dominated the Western world for the last 30+ years.
Can you unpack these two points? I don’t understand. It sounds like you’re saying “don’t blame the owners (because investors ARE owners); blame the owners”.
Publicly traded companies allow shareholders to limit their risk - when things start to go wrong you sell your shares to a bigger fool, and it's not your problem anymore. Also, as shareholders are not personally responsible for wrongdoings of their companies it also limits the risk to the buyer.
In short: publicly traded companies are enablers for moral hazard that we constantly observe in financial markets.
That's a bit of a grandiloquent take on reasons that could lead one to start a business.
Vision Fund isn't "public", and most of the companies they invest in aren't "public".
There exists no good alternative.
ARMv8.2 or newer is a very well designed ISA, while RISC-V is a very bad ISA and I would hate to be forced to use it.
OpenPOWER is a far better ISA than RISC-V, but unfortunately most developers do not have any experience with POWER and they have the wrong belief that POWER is some antique ISA while RISC-V must be some modern fashionable ISA. Therefore even if OpenPOWER is much better, it is less likely than RISC-V to be used as a replacement for ARM.
I and probably thousands of other engineers could design a much better ISA than RISC-V in a week of work, but none of the creators of those thousands of new ISA variants would be able to convince all the other people to choose his/her variant over the others and start the significant amount of work needed for porting all the required software tools, e.g. LLVM and gcc.
So, if ARM were no longer an acceptable choice, I do not see any hope that its replacement would not be greatly inferior.
The summary is that RISC-V is inefficient because it requires more instructions to do the same work as other ISAs and it does not have any advantage to compensate for this flaw.
Those extra instructions appear especially in almost all loops, and the most important reason is that RISC-V has a worse set of addressing modes than the vacuum-tube computers from more than 60 years ago, which were built with only a few thousand tubes, compared to the millions or billions of transistors available now for a CPU.
Because of this defect of the RISC-V ISA, the Alibaba team who designed the RISC-V implementation with the highest current performance (Xuantie910, which was presented last month at Hot Chips) had to add a custom ISA extension with additional addressing modes, in order to be able to reach an acceptable speed.
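To make the addressing-mode point concrete, here is a sketch of mine (not from either vendor's manual) of the kind of loop being discussed, with the instruction sequences a compiler might emit for the indexed load noted in the comments. Compilers can often strength-reduce such loops to pointer increments, so the gap shows up mainly in accesses that cannot be reduced that way:

```c
#include <stddef.h>
#include <stdint.h>

/* An indexed load like a[i] maps to a single ARMv8 instruction thanks to the
 * scaled register-offset addressing mode:
 *     ldr  w3, [x0, x2, lsl #2]   // load a[i], scaling i by 4 in the same op
 * Base RISC-V (RV64I) has only base+immediate addressing, so the same access
 * needs something like:
 *     slli t0, a2, 2              // scale the index
 *     add  t0, a0, t0             // form the address
 *     lw   t1, 0(t0)              // load
 * which is the "extra instructions in almost all loops" referred to above. */
int64_t sum(const int32_t *a, size_t n) {
    int64_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];                  /* the indexed access in question */
    return s;
}
```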
Whenever the designers of the RISC-V ISA are criticized, they reply that the larger number of instructions is not important, because any high-performance implementation should do instruction fusion, to be able to reach the IPC of other ISAs.
Nevertheless, that is wrong for two reasons: instruction fusion cannot reduce the larger code size due to the inefficient instruction encoding, and the hardware required for decoding more instructions in parallel and for doing instruction fusion is much more complex than the hardware required for decoding fewer instructions with a better encoding, as in other ISAs.
RISC-V includes a compressed extension that makes instruction encoding competitive or better than x86(!), and with none of the drawbacks of ARM's Thumb modes.
If you applied the same compression methods to a more compact original encoding, the compressed code would be even smaller.
Competing ISAs, such as ARM (Thumb), MIPS (nanoMIPS) and POWER also have compressed encoding variants.
(and if not, what would be some of the metrics to objectively compare the architectures?)
If you just compile some benchmark programs for 2 different architectures and you look at the program sizes and the execution times, the differences might happen to be determined mostly by the quality of the compilers, not by the ISAs, in which case you could reach a wrong conclusion.
Many years ago I spent many months porting a real-time operating system between the Motorola 68k and 32-bit POWER. At another time I spent a couple of months porting many device drivers between 32-bit POWER and 32-bit ARM and Thumb.
Such projects required a lot of examination of the code generated by compilers for the target architectures, and also a lot of time spent writing optimized assembly sequences for the few parts of the code that were critical for performance.
After spending so much time, i.e. weeks or months, with porting some large program, whose performance you understand well, between 2 ISAs, you may be reasonably confident of having a correct comparison of them.
If you want to reach a conclusion in a few hours at most, you are unlikely to find an unbiased benchmark.
RISC-V is however a special case. Even though I have never implemented any program for it, after having experience with assembly programming for more than a dozen ISAs, when I see that almost any RISC-V loop may require up to double the number of instructions compared to most other ISAs, I do not need further investigation to realize that reaching the same level of performance with RISC-V will require more complex hardware than for other ISAs.
Also, when comparing ISAs, I place a large weight on how good those ISAs are at GMPbench, i.e. at large number arithmetic. In my experience with embedded system programming large integer operations are useful much more frequently than traditional RISC ISA designers believe.
While x86 has always been very good at GMPbench, many traditional RISC ISAs suck badly, because they lack either good carry handling instructions or good double-word multiply/divide/shift instructions.
RISC-V also seems to have particularly bad multi-word operation support.
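As an illustration of the carry-handling point, here is a minimal multi-word addition in portable C (a hypothetical helper for illustration, not GMP's actual code). With an add-with-carry instruction (ADC on x86, ADCS on ARMv8) the loop body compiles to roughly one instruction per word; on an ISA without a carry flag, the carry must be synthesized with the extra comparisons shown, which is exactly the overhead being described:

```c
#include <stdint.h>

/* Add two n-word little-endian numbers, returning the final carry out.
 * The two (s < x) comparisons are the portable-C stand-in for a carry flag:
 * unsigned addition wrapped around iff the sum is smaller than an operand. */
uint64_t add_n(uint64_t *r, const uint64_t *a, const uint64_t *b, int n) {
    uint64_t carry = 0;
    for (int i = 0; i < n; i++) {
        uint64_t s = a[i] + carry;
        carry = (s < carry);        /* carry out of the first addition */
        s += b[i];
        carry += (s < b[i]);        /* carry out of the second addition */
        r[i] = s;
    }
    return carry;
}
```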
I'm curious whether vector operation support in RISC-V might also make up for any apparent shortcomings in raw arithmetic throughput - I guess a lot of it will depend on the types of workloads involved.
If you look at things here in 2020, it looks like it was essentially the RISCiest of the CISC architectures, x86, and the CISCiest of the RISC architectures, ARM, that succeeded in the modern day, while architectures more tied to an ideological camp, like Alpha or SPARC or VAX, have all died out in our current era. That makes me think that another very ideologically RISC architecture like RISC-V will, at the very least, have a tough row to hoe in getting widespread commercial acceptance.
What convinced me was how thin the RISC-V book is, and I have seen both ARM and Intel reference manuals. For example the V extension removes the need to specify data size, greatly reducing the instructions needed.
I have no idea about Open Power btw.
P.S. I am hiring U.S. candidates for ARMv8, x86 roles: https://careers.vmware.com/main/jobs/R2009422?lang=en-us
Thoughts on OpenPOWER vs ARM v8.2 in terms of ISA?
And for those who may be interested in OpenPOWER, https://github.com/antonblanchard/microwatt
Therefore there is no surprise that it is an efficient ISA.
The only significant flaw in its first version was the lack of atomic instructions, but that was corrected in the subsequent versions.
The 32-bit POWER was a very nice ISA, but it was not designed to be extendable to 64-bit. It had blocks of the encoding space reserved for future extensions, but various details of the instruction word formats depended on the fact that the size of the registers was 32 bit.
When POWER was extended to 64-bit, much earlier than IBM expected, i.e. only 5 years after the introduction of the 32-bit variant, the extension was constrained because IBM has chosen to not have a mode switch like ARM but they have chosen to make a compatible ISA extension, i.e. which has the original POWER ISA as an instruction subset.
This has constrained the instruction encodings, so the 64-bit POWER ISA has some parts that seem clumsier than in ARMv8, and the result is that programs for POWER are usually slightly larger than their ARMv8 equivalents. However, the hardware implementation effort for equivalent performance levels should be very similar for POWER and ARM, and significantly less for both than for x86.
POWER also had a compressed encoding variant, but that was implemented in very few chips. Now the latest ISA variant has introduced 2 instruction word lengths, i.e. both 64-bit and 32-bit long instructions, instead of just 32-bit long instructions.
This allows the embedding of large immediate constants in the instructions, which is an important advantage of x86 vs. traditional RISC ISAs. This might help to reduce the sizes of many POWER programs.
Basically doesn't concern Apple at all.
Sure! But that code is generally core OS code written by a small number of people and deployed widely. If Apple or Microsoft or Google decided that OpenPOWER was the way to go, we'd be able to switch over at least some use cases. ALL of the above have current Arm offerings. Arm has reached "good enough" performance - the snapdragon 855 is as fast as a i5-8630U, and uses half the power.
The primary barrier to processor migration is compatibility with user-mode programs. Since the vast majority of programs are now using primarily ILs, such as JS or C#, once the runtimes are ported (and they have been), there's not a lot of lasting incompatibility since programs bind to APIs, not ISAs. Apps for Work (CAD, Photoshop, dev tools) are traditionally the last to port, but there has been no better time to switch architectures than now, and it's only getting easier.
Five years is no big deal. In fact there’s so much in the pipeline it’s likely little will be different for 3-5 years.
The bigger problem is that RISC-V is immature and, to use ARM as a yardstick, will need another decade to get its act together (for which I have high hopes, BTW). I agree with you that this will definitely be a shot in the arm for RISC-V.
> it's going to cause a huge fracture in software stacks at the exact WORST time [due to platform convergence]
This is far from the worst time, and platform monoculture is not something to be celebrated. So from that perspective you’ve given me some encouragement from the news of this deal.
Do you think that RISC-V can overcome the inefficiencies discussed elsewhere in this thread with time? Specifically, can they improve the instruction set with more powerful addressing tools and reduce the number of instructions needed within loops?
I wonder what the best ISA we could create would be in a universe where there were no rent-seekers looking to recoup their R&D investment by limiting what a new ISA could contain.
Softbank needs cash. sells ARM. not a good reason?
I dunno, would you rather have ARM technology owned by a US company or by a front for the Saudi sovereign wealth fund?
When you choose to include ARM technology in some of your products, you will spend a lot of time and money on the implementation, and you will need long-term availability of that technology to be able to recover your costs and to give you time to switch to another technology, if desired.
Now any long-term commitment for providing any US-owned technology can no longer be believed, because precedents show that such commitments can be unilaterally canceled at any time, without an early enough warning.
only if you are not already under the thumb of the US gov't. Otherwise, it's a non-issue.
Softbank has a chainsaw prince as a main creditor. This is a very compelling reason to get cash in any way possible.
In this case the ROI does not depend on Nvidia being able to leverage ARM in their portfolio: even if they stopped developing ARM now, the licensing alone is worth a lot. They hope to do more than that, but they don't have to.
IP can be extremely valuable in a merger like this: Google bought Motorola and Microsoft bought Nokia largely for IP reasons, at $15B and $8B respectively.
The net result is that the overall ecosystem is going towards a degenerative state, rather than fostering innovation and diversity in tech.
As someone who is not that well versed in hardware, can someone explain to me why it would be good to converge towards ARM? Why there is disdain for x86?
What I believe most people are concerned with are low power high performance devices. ARM owns that space. It would be at the very least disruptive to have ARM platforms die.
On the high end Intel CPUs in servers are very expensive. Many people on HN have expressed hope that cheap ARM cores become good enough and drive down the cost of both on-prem and cloud servers.
For all the PR ARM's old founder has been doing, I don't see that he's been able to scare up a buyout partnership.
This is a business that has no overlap with what NVIDIA does, and maybe NVIDIA could spin off this part of ARM as a separate company, because I can hardly imagine NVIDIA management focusing on a business where they do not have any experience.
On the other hand, ARM Cortex-A CPUs are better than those designed by NVIDIA and they are used in products (e.g. by Qualcomm) that directly compete with NVIDIA products.
I cannot imagine that NVIDIA will continue to design better Cortex-A cores and then give them immediately to their competitors.
That would be amazing. Nvidia has arguably the best GPU designs in the market, this would be a boost for ARM IP business.
what would this mean? are there any CPU ISAs with a GPU integrated into the ISA instead of just a peripheral?
Sure, they make SoCs for the Switch and some cars and a media dongle, but they used to be so much more interested. They basically owned the Android gaming market with Tegra 2 and 3, made a really cool handheld and licensed an incredibly economical tablet design to EVGA with Tegra 4, and released their own gaming tablet with the Shield.
Then they stopped. Why spend the last five years whittling down their ARM product line and then try to buy ARM?
Any other compatibility with Apple devices tends to be a matter of coincidence. When they moved to x86 hardware, running Windows on Macs became so easy that building Boot Camp — effectively an install wizard and a few driver packs — was easy, therefore Apple giveth. However Apple Silicon Macs will drop without Boot Camp for the obvious reasons and Apple won’t apologise as they taketh away.
"Nvidia has been the single worst company we have ever dealt with, so Nvidia: F_ck You!" -- Linus Torvalds
None of the FAANGs cared in the slightest.
I bring this up just because I feel it probably represents their arrogance well, in addition to everything else they do.
- $40bn isn't a lot in aggregate for the companies that are heavily invested in the Arm ecosystem (Apple, Qualcomm, Amazon etc) - maybe even Intel would take a small stake!
- The $40bn is partly in Nvidia's (arguably) inflated stock. Would cash be more attractive?
- Could probably partly fund through a public offering in due course.
Additionally, since Apple controls the whole stack including dev tools on their platforms, they have a lot more freedom to change the underlying chip architecture than many others, who must rely entirely on third-party software ecosystems to move.
Apple probably wouldn't get their money's worth at current price, given they wouldn't want to take on ARM's tech licensing business, since it's well out of scope for Apple's core business. A consortium would only be another messy strategic entanglement to deal with.
I suspect that the calculation is that Nvidia would be a 'good enough' steward of the Arm ISA.
This is SoftBank we're talking about. Do we really want to bring up inflated stock as one of their concerns? ;)
Doesn't Son have a stake in Nvidia so maybe this is to help support the Nvidia share price?
I'm not seeing this in the article. How much of it is in Nvidia's arguably inflated stock?
However, I can't see how they will avoid enormous conflicts of interest between Nvidia and other competing Arm customers and that this will be to the detriment of everyone who makes and uses Arm based products (except Nvidia).
Yes, this. I see people talk about this but not enough imo. How on earth would Nvidia ever be allowed to purchase Arm? That's a massive conflict of interest. I know the rules don't really matter when we're talking about companies this large but this is so blatant, to me.
Intel will always be "Chipzilla". nVidia won't replace them but will join them as "Chipkong". So we'll have two problems, not a single one but two different ones.
nVidia is becoming an AI/HPC behemoth. GPUs for Compute, ARM for feeding the GPUs, Infiniband for interconnect. All in a tightly integrated, closed package. This is a clear monopoly.
They're light years ahead of AMD in GPU development and debugging tools. CUDA has cornered AI/GPU computing, it seems. Intel's interconnect foray has fizzled, like their Xeon Phi / Larrabee efforts. So nVidia has the interconnect (Infiniband) and the compute part for now.
CPUs can be challenged and disrupted; it's a mature technology. AMD can catch nVidia in the enterprise in the medium term (hopefully), but Infiniband has no competitors for what it does. And no, 100G Ethernet is no match for 100G Infiniband (we've used it a lot since DDR; it's an insane tech).
We're living in interesting times.
Yes, AVX has a clock penalty, but if your code is math-heavy (scientific, simulation, etc.) it's still extremely convenient in some scenarios.
GPUs are not perfect for streaming or intermittent data processing because their setup and startup time is still in the seconds. You also need to transfer data to the GPU first if you want full speed. In CPU computing this overhead is nonexistent.
I develop a scientific application, and we've seen that with the improvements in the FPU and SIMD pipelines across generations, a 2GHz core can match a 3.7GHz one in per-core performance in some cases. This is insane. This is a simple compilation with -O3 only; -march and -mtune were intentionally not added.
Unless GPU becomes as transparent as CPU, we either need to catch or surpass X86 on SIMD / pure math level to replace it completely.
Every infiniband adapter is self-aware & topology aware. They know where the other nodes are so, they can directly talk with each other, regardless of the topology (network is mapped, managed and maintained by a daemon called subnet-manager which can either run on switches or a dedicated server).
This hardware and software combo results in three things:
1. Memory to memory transfers: IB can transfer from the RAM of one host to the RAM of another host directly with RDMA. This means that when you run MPI and send a message to other processes, it's magically beamed over there, direct to the RAM of the target(s). IB is transparent to MPI via its libraries, so everything is automagic and 100x faster.
2. Latency: A to B latency is around 2-5 ns (nanoseconds). This means that when running stuff like MPI, machines become one as much as they can be. Before Ethernet has even assembled your one packet, you're there; possibly having finished your transfer and gone back to churning your code.
3. Speed: 40Gbps IB means 38+ Gbps real throughput for every p2p connection, if you're running through a cube-topology core switch. 80Gbps means around 78 or so. So the theoretical and actual maximums are not far apart. In most cases 100 means 100 sustained, 80 means 80 sustained, and so on (you can attach storage devices to the IB network and enjoy that speed and latency on your HPC compute nodes for files).
Moreover, more modern cards and switching hardware accelerate MPI operations in hardware (broadcast, atomics, summation, etc.) and have multi-context support, so multiple MPI processes avoid blocking each other as much as possible.
For HPC, it's a different universe of speed, latency and processing acceleration. Moreover, you can run TCP/IP over it, but we generally run a separate gigabit network for server management.
Then you use your Neuralink brain-computer interface (communicating with the home supercomputer cluster with an ultra-compact WiGig module) to "program" it by talking to an AI avatar that pops up in the middle of your living room (or whatever simulation you are replacing it with currently). The cluster runs the AI and the simulation.
> A to B latency is around 2-5 ns (nanoseconds).
What are A and B? and where did you get these numbers? HCA latency is more like ~500 ns.
For that matter, what does Intel have a monopoly on today?
There is more competition in the CPU market this year than there has been for a long time. Things are getting better, not worse.
$8B + a $24B loan turned into $40B - the $24B loan = $16B, i.e. $8B in profit minus interest. That could be up to a 19% annual ROI: 1.19^4 ~= 2x. I don't know what their loan interest rate looks like, but I suspect it's shockingly low.
I suspect that Nvidia changes the business model of Arm a little and starts to sell high performance Nvidia Arm CPU's directly to server, laptop and mobile manufacturers.
Then we will have Intel, AMD and Nvidia in both CPU and GPU markets.
- Will Nvidia sell you a license to A78 to enable you to continue to compete with Tegra or are you stuck on A77?
- If you can't get a license to A78 where do you turn? RISC-V? Possibly but will you still have a business by the time a competitive RISC-V design emerges from somewhere?
The point is that Nvidia might play fair but the temptation to hinder those who compete directly with its own SoCs - where it will make a lot more money - will be great, and who will stop them if they do?
To my knowledge, Apple was not involved in any founding of ARM (the company or the ISA).
ARM history, according to Wikipedia: https://en.wikipedia.org/wiki/ARM_architecture#History
Edit: I guess you could quibble about ‘founding’, but really I’m wrong, and my own link proves it!
Advanced RISC Machines Ltd. – Arm6
In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the Arm core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd., which became Arm Ltd when its parent company, Arm Holdings plc, floated on the London Stock Exchange and NASDAQ in 1998. The new Apple-Arm work would eventually evolve into the Arm6, first released in early 1992. Apple used the Arm6-based Arm610 as the basis for their Apple Newton PDA
The ironies of history: one of Apple's most infamous failures ended up being the foundation of their later success.
"First, ARM has two different kinds of licenses. One is for chipset designs. You pay your fee, you take your Cortex cores or whatever, you get them fabbed, and you've got your CPUs.
The other is an ISA license. With that, you get no chip design. None. All you get is the instruction set architecture. You have to roll the actual design yourself.
And that's what Apple's been doing. Making their own custom designs that use the ARM instruction set. For years."
It is a standard business offering from ARM.
If Apple was really unhappy about this and didn't want full ownership they could probably pull together a consortium to put together a rival bid. Doesn't look like it's happening though.
Perhaps this: https://www.youtube.com/watch?v=lGT3zSGDN3k
For all we know they just pay for that license like everybody else.
Apple built their GPU studio from ex-Imagination staff and will introduce 3 GPUs over the next year: Sicilian, Tonga, Lifuka to support their mobile and desktop plans.
The question is whether ARM will sub-license this combined GPU tech, or if it will be NVidia silicon only.
Are they now so far down their own road of development that it doesn't really matter?
I doubt that an NVidia-controlled ARM will ever be inclined to sell another architecture license ... they would rather sell you their own designed ARM chips.
Seriously. I'm excited to see ARM owned by a hardware-centric company. That said, I really don't expect this to have much impact in the near term. Licenses are already in place. China will probably spin competing chips based on their own ISA before too long (5-10 years).
I'm frankly interested to hear what folks in HW have to say. Hearing the repetitive, uninformed opinions of users and SW folks isn't really telling me anything informative about this. I'm an embedded SWE and I'm not seeing much to worry about. Would it be better if Apple bought ARM? Huawei?
This notion that a hippie commune is going to buy ARM and lead us all into open source nirvana where free, cutting edge IP rains down from the sky is frankly goofy.
Nvidia can (and does) build their own ARM ISA CPUs and offer them in the market, so we already have access to Nvidia's take on the architecture. Do they have established expertise in ISA design or microcontrollers?
Maybe I'm missing something?
A 128 bit version might be an issue in the future.
Having a native uint128_t would make dealing with IPv6 addresses a lot nicer though :)
I can see 128-bit pointers being a thing: not because of 128-bits of address space, but for the ability to embed type information directly in the pointer - which could improve performance for dynamic-dispatch scenarios or runtime type-safety built-in to the hardware itself.
> Having a native uint128_t would make dealing with IPv6 addresses a lot nicer though :)
[We're already there](https://stackoverflow.com/questions/34234407/is-there-hardwa...)
Wider data operations are already implemented via AVX/AVX2/AVX-512.
And today they get $40B, but partly in Nvidia stock... Assuming SoftBank manages to sell the Nvidia stock immediately and get their $40B, it is more like 14%. In the same timeframe Nvidia stock went from $30 (Jan. 2016) to $480 (Friday), or 16x.
SoftBank would have had a way better outcome investing in Nvidia 4 years ago than investing in ARM. The potential 14% they got over the last 4 years is not great.
Compared to other investments, an ARM sale of $40B would be a home run...
It was never in doubt that Nvidia was going to grow massively. 3-4 years back, cryptocurrency mining made it next to impossible to actually buy a GPU. Machine learning was always going to keep growing massively. CUDA had cornered the market, and AMD's GPUs are far behind.
Similarly, AMD's growth was never in doubt after Ryzen launched; there is nothing in Intel's next 3-5 years that is going to be remotely competitive. They can only compete by the sheer force of their sales. AMD will keep growing for the next 5 years; by how much depends on how well they can execute their own sales.
The only people I’ve ever met that actually love nVidia are enthusiast PC gamers, and that’s because the games are optimised for nVidia due to nVidia's anticompetitive efforts in the game development space.
It is not just the developers, either; even the bean counters hate them. There is always constant chatter that they will be replaced, but nothing actually happens.
Developer happiness is not how most enterprise sales happen. Most of the big cloud vendors today only offer Nvidia GPUs. Unless AMD GPUs become so much better that Nvidia is left behind, nobody is going to change.
The cost of change is enormous in large organizations. This is why Intel still makes, and will keep making, money even though they are behind in tech on almost all parameters.
And that is when they actually are behind; AMD has nothing on the horizon to actually beat Nvidia.
I don't buy Nvidia GPUs because I like them; I buy them because the total cost of ownership of investing in a stack everyone uses is lower. Hiring is easier, cloud and library support is great, and there are fewer bugs that haven't already been seen and handled before.
The only way it will make financial sense is if either Nvidia is so expensive that it is worth exploring alternatives, or AMD is offering something so radically different that going another path can generate a lot of business value. Neither is applicable today or in the next few years.
Even if it weren't a subsidiary: Apple already operates some divisions and teams differently, such as its WebKit team (one of the few teams where employees are allowed to have Github.com user accounts...)
Today's Apple is a very different beast.
Apple could shut down the licensing just to keep competitors behind; if they did that, it would set back all their Android competitors by 5-10 years at least and potentially fragment the chip market. Maintaining the iPhone's chip dominance is definitely worth $40B.
So Apple would create minimal disruption and a huge antitrust case that they would lose. Why do you think Apple would be this dumb?
Apple uses just the ARM instruction set and designs the microarchitecture themselves.