
SoftBank set to sell UK’s Arm Holdings to Nvidia for $40B - JumpCrisscross
https://www.ft.com/content/6bfe40a5-2426-4743-98cd-6fed9dd01b98
======
BluSyn
I think this is a very bad and short-sighted deal.

ARM is in such a great position currently. There's no reason to sell except
that SoftBank is in desperate need of capital. On top of that, Nvidia is
likely to be a terrible steward of the IP. Nvidia has a terrible track record
of working with other companies, partners, and open source developers. ARM has
become a de-facto standard in mobile space, and Nvidia will likely use that
position to strong-arm competition. This will push vendors out of ARM and into
some alternative ISA. While long-term this might end up being great for
RISC-V, it's going to cause a huge fracture in software stacks at the exact
WORST time. Finally we're starting to see huge convergence on ARM across the
mobile/desktop/server space. One ISA to rule them all! Nope, now Nvidia is
going to destroy that progress and set everything back another 5+ years.

Please, somebody tell me I'm wrong. I really don't want to be so pessimistic
about this.

~~~
stefan_
Why on earth would we want to converge on ARM? The first ARM architecture that
was somewhat palatable was ARMv7; everything before that was an unusable mess
of different chips with vastly different capabilities. Their extensions are
bad (read: subtly incompatible) late copies of what Intel and AMD are driving.
Most of the innovation happens not at ARM, but their respective licensees. It
took others inventing EFI to get some form of BIOS-equivalent for ARM, but
even today the company gives the impression that they couldn't care less.

~~~
adrian_b
I agree with what you said, but there is a reason for wanting to converge on
ARM.

There exists no good alternative.

ARMv8.2 or newer is a very well designed ISA, while RISC-V is a very bad ISA
and I would hate to be forced to use it.

OpenPOWER is a far better ISA than RISC-V, but unfortunately most developers
do not have any experience with POWER and they have the wrong belief that
POWER is some antique ISA while RISC-V must be some modern fashionable ISA.
Therefore even if OpenPOWER is much better, it is less likely than RISC-V to
be used as a replacement for ARM.

I and probably thousands of other engineers could design a much better ISA
than RISC-V in a week of work, but none of the creators of those thousands
of new ISA variants would be able to convince all the other people to choose
his/her variant over the others and start the significant amount of work
needed for porting all the required software tools, e.g. LLVM and gcc.

So, if ARM were no longer an acceptable choice, I do not see any hope that
its replacement would not be greatly inferior.

~~~
acidbaseextract
Why is RISC-V bad? You spend paragraphs ripping on it without actually
explaining why it's bad.

~~~
adrian_b
This has been discussed many times on many forums on the Internet.

The summary is that RISC-V is inefficient because it requires more
instructions to do the same work as other ISAs and it does not have any
advantage to compensate for this flaw.

Those extra instructions appear in almost all loops, and the most important
reason is that RISC-V has a worse set of addressing modes than the
vacuum-tube computers of more than 60 years ago, which were built with only
a few thousand tubes, compared to the millions or billions of transistors
available now for a CPU.

Because of this defect of the RISC-V ISA, the Alibaba team who designed the
RISC-V implementation with the highest current performance (Xuantie910, which
was presented last month at Hot Chips) had to add a custom ISA extension with
additional addressing modes, in order to be able to reach an acceptable speed.

Whenever the designers of the RISC-V ISA are criticized, they reply that the
larger number of instructions is not important, because any high-performance
implementation should do instruction fusion, to be able to reach the IPC of
other ISAs.

Nevertheless, that is wrong for two reasons: instruction fusion cannot reduce
the larger code size due to the inefficient instruction encoding, and the
hardware required for decoding more instructions in parallel and for doing
instruction fusion is much more complex than the hardware required for
decoding fewer instructions with a better encoding, as in other ISAs.

~~~
zozbot234
> Nevertheless, that is wrong for 2 reasons, instruction fusion cannot reduce
> the larger code size due to the inefficient instruction encoding

RISC-V includes a compressed extension that makes its instruction encoding
competitive with or better than x86(!), and with none of the drawbacks of
ARM's Thumb modes.

~~~
adrian_b
Compression methods reduce the size of the code, but that does not matter when
assessing the efficiency of the base instruction encoding.

If you applied the same compression methods to a more compact original
encoding, the compressed code would be even smaller.

Competing ISAs, such as ARM (Thumb), MIPS (nanoMIPS) and POWER also have
compressed encoding variants.

~~~
jka
Are there any benchmarks (in terms of code size, runtime performance, energy
efficiency, ...) available for OpenPOWER vs RISC-V?

(and if not, what would be some of the metrics to objectively compare the
architectures?)

~~~
adrian_b
It is very difficult to make a non-biased comparison between different ISAs.

If you just compile some benchmark programs for 2 different architectures and
you look at the program sizes and the execution times, the differences might
happen to be determined mostly by the quality of the compilers, not by the
ISAs, in which case you could reach a wrong conclusion.

Many years ago, I spent many months porting a real-time operating system
between Motorola 68k and 32-bit POWER. At another time I also spent a couple
of months porting many device drivers between 32-bit POWER and 32-bit ARM
and Thumb.

Such projects required a lot of examination of the code generated by compilers
for the target architectures and also a lot of time spent with writing some
optimized assembly sequences for a few parts of the code that were critical
for the performance.

After spending so much time, i.e. weeks or months, porting a large program
whose performance you understand well between 2 ISAs, you may be reasonably
confident of having a correct comparison of them.

If you want to reach a conclusion in a few hours at most, you are unlikely
to find an unbiased benchmark.

RISC-V is however a special case. Even though I have never implemented any
program for it, after gaining experience with assembly programming for more
than a dozen ISAs, when I see that almost any RISC-V loop may require up to
double the number of instructions compared to most other ISAs, I do not need
more investigation to realize that reaching the same level of performance
with RISC-V will require more complex hardware than for other ISAs.

Also, when comparing ISAs, I place a large weight on how good those ISAs are
at GMPbench, i.e. at large-number arithmetic. In my experience with embedded
system programming, large integer operations are useful much more frequently
than traditional RISC ISA designers believe.

While x86 has always been very good at GMPbench, many traditional RISC ISAs
suck badly, because they lack either good carry handling instructions or good
double-word multiply/divide/shift instructions.

RISC-V also seems to have particularly bad multi-word operation support.

~~~
jka
Thanks for the perspective and GMPbench reference. I'm sure you're correct
that RISC-V has a lot of optimization work to do both at the compiler and chip
implementation levels.

I'm curious whether vector operation support in RISC-V might also make up for
any apparent shortcomings in raw arithmetic throughput - I guess a lot of it
will depend on the types of workloads involved.

------
klelatti
On reflection I'm a bit surprised that a consortium hasn't emerged to offer
more for Arm given the concerns expressed here:

\- $40bn isn't a lot in aggregate for the companies that are heavily invested
in the Arm ecosystem (Apple, Qualcomm, Amazon etc) - maybe even Intel would
take a small stake!

\- The $40bn is partly in Nvidia's (arguably) inflated stock. Would cash be
more attractive?

\- Could probably partly fund through a public offering in due course.

~~~
ethbr0
> _The $40bn is partly in Nvidia 's (arguably) inflated stock._

This is _SoftBank_ we're talking about. Do we really want to bring up inflated
stock as one of their concerns? ;)

~~~
klelatti
There is bound to be some sort of financial engineering going on too.

Doesn't Son have a stake in Nvidia so maybe this is to help support the Nvidia
share price?

~~~
valuearb
Giving $40B of Nvidia stock to a cash strapped seller hurts the stock price.

~~~
klelatti
I was wrong on Son having a holding in Nvidia too - seems to have sold it.

------
klelatti
I think that concerns about this being the end of Arm are overdone - I expect
that the UK government will get some assurances about maintaining the HQ in
the UK etc and that Arm's business is sufficiently different to Nvidia's to
make wholesale merger with Nvidia's existing operations counterproductive.

However, I can't see how they will avoid enormous conflicts of interest
between Nvidia and other competing Arm customers and that this will be to the
detriment of everyone who makes and uses Arm based products (except Nvidia).

~~~
wAYshzRtw
> However, I can't see how they will avoid enormous conflicts of interest
> between Nvidia and other competing Arm customers and that this will be to
> the detriment of everyone who makes and uses Arm based products (except
> Nvidia).

Yes, this. I see people talk about this but not enough imo. How on earth would
Nvidia ever be allowed to purchase Arm? That's a massive conflict of interest.
I know the rules don't really matter when we're talking about companies this
large but this is so blatant, to me.

------
gautamcgoel
If this goes through, I expect Nvidia to become the new Intel - a humongous,
anticompetitive chip monopoly. Today, people refer to Intel as Chipzilla;
tomorrow Nvidia will carry that moniker.

~~~
bayindirh
> Today, people refer to Intel as Chipzilla; tomorrow Nvidia will carry that
> moniker.

Intel will always be the "Chipzilla". nVidia won't replace them but will join
them as the "Chipkong". So we'll have _two_ problems instead of one.

~~~
season2episode3
Duopolies everywhere.

~~~
bayindirh
It won't be a duopoly because nVidia doesn't make x86 hardware. Even if they
did, there would be three contenders (AMD, Intel, nVidia).

nVidia is becoming an AI/HPC behemoth. GPUs for Compute, ARM for feeding the
GPUs, Infiniband for interconnect. All in a tightly integrated, closed
package. This is a clear monopoly.

They're light years ahead of AMD in GPU development and debugging tools. It
seems CUDA has cornered AI/GPU computing. Intel's interconnect foray has
fizzled, like their Xeon Phi / Larrabee efforts. So, nVidia has the
interconnect (Infiniband) and the compute part for now.

CPUs can be challenged and disrupted; it's a mature technology. AMD can catch
nVidia in the enterprise in the medium term (hopefully), but Infiniband has
no competitors for what it does. And no, 100G Ethernet is no match for 100G
Infiniband (we've used it a lot since DDR; it's insane tech).

We're living in interesting times.

~~~
krick
Looking 10 years into the future, do we really need x86 though? Is it not
possible that Intel will lose and CISC will become basically obsolete? (Yeah,
I'm ignoring AMD here for no good reason, but we were talking about nVidia vs
Intel anyway.)

~~~
bayindirh
Looks somewhat unlikely. We may get other mainstream architectures, and they
would be more energy efficient than x86 at the same performance in mainstream
use, but pure SIMD computation is underrated IMHO.

Yes, AVX has a clock penalty, but if your code is math heavy (scientific,
simulation, etc.) it's still extremely convenient for some scenarios.

GPUs are not perfect for "streaming data processing" or intermittent
processing because their setup and startup time is still measured in seconds.
You also need to transfer the data to the GPU first if you want full speed.
In CPU computing this overhead is nonexistent.

I develop a scientific application and we've seen that, with the improvements
in the FPU and SIMD pipelines across generations, a 2GHz core can match a
3.7GHz one in _per core_ performance in some cases. This is insane. This is a
simple compilation with _-O3_ only; _-march_ and _-mtune_ were intentionally
left out.

Unless GPUs become as transparent as CPUs, an alternative needs to catch or
surpass x86 at the SIMD / pure-math level to replace it completely.

------
perardi
Regardless of the consequences of selling to Nvidia…that’s not a great exit,
is it? They bought ARM for $32 billion, and they are getting $40 billion for
it? With all the money sloshing around in the debt markets, I figured they’d
get a bit more.

~~~
Retric
Depends on how much leverage they used. If it was all cash on hand then
that's a poor ROI; if they only put up say 8B then that's a nice return over
4 years.

~~~
valuearb
8B at zero percent interest would only increase the return to about 6%
compounded.

~~~
Retric
SoftBank bought Arm in 2016 for $32 billion. A 32B investment turned into 40B
over 4 years is ~6% ROI (32 × 1.06^4 ≈ 40.4).

8B + 24B loan turned into 40 billion - 24B loan = 16B, or 8B in profit minus
interest, which could be up to a 19% annual ROI (1.19^4 ≈ 2x). I don’t know
what their loan interest rates look like, but I suspect they’re shockingly low.

~~~
valuearb
Sorry I misread your post, thanks for the clarification.

------
nabla9
Nvidia already has their own ARM SoC; they can just merge Arm developers into
the team.

I suspect that Nvidia will change Arm's business model a little and start to
sell high-performance Nvidia Arm CPUs directly to server, laptop and mobile
manufacturers.

Then we will have Intel, AMD and Nvidia in both CPU and GPU markets.

------
Nemo_bis
Also on Reuters: [https://uk.reuters.com/article/uk-arm-holdings-m-a-
nvidia/nv...](https://uk.reuters.com/article/uk-arm-holdings-m-a-
nvidia/nvidia-nears-deal-to-buy-chip-designer-arm-for-more-than-40-billion-
sources-idUKKBN2630XH)

------
stewbrew
Oh the joys of stock markets. Nvidia makes around 3B a year (profit) and has
10B cash. IIRC Arm isn't really a cash cow. How can they afford such a deal?
Their market capitalization is 330B; for whatever reason people are willing
to pay that much for their shares. Money is cheap nowadays, also thanks to
Corona. So they sell some warrants, sell a few new shares, and it's done.

------
justinclift
[https://archive.vn/PLQr3](https://archive.vn/PLQr3)

------
almost_usual
The disaster that was WeWork could ruin ARM.

~~~
bredren
or: The disaster that was WeWork could accelerate the adoption of an open-
sourced, royalty-free ISA.

~~~
discodave
An open source ISA isn't going to be any better than ARM if it's being sold to
you by AWS, GCP, and Azure.

------
dippersauce
It simply wouldn’t make sense for NVidia to maliciously impose upon ARM after
the acquisition, as many of the commenters are concerned. I’m not saying they
won’t, but the present situation would limit any such efforts long enough to
make them futile.

Consider the perpetual license agreements ARM holds with companies like
Apple. These companies are best positioned to resist meddling with ARM. As an
example, Apple will forever have access to the ARM ISA, so NVidia can’t
simply stop them from using existing designs. The processors Apple uses are
all custom designs anyway. If future designs were purposely kneecapped, they
could just improve their currently licensed designs until a suitable
alternative is produced. Hindering future processor designs won’t hurt the
biggest players in the short term, and in the long term it will only drive
them to the competition.

NVidia could take the approach of slowly drifting new designs toward greater
integration with their own GPUs in such a way that alternatives would be
displaced, either by favoring NVidia GPUs or by the difficulty of using
alternatives. This would be obvious though, and would again drive their users
away.

NVidia hasn’t had great success in any other market like they have had with
GPUs. I see this as an opportunity to diversify and secure their future, and
they want to take it.

~~~
klelatti
It's not just about Apple. Suppose you're an SoC designer without an
architecture license who makes SoCs that successfully compete with say Tegra.
Maybe you have a license to use Cortex A77. Let's assume that A78 is the
successor to A77.

\- Will Nvidia sell you a license to A78 to enable you to continue to compete
with Tegra, or are you stuck on A77?

\- If you can't get a license to A78, where do you turn? RISC-V? Possibly,
but will you still have a business by the time a competitive RISC-V design
emerges from somewhere?

The point is that Nvidia might play fair but the temptation to hinder those
who compete directly with its own SoCs - where it will make a lot more money -
will be great, and who will stop them if they do?

------
mcintyre1994
Do Apple actually work with Arm currently or is all their Apple Silicon stuff
completely on their own? I don’t actually understand the issue they have with
Nvidia but they seem to have one.

~~~
kitsunesoba
As I understand it, Apple has a special perpetual license for the architecture
thanks to it being one of the companies that founded ARM. This deal shouldn't
have too much impact on Apple.

~~~
jrmg
I think you’re thinking of PowerPC?

To my knowledge, Apple was not involved in any founding of ARM (the company or
the ISA).

ARM history, according to Wikipedia:
[https://en.wikipedia.org/wiki/ARM_architecture#History](https://en.wikipedia.org/wiki/ARM_architecture#History)

Edit: I guess you could quibble about ‘founding’, but really I’m wrong, and my
own link proves it!

————

Advanced RISC Machines Ltd. – Arm6

In the late 1980s, Apple Computer and VLSI Technology started working with
Acorn on newer versions of the Arm core. In 1990, Acorn spun off the design
team into a new company named Advanced RISC Machines Ltd.,[30][31][32] which
became Arm Ltd when its parent company, Arm Holdings plc, floated on the
London Stock Exchange and NASDAQ in 1998.[33] The new Apple-Arm work would
eventually evolve into the Arm6, first released in early 1992. Apple used the
Arm6-based Arm610 as the basis for their Apple Newton PDA

~~~
toyg
_> Apple used the Arm6-based Arm610 as the basis for their Apple Newton PDA_

The ironies of history: one of Apple's most infamous failures ended up being
the foundation of their later success.

~~~
pjmlp
Including how much JavaScript kind of resembles NewtonScript.

------
mohankumar246
Along with CPUs, ARM also makes mobile GPU IP (Mali), as does Nvidia in their
Tegra. I would be surprised if this deal went through.

~~~
rbirkby
Mali was an ARM acquisition. It makes sense for NVidia to smash Mali and RTX
together to allow Android manufacturers to compete against Apple.

Apple built their GPU studio from ex-Imagination staff and will introduce 3
GPUs over the next year: Sicilian, Tonga, Lifuka to support their mobile and
desktop plans.

The question is whether ARM will sub license this combined GPU tech, or if it
will be NVidia silicon only.

~~~
ece
I think Android manufacturers need SoCs with 16MB L3 shared across the
CPU+GPU+DSP+NN like the A13 more than they need RTX in a GPU.

------
philistine
Apple is famous for compiling all its OSes to different architectures. We saw
it with the quick switch to Intel and this upcoming switch to ARM. I’m
convinced someone somewhere at Apple is firing off an email asking: "hey, are
we compiling our stack on RISC-V?"

------
stjohnswarts
Hopefully the European and American governments will step in and stop this.
This is terrible for the future of ARM as a somewhat neutral platform, and
for the industry as a whole.

------
jonplackett
What effect does this have on Apple and them basing their own chips on Arm?

Are they now so far down their own road of development that it doesn't really
matter?

~~~
slipheen
None whatsoever. They have a license from the Newton days which allows them to
do whatever they want perpetually.

~~~
justaguy88
Is there a citation for the perpetual licence? I would have just assumed they
would have purchased a long-term licence to build cores for a particular ISA
version

~~~
tibbydudeza
You just have to implement the ARMv8 architecture in your SoC; nothing
prevents you from extending it further with ML or GPU bits.

I doubt that an NVidia-controlled ARM will ever be inclined to sell another
architecture license ... they would rather sell you their own designed ARM
chips.

------
ablekh
Related read (a reminder on the history of the original acquisition):
[https://www.wired.co.uk/article/softbank-vision-
fund](https://www.wired.co.uk/article/softbank-vision-fund).

------
halfer53
UK government should save ARM !!!

------
valuearb
Well, looks like I’ve been hugely wrong on this all along. Seems like one of
the biggest overpays in acquisition history; I can’t fathom how such a tiny
company with such mediocre growth deserves such a massive premium.

------
blodfreed
Unpaywalled link
[https://pastebin.com/ugZpqfPE](https://pastebin.com/ugZpqfPE)

------
tehabe
ARM should be converted into a non-profit foundation or something; it should
be able to finance itself through the licensing deals. But SoftBank couldn't
make 40 billion with such an idea.

------
jayd16
Well... on the bright side I like what Tegras have been able to achieve. The
Shields and the Switch are neat machines. Maybe we'll see some more ARM
designs that can compete with Apple.

~~~
01100011
No, everything Nvidia is evil. We're supposed to be hating on them, right? Can
I get my upvotes for perpetuating the hive mind now?

Seriously. I'm excited to see ARM owned by a hardware centric company. That
said, I really don't expect this to have much impact in the near term.
Licenses are already in place. China will probably spin competing chips based
on their own ISA before too long (5-10 years).

I'm frankly interested to hear what folks in HW have to say. Hearing the
repetitive, uninformed opinions of users and SW folks isn't really telling me
anything informative about this. I'm an embedded SWE and I'm not seeing much
to worry about. Would it be better if Apple bought ARM? Huawei?

This notion that a hippie commune is going to buy ARM and lead us all into
open source nirvana where free, cutting edge IP rains down from the sky is
frankly goofy.

~~~
klelatti
Why would you be excited?

Nvidia can (and do) build their own ARM ISA CPUs and offer them in the
market, so we already have access to Nvidia's take on the architecture. Do
they have established expertise in ISA design or microcontrollers?

Maybe I'm missing something?

~~~
jayd16
Like I said, maybe there's room for GPU improvements in the ARM designs.
Seems like nVidia could do something like pack in hardware DLSS. They license
out their GPU designs, so it's not far-fetched that their GPU features could
make their way into mobile ARM chip designs. The Tegra was a bit power
hungry, but maybe having control of the full stack can alleviate that.

------
natcombs
I wish they went public. I’d love to invest in them long term

~~~
yourabanana
Well now you can. Through NVDA.

~~~
vmception
gigglesnort

------
gigatexal
This is so dumb. Just IPO and take the windfall.

------
OpticalWindows
SoftBank really really needs that money

------
tibbydudeza
The important players like Samsung/Apple/Huawei are already ARM architecture
license holders so they can do their own thing if they don't like the
direction that NVidia is going in.

A 128 bit version might be an issue in the future.

~~~
xxpor
I could imagine we might see an architecture with 128-bit word sizes, but I
doubt we'd see 128-bit pointers (aka a 128-bit address space) any time soon.
It'd just be more annoying than anything. I have personally even written
software where we do pointer math to get 32-bit indexes rather than storing
full 64-bit pointers, simply because of space constraints (like trying to fit
a hot struct into a single cache line).

Having a native uint128_t would make dealing with IPv6 addresses a lot nicer
though :)

~~~
DaiPlusPlus
> but I doubt we'd see 128 bit pointers (aka a 128 bit address space) any time
> soon.

I can see 128-bit pointers being a thing: not because of 128 bits of address
space, but for the ability to embed type information directly in the pointer,
which could improve performance for dynamic-dispatch scenarios or build
runtime type-safety into the hardware itself.

> Having a native uint128_t would make dealing with IPv6 addresses a lot nicer
> though :)

[We're already there]([https://stackoverflow.com/questions/34234407/is-there-
hardwa...](https://stackoverflow.com/questions/34234407/is-there-hardware-
support-for-128bit-integers-in-modern-processors))

~~~
saagarjha
Not type information, but CHERI capabilities:
[https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri...](https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/cheri-
morello.html)

------
LatteLazy
25% more than they paid just 2 years ago?

~~~
llsf
Softbank bought ARM in 2016 and paid $32B ($35B of 2020 USD after 4 years of
inflation).

And today they get $40B, but partly in Nvidia stock... Assuming that Softbank
manages to sell the Nvidia stock immediately and get their $40B, it is more
like 14%. In the same timeframe Nvidia stock went from $30 (Jan. 2016) to
$480 (Friday), or 16x.

Softbank would have had a way better outcome investing in Nvidia 4 years ago
than investing in ARM. The potential 14% they got over the last 4 years is
not great.

~~~
TomVDB
Softbank acquired 4.9% of Nvidia for $4B in May 2017. They sold it in February
2019 for $3.6B (according to this [https://www.cnbc.com/2019/02/06/softbank-
vision-fund-sells-n...](https://www.cnbc.com/2019/02/06/softbank-vision-fund-
sells-nvidia-stake.html)). Another hilarious example of Softbank's market
timing expertise.

Compared to other investments, an ARM sale of $40B would be a home run...

~~~
TylerE
I wonder what the return would be from just taking the opposite position of
everything Softbank does...

------
spzb
I would have thought Apple was a better strategic fit for Arm and likely to be
able to pay bigger bucks too.

~~~
wmf
People are speculating that Nvidia might end Arm core licensing, but Apple
would definitely shut it down. Apple's culture is totally incompatible with
Arm's business model.

~~~
stjohnswarts
What? How would Apple shut down what Nvidia is doing? If they buy ARM they
can stop selling licenses for it if they like. Of course they have to honor
any prior contracts with other customers, but when those run out, adios.
Apple has their own license to use the ARM architecture/ISA as long as they
like, so what Nvidia does when they own ARM is really no concern to the fruit
company.

~~~
fulafel
Do you have a reference about Apple having a perpetual license? I just found
many articles that claimed they have an ARM "architecture license" but no
specifics, and 8 years ago in 2012 ARM PR'd the deal as "long term" which
might be relatively close to today.

------
plg
What are the implications for Apple?

~~~
nabla9
Apple has an architectural licence, so they can do whatever they want.

Apple uses just the ARM instruction set and designs the microarchitecture
themselves.

------
mmanfrin
With the removal of the 'web' button, could we possibly get a non paywalled
link?

------
shmerl
The anti-trust works so well these days...

------
tus88
Hopefully they use it to f&#@ Qualcomm.

------
lazylizard
why dont huawei/xiaomi/lenovo/zte buy arm?

and then the hilarity of trump banning doing business with arm afterwards
would be worth some popcorn...

------
ausjke
now nvidia is the new intel, it owns arm and can do whatever it wants to.

time for other big companies to check out alternatives

------
alvern
This makes sense. Most of the Jetson/Xavier/Nano boards already use a Carmel
ARMv8 chip. I just hope this spurs more development in ARMv8. Currently the
majority of ARM is ARMv7 (smartphones and Raspi).

~~~
csdreamer7
Raspberry Pi 4 uses ARMv8 according to Wikipedia: the ARM Cortex-A72.

[https://en.wikipedia.org/wiki/Raspberry_Pi#Processor](https://en.wikipedia.org/wiki/Raspberry_Pi#Processor)

~~~
fulafel
Pi 3 and the V1.2 revision of the Pi 2 were ARMv8 too (Cortex-A53).

edit: also re smartphones, Android SoCs started to move to v8 CPU cores 5
years ago, in the Nexus 5X/Pixel 1 generation.

