
Exploring DynamIQ and ARM’s New CPUs: Cortex-A75, Cortex-A55 - msh
http://www.anandtech.com/show/11441/dynamiq-and-arms-new-cpus-cortex-a75-a55
======
jepler
g-d those are some deceptive graphs! Take a look at the slide titled "Pushing
the performance envelope". A 16% improvement (LMBench memcpy) is displayed so
that it looks like a 164% improvement (size of bar increased from 50px to
132px)!

~~~
snvzz
Might have to do with the pressure they're feeling from the unavoidable:
RISC-V replacing ARM.

~~~
ant6n
This is an obvious troll, but I'd like to point out that if RISC-V were to
become big, ARM could probably come up with one of the best implementations.

------
cm2187
Stupid question: if ARM keeps aggressively adding instructions and co-
processors, how long before it becomes bloated and power-hungry like the x86
architecture? Isn't its simplicity the strength of the ARM platform?

~~~
pedroaraujo
It's not like ARM randomly slaps on features just for the fun of it. These
features are carefully modeled and studied before being released into the
market.

~~~
Aissen
Of course every design team wants to think it works this way. But the
big/little mess (at the OS level), and the fact that Apple is beating them at
their own game (like Qualcomm was, at the time), is proof that it's not that
simple.

~~~
pedroaraujo
An ARM CPU is not exactly a self-contained system like an x86 CPU; it is very
dependent on the rest of the SoC.

Qualcomm and Apple have their own tricks which work very well on their own
platforms, but they wouldn't be able to simply pull out their ARM
implementation and sell it as a really optimized system without the rest of
their SoC.

~~~
dooferlad
The rest of the SoC plays a huge part, just like the architecture of a PC
does. My personal mantra when thinking about how to get the most out of a CPU
is "feed the beast", since idle time is the killer. In the most simplistic
terms, your L1 caches need to be big enough to hold the program you are
running and the data it is using. You can hint to the CPU about what data you
will need so it preloads it, and branch prediction does a similar job for
fetching instructions and avoiding branch misses.
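The "hint to the CPU" part can be sketched with GCC/Clang's `__builtin_prefetch`. This is a minimal illustration, not tuned code: the prefetch distance below is a made-up placeholder, and real values depend on the core and the workload.

```c
#include <stddef.h>

/* Illustrative only: the right distance must be measured per core/workload. */
#define PREFETCH_DISTANCE 16

/* Sum an array while hinting the cache hierarchy a few elements ahead,
 * so the loads for future iterations overlap with current work. */
long sum_with_prefetch(const long *data, size_t n) {
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n) {
            /* args: address, rw (0 = read), locality hint (0..3) */
            __builtin_prefetch(&data[i + PREFETCH_DISTANCE], 0, 1);
        }
        total += data[i];
    }
    return total;
}
```

On a simple linear scan like this, modern hardware prefetchers usually catch the pattern on their own; explicit hints pay off mainly for irregular access patterns the hardware can't predict.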

I for one like to see big caches in SoCs, but the cost of putting them there
needs to be balanced against the performance requirements of the product (no
point having a bigger chip than you need). There is also the numbers game of
64-bit / number of cores / RAM etc., which are easy to parse but difficult to
understand. Vanishingly few consumers care about IPC or the time taken to
wake a core or switch a workload between cores, so great innovations like
big.LITTLE are used as marketing numbers rather than to tune an SoC to its
best performance. I would like to see 1 big core, 2 little cores and more
cache myself.

So what do Apple have? Lots of cores? No. If you build an 8-core SoC and can't
keep those cores busy, then you have listened to marketing.

------
Symmetry
I'd heard ahead of time about them allowing heterogeneous clusters, and I was
wondering how that would work. Private L2s should do it: you really need to
design the L2 to match the profile of the supported CPU(s), but with L3 the
latencies are higher and there's less need to specialize.

------
jumpkickhit
Nice write-up.

Also, I was curious when consumers might see this in their products; the last
line in the article says late 2017/early 2018.

~~~
DCKing
If current trends continue, Huawei (through their subsidiary HiSilicon) will
be the first to launch a product with the new ARM IP.

The Cortex A72 was announced in April 2015, Huawei launched the Kirin 950 (4x
Cortex A72 + 4x Cortex A53) in November 2015 as part of the Huawei Mate 8. The
Cortex A73 was announced in May 2016, Huawei launched the Kirin 960 (4x Cortex
A73 + 4x Cortex A53) in November 2016 as part of the Huawei Mate 9.

So yeah, my guess is the Huawei Mate 10 with a Kirin 970 (4x Cortex A75 +
4x Cortex A55) in November this year. The market has become very iterative and
predictable, and this ARM announcement confirms it. Don't actually give Huawei
any money for this hardware, though; their software update policies are
horrible. The more interesting and useful implementations will come from
Samsung, Qualcomm and maybe Nvidia in 2018.

