
The ARM vs x86 Wars Have Begun: Power Analysis of Atom, Krait, and Cortex A15
http://www.anandtech.com/show/6536/arm-vs-x86-the-real-showdown
======
ChuckMcM
This is a fascinating article, both in what it says and in what it doesn't say.
If the future really is low-powered SoCs, then Intel is in a world of hurt.
The reason is that if Intel can _only_ match ARM's power/performance in their
chips, then it's a toss-up for the manufacturer in terms of user-visible
impact, and that makes cost and/or differentiation the next selector in the
'whose do we buy' tree.

If Intel has to go to the mat on price, it really kills their business model.
One of the amazing things about Intel over the last 20 years has been that
their part sells for up to 30% of the total cost of _all parts_ of the system.
What _that_ means is that if you get the CPU for half as much, you save 15% on
your total parts cost. That is a huge savings. But if Intel has to cut their
margins to get sales, they lose a lot of R&D dollars, and their legendary
advantage in fabrication is suddenly underfunded. That is a huge problem for
them. Intel has to win this fight or they will have to radically change the
way they have structured their business.

The second part is differentiation. Intel simply doesn't license the x86
architecture any more; they got out of that when competitors were
out-executing them in architecture design. What that means is that if you're
an Apple or a Cisco or whatnot, you really can't add an instruction to your
CPU to make your particular use of the part more efficient. But with ARM you
can. If you are an instruction-set licensee, you can make internal changes and
keep compatibility high. Apple just demonstrated with the A5X that this was
something they were willing to do. There is no relationship with Intel where
that sort of option would be possible.

So if Intel can only match ARM on its performance and power curves, they lose.
They have to be either 50% faster at the same power levels on the same
workload, or 50% more efficient at the same performance. +/- 10% or so isn't
going to cut it when they are commanding a huge chunk of the parts cost.

~~~
polshaw
On the contrary, Intel made significant changes to their CPUs to court Apple's
business: the low-power Core CPU was started specifically to enable the MBA,
as was the focus on a powerful integrated graphics chip for laptop CPUs. Yes,
you need to be a major player to have this kind of relationship, but the same
goes if you want to design your own ARM CPUs.

Intel's plan continues to be to make up 30%+ of system cost -- but they don't
do this by simply providing a powerful CPU -- they integrate more and more
into the CPU silicon, and that is their mobile/Atom strategy too (see digital
baseband integration, etc.). It is not abnormal, however; battery, digitizer,
etc. are all commodity parts now, screens are quickly getting there too, and
the SoC is the only area that is so loosely bounded in terms of its
improvement potential.

Things have rarely looked better for Intel's fabs either; they almost totally
dominate the desktop/laptop/server space, and have a serious prospect for
significant expansion for the first time in a long time. There may be a small
decline in PC sales, but the overall $ spent on processing systems (i.e.
including phones and tablets) is rising a lot more.

I do agree with your broad point that Intel will have to be noticeably better
to get higher revenues for their chips, though. But if they get Atom onto
their leading CPU process (as planned), as well as out-of-order execution,
then they are very likely to achieve this.

~~~
tjoff
"On the contrary, intel made significant changes to their CPUs court apple's
business: the low power core cpu was started specifically to enable the MBA,
as well as the focus on a powerful integrated graphics chip for laptop CPUs."

Um, since the CPUs you are talking about came out years before the MBA did,
that sounds more like Steve talking...

Also, Intel GPUs are the most commonly used in the (x86) world and have been
for a long time (since before Apple even made products using x86 processors).
How the idea or execution of including one in the CPU has anything to do with
Apple, I haven't got a clue.

I'd be more willing to "blame" the performance arms race of integrated GPU
solutions on AMD buying ATI.

------
trotsky
You'll notice that Intel isn't coming to AnandTech and offering to break open
a bunch of Android phones and compare them with the Atom Android port.

On Windows, Intel is benefiting from a kernel and compiler that have spent 15
years being optimized for their ISA. I am sure Qualcomm worked very hard with
Microsoft getting RT out the door, but I would wager that there is a lot more
room left for optimizing Krait on Windows RT than there is for Atom on
Windows 8.

Intel and Acer also have a lot of experience optimizing for the hard
separation between the platform code and the OS, which typically involves a
lot of cheating / second-guessing. I'm pretty sure Microsoft requires their
ARM platforms to support the traditional x86-style platform interfaces like
UEFI and ACPI. ARM SoC manufacturers have traditionally benefited from being
able to do deep integration into the OS and exploit tons of
manufacturer-specific optimizations. I highly doubt Qualcomm enjoyed the same
freedom with the NT kernel.

So while it's impressive to see Atom operating with much better gating than it
traditionally has had, I suspect that if you did the comparison on neutral
ground, using GCC on Linux, and let all the manufacturers do as much
optimizing as they wanted, you'd see the ARM systems improving their
performance per watt significantly. Meanwhile, Atom would be lucky to just
tread water.

~~~
WatchDog
You can argue it both ways: on the one hand, Windows benefits from the last 15
years of x86 optimization; on the other hand, ARM is benefiting from a
lifetime of embedded optimization. Intel has only relatively recently shifted
their focus to low-wattage parts, and this demonstration of Atom is not on
their latest process.

What this benchmark shows is that the speculation about x86 being inherently
inferior to ARM at low wattage is just that: speculation. x86 and ARM are very
much in the same ballpark, and we can expect the competition to really heat up
in the coming years.

------
sami36
There is no war to be had until Intel seriously considers operating on much
lower margins. Their problem is not just idle power and heat dissipation;
their real problem is cost per unit. Whatever Intel does going forward, their
fat days are over.

~~~
yk
Intel is producing Atoms on an ancient process, while they are one generation
ahead of everybody else in cutting-edge processes. So this is Intel getting
its feet wet, not Intel trying to compete. And on top of this, I am pretty
sure there is a market for really high-powered smartphones above the current
price range.

[Edit: reformulated last sentence]

~~~
WatchDog
I'm curious as to what applications will drive the need for more phone
performance. I am pretty happy with the performance of my S3; if it were four
times faster, I don't see how that would noticeably improve my user
experience.

~~~
yk
Actual desktop replacement. Think about a phone with similar specs to a
three-year-old desktop. You could then carry your desktop around, do actual
work wherever you find a reasonable screen and keyboard, and no longer need to
sync several devices.

Apart from this, any real augmented-reality application can easily burn any
excess computing power you may carry.

~~~
mitchty
My only worry about having my phone be my entire computing infrastructure is
it getting stolen. Not sure I'm up for that future possibility just yet.

~~~
tjoff
Full disk encryption and, obviously, a serious backup solution should take
care of most issues.

Using it more as a thin client would also minimize the risk.

~~~
mitchty
I think I'd still want them to be more akin to Sun's original vision of
network appliances: the phone acting more as a session token to an external
system and a thin client.

I'm just noting that the size and always-present aspect make security a
different problem.

------
kevingadd
The graphs in this article border on unreadable. I don't know why you'd post
graphs like that, unsmoothed, unless your goal was for your readers to ignore
them completely.

The claims he makes about power consumption certainly seem interesting, but I
don't feel like I can take them without a grain of salt, given how hard it is
to read a lot of the graphs he's drawing conclusions from.

The general trend of Atom finishing benchmarks earlier without drawing much
more power is pretty interesting, at least. I never would have guessed that
Atom would be a winner here - it has such a bad reputation.

~~~
mtgx
The bad reputation came from the low performance (in low-end Windows
machines), and it looks like it will keep that reputation now that the Cortex
A15 is out.

From a power-consumption point of view, it was completely unusable for a
smartphone or tablet until recently: when they made it, Atom had a 10W TDP,
and for the past 5 years they've mainly tried to get that down to 2W while
keeping the performance mostly unchanged. That performance used to be much
higher than that of the high-end ARM chips at the time of release, but that's
not the case anymore.

~~~
dspillett
Another key problem with early Atom-based machines was the rest of the chipset
(I/O controller and such) that went with them, which could draw a fair amount
of power too. I presume this has been improved along with the CPU's power
needs.

------
vondur
I believe that one overlooked fact is the ability of a company to purchase a
license to design an ARM processor for their own particular usage scenarios,
as Apple has done. I believe this gives ARM a distinct advantage over Intel.
With Intel, you have to wait until they release a processor and then integrate
it into your design. I think the ARM way of doing licensing will be a boon for
them.

------
green7ea
Has anyone else noticed how odd these benchmarks seem?

They run SunSpider on different browsers on different OSes and use the results
to compare processor performance per watt.

On the same OS, with the same browser, I can get bigger variations in
performance per watt from a single compiler flag (-ffast-math, -O2 vs. -O3,
etc.). The same could be said of changing the kernel's scheduler. If we keep
in mind that the browser, compiler, and OS are all different for these tests,
how can we accept the results as anything but noise from those differences?
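
To make this concrete, here is a minimal sketch (mine, not from the
benchmarks) of how much a single flag can matter. The loop body is a made-up
example; any floating-point-heavy kernel shows the effect, since -ffast-math
relaxes IEEE semantics and can let the compiler reassociate or vectorize the
reduction:

    /* bench.c -- build the same source two ways and time both:
     *   gcc -O2 bench.c -o bench_o2
     *   gcc -O3 -ffast-math bench.c -o bench_o3fm
     *   time ./bench_o2 && time ./bench_o3fm
     */
    #include <stdio.h>

    int main(void) {
        double sum = 0.0;
        /* -ffast-math may reorder this reduction, changing both the
         * runtime and (slightly) the printed result */
        for (long i = 1; i < 100000000L; i++)
            sum += 1.0 / ((double)i * (double)i);
        printf("%.12f\n", sum); /* print so the loop isn't optimized away */
        return 0;
    }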

One striking example of this is that the new version of Chrome on the Nexus 10
could easily account for the performance difference in the Kraken test. How
can we use that data to compare the performance per watt of the processors?

------
yk
I think Intel is in an interesting business position. Just when AMD is getting
into serious trouble, raising possible antitrust issues, ARM steps up.
However, ARM actually has the inferior architecture (for desktop/modern tablet
use).[1] So Intel can avoid antitrust issues by pointing to a competitor who
simply does not threaten Intel's core revenue generators (desktops, notebooks
[2]).

[1] I believe they still use an in-order architecture. And certainly no one
but Intel has a 22nm FinFET process running.

[2] I am actually not as sure about servers.

~~~
wolf550e
A15 is out of order.

~~~
yk
Thanks, I should have checked that. (The argument is probably about prefetch
logic, but basically similar.)

~~~
mtgx
Cortex A9 was out of order, too.

Ironically, Atom is still in-order.
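
For readers unfamiliar with the distinction, a toy illustration (mine, not
from the article): when the dependent load below misses in cache, an
out-of-order core can keep executing the independent 'busy' work, while a
simple in-order core stalls at the first instruction that needs the loaded
value.

    #include <stdio.h>
    #include <stdlib.h>

    #define N (1 << 20)

    int main(void) {
        int *table = malloc(N * sizeof *table);
        for (int i = 0; i < N; i++)
            table[i] = rand() & (N - 1); /* random walk through the table */

        long sum = 0, busy = 0;
        int idx = 0;
        for (int i = 0; i < N; i++) {
            idx = table[idx]; /* dependent loads: frequent cache misses */
            sum += idx;       /* needs the load result immediately */
            busy += i * 3;    /* independent work an OoO core can overlap */
        }
        printf("%ld %ld\n", sum, busy);
        free(table);
        return 0;
    }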

------
macavity23
A question for iOS devs out there: how tied to ARM is iOS Objective-C
programming?

I ask because Android already supports Atom-based devices, and presumably
Win8RT apps are easy to recompile for Intel - so it seems likely that it would
be straightforward for any of the players in the mobile space to switch
architectures on fairly short notice.

~~~
melling
Objective-C was created in 1983.

<http://en.wikipedia.org/wiki/Objective-C>

Apple's OSes and NeXTSTEP have run on Intel, Motorola 68k, and PowerPC chips
in the past. iOS is a subset of Mac OS, which currently runs on Intel.

In short, the CPU architecture choice is irrelevant.
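
As a concrete sketch (my example, not the poster's): the only ARM-specific
pieces in a typical iOS codebase are optional fast paths such as NEON
intrinsics or inline assembly, and those are normally guarded so an x86 build
falls back to portable C. The function below is invented for illustration:

    #if defined(__ARM_NEON__)
    #include <arm_neon.h>
    #endif

    /* Scale four floats in place; everything else in an app
     * (Objective-C, UIKit calls, etc.) is architecture-neutral. */
    void scale4(float *v, float s) {
    #if defined(__ARM_NEON__)
        /* ARM path: one NEON multiply covers all four lanes */
        vst1q_f32(v, vmulq_n_f32(vld1q_f32(v), s));
    #else
        /* Portable path: identical result on x86 or any other target */
        for (int i = 0; i < 4; i++)
            v[i] *= s;
    #endif
    }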

~~~
wtallis
You left out SPARC and PA-RISC. NeXTSTEP was probably the most cross-platform
desktop OS ever sold.

------
InclinedPlane
Let's remember that Intel is sitting on a mountain of cash and, even more
importantly, mountains of chip-design talent and mountains of fab hardware.
Intel can afford to play the long game. The interesting thing is how
competitive Intel's first-generation mobile x86 part has been.

------
dman
Anand does an exceptional job of acting as a proxy for Intel's marketing
department.

~~~
sixbrx
This comment would have a lot more force with specifics that show how
Anand/Intel's _numbers_ are wrong or misleading.

~~~
polshaw
I think the parent post is too strong, but numbers are by no means the only
way of showing a bias.

Whilst I believe he tries to be objective, Anand may hold a slight bias
towards Intel. Take the discussion of the future: he talks of the advancements
of the Core architecture, comparing a yet-unseen 8W TDP[1] Haswell, then
talking confidently of halving that at 14nm. This is by no means a certainty;
I think it is fair to say this would be a best-case scenario in which Intel
focused fully on reducing power and also had a very successful 14nm node.

On top of this, there is no mention of price in this equation; I can't see a
'Core' CPU in a Nexus tablet (too expensive for the price point) or an iPad
(too expensive for Apple's margins) if Intel want to keep their margins.

I should reiterate that everyone has a bias, and if Anand's is slightly
pro-Intel, then it is surely propagated by the high level of access Intel seem
to give him (take for example the 'x86 power myth busted' article, where the
Atom was compared to the Intel-cherry-picked (outdated) Tegra 3), and by the
fact that we have a very capable Intel today. As said, I believe he still
attempts to be objective, so I still think his journalism is some of the best,
if not the best, in this area.

1. Keep in mind TDP is not peak power (on which measure the A15 would be close
to 4W), but the sustained heat output that should be dissipated in whatever
enclosure, for which there is no standard. This would put the A15 in the 1-2W
range.

~~~
sixbrx
That's the kind of specifics I was looking for (thanks). I agree about
numbers, and one doesn't have to answer with numbers if they can be brushed
aside with specifics that offer proper perspective.

I'm pretty hardware-ignorant myself, so I'd be at the mercy of articles like
this if it weren't for the comments.

These are interesting times; the stakes seem high because of the way that some
platforms are tied to hardware architectures (which is my real interest here).

