

The x86 Power Myth Busted: In-Depth Clover Trail Power Analysis - shawndumas
http://www.anandtech.com/print/6529

======
SlipperySlope
The Intel version of the Surface tablet gets only half the battery life of the
ARM version.

ARM is also moving forward with its big.LITTLE architecture, which pairs
high-performance CPU cores with low-power ones and switches between them
depending on the workload.

Intel still has a way to go…

~~~
dragontamer
The Intel version of the Surface uses an ultrabook-class i5 CPU. It is many
times faster than the ARM one; they're just not comparable at all. True, it
has much less battery life, but it also has significantly more processing
power... enough to run full-scale Photoshop.

Intel has much to fear from ARM, but not from NVidia... and especially not
from big.LITTLE. Outside of NVidia's own Project Shield, it doesn't sound like
Tegra 4 has any design wins. In fact, NVidia's big.LITTLE architecture is
arguably a failure. Surface RT == NVidia Tegra 3 with big.LITTLE (btw: Surface
RT doesn't use the big.LITTLE features at all; perhaps Microsoft is trying to
keep Surface RT code portable across different ARM chips). In comparison, the
iPad 4 uses a custom Apple ARM chip, and the Nexus 4 a Qualcomm Snapdragon.
The current Nexus 7 is Tegra 3, but the next one is rumored to be Qualcomm
Snapdragon as well.

Basically, NVidia is weakening in the mobile sector. ARM overall is strong,
but big.LITTLE is arguably a failure at this point.

~~~
timthorn
The Tegra 3 doesn't use big.LITTLE. It does have a 5th, smaller core - but
that's not big.LITTLE.

~~~
dragontamer
Thanks for the clarification. I guess I got things mixed up.

Are there any big.LITTLE implementations out there? Wikipedia only mentions
the Samsung Exynos 5 Octa, which doesn't have any design wins right now.

~~~
timthorn
I don't believe so, but willing to be proven wrong!

------
qubitsam
Previous discussion: <https://news.ycombinator.com/item?id=4964355>

------
smnrchrds
I'm not sure if this is the place to ask questions, but here it is: how can I
compare ARM and x86 CPUs based on their specs, i.e. without actually testing
them? The Raspberry Pi website says its "overall real world performance is
something like a 300MHz Pentium 2", although the frequency of the chip is
700MHz. Can I use 3/7 as a rule of thumb?

~~~
Symmetry
No. Different microarchitectures can differ by more than an order of magnitude
in performance per clock, in ways that have nothing to do with the instruction
set used.

In this case, the Pentium 2 is a 2-wide out-of-order superscalar design, and
so is the Cortex-A9 discussed in this article, but the ARM11 in the Raspberry
Pi is a 1-wide in-order processor, which mostly accounts for the wide
disparity in performance. You'd actually expect the difference to be even
larger, but ARM instructions are a bit more powerful than x86 instructions, so
you tend to need fewer of them[1]. x86 is more densely encoded, though, so the
two end up needing roughly the same number of bytes of instructions to
accomplish the same task.

The best way to compare different processors is by benchmarks. But remember to
choose a benchmark that looks like what you're trying to do, because different
architectures can excel in different situations.

[1] x86 has the advantage of combined load-and-operate instructions, but ARM
has 3-operand instructions, twice as many registers, instruction predication,
and a free barrel shift on one ALU operand. Changes to the x86 instruction set
since the Pentium 2 have made things more even.

------
MichaelGG
While this looks great, isn't there a fundamental issue that the test tablets
were running _different OSes_? For instance, Microsoft's x86 code might have a
decade more tuning done to it versus their ARM code.

------
natec
Comparing the power consumption of a 40nm chip and a 32nm chip and trying to
draw conclusions about what's the better _architecture_? Please. That's like
playing a tennis match between Federer and Nadal, making Nadal play with a
frying pan instead of a racquet, and then claiming Federer is the better
player because he won using a racquet.
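The physics behind the objection: dynamic switching power scales roughly as
C·V²·f, so a smaller node wins on power even with an identical architecture.
A sketch with made-up capacitance and voltage values (not real 40nm/32nm
process data):

```python
# Dynamic power: P ~ activity * C * V^2 * f (arbitrary units).
# Capacitance and voltage below are illustrative assumptions, not
# actual 40nm vs 32nm process parameters.

def dynamic_power(cap, volts, freq_mhz, activity=0.2):
    return activity * cap * volts**2 * freq_mhz

p_40nm = dynamic_power(cap=1.0, volts=1.1, freq_mhz=1300)  # older node
p_32nm = dynamic_power(cap=0.8, volts=1.0, freq_mhz=1300)  # newer node

# Same "architecture", same clock: the shrink alone cuts power by
# about a third here, which is why cross-node comparisons mislead.
print(p_32nm / p_40nm)
```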

Cool data, though.

------
mtgx
Intel had a lead in performance while they were working on fixing power
consumption. Because it took them too long to fix that, they have now lost the
lead in both CPU and, especially, GPU performance. Apple gets credit all the
time for having the "fastest chip", when it really only has the fastest GPU,
but people seem to forget to mention that Intel has the _weakest_ GPU, even
compared to Tegra 3.

~~~
corresation
Intel wasn't "fixing" power consumption; rather, it simply wasn't a critical
requirement: the CPU was often a small part of overall platform power
consumption, and people favoured performance over efficiency.

Obviously tablets and mobile devices change that equation, so Intel adapted.
Intel is in a difficult position with their products, however, in that one of
their biggest concerns is competing with themselves too much.

Regarding the GPUs, Apple doesn't have the fastest mobile GPU - PowerVR from
Imagination Technologies does. Intel could use a PowerVR SGX554MP4 if they
wanted to integrate one. Is it better than an HD 4000? I honestly have no idea
as I've never seen them actually compared (and note that the best mobile GPUs
have miserable performance compared to discrete desktop GPUs, so don't assume
that because the HD 4000 doesn't compete with a $400 gaming card that it can't
compete with an Adreno or PowerVR).

EDIT: Just blew my mind to learn that the HD 4000 on the Intel SoC _is_ a
PowerVR GPU.

~~~
mtgx
That wasn't my point. I know they use PowerVR GPUs, but they never use the
latest ones; in fact they always use very old PowerVR GPUs. They had nothing
last year that was even close to the other GPUs, let alone the ones in the
iPads and the iPhone 5, and they will have nothing this year to compete with
Apple's PowerVR Series 6 GPU. Their PowerVR drivers suck, too.

Why even bring the Intel HD 4000 into the discussion? We're talking about Atom
here, not a $200 ULV Core i5 chip with a 17W TDP. If you want to compare the
HD 4000, do it against Nvidia's and AMD's own chips in laptops.

~~~
ajross
Note that the "latest" PVR cores are really just wider variants of the early
ones. GPUs are mostly commoditized at this point, and short of increasing the
core count and clock speed there's really not much that can be done
architecturally to improve them. All of the "PVR SGX 5[345]5(MP[24])" are
really just the same architecture.

------
bhauer
Can we get a 2012 added to the title?

~~~
JoelSutherland
It was written two months ago about chips that are still the newest on the
market!

~~~
1SaltwaterC
Tegra 3 is anything but new. It's a 15-month-old SoC. Besides, there's not
enough info about the Cortex-A15.

------
programminggeek
Here's the thing: it's not just about speed or efficiency, it's also about
cost and price. Part of ARM's advantage is that the chips are cheap, and ARM
licensees have no high-margin business to protect the way Intel does.

Fundamentally, Intel doesn't want to switch everything over to Atom chips
because it would ruin the economics of Intel's business. Right now they would
much rather sell an i5 at $200 or an i7 at $300+. Compare that to an Atom at,
say, $25 per chip: Intel would need roughly 10x the volume to make the same
money.

So, while Intel wants to be competitive with ARM, the economics of that
business just aren't as attractive as the PC/server chip business.
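The volume arithmetic in back-of-envelope form; the prices are the parent
comment's rough figures, not actual Intel list prices:

```python
# Back-of-envelope: how many Atom chips must Intel sell to match the
# revenue of one Core chip? Prices are the parent comment's rough
# numbers, not actual Intel list prices.

i5_price, i7_price, atom_price = 200, 300, 25

atoms_per_i5 = i5_price / atom_price   # 8x
atoms_per_i7 = i7_price / atom_price   # 12x

# Roughly the "10x volume" figure: 8x against an i5, 12x against an
# i7 -- and that's revenue, not profit, so thinner Atom margins would
# push the required multiple even higher.
print(atoms_per_i5, atoms_per_i7)
```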

~~~
ippisl
Selling chips for mobile phones doesn't cannibalize Intel's business. Selling
chips for tablets might, but at this point in time it would be very risky to
think that way, and they know it very well.

So why haven't they brought their top game to this fight, i.e. offered 22nm
mobile/tablet chips? I think it's because their 22nm process is expensive and
not well suited to manufacturing cheap chips. From what I've read, their 22nm
tech costs 30%-40% more than TSMC's 28nm (for manufacturing the same parts).
Maybe their rising investment in fabs is due to this problem.

Another explanation is that they're planning a coordinated attack: a better
Atom plus an LTE chip on a 14nm process. That combination might let them
seriously undercut competitors on power, and doing it all at once has the
strategic value of pressuring mobile handset makers and carriers to accept
their new designs or be left out.

