
Intel’s erratic Core M performance leaves an opening for AMD - nkurz
http://www.extremetech.com/gaming/202991-intels-erratic-core-m-performance-leaves-an-opening-for-amd
======
chroma
Unfortunately, AMD's Carrizo is in no way competitive with Intel's Core M
(Broadwell).

1\. Carrizo isn't shipping yet. Broadwell shipped in September, and Intel's
next microarchitecture (Skylake) ships late this year.

2\. Carrizo will be fabbed on a 28nm process.[1] Broadwell is 14nm. When it
comes to fab tech, AMD is over 3 years behind Intel. Ivy Bridge (22nm) shipped
in April of 2012.[2]

3\. Carrizo's lowest TDP is 12W.[1] The Core M version of Broadwell has a 3-6W
configurable TDP.

I compliment AMD's PR department, but when it comes to performance or
efficiency, AMD is not a serious competitor. Intel's fab tech is simply too
far ahead. I'd love to see a repeat of the Athlon days, but for the
foreseeable future AMD can only compete on price.

1\. [http://arstechnica.com/gadgets/2015/02/amds-carrizo-system-on-chip-more-transistors-more-performance-less-power/](http://arstechnica.com/gadgets/2015/02/amds-carrizo-system-on-chip-more-transistors-more-performance-less-power/)

2\. [http://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)](http://en.wikipedia.org/wiki/Ivy_Bridge_\(microarchitecture\))

~~~
sjm
Isn't it a bit worrying to see AMD drop off so much? I've stuck to Intel CPUs
since the Pentium 4, but I shudder to think where we'd be today without the
competition AMD brought to Intel over the years.

~~~
bebna
Don't worry. AMD was just replaced by ARM (and the manufacturers who use
their designs).

That's why Intel did so much R&D on power efficiency. ARM CPUs are sometimes
still preferable to Intel's, and in some markets Intel still doesn't stand a
chance.

The PC is dominated by Intel, smartphones by ARM. Intel is invading the
tablet market, and ARM the notebook market.

Personally, I would love a nice ARM-based notebook, but other factors like
the housing, keyboard, and display are often better on Intel devices. Until
some ARM vendor steps up and produces a high-quality device like Google did
with their Chromebooks, I don't think any manufacturer will move past the
thinking that ARM is for cheap devices only.

My god, I could already live with an Apple A7-level CPU and a modern E-Ink
display, which I have in my ebook reader. It can also run Linux and all the
tools I really need for work, fast enough not to limit or annoy me. The only
reason I don't use it as a daily driver is the missing physical keyboard.

But keep in mind that I spend roughly 80% of my time in just two consoles:
vim, and a command window for running scripts and git. So I'm not the norm.

But what about the norm?

Germany has a magazine called c't, which just recently published an article
on what life with a smartphone as your work PC looks like. Speed wasn't a
concern, because the ARM CPUs were powerful enough to run a browser or
office software on a full-size monitor without problems. Connectivity was
the only problem, and it could be negated with the right smartphone.

There was also an online article about a reporter's test drive: he switched
from his notebook to a smartphone and a Bluetooth keyboard for his mobile
work setup. He liked it, except that it was harder to use on his lap and he
often needed a table to work. (I think it was Engadget, but I can't find it
at the moment and I've run out of break time, so please look for yourself.)

TLDR: ARM is the new AMD.

~~~
higherpurpose
Unfortunately, Microsoft has kind of killed ARM's entrance into Windows
notebooks, now that Windows RT is essentially dead. And Google - Google of all
companies, with their architecture-agnostic Chrome OS - has also helped _push_
Intel into its now near-monopoly in Chromebooks. At least with Microsoft it's
somewhat understandable why it would kill Windows RT - no compatible apps. But
ChromeOS?!

Google should be pushing ARM in notebooks like crazy. If ARM doesn't end up
with at least 70% market share in Chromebooks (given the _multiple_ ARM chip
makers, just as Android holds the majority of mobile OS market share), then
I'll consider that a major competitive failure.

Granted, it's not all Google's fault for pushing Intel. Intel managed to
enter the Chromebook market with some shameless lies:

1) First lie: promoting the new Intel Chromebooks as "Haswell Chromebooks". I
have a good memory, and I remember that at the time most people thought those
were basically _Core i5_ Chromebooks - at least for a while.

2) Second lie: Those chips were Haswell Celerons, but at those device price
points of $200-$250, it would've been _impossible_ for Intel to sell them
profitably. So they sold them _below cost_, so that people could choose
between what is essentially a $110 chip and a $20 chip in devices that cost
roughly the same ($250). So of course the choice was made for them, and
people would rather get the $250 device with the $110-value chip than the
one with the $20 chip.

It still pisses me off to no end that governments aren't taking action against
Intel over this. It's highly monopolistic and anti-competitive behavior. It's
no different from Microsoft eating the cost of IE and bundling it with
Windows. Intel has also wasted over $8 billion so far subsidizing its mobile
chips, to be able to sell them below cost and compete against ARM. No ARM
maker, not even Samsung or Qualcomm, could ever afford to do something like
that. So why is Intel allowed to take out the competition from the market in
this way?

3) Third lie: After people got used to $250 Celeron (Haswell) Chromebooks,
Intel did a _bait and switch_ and replaced those more powerful Haswell
Celerons with _Atom_-based Celerons, which are no more powerful than ARM
chips - but people now _think_ those "Celerons" are.

It gets worse. Now that people remember that the ~$110 or so "Celerons" were
relatively powerful (thanks to the Haswell/Core architecture), Intel is
starting to charge _that much_ for the Atom-based Celerons, even though, as I
said, they are the equivalent of $20-$30 ARM chips. But this is what a
monopoly can get you (at least in Windows PCs and Chromebooks, where the ARM
competition has been all but wiped out).

The lies also continue with what Intel is now calling its new Atoms -
"Braswell" - as if they had anything to do with Haswell or Broadwell.

Intel has become (or perhaps has always been; I've only been following this
stuff _closely_ for the past few years) an _incredibly shady_ company. And it
saddens me that more people don't see it, and that Intel gets away with
stealing ever more market share in nefarious and anti-competitive ways.

~~~
tracker1
I mostly agree regarding Intel... A lot of this is pretty shady. But I will
take one exception to your comment about MS eating IE's costs.

When MS started including IE with Windows, every other graphical OS included
a browser (OS/2 Warp, and early Linux distros). I think the irritation was
how deeply they embedded the browser, which does make some sense, as it
allowed them to create the CHM help files, which is a pretty nice format - as
well as some of the push desktop stuff early on, which I kind of liked. The
downside is that support for newer browsers on older OS versions was
effectively made into a cliff for IE.

I'm glad that Chrome and Firefox carry as much weight as they do. Seeing a
roughly 1:1:1 split among the top three is best for the larger community. The
downside is how long it takes for an IE version to ship (still), which means
that IE, while ahead at launch, quickly falls behind and becomes our least
common denominator for public websites for years.

In particular, while I don't mind using JS transpilers like Babel/6to5 to get
new features today, the fact that I won't be able to use these features
without such tools for the next 4-5 years is kind of sad. As an aside, I
really wish there were a common background worker abstraction that worked in
both node and the browser... I know I can fork/cluster in node and use web
workers where available, but it would still be nice to see a common interface
for this stuff.

------
jpgvm
The thing is, the performance isn't exactly erratic.

It's purely a function of the cooling available and the target skin
temperature. This doesn't mean you can't "trust" it.

It just means you should do what you should have -always- done, which is to
buy machines in metal chassis (and of reasonable size) if you want passive
cooling, or get a machine with a decent thermal design and active cooling
(fans).

Trying to spin this into FUD seems a little desperate.

~~~
acqq
Yes, the author expects something unrealistic and then presents his
unfulfilled expectations as the problem. There's no technical reason that
every notebook has to have the same cooling technology. That the processor
can adjust to the cooling available is actually a good thing. And mobile
phones behave exactly the same way.

~~~
osivertsson
It doesn't matter whether it is unrealistic from an engineering perspective;
what matters is the end-user experience.

I've heard technically clueless people complain that their new high-end
Core M laptop is slower than their old laptop when they just do ordinary work
(IE + PowerPoint + Excel + mail), and they are frustrated.

Right then it doesn't matter that the new laptop weighs only half as much as
the old one and still has much better battery life; work is still done mostly
at a desk with power nearby.

~~~
acqq
The top speed is actually a function of the heat generated, and you can
generate more heat when you have better cooling - as simple as that. If some
"expert" claimed to the users that they can tell how fast something is just
by looking at the processor name, it's the "expert" who isn't one.

~~~
m_mueller
> top speed is actually a function of the heat generated

That knowledge is rather dated and probably goes back to the Pentium 4 era.
Top speed is a function of heat, but also of the process technology and of
the concurrency available and used. It's best to wrap your head around it by
going back to first principles:

> The dynamic power consumed by a CPU is approximately proportional to the CPU
> frequency, and to the square of the CPU voltage:

> P = C V^2 f

> where C is capacitance, f is frequency, and V is voltage. [1]

Capacitance goes down with smaller transistors; voltage and frequency go up
with higher sequential performance given the same architecture. So, by
replacing the need for sequential performance with more concurrency
(vectorization/multicore/GPU acceleration), the need for power goes down
dramatically. Process technology is another big factor.
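
To make the relation concrete, here is a minimal sketch (in Python, with
made-up numbers that merely stand in for real chip parameters) of why several
slow, low-voltage cores can deliver the same throughput as one fast core for
far less power:

    # Dynamic CPU power: P = C * V^2 * f.
    # All numbers below are illustrative, not real chip parameters.

    def dynamic_power(capacitance_f, voltage_v, frequency_hz):
        """Dynamic power in watts for capacitance C, voltage V, frequency f."""
        return capacitance_f * voltage_v ** 2 * frequency_hz

    # One fast core: a high clock needs a high voltage.
    one_fast_core = dynamic_power(1e-9, 1.2, 3.0e9)          # 4.32 W

    # Four slower cores at lower voltage, matching total throughput
    # (assuming the workload parallelizes perfectly).
    four_slow_cores = 4 * dynamic_power(1e-9, 0.8, 0.75e9)   # 1.92 W

    print(f"one fast core:   {one_fast_core:.2f} W")
    print(f"four slow cores: {four_slow_cores:.2f} W")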

On a related note: comparing ARM with Core M using only concurrency-heavy
benchmarks is not really comparing apples to apples - I expect Core M to
have higher sequential performance - so if you want the same performance on
ARM, you'll need more concurrency in the software.

[1]
[http://en.wikipedia.org/wiki/CPU_power_dissipation](http://en.wikipedia.org/wiki/CPU_power_dissipation)

~~~
seanmcdirmid
I believe you mean parallelism, not concurrency, right? Concurrency is
dealing (correctly) with more things at the same time; parallelism is doing
more things at the same time.

The Pentium 4 had a very deep pipeline, which got very hot and wasn't fast
relative to its clock. The Pentium M went back to more instruction-level
parallelism via shallower parallel pipes (multi-core came later).

I don't think ARM is competitive with Intel even on parallel benchmarks;
Intel does performance well and doesn't play many tricks with it. ARM's
advantage has always been power efficiency and cost.

~~~
m_mueller
> I believe you mean parallelism, not concurrency right? Concurrency is
> dealing (correctly) with more things at the same time, parallelism is doing
> more things at the same time.

Yes, parallelism makes more sense in this context. I wanted to express it in
a more general way, because I understand running multiple programs on
different cores at the same time as concurrency, which is _also_ made more
efficient by parallel rather than sequential performance. IMO the terms are
a little fuzzy at times; it's not easy to convey the ideas behind parallel
programming without ambiguity.

Edit: About ARM vs. Core M in benchmarks: after looking at it again, the two
are apparently still quite far apart; you were right about their performance
numbers: [1]

[1] [http://www.fool.com/investing/general/2014/09/22/intel-corporations-core-m-significantly-outpaces-a.aspx](http://www.fool.com/investing/general/2014/09/22/intel-corporations-core-m-significantly-outpaces-a.aspx)

------
peatmoss
I'd like to see AMD try to differentiate by being the hardware vendor that
requires zero binary firmware blobs anywhere in the stack and provides
reference boards where coreboot is king. Being the open choice for geeks and
security-paranoid governments, with open firmware and drivers for everything
they build, is probably the biggest thing AMD could do right now to carve out
market share against Intel, and it doesn't involve tons of R&D.

------
rikkus
This chart suggests there's no competition at the moment:
[http://cpubenchmark.net/power_performance.html](http://cpubenchmark.net/power_performance.html)

I'm always wary of the charts on cpubenchmark, however. To a desktop/laptop
user, single-core performance is usually much more important than total
performance, as they're not using all the cores. Even for my use case,
software development, compiling uses multiple cores but doesn't take any
significant time these days anyway (C# in VS, JIT for .NET and JavaScript),
so I tend to check:

[http://cpubenchmark.net/singleThread.html](http://cpubenchmark.net/singleThread.html)

AMD A4 Micro-6400T APU: 607

Intel Core M-5Y10c @ 0.80GHz: 1,115

Almost twice the (benchmark) speed for the Core M. I'd like some real world
usage reviews, but it's still looking like Intel is the best choice if you can
afford it.

~~~
higherpurpose
It's easy for Intel to "win benchmarks". Those benchmarks last for a few
minutes, which is perfect for Turbo. Do a benchmark that lasts for an hour or
even 30 minutes, and then you'll see the limitations of Intel's chips with
Turbo.
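
If you want to see this effect yourself, a rough sketch of the idea (in
Python; the workload and the 30-minute duration are arbitrary choices) is to
run a fixed chunk of CPU-bound work in a loop and watch the per-iteration
time drift upward as thermal limits kick in:

    # Repeat a fixed CPU-bound task and log how long each repetition
    # takes. On a chip that leans heavily on Turbo, later iterations
    # slow down once sustained heat forces the clocks back down.
    # (Run one copy per core to heat up a multi-core chip properly.)
    import time

    def fixed_workload():
        # Arbitrary deterministic busywork; any CPU-bound task will do.
        total = 0
        for i in range(10_000_000):
            total += i * i
        return total

    start = time.time()
    while time.time() - start < 30 * 60:    # run for 30 minutes
        t0 = time.time()
        fixed_workload()
        print(f"{time.time() - start:7.1f}s elapsed, "
              f"iteration took {time.time() - t0:.2f}s")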

And it's not about affordability in the case of Core M. It's about a poor
performance/price ratio. You get very little for a lot of money, all to get
that 5W TDP - which is also misleading, because those chips need to be put
into devices designed for more like a 10W TDP, not 5. That's why most Core M
devices overheat. Intel overpromised on the TDP, and now OEMs get the blame
for it.

------
alextgordon
Let's be honest, the most computationally intensive thing anyone does is to
open Chrome and have 20 tabs each with five different Flash ads.

For the average user, it shouldn't matter what CPU they have in their machine.
If their computer is slow, it's because that CPU is busy executing crapware.

~~~
osivertsson
Yet almost all reviews of Core M machines by average users point to sub-par
performance, including for premium products such as the Lenovo Yoga 3 Pro,
for which they paid $$$.

Leaving this impression with users is not good for your brand in the long
run.

I think Intel optimized too much for battery life and TDP with Core M, and it
was also priced for, and targeted at, the wrong high-end market.

~~~
alextgordon
It's tough medicine, but it needs to happen so that Microsoft, Apple, Google,
and all of us programmers stop shipping slow, bloated crap. (Google can start
by enabling click-to-flash in Chrome by default.)

Imagine if you installed Windows 98 on one of these things. It would _fly_.

The CPU is perfectly capable of the workloads that users want to throw at it.
The software isn't yet.

~~~
dzhiurgis
It's a GPU problem.

The difference between Win98 and OS X 10.10 is screen resolution, animations
and virtual desktops.

------
Synaesthesia
Frankly, I think people are far too concerned about these performance
numbers; I think they're excellent. This is on par with the Sandy Bridge CPU
in my 2011 MacBook Air, which is an excellent performer. In fact, I have
plugged that MBA into a 27" external display and been quite happy with it.

Combined with 8 GB of RAM, a very fast SSD, and a decent iGPU, I think
performance will be very satisfactory indeed.

~~~
ohitsdom
CPU performance in 2015 on par with a CPU from 2011 is terrible, even when
considering the lower power consumption.

~~~
bluedino
The CPUs in the Air haven't really gotten that much faster since Sandy
Bridge. The 2013 models were actually a slight step down from the 2012 models
in many benchmarks.

~~~
ohitsdom
That is Apple's choice for the Air (and they are notorious for slow refreshes
in general). But the article is talking about the CPU industry as a whole,
which is why I strongly disagreed with the "good enough since 2011" line of
thought.

------
rasz_pl
AMD seems to succeed only with outside help. The K6 was all NexGen, the K7
was practically a DEC Alpha, and the new mobile chips were only possible due
to the ATI takeover. Sadly, there is no one in sight to buy out, poach from,
or take over to make the next performance bump. Intel and AMD bought
everything that moved; AMD even picked up the Cyrix leftovers from National
Semi (Geode). What is left? An insignificant VIA (Centaur)? Vortex86 (Rise
mP6)? Not to mention they seem to be taking ages coming out with ARMv8 - all
they do is talk and produce slides, with no products in sight.

~~~
dman
When you're competing against someone that's 70 times larger, you take all
the help you can get :). AMD, however, has provided its fair share of
innovation to the x86 ecosystem over the years - AMD64, integrated memory
controllers, multi-core chips, and current work on integrating GPUs and CPUs
(HSA and shared memory spaces).

------
driverdan
Core M chips cost $281. That seems crazy high to me, especially when compared
to ARM.

------
tomglindmeier
I still remember the days when AMD was clearly outperforming Intel. But these
days are over.

~~~
kale
Yeah. I still build AMD because they're the underdog (and I'm familiar with
the platform, and I run integer-heavy multithreaded apps, which is its best
use case).

By some chance I have managed to find myself with an AMD Athlon 760K system
with 32 GB of RAM, an SSD, and an R9 290X GPU. It's pretty much all fast,
top-of-the-line components (144 Hz monitor, too!), with a budget AMD chip.
This is a 32 nm chip with 2x2 MB of L2 cache (there's no L3 on the Athlon
line these days), but coupled with a fast GPU and crazy amounts of RAM. The
CPU is watercooled, too, with a double-sized radiator and two 120mm fans.
And just to be stupid, I picked up a Killer Bigfoot NIC as well.

I can't say I've noticed any real limitations of the CPU itself. I know it's
hampered pretty badly, but I think I'm going to hang on to it as long as it's
relevant and see what happens. I don't find myself limited by it that much.
It plays Shadow of Mordor at the highest detail settings (I bump them down a
bit to get 100+ FPS).

It's sad to see them struggle, but Intel has superior technology right now,
and everyone depends on technology to survive. I hope they can stay relevant
over the next couple of years until they get another home-run platform out
the door.

HSA is very interesting to me, but I don't know how important it will be to
others. A few algorithms that I write can burn a lot of time transferring
data to the GPU and back. A gigantic FPGA is what I really need, but I can't
afford one of those.

------
emirozer
What do you guys think of the new 12" MacBook that carries one of these CPUs
as a development machine? Is it a really bad choice?

~~~
dagw
What are you developing? Simpler web apps with all your heavy tests running
on a remote server? Sure, no problem at all. Heavy-duty 3D applications in
C++ with large data sets and long compile times? Not so much.

Basically, spend some time logging how much you stress your CPU and GPU
during a normal day of development. If the result is "basically never rises
above 5-10%", then the MacBook will be fine.
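
For the CPU half of that, a minimal sketch (in Python, using the third-party
psutil package; GPU load would need a separate, vendor-specific tool):

    # Sample overall CPU utilization once per second and log it, to get
    # a feel for how stressed the machine really is during a normal
    # working day. Requires psutil (pip install psutil).
    import psutil

    while True:
        # interval=1 blocks for one second and returns the average CPU
        # utilization (percent, across all cores) over that second.
        load = psutil.cpu_percent(interval=1)
        print(f"cpu: {load:5.1f}%")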

~~~
emirozer
I see your point, thanks for the feedback.

Although my work is mostly infrastructure-based nowadays, I use Vagrant &
Docker to test things locally, so I think I should move my worry over to the
8 GB of memory part :)

------
mark-r
The whole article is based on a false premise. If those different OEMs had
put in the heat sinking required for the AMD part, the significant
performance differences they saw among the Intel chips would disappear, and
they'd whup the AMD part because they'd be running Turbo all the time.

------
fursund
Perhaps this is more of an opening for non-x86 mobile chips (MIPS, ARM, etc.)
than for AMD.

------
nickhalfasleep
I wonder if AMD will perhaps bank on ARM-centric designs to win in low-power
markets.

