Intel’s erratic Core M performance leaves an opening for AMD (extremetech.com)
102 points by nkurz on April 10, 2015 | 81 comments


Unfortunately, AMD's Carrizo is in no way competitive with Intel's Core M (Broadwell).

1. Carrizo isn't shipping yet. Broadwell shipped in September, and Intel's next microarchitecture (Skylake) ships late this year.

2. Carrizo will be fabbed on a 28nm process.[1] Broadwell is 14nm. When it comes to fab tech, AMD is over 3 years behind Intel. Ivy Bridge (22nm) shipped in April of 2012.[2]

3. Carrizo's lowest TDP is 12W.[1] The Core M version of Broadwell has a 3-6W configurable TDP.

I compliment AMD's PR department, but when it comes to performance or efficiency, AMD is not a serious competitor. Intel's fab tech is simply too far ahead. I'd love to see a repeat of the Athlon days, but for the foreseeable future AMD can only compete on price.

1. http://arstechnica.com/gadgets/2015/02/amds-carrizo-system-o...

2. http://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)


Isn't it a bit worrying to see AMD drop off so much? I've stuck to Intel CPUs since the Pentium 4, but I shudder to think where we'd be today without the competition AMD brought to Intel over the years.


Don't worry. AMD was just replaced by ARM (and the manufacturers who use their designs).

That's why Intel did so much R&D on power efficiency. ARM CPUs are sometimes still preferable to Intel's, and in some markets Intel still doesn't stand a chance.

PCs are dominated by Intel, smartphones by ARM. Intel is invading the tablet market, ARM the notebook.

Personally I would love a nice ARM-based notebook, but other factors like housing, keyboard and display are often better on Intel devices. Until ARM vendors step up and produce a high-quality device, like Google did with their Chromebooks, I don't think any manufacturer will move past the thinking that ARM is for cheap devices only.

My god, I could already live with an Apple A7-level CPU and a modern E-Ink display, which I have in my ebook reader - a reader that can also run Linux and all the tools I really need for work, fast enough not to limit or annoy me. The only reason I don't use it as a daily driver is the missing physical keyboard.

But you have to understand, I spend roughly 80% of my time with just two consoles: vim and a command window for running scripts and git. So I'm not the norm.

But what about the norm?

Germany has a magazine called c't, which recently published an article on what life with a smartphone as your work PC looks like. Speed wasn't the concern, because the ARM CPUs were powerful enough to run a browser or office software on a full-screen monitor without problems. Connectivity was the only problem, and it could be negated with the right smartphone.

There was also an online article about a reporter's test drive: he switched from his notebook to a smartphone and a Bluetooth keyboard for his mobile work setup. He liked it, except that it was harder to use on his lap and he often needed a table to work. (I think it was Engadget, but I can't find it at the moment and I've run out of break time, so please look for yourself.)

TLDR: ARM is the new AMD.


I am seriously considering ARM too. As a daily driver they are more than good enough. I do need a lot of power, so my laptop is just a terminal to a server plus some desktop applications, mostly CLI-based.

ARM wins in terms of low heat and low energy consumption, which are huge plusses IMHO. With a bit of effort, they can also win in openness. They will be powering e.g. http://wiki.vero-apparatus.com/ARM64OpenLaptopConcept and http://pyra-handheld.com/

Companies should realize that design wins over features. If anyone comes up with a developer-oriented laptop that (i) has top-notch Linux support, (ii) can be upgraded/expanded/serviced, (iii) has a good keyboard, and (iv) has a matte 4:3 screen, they'll gain loads of backing. With quick SSDs and cheap RAM, CPU speed is not that relevant.


I think the problem is that nobody has implemented/opened up a standard boot system... There's no equivalent to UEFI/BIOS for ARM. Which is a shame, because it prevents some of the variety of OSes targeting a broad range of devices that we see on x86-based platforms.

All of that said, I've been hoping to see something compelling in an ARM-powered laptop for a while... Also, with the likes of the Transformer tablet/netbook setups, I'm really surprised nobody has done an interface for a smartphone docking with and powering a laptop yet (at least that I'm aware of), carrying the same screen/keyboard/touchpad across multiple generations of mobile device.

Such a beast would have to be able to run at least 2x 1080p displays without much lag and several applications. It would probably need a means of docking with active cooling while docked. It would be an interesting design challenge.


I've heard coreboot is being ported to ARM. I can't find the source now. Some of the low end Chromebooks will be quite interesting when this becomes a reality.


I am not optimistic about the future of ARM chips. Intel is working hard to take over the cheap tablet market. There are more and more Intel Android tablets coming out. I don't see ARM having much success in the notebook market - even Chromebooks are using Intel more and more. If Apple weren't designing their own custom ARM chips, I would think ARM's days were numbered.


Intel has spent over $7bn subsidizing chips for tablets and Chromebooks (about $50 per device!). The subsidies are being phased out now. You'll want to watch what happens to actual levels of Intel adoption once the subsidy is gone.

http://appleinsider.com/articles/14/11/16/after-losing-apple... (there are many articles from other publications saying the same thing)


    Personally I would love a nice ARM-based notebook
Then you can install RISC OS on it. Finally, someone will have caught up to 20 years ago.


>Don't worry. AMD was just replaced by ARM (and the manufacturers who use their designs)

Maybe for mobile, but even AMD is better than ARM for most server tasks. Maybe a new generation of higher performance 64-bit ARM chips will change this...


Unfortunately, Microsoft has kind of killed ARM's entrance into Windows notebooks now that Windows RT is essentially dead. And Google - Google of all companies, with their architecture-agnostic Chrome OS - has also helped push Intel into its near-monopoly in Chromebooks. At least with Microsoft it's somewhat understandable why it would kill Windows RT - no compatible apps. But Chrome OS?!

Google should be pushing ARM in notebooks like crazy. If ARM doesn't end up with at least 70% market share in Chromebooks (given the multiple ARM chip makers, just as Android has the majority of market share in mobile operating systems), then I'll consider that a major competition failure.

Granted it's not all Google's fault for pushing Intel. Intel managed to enter the Chromebook market with some shameless lies:

1) First lie: promoting the new Intel Chromebooks as "Haswell Chromebooks". I have a good memory, and I remember that at the time most people thought that basically meant Core i5 Chromebooks - at least for a while.

2) Second lie: Those chips were Haswell Celerons, but at device price points of $200-$250 it would've been impossible for Intel to sell them profitably. So they sold them below cost, giving people the choice between what is essentially a $110 chip and a $20 chip in devices costing roughly the same ($250). So of course the choice was made for them: people would rather get the $250 device with the $110-value chip than the one with the $20 chip.

It still pisses me off to no end that governments aren't taking action against Intel over this. It's highly monopolistic and anti-competitive behavior - no different than Microsoft eating the cost of IE and bundling it with Windows. Intel has also spent over $8 billion so far subsidizing its mobile chips to be able to sell them below cost, competitively against ARM. No ARM maker, not even Samsung or Qualcomm, could ever afford to do something like that. So why is Intel allowed to use it to take the competition out of the market this way?

3) Third lie: After people got used to $250 Celeron (Haswell) Chromebooks, Intel did a bait and switch, replacing those more powerful Haswell Celerons with Atom-based Celerons, which are no more powerful than ARM chips - but people now think those "Celerons" are.

It gets worse. Because people remember that the ~$110 "Celerons" were relatively powerful (thanks to the Haswell/Core architecture), Intel is now starting to charge that much for the Atom-based Celerons, even though, as I said, they are the equivalent of $20-$30 ARM chips. But this is what a monopoly gets you (at least in Windows PCs and Chromebooks, where the ARM competition has been all but wiped out).

The lies also continue with what Intel is now calling its new Atoms - "Braswell" - as if they had anything to do with Haswell or Broadwell.

Intel has become (or perhaps has always been; I've only been following this stuff closely for the past few years) an incredibly shady company. And it saddens me that more people don't see it, and that Intel gets away with stealing ever more market share in nefarious and anti-competitive ways.


I mostly agree regarding Intel... A lot of this is pretty shady. I will make one exception, though, to your comment about MS eating IE's costs.

When MS started including IE with Windows, every other graphical OS included a browser (OS/2 Warp, and early Linux distros). I think the irritation was how deeply they embedded the browser, which does make some sense, as it allowed them to create CHM help files, which is a pretty nice format, as well as some of the push desktop stuff early on, which I kind of liked. The downside is that support for newer browsers on older OS versions was effectively made into a cliff for IE.

I'm glad that Chrome and Firefox carry as much weight as they do. Seeing a roughly 1:1:1 split among the top three is best for the larger community. The downside is how long it takes for an IE version to ship (still), which means that IE, while ahead at launch, quickly falls behind and becomes our least common denominator for public websites for years.

In particular, while I don't mind using JS transpilers like Babel/6to5 to get new features today, the fact that I won't be able to drop such tools for the next 4-5 years is kind of sad. As an aside, I really wish there were a common background-worker abstraction that worked in both node and the browser... I know I can fork/cluster in node and use web workers where available, but it would still be nice to see a common interface for this stuff.


First, let me state that I hold roughly the same opinion, except that I don't think Google should push ARM notebooks; I think Samsung, as one of the biggest fab owners and also a notebook manufacturer, should push it. See my earlier reference to the high-quality device.

> Those chips were Haswell Celeron, but at those device price points of $200-$250, it would've been impossible for Intel to sell them profitably.

If I'm not mistaken, these are mostly production rejects that aren't good enough to pass regular QA but still run with clipped wings.

> It still pisses me off to no end that governments aren't taking action against Intel over this.

*snort* If that is all, you seem to be much happier with your government than I dare to dream of.


The architecture of the Chromebook should allow Google to change their CPU preference at will. If Intel is producing superior price/performance then it would be stupid of them to pass it up. Having that flexibility will keep the pressure on Intel.


That's the way markets that rely on heavy CapEx work. Too much money, too much power.


I don't think connectivity is as much of a problem as RAM is.

And once you have just enough RAM and swap, you realise that the biggest problem is organising the open tabs in your browser.

We have just reached the point where computers can finally fully assist us. The problem is in the software. There just isn't a way to manage our shared computer memory.


I find AMD's decline a little sad, but Intel has plenty of reasons to keep improving their products. Even though their fab tech is ≈18 months ahead of everyone, they can't seem to get a foothold in the mobile space. Cheap, low-power, ARM-based SoCs dominate mobile. ARM-based laptops are making inroads (Chromebooks), and there's a good chance they'll start getting into the server market. Intel makes the fastest, most efficient x86 CPUs, but fewer and fewer people are stuck on that architecture. If nobody needs x86, Intel's only differentiator is their fab advantage, which they could easily lose.


Actually, Intel has very little reason to improve its chips' performance. As we've seen basically since Ivy Bridge, their focus has mainly been dropping the power consumption of their chips, not increasing overall performance. (Sure, they've increased IPC a bit more, but does it really count if the actual shipping chips then come out with a lower base frequency than the previous generation, making overall performance at best stagnant and at worst significantly lower than before - as with Core M?)


Not totally true. Intel obviously builds its chips to fit its markets, and while the consumer chips have been improving mostly in power consumption (because of the shift to laptops and the miniaturisation of those laptops), their server chips, which have no such constraints, have been improving in performance as well.

For example, Intel will happily sell you an 18-core 2.3GHz Haswell Xeon with a design power of 145W, or a 10-core 3.1GHz processor that puts out 160W.
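
As a rough back-of-the-envelope comparison (a sketch only: core count times clock is a crude throughput proxy that ignores IPC, Turbo, and memory effects):

    # Crude aggregate-throughput comparison of the two Xeon configs above.
    # core-GHz ignores IPC, Turbo, and memory effects -- illustration only.
    for cores, ghz, watts in [(18, 2.3, 145), (10, 3.1, 160)]:
        core_ghz = cores * ghz
        print(f"{cores} cores @ {ghz} GHz: {core_ghz:.1f} core-GHz, "
              f"{core_ghz / watts:.3f} core-GHz/W at {watts} W")
    # 18 x 2.3 = 41.4 core-GHz at 145 W beats 10 x 3.1 = 31.0 at 160 W for
    # embarrassingly parallel work; the 10-core wins when per-thread speed matters.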


AMD's fate is a little dramatic. The ATI merger wasn't pragmatically useful, but a financial burden. The idea of having CPU/GPGPU in one SoC was good[1], but AMD couldn't flip the market with their products at that time. They put a lot of effort into swimming against the mainstream, and now that the mainstream is going their way, they've lost steam while Intel seems unstoppable[2].

[1] AMD SoC architecture reviews showed originality and value, but you needed a different software stack to take advantage of it. Without that, it's just slower, hotter silicon. I think that's how Mantle started. Again, it's not a competitive advantage anymore now that Vulkan is a standard.

[2] I may be wrong, but I don't think ARM SoCs are really a threat to Intel. They did eat a nice chunk of its lunch, though; Intel's IP and assets just seem so huge...


The server market is going to get interesting. Right now Intel wins relatively easily against the first ARM servers, but you can tell from their recent developments that they feel the pressure (Atom C2000-series, upcoming Xeon D, ...)


Yeah, that's the issue.

Nobody buys desktops anymore. Profits are falling.

Mobile is the new competition area. And if Intel wants to sell x86 there, they'll have to earn it.


"Nobody buys desktops anymore."

Nobody buys desktops anymore because they haven't gotten faster in any real way in the last 5 years. Probably about 80% of Intel's product line can be beaten by a 5-year-old Intel machine. Sure, at the absolute top there have been a few percent gains, but most people don't seem willing to buy a couple-thousand-dollar machine to gain a few percent. Sure, it may be more power efficient, but I don't think most people buying desktops really care if the CPU draws 30 watts or 80 watts, as it's plugged into the wall and the cost savings over years aren't enough to pay for the upgrade.

Intel could be pushing features into the desktop space (ECC, for example), but that would further cannibalize their high-profit chips.


Besides, performance is not a marketing driver anymore, since mainstream usage doesn't require anything faster than a 5-year-old CPU, except maybe for HD video. The CPU hasn't been the bottleneck for ages; memory/storage bandwidth is. With heat being a problem, manufacturers improve where it can be seen.

An SSD has more impact on your life than the latest CPU. I've seen it numerous times. My neighbor's latest Sony i5 laptop doesn't hold a candle to my uber-aging ThinkPad simply because of this (and an OS that doesn't accumulate crap over time).

And people bought desktops because that's how it was, under the slightly fallacious upgradeability promise of towers. But a fanless mini-ITX box would please anybody in less space. I don't know how important this is for people - do they prefer a dedicated area for a large box vs. a tiny iMac / VESA-mounted box somewhere?


"Beside performance is not a marketing driver anymore since mainstream usage doesn't require anything faster than 5yo CPU"

Because software is constrained by the lack of performance improvement.

If someone managed to double single-threaded performance every 18 months for the next 5 years, I can assure you that it would force an upgrade cycle - if for no other reason than that FF/Chrome/websites would bloat up enough that people would have to upgrade.

It's not like people sat around for hours waiting for their 286s to boot, or to print word-processing documents. Faster technology enables "features" which suddenly people cannot live without, which drives the upgrade cycle. There are plenty of technologies which are not yet "mature" and could benefit from more single-threaded computing power.

Speech recognition, compression, and so on are algorithms which are tuned for available computing power rather than for the best compression ratio or the most accurate recognition.


Agreed, but a few things:

1) I firmly believe we've reached a plateau; I'm having a hard time imagining what new complex feature people would need that much. That's anecdotal and subjective, but even amazing things like computer vision or speech recognition ended up on the gimmick side of things to me. The web is becoming bloated for no reason; old websites had more content and less chrome. Fast web is easy: no JavaScript, just text. PS: bloated websites are enabled by more capacity.

2) Speed is not everything; there's a sweet spot between instantaneous and unusable. Most people aiming for speed are after user conversion, in the fear that response time would lower their numbers. For real usage, speed is not a real factor as long as it's predictable and meaningful. I have machines with jaggy and unreliable behavior which annoy me more than my old 350MHz machine does.

3) Hardware constraints are good. Without limitations nobody would have cared about space- or time-efficient algorithms (JPEG, MP3, or even ZIP). See how late Microsoft has been in trying to improve the storage use of its OS.

All in all it goes both ways.

I'm really open to ideas, but I'm failing to see what I'm doing now that I couldn't do back in the Win2k days (except for HD video and 3D). I used to render CGI animation, compose multitrack audio, abuse Photoshop...


I really, really wish ECC were available on desktops and laptops. Why are my programs and data less important than a server's? This is an area where AMD could catch Intel sleeping if they implemented it across the line, but unfortunately I don't see a way to make the public care enough to count.


IIRC some ASUS boards for AMD desktop CPUs support ECC (unregistered). I discovered this after I had already set up my ZFS homebrew NAS box - but after all the trouble I had with it, I'm going back to Synology. And an FX-8350 is plenty fast for most uses... especially if you are using all the cores (running server VMs, Docker, etc.).

All of that said, a Core i3-5010U has almost the same single-core performance and 40% of the multicore performance while using less than 1/8 the power... I can now just run two systems, one for the server and another for my desktop, and still use less power. After getting said CPU in a Brix box for my HTPC, I'm considering such a move for my next home desktop (still a year off though).


I remember when AMD offered the best performance per dollar: it took Intel a long while to catch up. They then released the Athlon 64 (which I was the proud owner of) and, again, Intel had to play catch-up. Since then, I haven't touched AMD because they simply became irrelevant.

I really hope AMD pushes the industry ahead once again. Though, they haven't had the best track record when it comes to power and heat.


The hope for AMD is to have enough resources to endure the gap until they catch up at 14nm. Lithography's future beyond 14nm seems bleak at the moment. Like the GHz wall, this could be the nm wall. Supposing Intel and AMD were working at the same lithography scale, it would be interesting to see what happens.


AMD needs to be acquired by someone with a lot of money so they can at least use the new process nodes as they become available from other foundries. Even using Samsung/GF's 14nm FinFET process this year would do AMD's chips a lot of good. At 28nm, forget it.

But that doesn't mean Core-M is a "good" chip. I would stay away from it. The submitted article does a decent job of explaining why. You're much better off with a cheaper (yes, cheaper!) Broadwell Core i5 at 15W (or whatever its TDP is for this generation) than this chip.

Also, correct me if I'm wrong, but "Broadwell" only shipped as Core-M in 2014, not in any other version. And it didn't ship in September; I think it was more like December, and only in a couple of devices that I think everyone forgot about.

Broadwell "technically" shipped in 2014, but 2015 is actually the year of Broadwell. I imagine the same will happen with Skylake - 99% of Sylake devices will appear in 2016.


The x86 license is not transferable, so if AMD is acquired, then bye-bye AMD x86 chips.

Core M allows passive form factors, which to me is super important - having a notebook that doesn't sound like an airplane in the distance under full load, as is the case with my rMBP and Zenbook Prime, is a major benefit. No noisy interruptions when I need to focus on solving some difficult problem and am most sensitive to external distractions.


This gets mentioned a lot but I find it impossible to believe that whoever buys AMD can't also get an x86 license from Intel.

You do realize Intel also has to get an x64 license from AMD, right? So unless Intel is fine with not making 64-bit chips anymore, I think it will continue to give AMD that license...


It really depends on what the license agreement between Intel and AMD is. We know they cross-license almost all their architectural stuff to each other; what are the exact conditions and consequences for acquisitions we can only guess.


> x86 license is not transferable so if AMD is acquired then bye-bye to AMD x86 chips.

Why? If AMD gets acquired, that might just mean that the buyer now holds AMD stock. I can't see how that could possibly affect the license agreement.


Carrizo was supposed to be on 20-22nm originally; however, they seem to have dropped the ball completely :-( Not to mention that it remains to be seen whether consumers can actually get their hands on the low-power versions of Carrizo - I wanted to buy a low-power Kaveri for a NUC-style mini PC, but there was never any way to get one via normal sales channels... I can live with 12W, as it packs faster graphics than Braswell/Core M and possibly allows 2GHz+ DRAM, so memory is no longer such a painful bottleneck as with Intel chips.


AVFS is surely as susceptible to OEM hardware design as Intel's Turbo solution. They're both doing the same thing: stepping performance up and down in response to thermal conditions and thread counts.

The only slight advantage to AMD is that it can be a little more fine-grained, but a crappy thermal design from an OEM is still going to result in a chip that throttles itself, just like we're seeing with Intel. We might end up closer to the optimal balancing point with AMD, but it's still going to be wildly different performance from manufacturer to manufacturer.
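
To make the shared idea concrete, here's a minimal sketch in Python of such a governor (a toy model, not AMD's actual AVFS or Intel's Turbo implementation; all numbers are invented):

    # Toy frequency governor: step the clock up while there is thermal
    # headroom, step it down hard when the package/skin limit is reached.
    # Not either vendor's real algorithm -- just the shared idea.
    F_MIN, F_MAX, STEP = 800, 2900, 100   # MHz
    T_LIMIT = 95.0                        # deg C, assumed package limit

    def next_frequency(freq_mhz, temp_c):
        if temp_c >= T_LIMIT:             # throttle: shed heat quickly
            return max(F_MIN, freq_mhz - 4 * STEP)
        if temp_c < T_LIMIT - 10:         # headroom: turbo up
            return min(F_MAX, freq_mhz + STEP)
        return freq_mhz                   # hover near the limit

    # A chassis with poor heat sinking hits T_LIMIT sooner, so this loop
    # spends more time at low frequencies: same chip, slower laptop.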


The thing is, the performance isn't exactly erratic.

It's purely a function of the cooling available and the target skin temperature. This doesn't mean you can't "trust" it.

It just means you should do what you should have *always* done, which is buy machines in metal chassis (and of reasonable size) if you want passive cooling, or get a machine with a decent thermal design and active cooling (fans).

Trying to spin this into FUD seems a little desperate.


That seems like a good heuristic. However, the Yoga 3 Pro has both a metal base and a fan, and it's the worst performer. That's because it has a really low target temperature.

They have to do this because there are so many different ways to hold a Yoga 3 Pro that they cannot allow any part of the laptop to get uncomfortably hot in the hand.


Yes, the author expects something unrealistic and then presents his unfulfilled expectations as the problem. There's no technical reason that every notebook has to have the same cooling technology. That the processor can adjust to the cooling available is actually a good thing. And mobile phones behave exactly the same way.


It doesn't matter whether it's unrealistic from an engineering perspective; what matters is the end-user experience.

I've heard technically clueless people complain that their new high-end Core-M laptop is slower than their old laptop when they just do work (IE + PowerPoint + Excel + mail), and they are frustrated.

Right then, it doesn't matter that the new laptop weighs only half as much as the old one and still has much better battery life; work is still done mostly at a desk with power nearby.


The top speed is actually the function of the heat generated, and you can generate more when you have better cooling, as simple as that. If some "expert" claimed to users that by looking at the processor name they can know how fast something is, it's the "expert" who isn't one.


> top speed is actually the function of the heat generated

That knowledge is rather dated and probably goes back to the Pentium 4 era. Top speed is a function of heat, but also of the process technology and the concurrency available+used. It's best to wrap your head around it by going back to first principles:

> The dynamic power consumed by a CPU is approximately proportional to the CPU frequency, and to the square of the CPU voltage:

> P = C V^2 f

> where C is capacitance, f is frequency, and V is voltage. [1]

Capacitance goes down with smaller transistors; voltage and frequency go up with higher sequential performance given the same architecture. So, by replacing the need for sequential performance with more concurrency (vectorization/multicore/GPU acceleration), the need for power goes down dramatically. Process technology is another big factor.
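
To put rough numbers on that equation (C, V, and f values below are invented for illustration, not measurements of any real chip):

    # P = C * V^2 * f, the dynamic power equation quoted above.
    # All values below are made up purely for illustration.
    def dynamic_power(c_farads, v_volts, f_hz):
        return c_farads * v_volts ** 2 * f_hz

    one_fast = dynamic_power(1e-9, 1.2, 2.0e9)          # one 2 GHz core
    # Halving frequency usually allows a lower voltage too, so two
    # half-speed cores can deliver similar throughput for less power:
    two_slow = 2 * dynamic_power(1e-9, 0.9, 1.0e9)      # two 1 GHz cores
    print(one_fast, two_slow)   # ~2.88 W vs ~1.62 W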

On a related note: comparing ARM with Core M using only benchmarks that exploit concurrency is not really comparing apples to apples - I expect Core M to have higher sequential performance - so if you want the same performance on ARM, you'll need more concurrency in the software.

[1] http://en.wikipedia.org/wiki/CPU_power_dissipation


I believe you mean parallelism, not concurrency, right? Concurrency is dealing (correctly) with more things at the same time; parallelism is doing more things at the same time.
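
A toy Python illustration of the distinction (assuming CPython, where the GIL keeps threads from executing bytecode in parallel):

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def spin(n):
        # CPU-bound busy work
        while n:
            n -= 1

    if __name__ == "__main__":
        for pool_cls in (ThreadPoolExecutor, ProcessPoolExecutor):
            start = time.time()
            with pool_cls(max_workers=4) as pool:
                list(pool.map(spin, [10_000_000] * 4))
            print(pool_cls.__name__, round(time.time() - start, 2))
        # The thread pool is concurrent but serialized by the GIL (one
        # core at a time); the process pool is parallel (four cores), so
        # it finishes roughly 4x faster on this CPU-bound task.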

The Pentium 4 had a very deep pipeline, which got very hot and wasn't fast relative to its clock. The Pentium M went back to more instruction-level parallelism via shallower parallel pipes (multi-core came later).

I don't think ARM is competitive with Intel even in parallel benchmarks; Intel does performance well and doesn't play many tricks with it. ARM's advantage has always been power efficiency and cost.


> I believe you mean parallelism, not concurrency, right? Concurrency is dealing (correctly) with more things at the same time; parallelism is doing more things at the same time.

Yes, parallelism makes more sense in this context. I wanted to express it in a more general way, because running multiple programs on different cores at the same time I understand as concurrency, which is also made more efficient by parallel performance rather than sequential. IMO the terms are a little fuzzy at times, i.e. it's not that easy to convey the ideas behind parallel programming without ambiguity.

Edit: About ARM vs. Core M in benchmarks: After looking at it again, apparently the two are still quite far apart, you were right about their performance numbers: [1]

[1] http://www.fool.com/investing/general/2014/09/22/intel-corpo...


I don't think that anybody could reasonably read top speed being a function of heat as top speed only being a function of heat. I wouldn't really associate that idea with the Pentium 4 era either, since the problem then was that most people weren't thinking about power use at all.

Even that equation you gave says that if you increase P you will be able to increase f, though the interplay between f and V means that your increases are sub-linear.


Let's look at the quote again.

> top speed is actually *the* function of the heat generated

Emphasis mine.

> Even that equation you gave says that if you increase P you will be able to increase f, though the interplay between f and V means that your increases are sub-linear.

I don't get your point. Did I write that performance and power are not related?


I feel this is one of those rare cases where some feedback about the downvote I just gave is called for. I've downvoted you because someone has asked you to apply the principle of charity to someone's post, and you've chosen to dig in instead. Even if you want to continue this otherwise-interesting discussion, please don't do it this way.


Thanks for the explanation. I didn't read the GP's post as a call for charity toward the OP. Looking at this phrase..

> I don't think that anybody could reasonably read top speed being a function of heat as top speed only being a function of heat

.. I still don't see it that way. My post wasn't meant as an attack on the OP; rather, I wanted to expand on some points about chip performance that seemed unclear to me. The post I'm quoting, however, did read as uncharitable to me.


Again, I don't understand how you got your reading. If something is the function of something else, that means the function is singular, not that the function only takes a single argument. Obviously there are more things that are a function of heat than just clock speed, so the OP was wrong in that sense. That's the sort of mistake I make in revision all the time, so I overlooked it. But even taking it literally, it doesn't actually hurt the point the OP was trying to make.


Looking only at the specs, it looks very similar to a 5th-gen Core i7 U-series part, just with a significantly lower TDP.

http://ark.intel.com/compare/85215,84672

If that's the case, no wonder it gets bogged down by thermal throttling.


The AnandTech article is spot-on, I believe.

It's not a processor that is meant to do sustained heavy work (compiling, gaming, simulations), but to attend to small bursts of CPU load and then go back to low-power mode.

It seems to me it would be more than enough for office work, remote ssh, (non-Flash) web browsing, e-mail, etc.

Anyone buying a thin and light device like the ones mounting this processor and expecting the performance of a desktop-replacement laptop has unreasonable expectations to start with.


My thoughts exactly. The same cited article "with sobering conclusions" from AnandTech says exactly that.

More than erratic, I would say it's very "flexible", since it can be used in different devices with different requirements in terms of performance and skin temperature.


The problem is that, except for Asus, which usually has great cooling designs, most Core-M based devices are going to overheat, because Intel overpromised on the capabilities of Core-M (5W TDP with 2.9GHz Turbo!).


I'd like to see AMD try to differentiate by being the hardware vendor that requires zero binary firmware blobs anywhere in the stack and provides reference boards where coreboot is king. Being the open choice for geeks and security-paranoid governments, with open firmware and drivers for everything they build, is probably the biggest thing AMD could do right now to carve out market share against Intel that doesn't involve tons of R&D.


This chart suggests there's no competition at the moment: http://cpubenchmark.net/power_performance.html

I'm always wary of the charts on cpubenchmark, however. To a desktop/laptop user, single-core performance is usually much more important than total performance, as they're not using all the cores. Even for my use case, software development, compiling uses multiple cores but doesn't take any significant time these days anyway (C# in VS, JIT for .NET and JavaScript), so I tend to check:

http://cpubenchmark.net/singleThread.html

    AMD A4 Micro-6400T APU: 607
    Intel Core M-5Y10c @ 0.80GHz: 1,115

Almost twice the (benchmark) speed for the Core M. I'd like to see some real-world usage reviews, but it's still looking like Intel is the best choice if you can afford it.


It's easy for Intel to "win benchmarks". Those benchmarks last only a few minutes, which is perfect for Turbo. Run a benchmark that lasts an hour, or even 30 minutes, and then you'll see the limitations of Intel's chips with Turbo.
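
A minimal sketch of such a sustained test in Python (a hypothetical CPU-bound hashing loop; real benchmark suites do this more carefully):

    import hashlib, time

    # Hammer one core for 30 minutes and log throughput per minute.
    # On a chip that leans on short Turbo bursts, hashes/minute drops
    # once the thermal budget is exhausted.
    data = b"x" * 4096
    end = time.time() + 30 * 60
    while time.time() < end:
        minute_end, ops = time.time() + 60, 0
        while time.time() < minute_end:
            data = hashlib.sha256(data).digest()
            ops += 1
        print(f"{ops} hashes this minute")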

And it's not about affordability in the case of Core-M; it's about a poor performance/price ratio. You get very little for a lot of money, all to get that 5W TDP - which is also misleading, because those chips need to be put into devices designed for more like a 10W TDP, not 5. That's why most Core-M devices overheat. Intel overpromised on the TDP, and now OEMs get the blame for it.


> This chart suggests there's no competition at the moment: http://cpubenchmark.net/power_performance.html

There isn't. AMD doesn't have anything that comes even remotely close to a 3-6W TDP, which is the Core M target. Furthermore, they don't have the fab technology, so a 3-6W AMD part would necessarily be even slower and more thermally constrained than the Core M.


Let's be honest: the most computationally intensive thing anyone does is open Chrome with 20 tabs, each with five different Flash ads.

For the average user, it shouldn't matter what CPU they have in their machine. If their computer is slow, it's because that CPU is busy executing crapware.


Yet reviews of Core M machines by average users almost all point to sub-par performance, including on premium products such as the Lenovo Yoga 3 Pro, for which they paid $$$.

Leaving this impression with users is not good for your brand in the long run.

I think Intel optimized too much for battery life and TDP with Core M, and it was also priced for and targeted at the wrong high-end market.


It's tough medicine but it needs to happen so that Microsoft, Apple, Google and all us programmers stop shipping slow, bloated crap. (Google can start by enabling click-to-flash in Chrome by default)

Imagine if you installed Windows 98 on one of these things. It would fly.

The CPU is perfectly capable of the workloads that users want to throw at it. The software isn't yet.


It's a GPU problem.

The difference between Win98 and OS X 10.10 is screen resolution, animations and virtual desktops.


I have a Core M machine, and performance indeed was poor if you didn't pay attention to what you ran. After getting rid of Flash, performance is actually very good and I am happy with it.


More like OEMs underestimated people's performance requirements.


Frankly, I think people are far too concerned about these performance numbers; I think they're excellent. It's on par with the Sandy Bridge CPU in my 2011 MacBook Air, which is an excellent performer. In fact, I have plugged that MBA into a 27" external display and been quite happy with it.

Combined with 8GB of RAM, a very fast SSD, and a decent iGPU, I think performance will be very satisfactory indeed.


I'm still happy with my 2012 Air, so I get where you are coming from.

With less power consumption, it would be even better.


CPU performance in 2015 on par with a CPU from 2011 is terrible, even considering the lower power consumption.


The CPUs in the Air haven't really gotten that much faster since Sandy Bridge. The 2013s were actually a slight step down from the 2012s in many benchmarks.


That is Apple's choice for the Air (and they are notorious for slow refreshes in general). But the article is talking about the CPU industry as a whole, which is why I strongly disagreed with the "good enough since 2011" line of thought.


AMD seems to only succeed with outside help. The K6 was all NexGen, the K7 was practically a DEC Alpha, and the new mobile chips were only possible due to the ATI takeover. Sadly there is no one in sight to buy out, poach from, or take over to make the next performance bump. Intel and AMD bought everything that moved. AMD even picked up the Cyrix leftovers from National Semi (Geode). What is left? The insignificant VIA (Centaur)? Vortex86 (Rise mP6)? Not to mention they seem to be taking ages to come out with ARMv8 parts; all they do is talk and produce slides, with no products in sight.


When you're competing against someone that's 70 times larger, you take all the help you can get :). AMD has, however, provided its fair share of innovation to the x86 ecosystem over the years - AMD64, integrated memory controllers, launching multi-core chips, and the current work on integrating GPUs and CPUs (HSA and shared memory spaces).


You make some great points; I had not previously connected those dots...


Core M chips cost $281. That seems crazy high to me, especially when compared to ARM.


I still remember the days when AMD was clearly outperforming Intel. But these days are over.


Yeah. I still build AMD because they're the underdog (and I'm familiar with them, and I run integer-heavy multithreaded apps, which is their best use case).

By some chance I have managed to find myself with an AMD Athlon 760K system with 32 GB of RAM, an SSD, and an R9 290X GPU. It's pretty much all top-of-the-line components (144 Hz monitor, too!) around a budget AMD chip. This is a 32nm chip with 2x2 MB of L2 cache (there's no L3 on the Athlon line these days), but coupled with a fast GPU and crazy amounts of RAM. The CPU is watercooled, too, with a double-sized radiator and two 120mm fans. And just to be stupid, I picked up a Killer Bigfoot NIC as well.

I can't say I've noticed any real limitations of the CPU itself. I know it's hampered pretty badly, but I think I'm going to hang on to it as long as it's relevant and see what happens. I don't find myself limited by it that much. It plays Shadow of Mordor at the highest detail settings (I bump them down a bit to get 100+ FPS).

It's sad to see them struggle, but Intel has superior technology right now, and everyone depends on technology to survive. I hope they can stay relevant for the next couple of years until they get another home-run platform out the door.

HSA is very interesting to me, but I don't know how important it will be to others. A few algorithms that I write can burn a lot of time being transferred to GPU and back. A gigantic FPGA is what I need, but I can't afford one of those.


What do you guys think of the new 12" MacBook, which carries one of these CPUs, as a development machine? Is it a really bad choice?


What are you developing? Simpler web apps with all your heavy tests running on a remote server? Sure, no problem at all. Heavy-duty 3D applications in C++ with large data sets and long compile times? Not so much.

Basically, spend some time logging how much you stress your CPU and GPU during a normal day of development. If the result is "basically never rises above 5-10%", then the MacBook will be fine.


I see your point, thanks for the feedback.

Although my work is mostly infrastructure-based nowadays, I use Vagrant & Docker to test things locally, so I think I should move my worry over to the 8GB memory part :)


The whole article is based on a false premise. If those different OEMs had put in the heat sinking required for the AMD part, the significant performance differences they saw among the Intel chips would disappear, and they'd whup the AMD part because they'd be running Turbo all the time.


Perhaps this is more of an opening for non-x86 mobile chips (MIPS, ARM, etc.) than for AMD.


I wonder if AMD will perhaps bank on ARM-centric models as low-power winners.



