Apple Purchases Another Processor Design House (arstechnica.com)
62 points by razerbeans on April 3, 2010 | 19 comments



Question for the CPU-savvy people here:

Would there be any advantage in making a desktop computer that ran on the ARM architecture (instead of x86)?

Power consumption seems to be the obvious one; would it be possible to ramp up ARM's performance to match modern x86 while still keeping a power-efficiency bonus?

Maybe with OpenCL/Grand Central/etc. you could have an ARM-based desktop that used minimal power most of the time, spreading the work over many slower low-power cores, and used the GPU when heavy lifting is required?


I started off writing a reply here, but it became way too long. So I converted it into a blog post, and that became way too long, too.

So I apologize in advance for the super-long post, but I hope it'll clear up your questions: http://neosmart.net/blog/2010/the-arm-the-ppc-the-x86-and-th...

EDIT: Submitted. http://news.ycombinator.com/item?id=1238930


Apple will switch the Mac to ARM if and only if Microsoft does it, too. Let's set aside the technical questions for a moment--the ability of a Mac to painlessly virtualize a Windows VM or boot into Windows is a key feature.

Also keep in mind that x86 has already defeated several technically superior ISAs in the marketplace by a combination of lock-in and Intel's ability to keep performance high on x86 parts.


Apple seems to be keeping a close eye on LLVM, which would allow architecture-independent binaries with relatively little penalty.

I'm not an expert in the field, but I suspect that x86 will continue to offer the best value in terms of performance relative to cost, which is the most important factor to consider in desktop computing. However, having the option to run the same binaries on other architectures opens the door for using other architectures on handheld, low-powered, or small-form-factor devices, where factors other than performance matter and Apple may be able to compete favourably.


I think it goes a bit beyond "keeping a close eye on"...

Apple's employed Chris Lattner (the creator/primary author of LLVM) for a number of years now, and I think LLVM is widely regarded as being a pretty major part of their plans going forward. (And it's not just Apple, either -- Nvidia's using LLVM for their OpenCL implementation, for example.)

As for x86 vs. ARM... in the high-performance single-core/multi-core area (up to, say, 8 cores or so), I think x86, ugly as it is, is likely to remain entrenched. There have been a hell of a lot of dollars and man-hours pumped into making it (single-threaded) fast over the last couple of decades, and in that area it may actually have some technical advantages over a cleaner ISA (such as ARM, though ARM isn't exactly the epitome of RISC purity). Two things off the top of my head: a) the instruction encoding, as insane as it is, allows pretty dense code, meaning good I-cache utilization, and b) the two-operand instruction format simplifies dependency-checking and forwarding logic, for which hardware costs run O(n^2) (where n is proportional to operands-per-instruction and issue width).
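
To put rough numbers on that O(n^2) claim, here's a purely illustrative C sketch -- the cost model and machine widths are my own assumptions, not figures from any real design:

    #include <stdio.h>

    /* Crude model: every operand specifier in an issue group may need to be
     * checked against every other one, so dependency-checking/forwarding
     * cost grows roughly as (issue_width * operands_per_insn)^2. */
    static long dependency_checks(int issue_width, int operands_per_insn)
    {
        long specifiers = (long)issue_width * operands_per_insn;
        return specifiers * specifiers;
    }

    int main(void)
    {
        for (int width = 1; width <= 8; width *= 2)
            printf("issue width %d: 2-operand ~%4ld checks, 3-operand ~%4ld\n",
                   width,
                   dependency_checks(width, 2),   /* x86-style encoding */
                   dependency_checks(width, 3));  /* 3-operand RISC-style */
        return 0;
    }

In this model, doubling the issue width quadruples the cost, and the two-operand format runs at (2/3)^2, i.e. about 44%, of the three-operand cost at every width -- only a constant factor, but one sitting on the critical path.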

In the "manycore" area though, I think the power and area costs of decoding the x86 ISA will be a more significant disadvantage (this may have something to do with why the power consumption of Larrabee prototypes was rumored to be so enormous). Tilera, for example, makes manycore chips based on MIPS, which I'd say is a somewhat cleaner ISA RISC-wise (meaning better-suited for small/efficient hardware implementation) than ARM.

So, if manycore is indeed the future, I'd say there is definitely hope for avoiding a completely x86-monopolized world...


Something like this is almost certainly going to happen, especially as performance-critical code is rewritten to be massively multithreaded over the next few years. At that point, if you can get two or three times as many cores, running at about the same speed, on the same number of square millimeters, with the same power usage, then your computer will effectively be two or three times as fast. Once that happens, it becomes hard to justify the massive amounts of chip real estate and power consumption devoted to decoding 386 instructions, slightly accelerating single-threaded performance with huge out-of-order execution buffers, and that kind of thing.
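
One caveat worth quantifying: "two or three times as fast" assumes the workload is almost entirely parallel. A quick Amdahl's-law sketch in C (the parallel fractions here are assumed for illustration, not measured from any real program):

    #include <stdio.h>

    /* Amdahl's law: speedup = 1 / ((1 - p) + p/n), where p is the fraction
     * of the work that parallelizes and n is the number of cores. */
    static double amdahl(double p, int cores)
    {
        return 1.0 / ((1.0 - p) + p / cores);
    }

    int main(void)
    {
        const double fractions[] = { 0.50, 0.90, 0.99 };  /* assumed values */
        for (int i = 0; i < 3; i++)
            printf("p = %.2f: 4 cores -> %.2fx, 12 cores -> %.2fx\n",
                   fractions[i], amdahl(fractions[i], 4), amdahl(fractions[i], 12));
        return 0;
    }

Tripling the core count only approaches a 3x win when p is very close to 1, which is exactly why the premise that performance-critical code gets rewritten to be massively multithreaded matters so much.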

Tilera seems to be kind of betting on this today. GreenArrays is betting on it a lot harder; they appear to have taped-out silicon with 144 processors, running an aggregate of 100 billion instructions per second on 650 milliwatts (about 6.5 picojoules per instruction, if you do the division), scaling linearly down to 14μW when idle.

Things I disagree with in your description:

• It's not clear that OpenCL or Grand Central will be the software platform on which people will achieve massive parallelism for most performance-critical software. It might turn out to be something older like R, or OpenMP (see the sketch after this list), or MPI, or assembly language with manual synchronization.

• If your many slower cores aren't fast enough, the GPU isn't going to help you; it only speeds things up when your work is already parallel.
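
For concreteness, here's what the OpenMP route looks like -- a minimal sketch of a data-parallel loop plus a reduction, not tied to anything Apple- or OpenCL-specific (compile with e.g. gcc -fopenmp):

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;

        /* Each iteration is independent, so OpenMP can split the loop
         * across however many cores are available. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = i * 0.5;

        /* Each thread accumulates a private partial sum; OpenMP combines
         * them at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f, max threads = %d\n", sum, omp_get_max_threads());
        return 0;
    }

The point isn't that OpenMP wins; it's that this annotate-your-serial-code model has already been around for over a decade.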


Yeah, but other than some specialised tasks, most code will never be written to be massively multi-threaded. This is an incredibly difficult endeavour, and for many tasks it may be downright impossible. This is the Itanium thinking all over again -- let's make a CPU for which code is incredibly hard to write, and the programmers will come because our CPU is so nifty.

I think Intel has learned its lesson and is now making CPUs for the code people are already writing, not for something that may happen in the future. And it is usually worth spending a lot of silicon to speed up a single pipeline, because programmers do not have the time and resources for the incredibly difficult task of efficiently splitting their code into 100 threads.

As PG said, the most important resource to be optimised nowadays is not speed of execution, but programmer time.


Luckily, the world seems to work out such that the things that have to be fast also happen to be relatively easy to parallelize (most tasks using huge data sets, rendering, matrix operations, ...). What are the applications that have to be fast but are not easy to parallelize?


I disagree that parallelization is abnormally difficult. We just don't yet have the tools to help. It is like trying to do object-oriented programming back in the assembly-language days: to be avoided except where absolutely necessary. With appropriate library support, many personal computing tasks are amenable to map-reduce approaches.
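
As a toy illustration of that map-reduce framing (the task and numbers are made up), here's a personal-computing-flavored example in C: brighten a photo, then compute its average brightness. It's written serially, but both phases parallelize trivially, because map applies an independent function per element and reduce is an associative combine:

    #include <stdio.h>

    typedef unsigned char pixel;

    static pixel brighten(pixel p)        /* map: independent per pixel */
    {
        int v = p + 40;
        return (pixel)(v > 255 ? 255 : v);
    }

    static long add(long acc, pixel p)    /* reduce: associative combine */
    {
        return acc + p;
    }

    int main(void)
    {
        static pixel image[640 * 480];    /* stand-in for a real photo */

        for (size_t i = 0; i < sizeof image; i++)   /* map phase */
            image[i] = brighten(image[i]);

        long total = 0;
        for (size_t i = 0; i < sizeof image; i++)   /* reduce phase */
            total = add(total, image[i]);

        printf("average brightness: %ld\n", total / (long)sizeof image);
        return 0;
    }

With library support, the map loop splits across cores with no synchronization at all, and the reduce combines per-core partial sums at the end.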

Can you think of any highly serial desktop apps? All I come up with is certain spreadsheets, and the final step of assigning page breaks during document pagination.

What killed Itanium was Intel's childish obsession with clock rate and single-threaded performance. It turned out that back end software was limited by off-chip bandwidth and had already been forced to parallelize at any cost, while front end software was slowed down by trivially parallelizable graphics and signal processing. Itanium is an excellent solution, just to the wrong problem.


> Would there be any advantage in making a desktop computer that ran on the ARM architecture (instead of x86)?

The MIPS people said that they had to have a 10x advantage on some dimension, and be as good on all other dimensions, just to get folks to consider switching ISAs.

Where's your 10x? (Note that Intel is already planning to play the "many cores" game.)

Note that CPUs aren't the only thing that use power in a computer.


I'd say there's no point to it. You'd probably save a night-light's worth of power (5 or 10W) when the CPU itself is idle - Intel and AMD are getting pretty good at shutting down idle circuits, down-clocking, etc.

And assuming you made an ARM with all the artillery (FPUs, etc.) to match guns with x86, people would still be comparing it to the 3-gazillion-points-in-benchmark-X CPUs.


I disagree that Intel and AMD are getting pretty good at shutting down idle circuits. The more than 40 watts of idle consumption of the CPU alone in Intel's latest Core i incarnations is not anything remotely close to "pretty good", even with the hindsight of how things used to be 10 years ago.

Here's an interesting fact, btw: some cores in the P3 family that have a TDP of 15-20 watts can idle at about 2-3 watts - numbers and percentages that the Core 2 and Core iN architectures don't even come close to.


There are a couple of things going on there. One of them is that nowadays you have enormous caches (6 MB?) to fill out the chip and wring out performance. Unfortunately, in today's deep-submicron processes you get significant static (leakage) current, which was not a problem ten years ago. (If you put the same cache in an ARM, it will have the same power draw.)

The other is that server chips (not to mention laptop parts) are where to look for the best in power efficiency. For consumer/gamer machines with 200W graphics cards ...


I couldn't find any concrete numbers from a resource I trust, but a quick search reveals a few articles:

This claims that the idle power consumption of the Quad Core 45nm Nehalems is around 17 watts: http://www.lostcircuits.com/mambo/index.php?option=com_conte...

And this one is about a 32nm Westmere (dual core) where the entire system draws 27 watts at the wall when idle: http://anandtech.com/show/2846/4

Not great considering that the chip isn't doing anything, but definitely not the 40+ you're claiming.


Yes, there is a huge power advantage to parallel cores. This is definitely worth reading for some background on why Apple is doing what they are doing:

http://perilsofparallel.blogspot.com/2010/02/parallel-power-...

The comments on it are also really good.


ARM itself isn't particularly interested in that; that much I know - though I haven't had any "inside info" for as much as 5 months now :-).

They've spent the last few years gearing up to defeat Intel in the higher-end smartphone/netbook space.


Yes, there would be. I personally believe, from experience with both the x86 and the ARM architectures, their ISAs, and their current implementations, that the world and society - pardon the almost religious intonation - can't be delivered into an era of more efficient computing, on the energy consumption front as well as the computational power front, as long as it is held back by x86.

I truly believe that ARM will be the architecture that brings computing out of this current technological womb, and I suspect that Apple, due to all of the spotlights on them, will be the actor responsible for making the biggest push in this rebirth.

The article says "If the sudden disappearance of Intrinsity's web site is truly an indication that Apple has made another purchase, it's a clear sign that Cupertino has really big plans for ARM and doesn't see a future for x86 outside of its desktops and laptops." And I believe that today, some years after the "switch", with ARM's rapid advances, Apple doesn't really see a future for x86 in their desktops and laptops, either. I hope, wholeheartedly, that the world will soon enough be ARM powered, instead of x86 powered.


If I were a betting man, I'd bet that five to seven years from now, Apple won't be selling anything currently recognizable as a desktop or laptop PC.

When they renamed the company from "Apple Computer, Inc." to "Apple, Inc.", they meant it.


There will still be a place for Apple to sell developer machines, which would likely still be desktops and laptops. Or to put it another way, Apple engineers will probably want to use Apple computers to create Apple products.

It’s possible, though, that at some point in the future these desktops or laptops will end up with mindshare similar to their Xserve servers: pretty much unheard of among consumers who know Apple as “the iPod/iPhone/iPad” company.



