
X86's Days as a Consumer Microarchitecture are Numbered - andrewmunn
https://plus.google.com/100838276097451809262/posts/1K5RYkiTyTU
======
stephenjudkins
This is a largely vapid and meaningless prediction by someone who doesn't
demonstrate anything but the most superficial knowledge of the microprocessor
industry. Perhaps he knows something we don't, but as far as I can tell he's
only extrapolating current market trends.

Obviously Intel (once led by Andy Grove, author of "Only the Paranoid
Survive") is aware of the threat posed by ARM. If someone could explain how
Intel will fail to meet the challenges of getting x86's performance-per-watt
to match ARM's, and how this compares to the challenges ARM vendors face in
order to get raw performance up to Intel's level, I would love to read it.
However, this post offers little such insight.

~~~
andrewmunn
Hello, I'm the author of this article. I do not have a degree in electrical or
computer engineering. I'm merely stating the trends I've seen in the PC
industry over the last few years.

I am, however, a software engineer. I know that most of the perceived lag on a
modern desktop is not due to the CPU, but to inefficient I/O to the hard disk
or network. One must only look at the iPad 2 to see that it's very possible to
make a fast computer with beautiful 60 FPS animations and snappy applications
using only an 800 MHz dual-core ARM CPU. Ironically, my iPad feels way faster
than my MacBook Pro most of the time.

You don't need to be an expert in the microprocessor industry to know that the
CPU performance race is over. It's all about power consumption now, and X86
fails miserably at low-power computing. Unless you know something I don't.

~~~
Pinhedd
_I know that most of the perceived lag on a modern desktop is not due to the
CPU, but to inefficient I/O to the hard disk or network_

Most perceived lag on a modern desktop comes from excessive abstraction, which
results in poor coding practices. You could argue that IO bottlenecks or a
lack of system resources will have an impact, but that impact won't be felt
until the environment is somewhat saturated. A simple solution to the
hard-drive bottleneck is to throw a SATA3 SSD in there instead, or to give the
system more RAM to boost disk caching; problem solved. On the other hand, no
amount of system resources will alleviate a performance hit caused by shoddy
coding. This is the reason I refuse to use Google Docs: the performance is
about as good as WordPerfect on Windows 95 because of all the abstraction
insanity.

 _One must only look at the iPad 2 to see that it's very possible to make a
fast computer with beautiful 60 FPS animations and snappy applications using
only an 800 MHz dual-core ARM CPU._

The iPad 2 is about as powerful as my Pentium 4 was back in the early 2000s.
Shrinking that down to tablet size is certainly an accomplishment, but it's
not worthy of the shock and awe you present it with. It's nice to have a
device such as the iPad 2 to fill the time when you wish you had a computer,
but it is in no way a full desktop substitute.

 _Ironically, my iPad feels way faster than my MacBook Pro most of the time._

Your MacBook is a fundamentally different device from your iPad. They may feel
similar, but this is purely superficial; the underlying operations are vastly
different. If your MacBook is that sluggish, it's either because you're using
an Apple product or you've got a PEBKAC error.

 _You don't need to be an expert in the microprocessor industry to know that
the CPU performance race is over_

Yes, you do. The CPU performance race has been over for the past 5 years, but
not for the reason you think. The CPU performance race is over because AMD
choked and threw in the towel. In 2007, AMD's flagship Phenom processor was
bested by Intel's then worst-in-class Core 2 Quad Q6600 in almost every
benchmark (if not every benchmark). In 2011, AMD's flagship octal-core
Bulldozer processor was beaten by Intel's worst-in-class quad-core i7 920 from
two years earlier, which also had the added handicap of only having 2 of its 3
memory channels loaded with DIMMs. Don't blame AMD's failures on the market,
or on Intel; blame them on AMD.

The fact that the CPU performance race is over doesn't mean that Intel has
won; it merely means that Intel is the only competitor, since AMD is
effectively now a non-contender. It also doesn't mean that there is room in
the desktop market for ARM CPUs, or that desktop hardware manufacturers are
suddenly going to start writing drivers for two completely different
architectures.

While it is certainly true that ARM is gaining on Intel in the performance
space, it is still a long, long way behind, and that gap is only going to get
harder and harder to close as time goes on. This is going to be doubly
difficult when ARM manufacturers try to catch up to Intel in the
general-purpose execution department. It's easy enough to say that ARM has a
lead in performance per watt if you ignore all of the special hardware
capabilities that Intel CPUs have which are mostly absent on ARM, or if you
forget that dynamic power consumption scales quadratically with voltage and
that higher voltage is necessary to sustain a higher frequency.
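
For rough intuition, the standard first-order CMOS dynamic-power relation
makes the point (a textbook approximation with made-up example numbers, not
measurements of any real chip):

    P_dyn ≈ C × V^2 × f

    dropping from 1.2 V / 3.0 GHz to 0.9 V / 1.5 GHz:
    (0.9 / 1.2)^2 × (1.5 / 3.0) ≈ 0.28

That's roughly 3.5x less dynamic power for a 2x frequency cut, which is
exactly the regime that low-clocked ARM parts live in.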

 _It's all about power consumption now, and X86 fails miserably at low-power
computing. Unless you know something I don't._

I do know something you don't. Architectures aren't designed to scale
infinitely in both directions on the power scale, yet Intel still manages to
operate dual-core, full-featured processors in the 17-watt range that will
still destroy any dual- or quad-core ARM processor put up against them. Also,
I'm not sure how you can justify your statement "it's all about power
consumption", because for 95% of the desktop market heat is a non-issue
whereas a lack of performance certainly is. If you live in a datacenter, the
constant whine of fans and AC units can certainly get annoying, but as I
mentioned above, there are already low-power solutions to be had without
reinventing the wheel.

~~~
derleth
_Most perceived lag on a modern desktop comes from excessive abstraction,
which results in poor coding practices._

This is worthless without actual numbers, which I doubt you have. Hardware
people blame software, software people blame hardware, as it has always been,
so mote it be, amen.

~~~
v413
This isn't about blaming software people or hardware people, though.

Here is John Carmack talking about his troubles with PC performance caused by
the multitude of API layers needed to reach the hardware:

John Carmack: ... That's really been driven home by this past project by
working at a very low level of the hardware on consoles and comparing that to
these PCs that are true orders of magnitude more powerful than the PS3 or
something, but struggle in many cases to keep up the same minimum latency.
They have tons of bandwidth, they can render at many more multi-samples,
multiple megapixels per screen, but to be able to go through the cycle and get
feedback... “fence here, update this here, and draw them there...” it
struggles to get that done in 16ms, and that is frustrating.

Later in the article John expands on the thick software problem.

The article is here: [http://pcper.com/reviews/Editorial/John-Carmack-
Interview-GP...](http://pcper.com/reviews/Editorial/John-Carmack-Interview-
GPU-Race-Intel-Graphics-Ray-Tracing-Voxels-and-more/Intervi)

~~~
lutorm
That quote is a bit out of context. The paragraph starts "I don't worry about
the GPU hardware at all. I worry about the drivers a lot...". He's talking
specifically about GPU performance.

------
haberman
This reads like a person who has been saying "RISC beats CISC" since the
Apple-on-PowerPC days because of ideological opinions about elegance, and is
looking for any excuse to re-express that viewpoint.

Look, I still think microkernels are better than monolithic kernels, but you
don't see me claiming Linux is doomed just because the L4 microkernel is
running on 300 million mobile phones worldwide
(<http://en.wikipedia.org/wiki/Open_Kernel_Labs>).

Monolithic kernels aren't going anywhere, and neither is x86.

~~~
CPlatypus
X86 is not going away, I agree, but Intel can hardly exercise the kind of
dominance they've enjoyed for the last several years when they're facing
serious threats at both the low and high end. At the low end, ARM simply beats
x86 for anything with a battery. Intel has already lost the phone and tablet
markets, and laptops are highly likely to follow.

At the high end, look at <http://top500.org/>. #1 is based on the SPARC64
VIIIfx. #10 is based on the PowerXCell 8i. #2 and #4 both derive much of their
power from GPUs. Even that understates the situation, because many of the most
powerful computers next year - Blue Waters, Mira, Sequoia - will also be based
on non-x86 architectures. Then look at what Tilera or Adapteva are doing with
many-core, what Convey is doing with FPGAs, what everyone is doing with GPUs.
Intel is going to be a minority in the top ten soon, and what happens in HPC
tends to filter down to servers.

So Intel has already lost mobile and HPC. Even if Intel keeps all of the
desktop market, what percentage of the laptop and server markets could they
afford to lose before they follow AMD? Maybe it will happen, maybe it won't,
but anybody who can see beyond the "Windows and its imitators" segment of the
market would recognize that as a realistic possibility.

~~~
chancho
> what happens in HPC tends to filter down to servers

Is this conventional wisdom? How does a petaflop race affect app servers and
databases? It seems like most traditional server workloads could get by
without a single FPU. The only thing they have in common is IO. Are there many
data centers using InfiniBand? (Maybe there are; I don't know.)

The Cell architecture is an evolutionary dead end. SPARC is no more of a
threat to X86 now than before. GPUs may be the next big thing for HPC, but
they've got a long way to go to get out of their niche in the server market.
(That niche being... face detection for photo-sharing sites? Black-Scholes?
Help me out here.)

I mean, I agree with your overall point, but I think it's more likely that ARM
will steal all the data center work before anything from the HPC world does.
They are too focused on LINPACK.

~~~
CPlatypus
Are there many data centers using IB? Yes. SMP was common in HPC before it
came down-market, likewise NUMA. Commodity processors have many features -
vector instructions, speculative execution, SMT - first found in HPC. Power
and cooling design at places like Google and Facebook is heavily HPC-
influenced as well. Certainly some things go the other way - e.g. Linux - but
usually today's server design looks like last year's HPC design.

I'm not quite sure it's valid to write off SPARC as an architectural dead end
when the current fastest computer in the world uses it, and the next crop of
US competitors for that crown are all based on the Cell/BlueGene lineage. GPUs
are also more broadly applicable than you might think. Besides video and audio
processing, they can be used for many crypto-related tasks (witness their
popularity for Bitcoin mining), various kinds of math relevant to data storage
(e.g. erasure codes or hashes for dedup), and so on. Many of their
architectural features are also being copied by more general-purpose
processors as core counts increase.

Yes, high-end HPC is too obsessed with LINPACK. Nonetheless, it remains a good
place to look when trying to predict the future of commodity servers. Even if
ARM does displace x86 instead, many features besides the ISA are likely to
come from HPC. Perhaps more relevantly, either outcome is still very bad for
Intel.

------
feralchimp
"Performance per watt" is great, but when I step within 2 meters of a power
outlet I want "performance." And many machines (including laptops) spend their
lives within 2 meters of a power outlet.

At what point did "consumer" start meaning "low end" or "handheld"? The 27"
quad-core i7 iMac is a consumer computer. Gaming PCs are consumer computers.

~~~
shasta
Presumably you still care about "performance per dollar" though, and we're
approaching a time when "dollars per watt" isn't negligible.

------
alf
This is really more AMD throwing in the towel against Intel. It's really
disappointing to see a competitor leave such an important market. I'm much
more disappointed to see the X86 market go from 2 to 1 than I am happy to see
the ARM market go from 4 to 5.

~~~
dman
If Intel manages to beat AMD out of the x86 market, it will be the end of an
era. AMD has done a pretty good job of guarding the x86 flanks at critical
times - the P4 debacle, the move to 64-bit, operating in low-margin parts of
the market where Intel has few offerings, offering 4-socket servers, currently
offering higher core counts than Intel equivalents. They also do a good job of
exploring the design space and have at various times come up with useful
innovations. AMD going out would make x86 a much more monolithic entity in the
market and much more open to attack from competitors. Even now, AMD's
low-power Brazos designs eke out a segment of the market where Intel has no
real offerings ("good" OpenGL performance on a SoC).

~~~
fpgeek
Of course, AMD has been in trouble before and those times weren't as
significant. I think what could end the era this time is not AMD throwing in
the towel as such, but the fact that AMD could throw in the towel because
there is somewhere else to go.

------
bryanalves
I think in a lot of ways this is related to the relative overpowering of
desktop computers for daily use. It's been true for years now that computers
are highly overpowered for what they are typically used for.

Quad-core machine with a million gigs of RAM for email and a web browser?

Sure, there are LOTS of good reasons for having legitimate CPU power, but a
lot of times any random GHz-level processor is going to provide plenty of
responsiveness for daily tasks. The only thing I can think of that people
typically do that is processor-intensive is HD playback, and that is easily
accelerated nowadays.

It's not always about absolute performance, it's about "good enough"
performance. If ARM is going to supply good enough performance with the
additional benefits of being cheaper and more portable, then why NOT use it?

This isn't about ARM versus Intel. This is about having adequately powered
portables.

Intel is losing the low-end CPU market. That much is true. But the low-end CPU
market is the new middle-end CPU market. I think we are going to see an age
where more and more people have "low-end" portables as their main computers.
The barrier between low-end, middle-end, and high-end has shifted
significantly, I think. A few years ago, we all had uses for high-end
computers. Nowadays, what would be considered high-end is a waste for most
people.

Also, we can't forget the impact of the cloud on this. We don't need a lot of
computing power locally now. For many of the applications that one would need
a lot of CPU for, the cloud potentially provides the solution.

I, for one, don't see myself trading in my desktop at work anytime soon. But I
do see myself using my laptop a lot more than my desktop at home. My couch is
a lot more comfortable than my computer chair.

~~~
__david__
> Quad-core machine with a million gigs of RAM for email and a web browser?

I hear this sentiment a lot, and in general I agree, but the fact is the app
with the largest RAM footprint on my laptop is Firefox. Given the
proliferation of web-based apps, I don't see the complexity of web browsers
going down. We can _always_ use more power.

~~~
zobzu
And Firefox is actually one of the most memory-efficient browsers right now.

What people fail to see is that "just a browser" is a completely idiotic and
misinformed statement.

Browsers are probably some of the _most_ complex and powerful apps on the
system.

Browsers are basically running entire applications, virtualized, in a sandbox
per tab!

Heck, some websites are just not viewable on mobile right now (unfortunately)
because mobile just isn't nearly fast enough. Think WebGL, for example. Few
mobile browsers support it, and when they do, it's pretty slow unless the
author made a super-low polygon- and texture-count version...

------
jfpoole
I'm excited about ARM processors for low-power applications but I can't see
ARM replacing x86 processors (even in the consumer space) until ARM
performance is comparable to x86 performance. Right now the slowest MacBook
Air is 6x faster than the iPad 2; until the difference is 2x or less I just
can't see companies or consumers switching to ARM.

~~~
CPlatypus
6x the performance... for 8.5x the TDP. It's pretty easy to see why ARM
designs are already preferable for almost anything that has a battery, and
laptops are outselling desktops already. Sure, there will always be
applications where single-thread performance will matter more than total
performance or performance per watt for many cores - believe me, we learned
that lesson at SiCortex - but those applications are not enough to sustain
Intel as we know it. There's a reason they're developing MIC; without it
they'd be squeezed between ARM clients and servers with 20+
ARM/MIPS/POWER/SPARC cores per chip (not even counting GPU/FPGA server plays).
They need their own many-core product to compete.
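
Taking the 6x and 8.5x figures in this thread at face value (a
back-of-the-envelope sketch, not a benchmark):

    performance ratio / power ratio = 6 / 8.5 ≈ 0.7

On those numbers the x86 part delivers about 0.7x the performance per watt,
i.e. the ARM design comes out roughly 1.4x more efficient.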

~~~
pbharrin
I will take 6x the performance for 8.5x the TDP. My 3 year old MacBook Pro has
plenty of battery life for my use case. You have to look at absolute
performance numbers for the application.

~~~
divtxt
The only thing I use that causes the fans on my Core 2 Duo MacBook Air to kick
in is Flash, so I would definitely consider a slower CPU.

Also, note that the savings/benefits can come in ways other than more battery
life, e.g. price, a fanless & sealed CPU, etc.

------
cryptbe
I'm surprised that nobody's mentioned that X86 and ARM are instruction set
architectures, not microarchitectures.

~~~
sliverstorm
The two _are_ interlinked, though.

~~~
4ad
Absolutely not at all.

------
cageface
We don't mind using an assortment of different tools to accomplish other
tasks. Why should we expect that a one-size-fits all world of ARM tablets and
phones is going to completely displace the tools we're using now?

------
plq
There are many reasons why the current status quo won't change so soon, and an
important one is that ARM still lacks 64-bit addressing. Servers made the jump
to 64-bit long ago, and so have most consumer-grade computers.

Without 64-bit support from its competitors, Intel doesn't have much to fear,
especially in the datacenter space, where performance per watt is a powerful
selling point.

~~~
Mordor
ARMv8 64-bit instruction set architecture announced last month, designs due
next year. When should Intel become afraid?

~~~
4ad
Absolutely never solely because of this. Intel makes ARM CPUs. It's a very
important player in the ARM market.

------
jimbobimbo
It's more of an "AMD's days are numbered": instead of competing with one
serious competitor, they'd go head-on with several serious competitors, using
an architecture that is commercially new for AMD.

------
danmaz74
Said like that, the title is pretty moot. But if you change it to "X86's Days
as the only Personal Computer CPU Microarchitecture are Numbered", the author
could have a point.

------
issaco
Good thing it is an 80-bit FP number.

------
georgieporgie
Also, RISC is the Next Big Thing! By the way, Thin Client computing is going
to kill the desktop, C++ and C are dead, Microsoft is killing Win32, Microsoft
is dead, etc, etc.

Flagged for no meaningful content and inflammatory title.

~~~
sliverstorm
Also 2012 is the Year of Linux on the Desktop. :)

