
The End of x86? - mjfern
http://www.fernstrategy.com/2010/10/21/the-end-of-x86/
======
nl
I think ARM is going to continue to bite into x86 market share significantly.

But this article is wrong to write off x86 so easily.

Firstly, power consumption. It's right that ARM has lower power draw than x86.
The article is wrong about the magnitude, though. Very low power ARM chips draw
much, much less than 2-3 watts, but those are mostly for embedded systems.

The 2-3 watts vs 5 watts for ARM vs Atom isn't too significant. The big
problem with Atom _was_ that the support systems (memory controller etc) draw
~20 watts. That situation is being improved for netbook systems atm.

For sub-netbook systems, Intel is launching its Moorestown architecture. This
probably still isn't dropping into the smartphone market in this generation
(despite Intel's marketing: [http://arstechnica.com/open-source/news/2010/01/moblin-linux...](http://arstechnica.com/open-source/news/2010/01/moblin-linux-on-x86-smartphone-intels-small-step-forward.ars)),
but should be great for tablets:
[http://arstechnica.com/gadgets/news/2010/05/intel-fires-open...](http://arstechnica.com/gadgets/news/2010/05/intel-fires-opening-salvo-in-x86-vs-arm-smartphone-wars.ars)

The article also implies that Intel's foundries are a liability. That would be
true if there really were useful "competition in the foundry market". Sure, if
you want 45nm+ chips produced, there are a number of foundries that can do it.
But once you start looking for 32nm foundries they get a lot rarer, and Intel
has just announced it's building its new 22nm foundries. That's a whole
generation ahead of anyone else in the industry and is a big competitive
advantage. (A smaller process node means more performance for the same power,
or less power for the same performance.)
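The scaling point can be sketched with the classic dynamic-power relation P ≈ C·V²·f. Every number below is an illustrative assumption, not data for any real process node:

```python
# Dynamic power of CMOS logic: P ~ capacitance * voltage^2 * frequency.
# All values here are made-up assumptions to show the scaling direction only.
def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts ** 2 * freq_hz

# A process shrink lowers switched capacitance and supply voltage,
# so the same clock costs less power...
old_node = dynamic_power(1.0e-9, 1.2, 2.0e9)   # hypothetical older node
new_node = dynamic_power(0.7e-9, 1.0, 2.0e9)   # hypothetical shrunk node

# ...or, equivalently, the same power budget buys a higher clock.
same_power_freq = 2.0e9 * old_node / new_node
```

With these placeholder figures the shrunk node draws roughly half the power at the same clock, which is the "more performance for the same power, or less power for the same performance" trade in miniature.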

~~~
lsc
_The 2-3 watts vs 5 watts for ARM vs Atom isn't too significant. The big
problem with Atom was that the support systems (memory controller etc) draw
~20 watts. That situation is being improved for netbook systems atm._

Oh my god yes. If you could actually run an Atom server in less than 10 watts
for a 4GiB system, even before disk, I'd be renting those out instead of VPSs,
and probably killing the competition on it, too.

My (perhaps unreasonably cynical) theory is that Intel nerfed the desktop
Atoms because they don't want them to compete with their server chips in my
applications. But, on the other hand, that's irrational: when per-core
performance doesn't matter (and in virtualization, more, smaller cores are
better than fewer, faster cores), AMD already beats Intel by quite a lot. And
Atoms certainly don't compete in applications where per-core performance
matters, so yeah, that's probably not it.

------
happybuy
As I wrote in a similar thread, I think Intel and x86 are already dead - the
writing is on the wall - Intel just doesn't know it yet. Below is how I
believe a key customer has already planned to leave the x86 architecture as
nothing but a footnote in their history (alongside the remnants of PowerPC).

---

Currently Apple relies on Intel for a major component in a key product.
Strategically, Apple doesn't like to have to rely on a single source or
supplier for key products. Apple will do whatever is possible to remove this
reliance.

Hence a prediction: within less than 5 years a Mac will be running on an Apple
designed ARM processor.

How? By slowly, step by step, providing a way towards this.

Step 1. Migrate your OS to the new architecture (e.g. iOS already, OS X not
far behind) - done

Step 2. Migrate your developer base onto developer tools which you control and
can easily change the architecture it targets (e.g. Xcode and LLVM) - done

Step 3. Provide a space where problematic applications which use other VMs or
rely directly on getting too close to the hardware are not welcome (e.g. a Mac
App Store) - announced

Step 4. Change the marketplace behaviour so that you control how the majority
of applications are distributed and can quickly provide updates without user
intervention, such as via an App Store.

Step 5. Release a new MacBook with an ARM processor, with a form factor,
price, and battery life that Intel cannot compete with. Encourage your Mac App
Store developers to flick a switch in Xcode, recompile, and upload new
Universal (x86 & ARM) versions of their apps to the Mac App Store.

Result: you now control the processor direction and application distribution
mechanism for a key product and no longer rely upon the whims of Intel.

Apple is all about controlling an integrated experience for their customers.
Currently Intel is getting in the way of this for the Mac product.

~~~
tjmc
Big problem with Step 5 - people couldn't run Windows on their MacBooks except
under emulation. That would be a deal breaker for a lot of potential customers
(pretty much all the gamers, for example) and Apple knows it.

Personally, I don't think the transition to ARM on the Mac will happen any
time soon. Apple will just try to convince more people to buy iOS devices
instead.

~~~
loewenskind
>That would be a deal breaker for a lot of potential customers

Today it would be, but mobile is really taking off. Who knows where we'll be
in even 2 years. Maybe by step 5 the remaining things we _have_ to have
Windows for (e.g. Office) simply won't be a requirement anymore.

~~~
eru
And to be frank, something like Office should be undemanding enough to run
fast enough under emulation. (Of course, software gets slower faster than
hardware gets faster..)

------
ssp
The tagline from _Innovator's Dilemma_ is that _well-managed_ companies can
get into trouble when they are being disrupted. The reason is that it's
normally good business to get rid of low-margin products and focus your
resources on the ones that make the most money. And then someone takes over
the low end and expands into the high end.

But Intel is not falling for that one. They have made the Atom, a slow, cheap,
low-power chip that competes directly with ARM. That's likely a wise move, but
Intel now has the problem that low-margin chips are still bad business. They
have to have their expensive best-in-industry fabs make low-margin Atoms, when
they would much, much rather have them make expensive Xeons.

At one point they made a deal to have Atoms manufactured at TSMC, which would
have helped a lot with this problem, but apparently that deal didn't work out.
Even if it _had_ worked out, the Atom would no longer have the process
advantage, and then backwards compatibility would be the _only_ advantage for
x86. With Windows becoming less and less relevant, that's a big problem,
considering the technical advantages ARM has over x86.

So fundamentally, Intel has a problem that CPUs are becoming commoditized,
which means they will either have to take much lower margins or retreat to the
high end. Both scenarios are unpleasant for them.

~~~
nl
The technical advantages of ARM aren't _that_ great.

Don't forget we are talking about CPUs that are getting close to Pentium 3
class performance. Intel proved back in the Pentium vs PowerPC days that x86
can compete well against superior architectures. In this fight they have a lot
more performance to work with.

I do agree with your CPU-becoming-commoditized point, but Intel is _very_
aware of that (cite: how they keep Atom performance _just_ enough higher than
ARM, but a lot slower than their more profitable higher end chips). It's a
difficult area, but Intel is aware of the balancing act they have to do. I
think their strategy is to increase performance of non-CPU components of their
chipsets (ie, make sure Atom kills ARM on I/O) in order to keep their lead in
the datacenter.

~~~
ssp
_The technical advantages of ARM aren't _that_ great._

True - see also this:

[http://codingrelic.geekhold.com/2010/08/x86-vs-arm-mobile-cp...](http://codingrelic.geekhold.com/2010/08/x86-vs-arm-mobile-cpus.html)

However, if x86 has neither a process nor a compatibility advantage, then even
a tiny technical disadvantage turns into a tiny extra cost, which can be
important for high volume chips.

~~~
nl
_However, if x86 has neither a process nor a compatibility advantage, then even
a tiny technical disadvantage turns into a tiny extra cost, which can be
important for high volume chips._

Not true.

The marginal cost of pretty much anything on a chip itself is close to zero.
For example, most 3 core chips are actually 4 core chips with one disabled.
Putting the extra silicon on the chip is effectively free.

The money goes in the investment in the factory and the R&D, _NOT_ the raw
materials or production costs.

~~~
ssp
_Putting the extra silicon on the chip is effectively free._

If you need that silicon to actually work, then it's not free. Floor-sweeping
is necessary because the more stuff you put on a chip, the more likely it is
to have defects. That really is an extra cost. I don't think there is any way
around that.
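The defect argument is usually captured by a Poisson yield model: the fraction of defect-free dies falls exponentially with die area. A minimal sketch, with an assumed defect density rather than real fab data:

```python
import math

# Poisson yield model. The defect density is an illustrative assumption,
# not a figure from any real process.
DEFECT_DENSITY = 0.2  # defects per cm^2 (assumed)

def die_yield(area_cm2, d0=DEFECT_DENSITY):
    """Probability that a die of the given area has zero defects."""
    return math.exp(-area_cm2 * d0)

small_die = die_yield(0.5)  # ~90% of the smaller dies come out good
large_die = die_yield(1.0)  # ~82% of dies twice the size come out good
```

Doubling the area doubles the expected defect count per die, so the "free" extra silicon still shows up as a real yield cost even when the marginal material is cheap.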

~~~
nl
Ok. Yes, there is extra silicon. But it's so insignificant that it doesn't
matter.

But I think you are overestimating how low-end these chips are. See
[http://arstechnica.com/gadgets/news/2008/02/small-wonder-ins...](http://arstechnica.com/gadgets/news/2008/02/small-wonder-inside-intels-silverthorne-ultramobile-cpu.ars)
for example, which shows that the Silverthorne architecture (1st gen Atom) has
pretty much the same transistor count as a Pentium 4.

No one thought that the Pentium 4's need to support x86 was a significant
handicap vs other architectures. For the chips we are talking about, cache
memory takes far more transistors, so the yield concern is pretty
insignificant too.

------
gamble
I'm not going to write off Intel yet, but they're in a dangerous position. The
number one reason Microsoft's stock has been moribund the past ten years is
that Linux took over the datacenter. Imagine how much more revenue they'd have
if the millions upon millions of x86 servers deployed since 2000 ran Windows.
Instead, they're stuck in a saturated, slow-growth monopoly. If non-x86 chips
take off in data centers, Intel will be in a similarly bad spot.

~~~
gridspy
... And windows will be locked out of the datacenter until they create ARM-
compatible versions of windows.

Next thing you know Windows runs on both ARM and x86 (with x86 emulation for
applications) and pushes x86 out of the commercial / domestic arena.

~~~
Hoff
Microsoft Windows on Itanium was once marketed as the path forward for x86
applications, too.

Intel hasn't had particular success with microprocessor designs outside of its
core x86 business. Examples of other designs include iAPX432, i860/i960,
StrongARM/XScale and Itanium.

These architecture transitions don't always work out. While Apple has some
experience with porting and has made it look (relatively) easy, Windows
hasn't had particular success with its ports, whether via Itanium's x86
emulation or via translation tools such as FX!32.

------
jawee
From my experience running Debian on ARM, I can say that the software
difference is practically non-existent. All of the normal Debian OSS software
works fine. I did not run into any problems with software not working, except
for the closed-source programs that didn't have ARM packages. I really can't
imagine the architecture shift being that big of a problem, as I'm assuming
the bulk of the work for ARM on Debian was just recompiling packages. I had a
working XFCE desktop with common packages like Iceweasel (Firefox),
OpenOffice.org, GIMP, and so on that was just as easy to set up as x86.

~~~
konad
How about ffmpeg / libdv / mplayer? Last time I looked they had hand-coded asm.

~~~
jawee
I only tried with VLC player which didn't present any immediate problems, but
I don't know if it uses any of these libraries.

------
neilc
Is ARM's power advantage really that significant for devices larger than the
smart phone / tablet form factor? If an Atom-based CPU consumes ~2-3 extra
watts but offers marginally better performance and (more importantly)
compatibility with an enormous base of existing applications, that doesn't
seem like a very compelling argument for switching.

~~~
pkaler
_Is ARM's power advantage really that significant for devices larger than the
smart phone / tablet form factor?_

I'll bet $5 that the MacBook Air in the future (18-24 months) will switch to
ARM once the >2GHz ARM processors start shipping en masse.

The Asus Eee PCs running Android with Snapdragon processors already embed the
Cortex-A9 MPCores.

~~~
jbarham
"4GB RAM ought to be enough for anybody."

ARM's Achilles' heel is that it's only 32-bit, and I don't think anyone wants
to go back to a segmented address space any time soon.
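For context on that ceiling, a quick arithmetic check: a flat 32-bit address space is exactly 4 GiB, and anything beyond it needs wider pointers or banking tricks.

```python
# A flat 32-bit pointer can address 2**32 distinct bytes:
max_bytes_32 = 2 ** 32
print(max_bytes_32 // 1024 ** 3, "GiB")  # 4 GiB

# A 64-bit address space lifts that to 2**64 bytes (16 EiB):
max_bytes_64 = 2 ** 64
print(max_bytes_64 // 1024 ** 4, "TiB")  # 16777216 TiB
```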

~~~
jbrennan
That's a great point. Are there any 64 bit ARM chips on the horizon? I don't
know much about chips, so maybe it's not even possible given the architecture.

~~~
ryanpetrich
It's possible, but just like the x86-64 transition it won't be pretty.

~~~
mfukar
I'm curious; have you faced any problems in that transition?

(seriously, I'm not trying to troll. I've had exactly one - making sure a
project of mine would compile as PIC and that was pretty minor after the
required reading)

~~~
ryanpetrich
There haven't been any major issues; it's just been glacially slow. Most
software running on 64-bit-capable CPUs is still 32-bit, Visual Studio still
doesn't have first-class support for 64-bit, and my MacBook still runs a
32-bit kernel.

I expect ARM's transition to be quite different though--with the highest ends
of their business moving in only a couple product cycles, and the lower ends
of their business sticking with 32 bit indefinitely.

------
dstein
I ran some rough numbers once. I couldn't find exact figures, but I estimated
that for about $1.5 million worth of ARM-based plug servers, you would have
close to enough raw CPU power (in terms of FLOPS) to compete with the
2003-version of Google.
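A back-of-envelope version of that estimate might look like the following. Every figure is an assumed placeholder, since the comment doesn't give its actual numbers:

```python
# Rough-numbers sketch. All inputs are illustrative assumptions, not
# sourced figures for plug servers or for 2003-era Google.
BUDGET_USD = 1_500_000
PLUG_SERVER_COST_USD = 150   # assumed price per ARM plug server
FLOPS_PER_SERVER = 2e9       # assumed ~2 GFLOPS per plug server

servers = BUDGET_USD // PLUG_SERVER_COST_USD
total_flops = servers * FLOPS_PER_SERVER
print(f"{servers:,} servers -> {total_flops / 1e12:.0f} TFLOPS aggregate")
```

Whether the resulting aggregate actually matches 2003 Google depends entirely on the assumed inputs; the point is only the shape of the estimate.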

------
16s
ARM is new? It's been around longer than most Ruby developers have been alive
;)

------
praecipula
I think the largest hurdle for ARM to get over is the preponderance of Windows
installations with kernels only compiled for x86. Linux and OS X (Mach)
already run on ARM, and I think that possibly the NT kernel runs on ARM
(Windows Phone 7 is ARM, right?). I have trouble seeing Microsoft port
Windows proper to ARM until there's a really strong market for it. That being
said, perhaps low-power-consumption ARM devices will provide that market.
Perhaps this is another reason that Apple has their own ARM chip - to be at
the forefront of the ARM revolution, displacing MSFT?

~~~
contextfree
NT doesn't currently run on ARM (WP7 is still based on CE), though it was
originally designed to be CPU independent. Probably the biggest obstacle for
MSFT porting Windows to ARM is not the expense of the port itself, but
reluctance to put out a version of Windows that's binary incompatible with all
the Windows software out there.

Incidentally (and speaking of breaking compatibility), MSFT is working on a
brand new kernel and operating environment (Midori) which does have ARM as a
target. But this is an incubation project with no guaranteed release, though
it's a very serious effort.

~~~
nivertech
Windows NT was originally CPU-independent, but only for little-endian
architectures.

~~~
gecko
That can't be correct, since I remember running Windows NT 4 on a PowerPC
system.

~~~
tesseract
PowerPC (except PPC970 aka G5, as I recall) is bi-endian (configurable
endianness); so are Alpha and Itanium which also had Windows NT ports.

~~~
froydnj
Alpha is most definitely not a bi-endian architecture; it's little-endian
only. PowerPC chips can be either, but the vast, vast majority are big endian.

~~~
kinetik
Incorrect. The Cray T3E used Alpha processors in big-endian mode.

------
atomly
"Indeed. RISC architecture is gonna change everything."

"Yeah. RISC is good."

~~~
Gibbon
"So Intel lobbied heavily to get us to stay with them … [but] we went with IBM
and Motorola with the PowerPC. And that was a terrible decision in hindsight.
If we could have worked with Intel, we would have gotten onto a more
commoditized component platform for Apple, which would have made a huge
difference for Apple during the 1990s. So we totally missed the boat." - John
Sculley

------
jules
What portion of x86 transistors are dedicated to supporting the bad
instruction set design?

~~~
m0th87
That's an interesting question that I don't think anyone would be able to
quantify, just because performance is so subjective to the applications that
are run. In an interview, Alan Kay remarked that bad processor architectures
degrade performance by three orders of magnitude [1]. If he's right, that's a
pretty hefty tax. But I don't see anyone being able to prove or disprove his
hypothesis. Maybe most of the tax is an inevitable byproduct of increasingly
complex chipsets.

[1] <http://queue.acm.org/detail.cfm?id=1039523> around 1/3 of the way in

------
jayphelps
Great article. As an example, it's fairly obvious Apple is building up its
internal CPU engineering abilities to eventually put an ARM chip in a MacBook.
But I don't think it will be in the near future, due to breaking compatibility
with current software, especially software that uses SSE instructions. But who
knows, maybe they'll create an x86 emulator for the transition, like Rosetta.
OS XI maybe?

------
Symmetry
I don't think it's clear that ARM would actually be that competitive with x86
at the high end. Many ARM features, like conditional execution, are great for
increasing IPC in traditional in-order designs, but make complicated OoO
designs more difficult. As chips get bigger and more featureful, the relative
cost of x86's decoding stage becomes less significant too.

All of which isn't to say that the article is wrong and that ARM isn't about
to take over the mainstream (I could see it happening but wouldn't bet on it).

------
jacabado
I'm a web developer (C# full stack) with 2 years of experience, and I have
been attracted to ARM development since the moment I studied it in university.

As I'll be mostly in technical roles for some time (3 years?), what should I
consider when attempting a move into ARM development? What are the pros and
cons, career-wise, and the technical challenges, relative to continuing to do
boring CRUD C# applications?

------
protomyth
I do wonder: for some of these higher-performance devices, could Intel / AMD
build a version of their chip that was 64-bit only and removed a lot of the
legacy instructions / vector attempts? It's not like a recompile / endian
shift wouldn't be necessary going from ARM to x86, so it shouldn't add any
time to the conversion. You have to account for all the variations of x86 in
a compiler anyway.

------
tsotha
The x86 architecture will win out for the same reason it won out decades ago:
most of the world's software was compiled for x86. I have all sorts of
software I've been running on my Windows laptop. When I go to buy a new
laptop, it'll be another Windows x86 laptop because my software would cost
more to replace than the machine upon which it's running.

~~~
loewenskind
Virtualization today is vastly better than 10 years ago. We've also got a
big, growing market that Intel isn't doing well in. I wouldn't count Intel out
just yet, but them winning is far from sure.

------
mjfern
Does anyone have any data that they can share on the following -- as a
company, if I licensed the latest generation ARM processor (e.g., Cortex-A15)
and then factored in any additional design and manufacturing costs (e.g., via
a foundry), what would be the total cost advantage of using an ARM chip versus
a comparable Intel chip (e.g., Atom)? Thanks!
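No data to share, but the comparison is dominated by volume, since the ARM route trades a purchase price for royalties plus a one-time design cost. A toy model with entirely hypothetical numbers (none of these are real ARM royalties, foundry quotes, or Intel prices):

```python
# Toy amortization model. Every figure is a hypothetical placeholder.
ARM_ROYALTY_USD = 0.50       # per-chip license royalty (assumed)
FOUNDRY_COST_USD = 4.00      # per-chip manufacturing cost (assumed)
DESIGN_NRE_USD = 10_000_000  # one-time design/tape-out cost (assumed)
ATOM_PRICE_USD = 20.00       # per-chip price of a comparable Atom (assumed)

def arm_cost_per_chip(volume):
    """Per-chip cost once the one-time design cost is spread over volume."""
    return ARM_ROYALTY_USD + FOUNDRY_COST_USD + DESIGN_NRE_USD / volume

# At low volume the design cost dominates; at high volume the ARM route wins.
print(f"{arm_cost_per_chip(100_000):.2f}")     # 104.50 per chip
print(f"{arm_cost_per_chip(10_000_000):.2f}")  # 5.50 per chip
```

Under these assumptions the crossover against the assumed Atom price sits somewhere in the hundreds of thousands of units, which is why the question can't be answered without knowing the expected volume.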

------
regularfry
The rest of the article aside, one thing jumped out at me - AMD's P/E ratio.
What's the received wisdom for why they are so cheap?

~~~
DavidSJ
They have tons of debt.

------
SriniK
Just wanted to mention...

 _(e.g., A4 in a MacBook Air)._

Incorrect - there's an Intel Core 2 Duo 1.4 GHz processor in the MacBook Air:
[http://www.ifixit.com/Teardown/MacBook-Air-11-Inch-Model-A1370-Teardown/3745/2](http://www.ifixit.com/Teardown/MacBook-Air-11-Inch-Model-A1370-Teardown/3745/2)

I guess the author meant to mention the iPad.

------
yason
Modern CPUs are pretty low-level RISC machines internally, with the x86
instruction set layered on top, right? Now, it would be interesting to see
what would happen if someone started making desktop-class CPUs with the ARM
instruction set instead of the legacy x86 instruction set.

------
protomyth
The article states that people are holding onto their PCs longer because they
are satisfied with the capabilities of their older machines. I wonder if this
is really proven, or a misread caused by the XP -> Vista transition problems.

~~~
mkr-hn
I tossed Ubuntu (with ext4) on my 2005 laptop, and it's proving to be a
formidable word processor and web browsing machine. I think a lot of people
are just realizing that they don't need another core or a chip built with a
smaller process to play flash games and write e-mail.

Though I'm a little concerned about what the smaller number of hardware sales
will do to development in the server market. Can they maintain R&D budgets?

------
Geee
That will surely happen. CPUs have for some time been limited by heat
dissipation, and the architecture that delivers more performance at the same
power consumption will win. If ARM can deliver that, then there's no question
about it.

------
aidenn0
Intel is aware of this. Atom is the shot across the bow. They are working on
making smaller and lower-power parts. It's interesting to watch ARM race up as
Intel races down.

~~~
rbanffy
If we compare the RISC nature of ARM with the CISC-implemented-over-RISC
approach of contemporary x86s, it's easy to believe it will prove simpler to
upscale ARM than to downscale the Atom.

Transmeta-style emulation could be one possible way out of this death spiral
for Intel.

~~~
semipermeable
History has shown that intel is very good at getting out of potential death
spirals...

~~~
nextparadigms
Yes, but back then they had someone who understood disruption much better
(Andy Grove was friends with the author of _Innovator's Dilemma_) and they
never had to change the x86 platform. They only had to create simpler chips on
the same platform (Celeron, Atom, the initial laptop chips - forgot their
name).

This time, however, they need to change the whole platform, because the whole
platform carries much bigger inefficiencies than ARM, and they won't be able
to scale down that much, while ARM will improve fast at a steady pace.

The only solution is to buy a big ARM maker. To have a chance to dominate,
they'd need to buy Qualcomm for Snapdragon, but they've just wasted 7 billion
on an anti-virus...

But even if they do that, I'm unsure of their potential domination, as I
believe in 2011 Nvidia will dominate the mobile market with its Tegra chips.
That's because in 2011 the battle will be over who has the best GPU, not CPU.
The GPU is increasingly important (accelerating the UI, the browser, Flash,
supporting higher resolutions, gaming, etc.).

~~~
rbanffy
_The GPU is increasingly more important (accelerating the UI, the browser,
Flash, supporting higher resolutions, gaming, etc)_

Don't forget offloading computation from the CPU like voice/face/gesture
recognition and number crunching (there _must_ be some use for that in a
cellphone). And OpenCL is already providing a hardware-independent abstraction
layer for that.

------
soramimo
While this might be true for the mobile world, the author does not seem to
consider that people are doing more of their computation in the cloud or in
data centers, which in turn have to purchase great numbers of classical
high-performance chips. While people are happy with the performance they get
on their home machines, the demand for greater computing power in the cloud
will grow further.

------
thomasfl
It's the return of Advanced RISC Machines.

