
The End of x86? An Update - jhund
http://fernstrategy.com/2012/12/21/the-end-of-x86-an-update/
======
OldSchool
After reading about the great performance of newer ARM-based offerings, I was
surprised when I compared real-world performance at the same clock speed
recently: ARM doesn't even come close to any recent-generation x86. This is
certainly one very important measure of an architecture.

A quick SunSpider test with a US Samsung Galaxy S3 (1.5GHz Snapdragon) on Jelly
Bean's likely highly-optimized browser shows performance very comparable to a
first-generation single-core Intel Atom 230 at 1.66GHz on the latest Firefox.
Granted, it's a mostly single-threaded test anyway, but the ARM has both cores
available and the test is pretty CPU-bound after it starts.

I'd estimate the latest i7 is at least 3x faster per GHz on this lightweight
but fairly general (CPU-wise) test.
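
To spell out the per-GHz arithmetic, here is a minimal sketch (the inputs are placeholders, not measured SunSpider results):

```c
#include <stdio.h>

/* Sketch of the per-GHz normalization used above. SunSpider reports total
 * time in milliseconds (lower is better), so convert to throughput and then
 * divide by the clock speed. The numbers in main() are placeholders, not
 * measured results. */
static double perf_per_ghz(double sunspider_ms, double clock_ghz) {
    return (1000.0 / sunspider_ms) / clock_ghz;   /* runs per second, per GHz */
}

int main(void) {
    double arm = perf_per_ghz(1500.0, 1.5);   /* hypothetical 1.5GHz phone SoC  */
    double x86 = perf_per_ghz(300.0, 3.5);    /* hypothetical 3.5GHz desktop i7 */
    printf("per-GHz ratio: %.1fx\n", x86 / arm);
    return 0;
}
```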

For heavy lifting, a recent i7 with its cache size, memory bandwidth and
accompanying I/O would probably compare to an ARM running at about 5x the
clock speed.

I don't think that ARM can be suddenly declared the best at anything other
than maybe performance-per-TDP.

Performance-per-cycle is the more difficult problem to solve... ask AMD how
hard that's been since the original Intel Core series appeared on the scene in
2006. Before that, once AMD was no longer just a chip-clone maker, it dominated
this metric.

~~~
rayiner
But performance per TDP is what matters. Battery technology is improving much
more slowly than CPU technology. We're already at the point where CPU speed is
"good enough," but we're not at the point where battery life is "good enough."

I've downsized from a Core 2 MBP to an iPad. It has 1/4 the RAM and runs at
half the clock speed. Do I care? No! Web browsing is fast and fluid, photos
load plenty fast, and editing documents in Pages is plenty fast. And it lasts
through my whole 12+ hour workday, letting me leave the charger at home and
often not even bother to charge it every night. That's huge, and much more
important to me, and I'd imagine to most people, than whether it could be
imperceptibly faster.

~~~
OldSchool
I agree for typical single-user applications, but it's hard to call that the
end of x86, as the article does. If ARM were matching x86 cycle-for-cycle in
performance for less power, that might be a credible claim that servers were
next. Also, high-end x86 devices run at higher clock speeds with more cores, so
the gap in absolute performance is at least 1000% wide. I just don't see
ARM displacing Intel when batteries, or at least small form factors, aren't
involved. I don't expect to have an x86-based phone or tablet either.

~~~
amalter
My opinion (and I suspect the author's) is that the market for chips where
batteries or form factor are not a concern will no longer be large enough to
support a company of Intel's size. RIM makes the finest high-security
physical-keyboard phones anywhere. That is now a market sized for a company
1/10th its current size.

------
eliben
Oh the horror. What a bunch of random, clueless, non-technical crap. I
especially like the part where he compares the operating costs of Intel (a
company owning a number of multi-billion-dollar fabs that are far above the
competition in capabilities) and ARM Holdings (a comparatively tiny
intellectual-property shop).

Unfortunately, it is exactly on the advice of such "strategically thinking"
MBAs that our industry is often run :-(

~~~
revelation
I like how he sets the 7.6 billion primitive chips (embedded processors just
a notch above a bunch of random logic gates) "shipped" by ARM against the
high-margin (for Intel) PC market.

~~~
makomk
Even ARM's "embedded processors just a notch above a bunch of random logic
gates" are fully-fledged 32-bit microprocessors with hardware support for
preemptive multitasking, running at fairly respectable clock speeds. And
that's the sub-dollar embedded stuff!

------
krschultz
Whenever I read about modern desktops, I think about the graphs in the
Innovator's Dilemma. [1]

The incumbent players (Intel, Microsoft, Dell, HP) are all competing on the
established metrics of performance & price, but those are no longer the
metrics that matter. ARM is pushing the power efficiency angle. Apple is
winning on industrial design.

The entire computer industry (excluding phones, which have obviously already
been disrupted) is right on the verge of being flipped on its head. There were
hints of that with the netbook wave, but netbooks weren't quite good enough. The
iPad and subsequent high-end Android tablets are close, but not 100% there.
But we are just about at the point where ARM vs x86 is equivalent for the mass
market, and that really is going to shake things up.

[1] http://upload.wikimedia.org/wikipedia/commons/thumb/8/8e/Disruptivetechnology.gif/450px-Disruptivetechnology.gif

------
bascule
Missing from this post is any sort of discussion about how modern x86 CPUs are
poorly designed for the types of programs most people are developing these
days.

Managed language runtimes represent the bulk of programs people are running on
servers (think: Java/Scala/Clojure, PHP, Python, Ruby). These environments not
only lack "mechanical sympathy" but also have requirements above and beyond
what x86 can do.

To take Cliff Click's word for it, managed language runtimes consume, on
average, 1/3 of their memory bandwidth zeroing out memory before handing
objects to the program. If x86 supported an instruction for just-in-time
zeroing into L1 cache, this penalty could be eliminated, and that 1/3 of
memory bandwidth could be used for actual memory accesses instead of zeroing
out newly allocated objects. In an age where RAM is the new disk, this would be huge.
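
As a rough illustration of where that bandwidth goes, here is a minimal bump-allocator sketch (not any particular VM's implementation); the memset is the cost being described, and the comment marks where a hypothetical zero-into-L1 instruction would help:

```c
#include <stdint.h>
#include <string.h>

/* Minimal sketch of the bump-pointer allocation managed runtimes use for
 * their young generation (illustrative only, not any real VM's code). */
static uint8_t heap[1 << 20];
static size_t  bump;

void *gc_alloc(size_t size) {
    size = (size + 15) & ~(size_t)15;        /* align to 16 bytes */
    if (bump + size > sizeof heap)
        return 0;                            /* a real runtime would GC here */
    void *obj = &heap[bump];
    bump += size;

    /* This memset is the bandwidth sink: every new object is zeroed before
     * the program sees it. A hypothetical "zero straight into L1" instruction
     * (in the spirit of PowerPC's dcbz, which zeroes a cache line without
     * fetching it from memory) would remove most of this traffic. */
    memset(obj, 0, size);
    return obj;
}
```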

Unfortunately, the amount of time it takes to get a feature like this into an
Intel CPU is a bit mind-boggling. Azul started talking to Intel about hardware
transactional memory early last decade, and Intel is _finally_ shipping
hardware transactional memory in the Haswell architecture, in the form of
Transactional Synchronization Extensions (TSX).

~~~
zvrba
> Unfortunately, the amount of time it takes to get a feature like this into an
> Intel CPU is a bit mind-boggling.

I think this is an unfair feature comparison. Zeroing L1 cache is a way simpler
operation than TM, which has been designed to support two modes of operation
[legacy -- which only speeds up traditional LOCK-based synchronization -- and
true TM], must support transaction aborts and restarts, etc. Also, 10 years
ago TM was still a very active _research_ area -- i.e., people had no clue
which ideas were performant, scalable and, not least, feasible to implement
in HW.

------
luu
Working in microprocessors, I hear this a lot, but, in the long run, Intel has
a fundamental advantage over ARM, and ARM doesn't seem to have a fundamental
advantage over Intel [1].

People talk about RISC vs. CISC, and how ARM can be lower power because RISC
instructions are easier to decode, but I don't hear that from anyone who's
actually implemented both an ARM and an x86 front-end [2]. Yes, it's a PITA to
decode x86 instructions, but the ARM instruction set isn't very nice, either
(e.g., look at how they ran out of opcode space and overlaid some of their
"new" NEON instructions on top of existing instructions by using unused
condition codes for existing opcodes). If you want to decode ARM instructions,
you'll have to deal with register fields sitting in different places for
different opcodes (which takes extra logic, increasing size and power), with
decoding deprecated instructions that no one actually uses anymore (e.g., the
"DSP" instructions, which have mostly been superseded by NEON), etc. x86 is
actually more consistent (although decoding variable-length instructions isn't
easy, either, and you're also stuck with a lot of legacy instructions) [X].
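
To make the "register fields in different places" point concrete, here's an illustrative decoder fragment (the bit positions are simplified, not copied from the real ARM encoding tables): when the destination-register field moves around by opcode class, the decoder needs a mux per possible position instead of one fixed wire tap.

```c
#include <stdint.h>

/* Illustrative only: bit positions are simplified, not the real ARM encodings.
 * Because the destination register can sit in a different nibble depending on
 * the opcode class, hardware needs extra selection logic (and a software
 * decoder needs a switch) just to find it. A consistent encoding would make
 * this a single fixed shift-and-mask. */
static unsigned dest_reg(uint32_t insn) {
    unsigned opclass = (insn >> 25) & 0x7;       /* hypothetical class bits */
    switch (opclass) {
    case 0:  return (insn >> 12) & 0xF;          /* data-processing-style slot */
    case 1:  return (insn >> 16) & 0xF;          /* multiply-style slot        */
    case 2:  return insn & 0xF;                  /* yet another position       */
    default: return (insn >> 12) & 0xF;
    }
}
```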

On the other hand, Intel has had a process (manufacturing) advantage since I
was in high school (in the late 90s), and that advantage has only increased.
Given a comparable design, historically, Intel has had much better performance
on a process that's actually cheaper and more reliable [3]. Since Intel
started taking power seriously, they've made huge advances in their low-power
process. In a generation or two, if Intel turns out a design that's even in
the same league as ARM, it's going to be much lower power.

This reminds me of when people thought Intel was too slow moving and was
going to be killed by AMD. In reality, they're huge and have many teams
working on a large variety of projects. One of those projects paid off,
and now AMD is doomed.

ULV Haswell is supposed to have a TDP of ~10W with superior performance to the
current Core iX line [4]. ARM's A15 allegedly has a TDP of ~4W, but if you
actually benchmark the parts, you'll find that the TDPs aren't measured the
same way. The A15 uses a ton of power under load, just like Haswell will [5].
When idle, it won't use much power either, but it will likely have worse
leakage, because Intel's process is so good. And then there's Intel's real
low-power line, which keeps getting better with every generation. Will a ULV
version of a high-end Intel part provide much better performance than ARM at
the same power in a couple of generations, or will a high-performance version
of a low-power, low-cost Intel part provide lower power at the same level of
performance and half the price? I don't know, but I bet either one of those two
things will happen, or a new project will be unveiled that does something
similar. Intel has a ton of resources, and a history of being resilient against
the threat of disruption.

I'm not saying Intel is infallible, but unlike many big companies, they're
agile. This is a company that was a dominant player in the DRAM and SRAM
industry, that made the conscious decision to drop out of the DRAM business and
concentrate on SRAMs when DRAM became less profitable, and that then did the
same for SRAMs in order to concentrate on microprocessors. And, by the way,
they created the first commercially available microprocessor. They're not a
Kodak or Polaroid; they're not going to stand idle while their market is
disrupted. When Toshiba invented flash memory, Intel actually realized the
advantage and quickly became the leading player in flash, leaving Toshiba with
the unprofitable DRAM market.

If you're going to claim that someone is going to disrupt Intel, you not only
have to show that there's an existing advantage, you have to explain why,
unlike in other instances, Intel isn't going to respond and use their superior
resources to pull ahead.

[1] I'm downplaying the advantage of ARM's licensing model, which may be
significant. We'll see. Due to economies of scale, there doesn't seem to be
room for more than one high performance microprocessor company [6], and yet,
there are four companies with ARM architecture licences that design their own
processors rather than just licensing IP. TI recently dropped out, and it
remains to be seen if it's sustainable for everyone else (or anyone at all).

[2] Ex-Transmeta folks, who mostly went to Nvidia, and some other people whose
project is not yet public.

[3] Remember when IBM was bragging about SOI? Intel's bulk process had
comparable power and better performance, not to mention much lower cost and
defect rates.

[4] http://www.anandtech.com/show/6355/intels-haswell-architecture

[5] Haswell hasn't been released yet, but Intel parts that I've looked at have
much more conservative TDP estimates than ARM parts, and I don't see any
reason to believe that's changed.

[6] IBM seems to be losing more money on processors every year, and the people
I know at IBM have their resumes polished, because they don't expect POWER
development to continue seriously (at least in the U.S.) for more than another
generation or two, if that. Oracle is pouring money into SPARC, but it's not
clear why, because SPARC has been basically dead for years. MIPS recently
disappeared. AMD is in serious trouble. Every other major vendor was wiped out
ages ago. The economies of scale are unbelievably large.

[X] Sorry, I'm editing this and not renumbering my footnotes. ARMv8 is
supposed to address some of this by making a large, compatibility-breaking
change to the ISA and having the processor switch modes to maintain
compatibility. It's a good idea, but it's not without disadvantages. The good
news is, you don't have to deal with all this baggage in the new mode. The bad
news is, you still have the legacy decoder sitting there taking up space. And
space = speed. Wires are slow, and now you're making everything else travel
farther.

~~~
mtgx
The 10W chip will be a weaker SKU, probably even weaker than the current 17W
IVB CULV chips. They are not magically lowering the TDP from 17W to 10W for the
next generation.

Cortex-A15 will also get the benefit of being paired with the A7. ARM says the
energy consumption should be about half, on average, compared to a Cortex-A15 alone.

Also Haswell is rumored to cost 40% more than an IVB Core. That's close to
$300 for a CULV. That's simply not sustainable when it comes to the new market
that is forming for tablets. $300 is more than the _whole_ BOM for your
typical $500 tablet. I doubt you'll see that chip in anything cheaper than
$800, at a time when you get "good enough" tablets for $200 whole. Intel's
competitor to ARM simply isn't the Core line-up. It's Atom, for better or for
worse.

And as I said, Intel will lose not because of a lack of expertise in making
chips, but because of an unsustainable cost structure and business model (they
now have to compete against several ARM chip makers at once, including Apple).
The fact that they also have no momentum or market share in the mobile market
doesn't help.

~~~
cube13
>Also Haswell is rumored to cost 40% more than an IVB Core. That's close to
$300 for a CULV. That's simply not sustainable when it comes to the new market
that is forming for tablets. $300 is more than the whole BOM for your typical
$500 tablet. I doubt you'll see that chip in anything cheaper than $800, at a
time when you get "good enough" tablets for $200 whole. Intel's competitor to
ARM simply isn't the Core line-up. It's Atom, for better or for worse.

In terms of OS for an x86 tablet, you're looking at Windows 8, or Linux with a
custom shell, and that's it. There's an unofficial x86 port of Android, but I
wouldn't stake any real product on that without any support from Google.

Since MS has already established the baseline price for the Win8 tablet around
$800, and they're marketing it more as a tablet PC that can do everything you
do with a normal desktop or laptop than as an iPad competitor, is this really
that much of an issue?

~~~
SoapSeller
There is an official port of Android to x86 (done by Intel); it powers the
several Atom phones out there (the Orange San Diego, etc.). You can even
download an emulator build for Atom, which performs great (it has support for
virtualization extensions).

~~~
georgemcbay
The first-gen Google TV platforms (the Sony Blu-ray player, the Logitech Revue)
were also x86-based.

------
programminggeek
The problem is two-fold. First, ARM is being supported by a lot of companies -
Apple, Samsung, Microsoft, etc. - versus Intel basically by itself on x86.

Second, ARM chips are too cheap. Intel's biz is built on $100+ chips. ARM
chips are like $10. If Intel's chip prices drop to, say, $25, they don't have
nearly the money for R&D.

x86 won't die, but it can't grow, and over time that's going to hamstring
Intel.

Call it peak x86.

~~~
eliben
Why can't it grow? Server builders still want the fastest CPUs. For them, x86
still gives the best performance per watt and per dollar. Yes, ARM servers are
starting to appear, but I don't think they're up to par with Intel yet.

And the cloud needs servers. Lots and lots of servers.

True, Intel faces stiff competition here. But folks sometimes forget that
Intel wasn't always a monopoly in its field. It had competition, lots of it,
over the years. I wouldn't bury them just yet.

~~~
st0p
But compared to notebooks, laptops, tablets and smartphones, servers will
always remain a small market.

~~~
ShirtlessRod
All that data needs to be stored somewhere...

------
mtgx
Ever since I read The Innovator's Dilemma around 2006 or so, I've tried to
watch for other examples of disruption happening in the tech industry,
including laptops (disrupting PCs), iPhone/Android phones (disrupting
Nokia/RIM smartphones), iOS/Android (disrupting Windows/Mac OS) and a few
others.

But while you could still find something to argue about in some of those cases,
especially when the "fall off a cliff" hasn't happened yet for those companies
(disruption takes a few years before it's obvious to everyone, including the
company being disrupted), I think the ARM vs Intel/x86 case has been by far the
most _obvious_, and what I'd consider a "by-the-book" disruption. It's one
of the most classical disruption cases I've seen. If Clayton Christensen
decides to rewrite the book again in 2020, he'll probably include the ARM vs
Intel case study.

What will kill Intel is probably not a technical advantage that ARM has or
will have, but the pricing advantage. It's irrelevant whether Intel can make a
$20 chip that is just as good as an ARM one. Intel made good ARM chips a decade
ago, too. The problem is they couldn't live off that, and they wouldn't be
able to survive off $20 Atom chips. The "cost structure" of the company is
built to support much higher-margin chips.

They sell 120 mm2 Core chips for $200. But as the article says, very soon any
type of "Core" chip will overshoot _most_ consumers. It has already overshot
plenty of them; look at how many people are using iPads and Android tablets or
smartphones and think the performance is more than enough. In fact, as
we've seen with some of the comments for Tegra 4 here, they think even these
ARM chips are "more than enough" performance-wise.

That means Intel is destined to compete more and more not against other $200
chips, but against other $20 chips, in the consumer market. So even if they
are actually able to compete at that level from a technical point of view,
they are fighting a game they can't win. They are fighting by _ARM's rules_.

Just as the Innovator's Dilemma says, they will predictably move "up-market"
into servers and supercomputers, chasing higher profits as ARM forces them to
fight with cheaper chips in the consumer market. But as we know, ARM is
already very serious about the server market, and we'll see what Nvidia
eventually intends to do in the supercomputer market with ARM (Project
Denver/Boulder).

As for Microsoft, which is directly affected by Intel/x86's fate: Apple and
Google would be smart to accelerate ARM's takeover of Intel's markets, because
if Microsoft can't use its legacy apps as an advantage against iOS and
Android, it will have to start from scratch on the ARM ecosystem, way behind
both of them. Apple could do it by using future generations of its own
custom-designed ARM CPUs in MacBooks, and Google by focusing more on ARM-based
Chromebooks and Google TVs and by ignoring Intel in the mobile market. Linux
could take advantage of this too, because most legacy apps work on ARM by
default.

~~~
bryanlarsen
The biggest difference is that Intel has consulted closely with Christensen,
and is not afraid to cannibalize its own market to retain dominance. The
Intel Celeron came directly from those consultations with Christensen. The
Celeron significantly dented Intel's profits temporarily, but it was the
beginning of the end for AMD.

And certainly the price is a very significant factor. But remember that ARM
sells an order of magnitude more chips than Intel does. So if Intel is
successful, they can make it up on volume, at least to a degree.

~~~
dman
I don't recall the Celeron being a major problem for AMD. The things that did
hurt were

a) Intel's effectiveness at preventing AMD SKUs from hitting markets

b) The Core 2 family from Intel

c) AMD insisting on shipping 'native' dual/quad cores - there wasn't any
advantage to the end user, and I would imagine the yields were worse

d) The TLB bug

------
sbov
I thought one of the major advantages Intel held was that it owned its own
manufacturing, allowing it to iterate more quickly. However, this article
seems to claim it needs to shed itself of that. Is it not really an advantage
in x86 then? If it is an advantage in x86, why isn't it here? Or is this just
a case of blindly copying ARM's business model?

~~~
marshray
Thinking purely objectively, it's always an advantage to have the most money
in the bank and the largest installed customer base. Owning the best
semiconductor foundries is a big advantage for the foreseeable future.

But so, once, was owning the world's largest battleship. Things change. How
often do the most disruptive changes come from, or favor, those with the
largest physical plant?

------
tluyben2
Not sure about the end of x86, but he is right about one point I've been
shouting about for years: mainstream computers are _far_ too powerful for the
average user. My mother reads mail and looks at pictures of kids/grandkids on
her computer; what is the i7/8GB/500GB with a crapload of GPU cores for? Why
pay for that kind of power when a cheap Android laptop would easily suffice?
My parents, grandparents, uncles, even my siblings and cousins have zero need
for that power, or for Windows. None of them. They notice the difference when
they have/touch an iPad or Android pad/computer; they find it easier to wield;
they use a handful of apps anyway. So because it has manufacturing advantages,
Intel, in my eyes, doesn't have to strive for performance or compatibility in
its future chips; those chips just need to use almost no battery. The only
things I hear non-computer-savvy people talk about are battery life and a
'clear screen'. So: high-res (Nexus 10) screens, screens you can view without
squinting in bright sunlight, solar cells invisibly built in and a few days of
battery life for a <= $500 price, and you'll be selling until silicon runs out.

Even for coding you don't really need all that power most of the time; if you
are compiling big source trees, sure, but why not just do that in the cloud on
EC2 or a dedicated server, so you can work freely on your laptop? Gaming and
very heavy graphical or music work I can see needing a fast computer in front
of your nose for, but beyond that?

------
dharma1
I wonder when we'll start seeing apps running on ARM capable of matching
current x86-based content creation apps.

2 years? 3?

Intel will catch up on power consumption. The biggest thing going for ARM is
price, and because of price its user base is blowing up much faster than
Intel's, on more types of devices and in more parts of the world. Most of the
developing world's contact with computing is, or will be, ARM phones and
tablets, and the number of people developing software for ARM will skyrocket.

------
Tichy
I was disappointed when I tried a fractal simulator on my Nexus 7 recently,
and it couldn't zoom smoothly. Perhaps not the most common task in the world,
but I think there is still demand for more computing power out there...

~~~
cageface
There's one on iOS that uses a GPU shader to render, and it's plenty smooth.
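
For context, the per-pixel work such a renderer does is tiny; here is a rough escape-time sketch in plain C (in a shader-based app this loop would live in a fragment shader, one invocation per pixel, which is why the GPU keeps zooming smooth):

```c
#include <stdint.h>

/* Escape-time iteration for one pixel of a Mandelbrot zoom (plain-C sketch;
 * a GPU shader runs thousands of these loops in parallel, one per pixel). */
static unsigned mandel_iterations(double cx, double cy, unsigned max_iter) {
    double x = 0.0, y = 0.0;
    unsigned i = 0;
    while (i < max_iter && x * x + y * y <= 4.0) {
        double xt = x * x - y * y + cx;   /* z = z^2 + c, real part      */
        y = 2.0 * x * y + cy;             /* z = z^2 + c, imaginary part */
        x = xt;
        i++;
    }
    return i;                             /* mapped to a color per pixel */
}
```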

~~~
Tichy
Interesting - so maybe there is still something to develop for Android.

------
jmentz
Companies are different from instruction sets, and the disruptor is the ARM
instruction set... on Qualcomm. QCOM is already called "the Intel of the
mobile world" and, as the world is going mobile, thar be the disruptor.

------
bhauer
Analyses that repeat the “post PC” mantra in its various forms may be correct
in doing so, but the mantra is getting threadbare. Since I’ve read this sort
of thinking so many times (desktops are dead, Intel is doomed, ARM is the new
hotness, etc.), I don’t find it terribly interesting to hear the same
restated. Don’t get me wrong, I appreciate the detailed analysis provided by
the author, but the thesis is unsurprising.

Here’s what I would like to read if a technology journalist could dig it up:
What kind of strategic planning is going on within the halls of Intel, Dell,
HP, Lenovo, et al with respect to keeping the desktop PC relevant? Put another
way: I find it astonishing that several years have been allowed to pass since
desktop performance became “good enough.” The key is disrupting what people
think is good enough.

The average consumer desktop and business desktop user does consider their
desktop’s performance to be good enough. But this is an artifact of the
manufacturers failing to give consumers anything to lust for.

Opinions may vary, but I strongly believe that the major failure for desktop
PCs in the past five years has been the display. I use three monitors--two 30”
and one 24”--and I want more. I want a 60” desktop display with 200dpi
resolution. I would pay dearly for such a display. I want Avatar/Minority
Report style UIs (well, a realistic and practical gesture-based UI, but these
science-fiction films provided a vision that most people will relate to).

I can’t even conceive of how frustrating it is to use a desktop PC with a
single monitor, especially something small and low-resolution like a 24 inch
1920x1080 monitor. And yet, most users would consider 24” 1920x1080 to be
large and “high definition,” or in other words, “good enough.”

That’s the problem, though. As long as users continue to conceive of the
desktop in such constrained ways, it seems like a dead-end. You only need so
much CPU and GPU horsepower to display 2D Office documents at such a low
resolution. There was a great picture CNet had in one of their reports (and I
grabbed a copy at my blog [1]) showing a user holding and using a tablet while
sitting at a desktop PC.

In the photo, the PC has two small monitors and is probably considered good
enough to get work done. But the user finds the tablet more productive. This
user should be excused for the seemingly inefficient use of resources because
it’s probably not actually inefficient at all. The tablet is probably easier
to read (crisper, brighter display) and faster, or at least feels faster than
the PC simply because it’s newer.

Had desktop displays kept improving over the past decade, the PC would need to
be upgraded. Its CPU, GPU, memory, and most likely disk capacity and network
would need to be beefier to drive a large, high-resolution display. So again,
what are the PC manufacturers doing to disrupt users’ notions of “good
enough,” to make users WANT to upgrade their desktops? I say the display is
the key.

[1] <http://tiamat.tsotech.com/i-see-the-problem>

~~~
lifesavers
In general I agree with your ideas, but I think there's a missing component:
general use case. Most people only need their computers for email and web
browsing. We can further simplify that and just call it - communicating.

Sure, a developer or designer salivates at the idea of more screen real
estate, but that's because there's a practical use for it. PC Manufacturers
follow, not decide, the needs of their users.

I love my screen real estate because I actually need it. If I just browsed
Facebook, wrote a word document, and maybe planned out my finances with a
spreadsheet - I'd have a hard time justifying some giant monolith of a
monitor. I think this is a largely overlooked factor in the success of the
mobile market.

I don't know if everyone's forgotten this already, but when the iPad first came
out, most people were thinking: what in the actual eff is Apple thinking? Sure,
we all knew it'd sell because, well, Apple is Apple. But if I recall correctly,
most people were scratching their heads asking, "so, it's a big iPhone, right?"

And guess what? It is just a big iPhone! And it succeeded NOT because it was
the next "cool" thing, but because it was designed to do what most people
needed their computer to do, namely, send pictures of their grandkids to each
other.

------
martinced
PC sales, even with the release of Windows 8, did drop 21% compared to one
year ago...

OK. But would it even be remotely possible to consider that, year-over-year,
China is in recession, a lot of European countries are in recession, the U.S.
is not in a great position (e.g. the manufacturing sector is firing people left
and right), Japan is in a terrible situation, etc., and that this _may_ be
playing a role in the number of PCs sold?

Year-over-year sales of cars in France have gone down by 20%.

When people enter a recession they tend to try to save money, and cars and PCs
are expensive things. Smartphones not so much (especially with all the "plans"
luring in people who cannot count).

I think that smartphones and tablets did play a role in the "minus 21%" that
TFA mentions, but I'm also certain that the worldwide recession is playing a
role too. People don't spring for what they see as "expensive" that easily.

With a "$100 + onerous five-year unlimited plan" price tag, people don't pay
that much attention, and so smartphones tend to be more "recession-proof".

