
x86: Approaching 40 and still going strong - anonymfus
https://newsroom.intel.com/editorials/x86-approaching-40-still-going-strong/
======
delsarto
Intel/x86 seems to have missed pretty much every major trend but somehow
manages to keep up.

At the time, Intel seemed to have pinned the 64-bit future on Itanium, which,
on paper, is a much better architecture. Of course instruction parallelism
should be in the hands of the compiler -- it has so much richer information
from the source-code so it can do a better job. The MMU design was very clever
(self-mapped linear page tables, protection keys, TLB sharing, etc). Lots of
cache, minimal core. Exposing register rotation and really interesting tracing
tools. But AMD released x86-64, Intel had to copy it, and the death warrant on
Itanium was signed, so we never really got to explore those cool features. Just
now we're getting things like more levels of page-tables and protection-key
type things on x86.

Intel missed virtualisation not only in x86 (understandable given legacy) but
on Itanium too, where arguably they should have thought about it during green-
field development (it was closer but still had a few instructions that didn't
trap properly for a hypervisor to work). The first versions of VMware -- doing
binary translation of x86 code on the fly -- were really quite amazing at the time.
Virtualisation happened despite x86, not in any way because of it. And you can
pretty much trace the entire "cloud" back to that. VMX eventually got bolted
on.

UEFI is ... interesting. It's better than a BIOS, but would anyone hold that
up as a paragon of innovation?

I'm not so familiar with the SIMD bits of the chips, but I wouldn't be
surprised if there are similar stories in that area of development.

Low power seems lost to ARM and friends. Now, with the "cloud", low power is
as much a problem in the data centre as your phone. So it will be interesting
to see how that plays out (and it's starting to).

x86's persistence is certainly remarkable, that's for sure

~~~
jackyinger
WRT Itanium, and all VLIW processors for that matter, you have to compile for
the microarchitecture of the processor to take full advantage of it. That
sinks the x86 binary portability that we've all come to know and love, and
with it the fundamental ease that ubiquity brings to working with x86 as an
end user or as a software dev who doesn't really care about processor
internals.

It sure would be interesting to see something else challenge x86, but since
ARM/Power/RISC-V are all within the same architectural tradition I doubt they
will provide it. SIMD as used by GPUs is useful only when you have very
structured parallelism like arithmetic on large data sets.

I think if we're going to see a challenger worth noting they'll look like
they're crazy with the hard uphill battle and novel tech they will have to
produce.

~~~
microcolonel
If you're looking for innovation on vector operations (i.e. not packed SIMD or
GPU-style SPMD), then RISC-V's vector extension (especially if you consider
extensions to it for graphics or scientific compute or machine learning)
should seem very promising. It's not technically groundbreaking (Cray did it
first), but I think it has more potential than the endless treadmill of wider
packed SIMD instructions (which dissipate an immense amount of power on
intel's hardware), and it is amenable to more general tasks than SPMD is.

~~~
CalChris
Well, that was the subject of Krste's thesis at Berkeley:

 _Vector Microprocessors_

[https://people.eecs.berkeley.edu/~krste/thesis.html](https://people.eecs.berkeley.edu/~krste/thesis.html)

~~~
microcolonel
I wasn't aware that it was his thesis, thanks for pointing that out. There's
also all the work on the Hwacha RoCC (which I knew prof. Asanović was
involved with). I think it all looks very promising and makes me eager to see
what comes out of SiFive with Yunsup and Krste at or near the helm.

------
ohazi
Wow, what the fuck happened with their patent applications in 2014?

It's difficult to take this threatening language seriously when it's
exceedingly obvious that Intel is very suddenly (and very rightfully)
terrified of losing their monopoly position. It's easier than ever to
recompile and switch to ARM, and people are tired of paying the Intel tax.

They should have seen this coming well before Otellini retired. Lawsuits
aren't going to save them here, and Rodgers comes out looking like an asshole
for trying.

~~~
pjmlp
Which is one reason for the uptake of bytecode for executables on mainstream
devices, following in mainframes' footsteps.

Even LLVM bitcode might eventually be cleaned up to make it properly
architecture independent. There was a talk about it at a past LLVM days
conference.

~~~
Laaas
It would also have to be stabilised, because AFAIK it changes a lot.

~~~
pjmlp
Agreed, but in scenarios like Android and iOS/OS X, they control the version,
so they could make it stable for their purposes, or automatically update the
bitcode for the apps in the store.

~~~
Laaas
That's a good idea, but then a tool for that would also have to be built.

------
glhaynes
Not to be pedantic, but…

>Launched on June 8, 1978, the 8086 powered the first IBM Personal Computer
and literally changed the world.

It was the 8088 that powered the IBM PC. (Introduced on July 1, 1979,
according to Wikipedia.) Very similar but with an 8-bit bus instead of the
8086’s 16-bit.

~~~
davidf18
The 8088 was designed in Israel so interestingly the IBM PC CPU even back then
was "Israeli."
[https://en.m.wikipedia.org/wiki/Intel_8088](https://en.m.wikipedia.org/wiki/Intel_8088)

~~~
mixmastamyk
I read something recently about the first PC (not chip) being designed in Boca
Raton, FL. The reason was to get space from the suits in NY to do their thing.

Too bad I can't remember the source.

------
paulsutter
So Intel is now resorting to threats -- is it a dying gasp for x86? Wow, we'll
all need to recompile. Thank goodness for Linux.

Article is written by the Intel General Counsel...

> Only time will tell if new attempts to emulate Intel’s x86 ISA will meet a
> different fate. Intel welcomes lawful competition... However, we do not
> welcome unlawful infringement of our patents, and we fully expect other
> companies to continue to respect Intel’s intellectual property rights.

~~~
wolfgke
> Dying gasp for x86? Wow we'll all need to recompile.

ARM uses a weak memory model; x86 has a strong one. Also, code containing
optimized SIMD (e.g. via compiler intrinsics) is a lot more complicated to
port.

~~~
api
I've ported a lot of code back and forth and I've never encountered any
difficulties related to weak vs. strong memory model.

It might matter for lock-free data structures and other potentially hardware
sensitive code, but it doesn't matter for 99% of code.

~~~
nickpsecurity
Not to mention there are safe concurrency schemes one can use that are higher
abstraction than x86 hardware.

------
tyingq
So who are they talking about? It sounds like a warning, and it focuses on
emulation-style approaches. They compared it to Transmeta.

Edit: Seems odd to use Transmeta as a comparison if they are talking about
OS-level software emulation. Wasn't Transmeta's emulation all built into the
chip, with no OS-level support/software needed?

~~~
bhouston
My bet is it's Apple, and they are looking at ARM emulation of x86 on new Mac
laptops slated for next year.

Oops, they are probably referring to Microsoft/Qualcomm's announced project
here: [https://www.extremetech.com/computing/249292-microsoft-declares-windows-10-arm-devices-will-run-x86-code-near-native-speed](https://www.extremetech.com/computing/249292-microsoft-declares-windows-10-arm-devices-will-run-x86-code-near-native-speed)

~~~
ams6110
Why would they need to emulate x86? They can just compile for ARM.

~~~
daxelrod
So that users can run all of their existing binaries.

Both Mac processor architecture changes so far have included emulation or
binary translation layers for this purpose.

68k -> PPC:
[https://en.wikipedia.org/wiki/Mac_68k_emulator](https://en.wikipedia.org/wiki/Mac_68k_emulator)

PPC -> x86:
[https://en.wikipedia.org/wiki/Rosetta_(software)](https://en.wikipedia.org/wiki/Rosetta_\(software\))

~~~
ams6110
With NeXT they had "fat" binaries to handle different processors. No
emulating.

~~~
rlanday
That does not solve the problem of existing binaries.

------
acd10j
This seems directed at Microsoft's attempt to emulate x86 on ARM/Qualcomm
chips: [https://www.extremetech.com/computing/249292-microsoft-declares-windows-10-arm-devices-will-run-x86-code-near-native-speed](https://www.extremetech.com/computing/249292-microsoft-declares-windows-10-arm-devices-will-run-x86-code-near-native-speed)

------
nickpsecurity
It's called First Mover advantage, monopolistic practices, lock-in,
monopolistic profits, and massive investment into custom design on shrinking
processes. The ISA itself isn't special. After the marketing win, they
basically sustained it with volume and monopoly laws. That their big
competition also played it a bit foolish helped, too.

~~~
emodendroket
Almost none of the computer standards people actually use are ideal, so that
hardly makes x86 unique.

~~~
nickpsecurity
The article is about x86. That's why I'm focusing on it. What is somewhat
unique is how hard x86 is to run at the speeds and costs markets prefer. POWER
is the only other one in the same category, but x86 dominates desktops, with
servers easier to change over. Intel effectively has one competitor in that
space, one that's barely making money or losing it depending on the year.

------
gigatexal
I think x86 is to processors what Facebook is to social networks and to any
competitor seeking to eat Facebook's lunch: once you hit critical mass, you
win. Consider all the applications, both high- and low-level, that have had
effort put into optimizing them down to the assembly level -- more often than
not, the processor family targeted was x86. So whilst I'd love ARM to compete,
or even the OpenPOWER initiative to take off, I doubt it, because the cost of
switching all the legacy software won't pan out in various cost-benefit
analyses.

~~~
pcwalton
Intel sold 300-400 million chips in 2015. That year the number of ARM CPUs
sold was _15 billion_.

I find it hard to frame that as "ARM can't compete with Intel", especially
since PCs aren't a lucrative market anymore. You want to be in mobile, IoT,
etc., and Intel threw in the towel there a while ago.

~~~
gigatexal
I don't think anyone was saying ARM can't compete with Intel at all. ARM
can't, currently, compete with Intel in the desktop and server space, in the
same verticals that Intel does. Show me an 18-core general-purpose server-
grade CPU from ARM, or a 4.4GHz 8-thread gaming CPU for desktops. My hope is
that some vendor (possibly Apple) takes the ARM IP and makes a wide-issue
general-purpose desktop ARM CPU.

And until that happens my original point was: business software, operating
systems, etc., have been written and / or optimized for the x86 platform so
much so and said software is so pervasive that leaving x86 will take a
revolution or a big player to push it, again maybe Apple.

~~~
pcwalton
ARM can win even if it doesn't have the capability to penetrate the PC market
right now. As PC margins shrink more and more, Intel's situation looks more
and more precarious. Gaming CPUs are only a tiny sliver of the market, and one
that is not nearly as profitable as mobile.

This is great for consumers, by the way: we're long overdue for healthy
competition in the CPU market. ARM is not an open architecture, so it's not
ideal, but at least the various licensees compete with each other, which is an
improvement over what we have with Intel.

> And until that happens my original point was: business software, operating
> systems, etc., have been written and / or optimized for the x86 platform so
> much so and said software is so pervasive that leaving x86 will take a
> revolution or a big player to push, again maybe Apple.

The operating systems that matter are all ported to ARM. And perhaps the most
important consumer OS today--Android--has virtually no x86 penetration. In
2017, looking at the overall market, _ARM_ has the advantage in terms of
software compatibility, not x86.

~~~
pjmlp
I believe the move to bytecode based formats in mobile OSes and even on OS X
and UWP will make this more relevant.

Using such a mainframe model makes the actual processor relevant only to OS
vendors or to developers using "down to the metal" toolchains.

Which is something that obviously weakens Intel's position.

~~~
gigatexal
Oh yeah, I had forgotten that point. Apple developers ship bytecode to the App
Store, right? And then when it downloads, it is specific to the phone or
device, I think. Yeah, that would make things a lot easier to move. So if
Apple ever made another arch change like they did from PowerPC to x86, this
could forgo the need for the Rosetta translation stuff.

~~~
pjmlp
It is still optional on the phone and tv, but it is the official way on the
watch.

------
wolfgke
> Intel welcomes lawful competition

As long as the law is in their favor...

~~~
BuckRogers
I noticed they didn't mention AMD in there and the billion dollar payout.

~~~
wolfgke
Which Intel still has not paid (cf. for example
[https://www.pcper.com/news/General-Tech/Intel-still-hasnt-paid-AMD-12-billion-USD-anti-trust-fine](https://www.pcper.com/news/General-Tech/Intel-still-hasnt-paid-AMD-12-billion-USD-anti-trust-fine)).
An interesting version of "lawful competition", I think...

------
mark-r
Interesting that of all the x86 advances made over the years, they didn't
mention the most important one - the move to 64 bits. Of course that was made
by AMD so they can't take any credit for that.

The whole thing is a fluff piece that is best ignored.

------
ksec
If we were to create a clean x86-64 without all the old baggage, how much
complexity could we get rid of? Surely with many CPUs in server/non-PC, closed
environments, recompiling should be much easier.

~~~
pcwalton
That'd just be RISC-V, essentially.

------
deepnotderp
"Intel’s dynamic x86 ISA, and Intel will maintain its vigilance to protect its
innovations and investments."

You do realize that your ISA is literally one of the worst things about your
processors, right Intel?

~~~
MBCook
It's also the best. Without it, their processors would just be also-rans. It's
the ability to run all that software that already exists that makes them more
valuable than just switching to an arm chip or something else.

~~~
andreiw
But that's hardly really true anymore.

A 64-bit x86 computer with UEFI and without a CSM, for example, has no way to
boot DOS or even a 32-bit OS (which could then use v86) to support the really
old legacy. You'd have to resort to using a hypervisor, but considering the
practical difference in performance, you could do the dumbest possible
emulation and still run those 16-bit tasks just fine.

But that's just details. For the massive scale-out farms that power today's
world with solutions based on OSS components, running proprietary legacy code
just doesn't matter, so the backwards compatibility is irrelevant.

~~~
nickpsecurity
"But that's hardly really true anymore."

Like hell. Just tell all Windows users they should just replace all their
software on Windows/x86 with an ARM or RISC-V box. The software will not work
or will run with crap performance. They'll ditch the alternative for Intel/AMD
x86. The End.

The only time they move is for something new that doesn't depend on their
legacy software. Also, for stuff where they _can_ transition unlike many
enterprises locked into Windows or other x86 tech.

~~~
andreiw
Thing is, that's a slice of computing that gets progressively smaller, and
both the upcoming Win10-on-ARM64 laptop devices (with the BT for x86) and
E5-competitive servers with Windows will take care of this enterprise segment.

~~~
nickpsecurity
"Thing is, that's slice of computing that gets progressively smaller"

Or it mostly stays the same for those locked in with the current rate being
billions a year for the companies that locked them in. Hard to tell how long
it could be before they can get off. The IBM/COBOL crowd is still locked in
after 30-40 years.

~~~
andreiw
Of course, it will stay locked for a few. Is the IBM/COBOL crowd
representative of today's compute resources? Not really. The same could be
said about OS/2-powered ATMs and point-of-sale terminals. The fact of their
continued existence doesn't prove that OS/2 is a widely deployed and growing
technology.

But overall, the world has changed quite a bit. We're suddenly not in a world
ruled entirely by DB2, MSSQL and Oracle, running on AIX, Windows and Solaris.
The software stack has opened up, and is no longer bound to any specific
platform. The few big software names don't get to decide what kit runs them
all... we're approaching a world where ISA won't matter. You'll buy hardware
because it fits your performance and TCO targets, not because you're locked
in.

What sucks for Intel is that they've failed to identify mobile devices as the
"next PC", in the sense of the disruptive potential...the PC killed boutique
workstations and servers and it killed them from the bottom. Along with IoT,
that's two huge opportunities missed. Itanium was a pincer attempt, but no one
targets the top and succeeds... you have to start at the bottom (this is btw
why Power needs to scale and price down if it hopes to grow beyond its niche).

~~~
nickpsecurity
" Is the IBM/COBOL crowd representative of today's compute resources? Not
really. "

They might be in terms of getting locked-in to a proprietary vendor using
proprietary language, extensions, libraries, and/or OS's that are hard to
impossible to get off of. That's exactly what Wintel has been doing with
enterprise software. Microsoft is still pulling in billions despite people
saying other stuff would kill them for some time now. It's a combination of
their market hold, lock-in, and patent suits. They're pulling $1+ billion from
Android with patents despite not contributing jack to it. Oracle and SAP are
still bringing in billions due to market hold and lock-in. This is a steady
thing.

"What sucks for Intel is that they've failed to identify mobile devices as the
"next PC", in the sense of the disruptive potential...the PC killed boutique
workstations and servers and it killed them from the bottom."

That's true. That it didn't need compatibility with Intel is exactly how it
killed the need for their ISA in those sectors.

"you have to start at the bottom (this is btw why Power needs to scale and
price down if it hopes to grow beyond its niche)."

I agree. Great as it is, it's way too expensive. The POWER/PPC model was doing
a lot better with Apple on it selling them at reasonable price. The ecosystem
was better anyway with all kinds of software made for it. It had a better ROI
for buyers. They need to do that again for POWER(number here) even scaling
down the capabilities as the price goes down if necessary. I'd say their
accelerator interconnect that can bring in stuff like FPGA's would be a
winning differentiator but Intel outsmarted IBM again with the Altera
acquisition. I'm expecting great things in offloading out of that if they
haven't shown up already.

------
andreiw
Man, whoever wrote this is tone-deaf...does Intel even understand, that
barriers to emulating x86 might have worked in 90s and the aughts, but that it
is completely irrelevant now? Does Intel understand, that considering the
penetration of OSS in the cloud and enterprise data centers, the ISA just
doesn't matter anymore? They have a competitive CPU implementation, but mostly
in spite of the front-facing instruction set, certainly not because of...

------
vanjoe
Reading the history of fixed-width SIMD is agonizing: every generation trying
to fix the previous one's mistakes. You'd think they'd learn eventually.

------
masswerk
Why not include the Intel 8008/Datapoint 2200, which introduced the basic
instruction set and architecture? This is 57 this year!

(The first Datapoint 2200 prototypes were shipped in April 1970, official
introduction in Nov 1970, Pat. 224,415 filed Nov. 27, 1970. The Texas
Instruments microchip implementation was announced in June 1971 and delivered
during the summer. The Intel version was filed for patent in Aug 1971 and the
chip presented to Datapoint/CTC in fall 1971. Intel's first assignments to the
1201 chip, which eventually became the 8008, were apparently in spring 1970
with the project temporary stopped in July. [Source: Wood, Lamont, Datapoint;
Hugo House Publishers, Ltd.; Austin, TX, 2012] — So for the architecture in
general it's 1970, for the microprocessor implementation 1971.)

~~~
dredmorbius
That's 47 years ago, not 57.

F00F bug? ;-)

~~~
wolfgke
> That's 47 years ago, not 57.

> F00F bug? ;-)

For those who are out of the loop:
[https://en.wikipedia.org/wiki/Pentium_F00F_bug](https://en.wikipedia.org/wiki/Pentium_F00F_bug)

Though since this subthread is about early Intel CPUs and 0x0F: there is an
incompatibility between the 8086/8088 and newer x86 processors: on the Intel
8086/8088 the opcode 0x0F was "pop cs". From the 80186/80188 on, 0x0F is
instead an opcode-expansion prefix (cf.
[https://stackoverflow.com/a/12264515/497193](https://stackoverflow.com/a/12264515/497193)).

~~~
dredmorbius
HN delivers!

Thanks ;-)

------
faragon
"Intel carefully protects its x86 innovations, and we do not widely license
others to use them."

~~~
gkya
Funny this article contains and champions almost all tech anti-buzzwords.
Patents, proprietary, etc...

------
drakenot
Is it a patent violation to emulate x86?

~~~
mark-r
Any patents for x86 itself have surely expired. Why do you think they
emphasized their follow-on technologies? I wonder if SSE is still protected,
but they've still got SSE2, SSE3, and whatever follows that (I haven't kept
track). They're making a not-so-transparent threat that any attempt to emulate
anything that still has patent protection is liable to be prosecuted.

It's the old FUD technique, make it appear that there's some legal doubt about
new technologies to slow the uptake.

~~~
nickpsecurity
"Any patents for x86 itself have surely expired. Why do you think they
emphasized their follow-on technologies?"

It's basically equivalent if there are no alternatives in the
microarchitectural details that achieve the same thing without violating a
patent. There's a crazy number of patents on that stuff. So whoever builds it
needs to sell enough to pay the lawyer fees on top of the catch-up fees.

------
erikj
The death of x86 is long overdue.

~~~
5ilv3r
Without microsoft, they have nothing. I for one welcome our new RISC
overlords.

~~~
emodendroket
I don't see how you figure. Macs and various consoles are also x86. It doesn't
seem like they are looking at a lot of danger except in the mobile space.

~~~
cwyers
The consoles were PowerPC last generation (or, well, Xbox was, PS3 was the
Cell, which is Power but also... weird). They're not especially high-end x86
chips. Games have to be ported to them regardless, so it's not like
portability matters. I would bet on them continuing to be x86 in the future,
but if there's a viable competitor to x86 in that price/performance/heat
profile range, there's nothing to keep them from switching.

~~~
MBCook
Which is interesting, because we are at the point where console generations
may be about to disappear. Nintendo, Sony, and Microsoft all seem interested
in making it so their future consoles can run the games from their current
generation without modification or custom emulation.

In other words, they're about to have what PC gaming has had for 30 years.

And that makes them LESS likely to switch to a new chip instruction set.

------
jve
More like boasting about patents than celebrating a birthday.

------
vectorEQ
try parsing x86 opcodes and still say it's strong. #debuggerfrustrations :D

