Not really. 15 years ago, AMD was outperforming Intel at a lower price.
Somewhat hilariously, Intel released the "Coppermine" series of CPUs in 1999, but those used aluminum interconnects :D and were famously unstable above 1 GHz.
AMD was also first to 1GHz.
Intel even tried to dispute that fact, but it's hard to convince PC hardware journalists that you're first when they're already testing 1 GHz Athlons on their own desks while reading an Intel press release about a closed-door demonstration.
Do you have a source for this claim? Not saying it's wrong, I'd just appreciate seeing a real benchmark/power usage test.
"the new iPad Pro holds its own against the MacBook Pro in single-core performance — around 3,900 on the Geekbench 4 benchmark for the iPad Pro vs. around 4,200–4,400 for the various configurations of 13- and 15-inch MacBook Pros.1 Multi-core performance has effectively doubled from the first generation of iPad Pro. That sort of year-over-year increase just doesn’t happen anymore, but here we are. The new iPad Pro gets a multi-core Geekbench 4 score of around 9200; the brand-new Core M3-based MacBook gets a multi-core score of around 6800."
The battery in the 12.9" iPad Pro is 38.8 Wh, while the 12" MacBook is 39.71 Wh, so about the same. Apple quotes about 10 hours battery life when doing lighter tasks like web browsing.
So it must be pretty close if not surpassing it now. Depends how much credibility you put on Geekbench.
If there's one lawyer in town, they drive a Chevrolet. If there are two lawyers in town, they both drive Cadillacs.
My guess is this is all just negotiation from Microsoft's point of view and they are just trying to get Intel to license the ability to emulate x86.
Another possibility is that this is a way to get Intel to invest more resources (even at a loss) into competing with ARM.
> copying someone's technology and emulating it without paying a license fee
You are contradicting yourself.
Intel built an instruction set for hardware. Emulating it on an ARM would completely negate the usefulness of it. There is no copying, since the emulator is built on software. The patents concern hardware design, not software.
The whole case should be laughable. It shouldn't even be thinkable to take something like this to court. But I'm sure some lawyers are going to make a lot of money.
If they're anything like the Itanium ISA patents, then Intel owns the rights to the instructions themselves and their meanings. Emulation would be infringing.
How can you even patent something like that? It goes beyond software patents, as it appears to me. But again, I'm very ignorant of this.
For example, here's Intel's (expired) patent on the CPUID instruction:
It claims the particular encoding of the return data. Good luck implementing CPUID in an emulator without infringing.
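To make the problem concrete, here's a sketch in Python (my own illustration, not text from the patent) of the leaf-0 return-data encoding any emulator has to reproduce exactly: the 12-byte vendor string split across EBX, EDX, ECX in that specific order. The max-leaf value in EAX is made up for the example.

```python
import struct

def emulated_cpuid_leaf0(vendor="GenuineIntel"):
    """Return (EAX, EBX, ECX, EDX) for CPUID leaf 0 the way real CPUs do:
    the 12-byte vendor string is split across EBX, EDX, ECX in that order.
    The max-leaf value in EAX is illustrative; it varies by CPU."""
    raw = vendor.encode("ascii")
    ebx, edx, ecx = (struct.unpack("<I", raw[i:i + 4])[0] for i in (0, 4, 8))
    eax = 0x16  # highest supported standard leaf (made up for this sketch)
    return eax, ebx, ecx, edx

eax, ebx, ecx, edx = emulated_cpuid_leaf0()
# Reassembling EBX, EDX, ECX in that order recovers the vendor string:
print(struct.pack("<III", ebx, edx, ecx).decode())  # GenuineIntel
```

Any compatible emulator has to produce this exact register layout, because real software (including the guest OS) parses it byte for byte.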
There is, however, the Doctrine of Equivalents. This says that if something uses different elements / components from what's in the actual claims, it could still be argued to infringe the patent if those elements perform a role equivalent to the elements in the claims. But I'm not quite sure how far that could be stretched.
"A computer system coupled to receive and respond to computer instructions from a program routine comprising"
In later patents, they got even more clever and just say a "method" rather than a "processor", and explicitly define registers as potentially being emulated in the description (search AVX2 patents if you're curious)
The USPTO certainly seems to think an ISA is patentable, and I haven't seen a court disagree yet.
Yes, any software patent is possible by describing it as running on a processor. See for example , which has the very common claim prefix of "A machine readable storage medium storing a computer program..." Alice doesn't invalidate these patents unless, by removing that text, the remainder of the claim is unpatentable.
Because software patents are still legal, there's no need to attempt to describe them as running on a custom hardware device - you just specify them as software. Specifying custom hardware would unnecessarily reduce the scope of your claim.
> To be fair Intel has done a lot of work to make the x86 as great as possible. Patent lawsuits are awful. I'm not sure just copying someone's technology and emulating it without paying a license fee is all that great either.
So now, if the same algorithm is implemented in software to emulate X86 platform on ARM, how is that not infringing on the patent?
maybe that's a difficult problem for hardware to solve. for software, that's just how software works.
maple is solving differential equations and that may have at some point been difficult to write software for. if they have a patent for that, then so be it. I start a company that hires professors who are really good at solving differential equations, and sell the results. basically what maple yields, except produced in a different way. am i infringing on the patent?
patents patent technology. not results. you can't have a patent for "a rocket that flies to the moon" in the sense that now nobody else can build rockets flying to the moon. you can have a patent for a way to store liquid oxygen in tanks to make it yield the energy required to get a rocket to the moon. patenting concepts of things you want to do is at least morally wrong.
a hardware patent should not be capable of preventing someone from writing software that does the same thing.
it's like patenting a drug that cures cancer and then using that patent to prevent oncologists from curing cancer by applying chemotherapy.
The first nine were, "you think you can do this simple thing, this reasonable thing, or have any freedom whatsoever? SCREW YOU."
The tenth claimed it was a myth that copyright levied oppressive burdens on the consumer.
Lesson: With any type of intellectual property protection, you can usually just presume it works in the most totally disgusting way imaginable.
but laws should not contradict common sense.
The silicon-level implementation is another matter entirely, of course--but emulation has nothing to do with that. In fact, that's the definition of emulation--using a completely different implementation to offer a compatible interface.
2. The REX prefixes are a nightmare: most instructions have one and this tremendously bloats up the instruction stream size. For this reason, the i-cache efficiency is not good compared to actual compressed instruction sets such as Thumb-2 (not that Thumb-2 is wonderful either). Note that if you do extreme hand-optimization of binary size, you can get x86-64 down pretty far, but so few people do that that it doesn't matter in practice.
3. Two address code isn't necessarily a win, especially since it doubles the number of REX prefixes. In AArch64 "and x9,x10,x11" is 4 bytes; in x86-64 "mov r9,r10; and r9,r11" is 6 bytes (and clobbers the condition codes). There's a reason compilers love to emit the three-address LEA...
4. Memory operands are nice, though I think the squeeze on instruction space makes them not worth it in practice. I'd rather use that opcode space for more registers.
5. Immediate encoding on x86-64 is crazy inefficient. "mov rax,1" is a whopping 7 bytes.
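The byte counts in the list above can be checked against hand-assembled machine code. These are my own assemblies following the standard x86-64 and AArch64 encodings; treat the exact byte values as illustrative.

```python
# Hand-assembled machine code for the instructions discussed above.
encodings = {
    # x86-64: every 64-bit register-to-register op here needs a REX prefix
    "mov r9, r10": bytes([0x4D, 0x89, 0xD1]),              # 3 bytes
    "and r9, r11": bytes([0x4D, 0x21, 0xD9]),              # 3 bytes
    "mov rax, 1":  bytes([0x48, 0xC7, 0xC0, 1, 0, 0, 0]),  # 7 bytes
    "mov eax, 1":  bytes([0xB8, 1, 0, 0, 0]),              # what compilers emit: 5 bytes
    # AArch64: fixed-width 4-byte instructions, three-address form
    "and x9, x10, x11": bytes([0x49, 0x01, 0x0B, 0x8A]),   # 4 bytes
}

for asm, code in encodings.items():
    print(f"{asm:17} {len(code)} bytes")

# The two-address sequence from point 3:
two_addr = len(encodings["mov r9, r10"]) + len(encodings["and r9, r11"])
print("x86-64 mov+and:", two_addr, "bytes")  # 6, vs 4 for the AArch64 and
```

Note that compilers dodge the 7-byte `mov rax,1` by emitting the 5-byte `mov eax,1`, relying on 32-bit writes zero-extending into the full register.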
There's no question that x86-64 could be improved on in terms of code density.
It's still one byte longer than the equivalent AArch64 instruction, though.
They listed these features 'strong memory ordering', '(no) branch delay slots', '(no) stack windows', 'good i-cache efficiency through the use of two-address code and memory operands'.
Every single one of those is a property of the ISA - the instruction set, its semantics and encoding - not the implementation.
Which do you think isn't part of the ISA?
i-cache efficiency: Again, implementation-specific. Efficiency is entirely a result of the implementation, isn't it?
no branch delay slot: Yes, this is a part of the ISA. My point though was that it is uncommon enough that I wouldn't call it a great virtue of x86 per se.
Ars Technica, as always, has the details of how that has evolved over the years. Can't remember when the article in question was written, though.
Huh? I thought the whole point of a RISC ISA is to not be bloated.
For example, Alpha AXP, one of the least bloated ISAs, did not provide non-word-aligned loads and stores; it provided word-aligned loads and stores plus a way to extract and/or combine bytes and subwords from/to the whole word. And it still ended up gaining separate instructions for loading and storing every subword type, for the reason I stated above: to make programs run faster and to make programs smaller.
The same is true for every RISC ISA I studied.
For example, MIPS includes an instruction to store a floating point number in the reg1+reg2*arg_size address. This can be split into two RISC instructions and fused at runtime in hardware, but still here it is!
And yet vendor-lockin is not good for competition. Is the increased incentive for research investment due to patents worth more to humanity than the resulting vendor-lockin that makes it harder to switch to AMD?
There are so many strategies and tactics and battle maneuvers here that it's difficult to say, in just one simple Hacker News posting, what's going to happen.
Vice versa you can cause your opponents so much trouble with lawsuits that - even though you don't win them - you are better off than them afterwards.
In this case, it appears that Cyrix came out the other side of the battle in a better position thanks to their win. It would seem it was their later acquisition by National Semiconductor that eventually snuffed them out.
It would be different if the lawsuit is so expensive that one party could go bankrupt due to it.
This isn't about right and wrong. It's about money. Who has to pay how much to the other party? Both parties think they are better off if they fight. Both parties think they could win money in the end. It's a gamble in the courtroom.
Basically, there are two approaches the plaintiff might take here. The simplest is to cite the doctrine of equivalents. This is basically the notion that if you do the same thing in the same way for the same purpose, then it's the same process, even though you are using digital instructions instead of logic gates. The legal theory here is pretty well settled. The problem is that you'd need to justify that digital instructions are obviously equivalent to logic gates, and a skilled professional would have equated them at the time of the patent's filing.
The other approach is to argue that an emulator actually is a processor, and therefore fits the literal claims of the patent. The explanation for this is pretty well-established: it's literally the Church-Turing Thesis. However, the viability of this argument depends on the language of the patent claims. Also, it's hard enough to explain the C-T Thesis to CS students. My undergrad had an entire 1-credit-equivalent course that basically just covered this and the decidability problem. Explaining it to a judge, who (while likely highly intelligent) probably has no CS background, over the course of litigation is likely to be really hard.
Now, Intel certainly has enough resources to do both of these things (and they may also have precedent to cite, that didn't exist back then or that wasn't relevant to that case). Don't take this as an opinion on any possible result, it's just information such as I remember it.
But this is logic bordering on philosophy, which isn't exactly what courts love to argue about. They look at intents and damages.
My guess is that a simple 'I sell X using my patents, they also sell X but are not paying" is vastly more likely to succeed. "But, but, Church-Turing" will just piss them off.
2 of the more well-known dev teams.
It will probably stay decently high for a while too while they hammer out compatibility for other Wii U games.
See, it's still worthwhile for me to pick up an old PS4 and amass a decent library while also still being able to buy new titles, but the Wii U was an absolute dud and there is 0% incentive to buy one now that it's already been axed. Emulation will be the only saving grace for its stellar exclusive titles. And since the Switch has its own share of problems handling BotW, it's really a no-brainer to invest a little in CEMU's future. The CEMU devs are absolute demons, on par with Dolphin's team.
As for RPCS3, the PS3 has a big enough library and is cheap enough to make it worth just buying one until the devs can catch up. AFAIK there isn't a single fully working commercial game on it yet.
The only thing really broken, besides little graphical things and certain ambient shaders, is that you can't zoom in on the photographs of the memories. Not really an issue when you can just put your face closer to the TV. Oh, and you can't take custom photographs yet. And, of course, FMV only works when you download a third-party plugin, because h.264 decoding isn't high on the CEMU devs' list right now.
But in general it runs almost flawlessly, as far as being able to complete the game is concerned. Some people report predictable crashes after encountering Ganon or any of the Blight creatures, but I have not had this problem. Just occasional crashes here and there.
Sony sued the shit out of them and lost, then bought VGS from Connectix. Sony bankrupted Bleem with legal costs and later hired Bleem's team.
Unless the case is presided by Judge Alsup, who learnt to write Java programs during Oracle v. Google.
They no doubt have been filing additional patents over the years. But I'm sure MS and Qualcomm have plenty of their own patents to bargain with.
Also their warning could backfire if it gives Microsoft one more reason to finally walk away from x86 compatibility... not that this is likely to happen anytime soon.
That's under the old law. Nowadays, for patents that issue from original applications filed on or after June 8, 1995, it's 20 years from the earliest filing date upon which priority is claimed (possibly extended to account for delays in the USPTO). 
AFAIK, most foreign countries follow the same rule — which is significant, because when one big company sues another for patent infringement, it will usually file parallel lawsuits in every country where (A) the plaintiff owns a patent and (B) the defendant sells the infringing product.
+ Intel released the 8086 on June 8, 1978. Tech companies typically file U.S. patent applications just before the first public disclosure of the new technology, so as to preserve any available rights under non-U.S. patent law. So let's assume that Intel filed one of its 8086 patent applications on June 7, 1978.
+ Let's also assume that it took exactly two years for that patent application to be issued as a patent, on June 7, 1980. Under the transition provisions of the "new" law, that patent would have expired on the later of (i) the issue date plus 17 years, that is, June 7, 1997; or (ii) the earliest filing date plus 20 years, that is, June 7, 1998.
+ Right now it's 2017.
+ 2017 > 1998. In fact it has been 6,944 or 6,945 days since June 7, 1998, depending on (i) your time zone and (ii) how you do your math.
+ Moot facts are moot 
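The "later of" calculation from the transition rule, run on the assumed dates above (a quick sanity check, not legal advice):

```python
from datetime import date

filing = date(1978, 6, 7)  # assumed filing date from above
issue = date(1980, 6, 7)   # assumed issue date from above

expire_17_from_issue = issue.replace(year=issue.year + 17)     # 1997-06-07
expire_20_from_filing = filing.replace(year=filing.year + 20)  # 1998-06-07
expiration = max(expire_17_from_issue, expire_20_from_filing)  # later of the two
print(expiration)  # 1998-06-07
```

Either way the hypothetical 8086 patent died in the late 1990s, which is the point of the list above.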
I would argue that what they don't get is a de facto monopoly on the use of vector instructions for x86, even that specific encoding, because it is now an issue of compatibility and interoperability. The history of closing off an instruction set by patenting a select few instructions is atrocious. Millions of entities have expressed solutions to their problems in x86; having to pay a tax to Intel in perpetuity because of that is bullshit.
Emulation is an important part of walking away, no? Microsoft cares about backwards compatibility above anything else.
Yes and no. If you're buying a managed service - like Azure HDInsight, say - why do you care or even need to know what's under it? The volume buyers of CPUs now are the big cloud operators. If you're buying a tablet and consuming "apps", then why do you care about compatibility with old Windows desktop applications?
Because you operate some piece of enterprise software that was built in the 90s and is working just fine up until now.
Average end-users (and to a certain degree cloud customers) aren't that picky about backwards compatibility, but enterprise customers certainly are.
> AMD made SSE2 a mandatory part of its 64-bit AMD64 extension, which means that virtually every chip that's been sold over the last decade or more will include SSE2 support. [...] That's a problem, because the SSE family is also new enough—the various SSE extensions were introduced between 1999 and 2007—that any patents covering it will still be in force.
AMD64 requires SSE2 which was introduced in 2001, right? So isn't it just 1 year until Microsoft can put in what's required for the AMD64 architecture?
"only 32-bit x86 support is being offered"
Scorched earth policy will likely not be defensible under fair use law. Reverse engineering for compatibility has a few precedents.
Intel had (and has) no issue with qemu or bochs emulating everything, as long as they were niche and/or promoting the Intel platform (and grudgingly accepting compatibles).
However a move to rid Microsoft's platform from Intel altogether without compromising compatibility is something worth fighting for.
I heard that ARM is rather similar in that aspect: emulators for development are a-ok, but trying to run ARM emulation on a consumer product with no ARM components inside will drive up the legal fees until some licensing agreement is set up.
Also that they cross license x86 with competitors.
Nvidia reportedly had issues getting the necessary paperwork: https://en.wikipedia.org/wiki/Project_Denver#History
I would love for a real lawyer to explain this tortured logic.
Intel is gonna get its shit pushed in one way or the other on this, and I'm gonna watch it happen with a big smile on my face.
Patents will not stop the march of progress.
I mean, Apple and Samsung had a billion dollar lawsuit while Samsung chips were still in iPhones. It's certainly precedented to sue a corporation you're actively doing business with.
QEMU emulates x86 chips, as do other emulators. I wonder how those are affected?
Edit: The legal term appears to be "the doctrine of laches"
However, the Supreme Court has recently (March) said that laches is no defense to patent infringement. 
"if WinARM can run Wintel software but still offer lower prices, better battery life, lower weight, or similar, Intel's dominance of the laptop space is no longer assured."
Peter. My man. I laughed. I cried.
For the millionth time, the ARM ISA does not magically confer any sort of performance or efficiency advantage, at least not that matters in the billion+ transistor SoC regime. (I will include some relevant links to ancient articles of mine about magical ARM performance elves later.) ARM processors are more power efficient because they do less work per unit time. Once they're as performant as x86, they'll be operating in roughly the same power envelope. (Spare the Geekbench scores... I can't even. I have ancient published rants about that, too).
Anyway, given that all of this is the case, it is preposterous to imagine that an ARM processor that's running emulated(!!!) x86 code will be at anything but a serious performance/watt disadvantage over a comparable x86 part.
This brings me to another point: Transmeta didn't die because of patents. Transmeta died because "let's run x86 in emulation" is not a long-term business plan, for anybody. It sucks. I have ancient published rants on this topic, too, but the nutshell is that when you run code in emulation, you have to take up a bunch of cache space and bus bandwidth with the translated code, and those two things are extremely important for performance. You just can't be translating code and then stashing it in valuable close-to-the-decoder memory and/or shuffling it around the memory hierarchy without taking a major hit.
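The mechanism is easy to sketch (a toy Python stand-in for a block-at-a-time binary translator; real systems like Transmeta's were vastly more sophisticated): every guest block gets a translated copy that has to be stored and shuffled around, competing with the program's own code and data for cache space and bandwidth.

```python
translation_cache = {}  # guest address -> translated host code (bytes)

def translate_block(guest_code):
    # Stand-in for real x86 -> host translation; the point is that the
    # output is a second copy of the code that must live somewhere.
    return b"HOST:" + guest_code

def run_block(guest_addr, guest_code):
    if guest_addr not in translation_cache:    # translation miss: slow path
        translation_cache[guest_addr] = translate_block(guest_code)
    return translation_cache[guest_addr]       # extra memory traffic on every use

host = run_block(0x1000, b"\x48\xc7\xc0\x01\x00\x00\x00")  # guest: mov rax,1
print(len(host), "translated bytes cached for a 7-byte guest block")
```

The translated copy is strictly overhead relative to native execution: a native CPU fetches and decodes the original bytes directly, with nothing extra occupying the memory hierarchy.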
So to recap, x86 emulation on ARM is not a threat to Intel's performance/watt proposition -- not even a little teensy bit in any universe where the present laws of physics apply. To think otherwise is to believe untrue and magical things about ISAs.
HOWEVER, x86-on-ARM via emulation could still be a threat to Intel in a world where, despite its disadvantages, it's still Good Enough to be worth doing for systems integrators who would love to stop propping up Intel's fat fat fat margins and jump over to the much cheaper (i.e. non-monopoly) ARM world. Microsoft, Apple, and pretty much anybody who's sick of paying Intel's markup on CPUs (by which I mean, they'd rather charge the same price and pocket that money themselves) would like to be able to say sayonara to x86.
The ARM smart device world looks mighty good, because there are a bunch of places where you can buy ARM parts, and prices (and ARM vendor margins) are low. It's paradise compared to x86 land, from a unit cost perspective.
Finally, I'll end on a political note. It has been an eternity since there was a real anti-trust action taken against a major industry. Look at the amount of consolidation across various industries that has gone totally uncontested in the past 20 years. In our present political environment, an anti-trust action over x86 lock-in just isn't a realistic possibility, no matter how egregious the situation gets.
So Intel is very much in a position to fight as dirty as they need to in order to prevent systems integrators from moving to ARM and using emulation as a bridge. I read this blog post of theirs in that light -- they're putting everyone on notice that the old days of antitrust fears are long gone (for airlines, pharma, telecom... everybody, really), so they're going to move to protect their business accordingly.
Edit: forgot the links. In previous comments on exactly this issue I've included multiple, but here's a good one and I'll leave it at that: https://arstechnica.com/business/2011/02/nvidia-30-and-the-r...
But let me address this specifically:
"It's the 20% case, and if that 20% uses as much (or even more) power as a native Intel device, that's perfectly acceptable."
There's no need to speculate, here: the emulated code either will use significantly more power, or it will perform significantly worse.
In fact, even native ARM code on an ARM chip will either use more power or perform worse than comparable x86 code running natively on an Intel chip within the same power envelope, because Intel has thrown a massive amount of engineering at its microarchitectures and manufacturing process, it has vertical integration it can use to its advantage, and its stuff is just very good.
Again, there are no magical ARM performance elves lurking under the hood -- the ARM ISA by itself doesn't confer any real advantages in performance (and hence performance/watt) in the billion+ transistor regime. I truly don't understand why this is so hard for people to accept, but I blame Apple for spreading years of FUD about x86 (before bailing on "RISC" for it, of course).
Back to the topic of the emulated code, though: what I can't say is whether "significantly worse" or "significantly more power" will still be Good Enough, but I'm assuming it will be for most apps people care about.
There are so many things that matter for performance now, and ARM vs. x86 ISA just isn't anywhere on that list, and hasn't been on it for a very long time.
As well as weighing less and being cheaper. So there's no call for laughing and crying.
I think this theory of infringement has to run into various thought-experiment problems, such as: can I auto-translate that binary into some other instruction set, then execute the translated binary, without infringing Intel patents? (Yes, surely.) Is the translator now infringing Intel patents because it has to understand their ISA? (No, surely.)
Now, can I incorporate that translator into my OS such that it can now execute i386 binaries by translating them to my new instruction set which I can execute either directly or by emulation? If so then I am now not infringing. Or did infringement suddenly manifest because I combined two non-infringing things (translator + emulator for my own translated ISA)?
If you execute it in an emulator, though, all bets are off...
(I'm pretty sure it's a no, but an Aereo-esque lawsuit arguing the opposite would be fun to watch)
Intel's strategy of going after other hardware companies may not translate neatly to emulators.
x86 is an old and very well understood architecture at this point. The difficulty thus isn't in writing a working emulator, it's in figuring out which features you can support from a business perspective without treading on still active patents. Microsoft is one of the few companies that can probably absorb a patent fight here and come out on top, and if they succeed, it will counter-intuitively threaten the continued dominance of x86. Once they can release an ARM-based version of Windows that sports backwards compatibility (the primary missing feature that caused Windows RT to fail spectacularly) more mobile machines will be free to use ARM chips, and software developers will have incentives to natively support ARM targets for power efficiency reasons.
Since Intel banks on x86 continuing to be the dominant architecture in the desktop and laptop space, they must feel threatened by this move, so the suit doesn't surprise me. They'll now be fighting an uphill battle. On the one hand, Intel processors are still pretty much king in raw performance, but on the other hand, very few consumers actually need the kind of performance that you can only get on Intel chips anymore. A decent web browser runs on just about anything, so an architecture shift in the consumer space is quite plausible. Some could argue that it's already happened with tablets.
Search for "Game of Thrones" right now on the Windows Store. Or WinRAR. Microsoft doesn't even verify their "publishers" are actual companies, have valid URLs for the store listing (most of these junk apps just go to "http://"). It's been and remains a joke.
Virtual PC for Macs was a full-blown JIT though.
One hell of a run.
That's probably due to Microsoft's choice of keeping both "int" and "long" as 32 bits while pointers increased to 64 bits, unlike everyone else which kept "int" as 32 bits and increased "long" to 64 bits. If any part of your program stored a pointer in a "long", it would break when the memory allocator gave you an address above 4G. You have to carefully comb your code to change the relevant variables to things like LONG_PTR (which isn't a "long" on 64-bit Windows) instead.
Storing a pointer in a long is probably common in 32-bit Windows, since window messages have a pair of parameters, WPARAM (which, despite its name, was an "int") and LPARAM (which was a "long"); pointers are often passed in LPARAM.
The main reason these days is that there's simply no strong incentive to go 64-bit for most desktop software. The 2 GB memory limit is a non-issue for most scenarios, and other than that, why bother? If you compile and test for 32-bit, it works for anyone who is still on 32-bit Windows and it works on 64-bit. And recompiling for 64-bit is usually easy, but it doubles your test matrix - so "here's a build, but there's no official support" is not an unpopular approach.
Okay, got it. I'll make sure to account for that in my next CPU/device purchase.
x86 isn't going away for the next 10 years or longer.
It's quite possible I'm missing something vital here, of course.
However, if you're interested in embedded hardware, you can buy a RISC-V chip from SiFive now.
x86 is dominant for desktops/laptops. ARM is starting to make inroads with Chromebooks and covers the low end. RISC V is starting from scratch, both in terms of available hardware and software support. If RISC V can quickly prove popular in the embedded space then perhaps we'd see a desktop/laptop earlier than in a decade, but it's a competitive market.
To give a comparison, MIPS is popular in the embedded space, but how many MIPS-based laptops have there been? Very few.
By the way, I'm not trying to stop you dreaming; I'm hinting at the fact that if you want to speed things along, you should get involved in the embedded space. If you're waiting as a passive consumer you'll more than likely be disappointed, but if you're actively contributing to the platform you may find the wait more bearable, since you're helping to speed it along.
And unless Qualcomm and Microsoft are working on hardware-assisted x86 emulation, this warning shot may be directed at somebody else.
My guess: Apple.
AMD licenses x86 patents to Qualcomm/MS to make the x86 emulator more patent-troll-proof. In return, Qualcomm and AMD team up on better ARM-based server processors. MS can sell more Windows/Windows Server (sad).
I would love to see Dell, Lenovo, and HP switch exclusively to Ryzen processors,
and switch to the new Naples CPUs in all their server/storage systems.