This marks a distinct shift for Intel. Historically, Intel's IP approach has focused on trade secrets, because they had a huge advantage in manufacturing and implementation techniques that are not easily reverse-engineered. Patent-protecting x86 didn't make much sense during the long period where nobody could make a general-purpose CPU as fast as Intel running native code, much less while emulating x86. As Moore's law has run its course, Intel's lead on that front has been shrinking. Apple's A10 is shockingly close to matching Kaby Lake on performance within a similar power envelope. And Ryzen is within spitting distance of Broadwell at the high end. All on non-Intel foundry processes. That was unimaginable 10 years ago.
That's the opposite of my impression of their history. Intel's approach to x86 historically has been asserting that it's basically impossible to implement a clone due to their patents, which they aggressively asserted to keep any competitors out of the market entirely if possible, and restricted to older ISA features if not possible. With the main exception of AMD, which was initially given patent licenses and tolerated in order to meet 2nd-source requirements. But even AMD they tried to strongarm out of the market later, leading to litigation where AMD eventually won a bunch of continued patent licenses only as part of the lawsuit settlement. Intel has also used patent lawsuits over the past 30 years to attempt to limit competition from Cyrix, Via, NVidia, and Transmeta, among others. Though they did lose or unfavorably settle a number of those lawsuits.
It's kind of difficult to believe this when the relevant architecture of today (x86-64) was developed by AMD. x86 is ancient; any patents that can't be avoided would have expired years ago.
AMD included a number of Intel ISA components like SSE2 in AMD64, which was ok for them because AMD has a patent cross-license with Intel, but means it's still a patent minefield to implement it.
No. Patents are generally valid for 20 years from the initial filing. The idea is that you get a limited term monopoly in exchange for making the details of an invention public.
Intel has engaged in some strategic patent litigation over the years, but if you look at the chart in the article, it looks like there has been a big push to get more ISA patents in the last several years.
That was not because of foundry process, but because of better architecture. Intel was leading or tied at process width and clock speed the whole time.
Intel was even trying to dispute that fact, but it's hard to convince PC hardware journalists you are first when they are already testing 1 GHz Athlons on their own desks while reading an Intel press release about a closed-door demonstration.
"the new iPad Pro holds its own against the MacBook Pro in single-core performance — around 3,900 on the Geekbench 4 benchmark for the iPad Pro vs. around 4,200–4,400 for the various configurations of 13- and 15-inch MacBook Pros.1 Multi-core performance has effectively doubled from the first generation of iPad Pro. That sort of year-over-year increase just doesn’t happen anymore, but here we are. The new iPad Pro gets a multi-core Geekbench 4 score of around 9200; the brand-new Core M3-based MacBook gets a multi-core score of around 6800."
The battery in the 12.9" iPad Pro is 38.8 Wh, while the 12" MacBook is 39.71 Wh, so about the same. Apple quotes about 10 hours battery life when doing lighter tasks like web browsing.
So it must be pretty close if not surpassing it now. Depends how much credibility you put on Geekbench.
Attorneys on both sides must be excited on some level about the potential number of billable hours it'd take to litigate a case like this. Reminds me of something an entrepreneurship professor told me...
If there's one lawyer in town, they drive a Chevrolet. If there are two lawyers in town, they both drive Cadillacs.
To be fair Intel has done a lot of work to make the x86 as great as possible. Patent lawsuits are awful. I'm not sure just copying someone's technology and emulating it without paying a license fee is all that great either.
My guess is this is all just negotiation from Microsoft's point of view and they are just trying to get Intel to license the ability to emulate x86.
Another possibility is this is a way to get Intel to invest more resources ( even at a loss) into competing with ARM.
> copying someone's technology and emulating it without paying a license fee
You are contradicting yourself.
Intel built an instruction set for hardware. Emulating it on ARM would completely negate the usefulness of that. There is no copying, since the emulator is implemented in software. The patents concern hardware design, not software.
The whole case should be laughable. It shouldn't even be thinkable to take something like this to court. But I'm sure some lawyers are going to make a lot of money.
If they're anything like the Itanium ISA patents, then Intel owns the rights to the instructions themselves and their meanings. Emulation would be infringing.
I'm a little ignorant on this, but how can you own the rights to instructions? Isn't that the same as owning the rights to an API, or the naming and operation of functions? I mean, would I be infringing if I re-implemented the C standard library, or the standard library of some language with a copyrighted spec?
How can you even patent something like that? It goes beyond software patents, as it appears to me. But again, I'm very ignorant of this.
Patents are entirely unlike copyright in this, in that you can come up with a totally independent implementation, and still infringe the patent. The patents are very similar to audio or video codec patents - for example, they patent the process of executing a particular instruction.
For example, here's Intel's (expired) patent on the CPUID instruction:
I read the claims of that patent, and they are all hardware claims. As in, the claims are literally in terms of registers in a CPU. I would imagine an emulator would have data structures in memory to represent these registers, but it won't directly infringe these claims as written.
There is, however, the Doctrine of Equivalents. This says that if something uses different elements / components from what's in the actual claims, it could still be argued to infringe the patent if those elements perform a role equivalent to the elements in the claims. But I'm not quite sure how far that could be stretched.
Yeah, it'll depend on your jury whether they consider an emulated register to still be a register as described. However, what's super common in these sorts of patents is to duplicate the same claim language several times with slight variations to cover all implementation types. If you look at claim 9, it opens with a much broader preamble:
"A computer system coupled to receive and respond to computer instructions from a program routine comprising"
In later patents, they got even more clever and just say a "method" rather than a "processor", and explicitly define registers as potentially being emulated in the description (search AVX2 patents if you're curious)
If you don't have a hardware implementation, you are trying to patent an abstract idea, which Alice Corp. v. CLS Bank International found to be invalid.
Nope, an abstract idea has always been invalid. Alice vs CLS Bank found that "on a computer" wasn't a sufficient inventive step to transform an abstract idea into something patentable [1]. This can be used to invalidate a claim, but won't shrink the scope of a claim to hardware only (as then, if software was the only inventive step, it would be a pure abstract idea).
The USPTO certainly seems to think an ISA is patentable, and I haven't seen a court disagree yet.
I'm not entirely sure what you mean. Do you mean if you have a patent that includes hardware, then that patent would prevent emulating the hardware in software? Wouldn't any software patent then be possible, by the simple expedient of describing it in the patent application as running on a custom single-purpose hardware device?
If you come up with a patentable idea, you can specify its implementation in either hardware or software in the patent (or just be ambiguous). However, if your idea alone is unpatentable, you can't add "implement it in software" to make it patentable, according to Alice vs CLS. Basically, Alice vs CLS removes a certain class of software patents, but certainly not all.
Yes, any software patent is possible by describing it as running on a processor. See for example [1], which has the very common claim prefix of "A machine readable storage medium storing a computer program..." Alice doesn't invalidate these patents unless, by removing that text, the remainder of the claim is unpatentable.
Because software patents are still legal, there's no need to attempt to describe them as running on a custom hardware device - you just specify them as software. Specifying custom hardware would unnecessarily reduce the scope of your claim.
We're not talking here about what is right, or even what is correct under the law. We're talking about what a set of lawyers think they can plausibly use to bring an infringement case and not be laughed out of court. A broad spectrum of possibilities.
> To be fair Intel has done a lot of work to make the x86 as great as possible. Patent lawsuits are awful. I'm not sure just copying someone's technology and emulating it without paying a license fee is all that great either.
Hmmm. So for example, a strength of x86 is its strong memory ordering, guaranteed by hardware: if a thread executes two write instructions, the writes become visible to other threads in the order they were written. You have no such guarantee on ARM. What's more, Intel has a patent for this.
So now, if the same algorithm is implemented in software to emulate X86 platform on ARM, how is that not infringing on the patent?
How can it possibly infringe on any kind of patent to "promise that commands are executed in order"?
Maybe that's a difficult problem for hardware to solve. For software, that's just how software works.
Maple is solving differential equations, and at some point that may have been difficult to write software for. If they have a patent for that, then so be it. Say I start a company that hires professors who are really good at solving differential equations, and I sell the results - basically what Maple yields, except produced in a different way. Am I infringing on the patent?
Patents protect technology, not results. You can't have a patent for "a rocket that flies to the moon" in the sense that now nobody else can build rockets flying to the moon. You can have a patent for a way to store liquid oxygen in tanks to make it yield the energy required to get a rocket to the moon. Patenting concepts of things you want to do is at least morally wrong.
A hardware patent should not be capable of preventing someone from writing software that does the same thing.
It's like patenting a drug that cures cancer and then using that patent to prevent oncologists from curing cancer by applying chemotherapy.
Yes; compatibility necessities are functional rather than expressive and therefore an exception to copyright protection. But patents are about making monopolies on functionality; a compatibility exception would undermine the whole point.
Emulation is only an implementation of the ISA. The x86 ISA is hardly "as great as possible". In fact, it's downright crummy: x86-64 is as bloated as RISC architectures usually are without any of RISC's benefits. At best you can make the argument that x86 has done well in spite of the ISA, not because of it.
The silicon-level implementation is another matter entirely, of course--but emulation has nothing to do with that. In fact, that's the definition of emulation--using a completely different implementation to offer a compatible interface.
x86-64 is a perfectly acceptable ISA. Strong memory ordering, no architectural optimizations leaking out like branch delay slots or stack windows. Pretty good i-cache efficiency through the use of two-address code and memory operands. Of course, Intel didn't have much to do with it. Most of it is either an accident of history or the work of AMD, who did a lot of work regularizing the ISA in the 64-bit transition.
1. Yeah, ARM has one nasty architectural optimization leaking out: the program counter register being 8 bytes ahead of where it should be due to pipelining. Thankfully that got fixed up in AArch64, and if 32-bit mode gets dropped down the line (which is allowed by the architecture) it'll be a thing of the past. x86 has some architectural leaks too, though: the aliasing of the MMX and FP stacks as a hack for compatibility with early versions of Windows comes to mind. This one hasn't been fixed.
2. The REX prefixes are a nightmare: most instructions have one and this tremendously bloats up the instruction stream size. For this reason, the i-cache efficiency is not good compared to actual compressed instruction sets such as Thumb-2 (not that Thumb-2 is wonderful either). Note that if you do extreme hand-optimization of binary size, you can get x86-64 down pretty far, but so few people do that that it doesn't matter in practice.
3. Two address code isn't necessarily a win, especially since it doubles the number of REX prefixes. In AArch64 "and x9,x10,x11" is 4 bytes; in x86-64 "mov r9,r10; and r9,r11" is 6 bytes (and clobbers the condition codes). There's a reason compilers love to emit the three-address LEA...
4. Memory operands are nice, though I think the squeeze on instruction space makes them not worth it in practice. I'd rather use that opcode space for more registers.
5. Immediate encoding on x86-64 is crazy inefficient. "mov rax,1" is a whopping 7 bytes.
Regarding 5, no, it's five bytes (b8 01 00 00 00) for movl $1,%eax. If you actually have a 64-bit immediate, just the immediate itself would be 8 bytes, and the actual instruction is 10 bytes.
Like, implementation details that leak out into the architecture for optimization reasons. Classic example is branch delay slots: https://en.wikipedia.org/wiki/Delay_slot.
> Almost everything that you described is microarchitectural, and not tied to the ISA.
They listed these features 'strong memory ordering', '(no) branch delay slots', '(no) stack windows', 'good i-cache efficiency through the use of two-address code and memory operands'.
Every single one of those is a property of the ISA - the instruction set, its semantics and encoding - not the implementation.
Strong memory ordering is a contract. There is nothing intrinsic about the ISA, or its virtues, that dictates the ordering one way or the other. It is entirely guided by what the vendor wants to support. x86's ordering is similar to TSO in SPARC, which uses a RISC-like ISA. The ordering is described as part of the ISA, but any ISA can specify strong ordering (at the risk of performance losses) if the vendor wants to.
i-cache efficiency: Again, implementation-specific. Efficiency is entirely a result of implementation, isn't it?
no branch delay slot: Yes, this is a part of the ISA. My point though was that it is uncommon enough that I wouldn't call it a great virtue of x86 per se.
Intel chips have been RISC-like internally for years. They have an instruction decode stage that converts x86 instructions into an internal ISA that's more RISC-ish.
Ars Technica, as always, has the details of how that has evolved over the years. Can't remember when the article in question was written, though.
Bloated in terms of instruction encoding. All instructions on RISC architectures usually have a uniform size, as opposed to CISC architectures which are usually variable length. (Tons of exceptions exist in both directions of course.)
To add, in the case of RISC-V, the base integer ISA and most of the core extensions use fixed length 32-bit encoding (RV32/64 E/IMFAD). The basic encoding, however, allows for shorter and longer instructions in 16 bit increments. There is also the compressed ISA extension that encodes a subset of IMFAD into 16 bit instructions. The per-byte dynamic code size of the compressed extension ends up being on par with x86/x64 and Thumb2.
They necessarily are - they have to be, to make programs run faster.
For example, Alpha AXP, one of the least bloated ISAs, did not provide non-word-aligned loads and stores; it provided word-aligned loads and stores plus a way to extract and/or combine bytes and subwords from/to the whole word. And it still ended up gaining separate instructions for loading and storing every subword type, for the reason I stated above: to make programs run faster and to make programs smaller.
The same is true for every RISC ISA I studied.
For example, MIPS includes an instruction to store a floating-point number at the address reg1+reg2*arg_size. That could be split into two RISC instructions and fused at runtime in hardware, but still, here it is!
> I'm not sure just copying someone's technology and emulating it without paying a license fee is all that great either.
And yet vendor-lockin is not good for competition. Is the increased incentive for research investment due to patents worth more to humanity than the resulting vendor-lockin that makes it harder to switch to AMD?
Well, there are so many boundary conditions here. As noted below, it's possible that Microsoft could emulate only older technology that isn't covered by patents. Or emulate x86 technology that has weak patent protection and then use some of their own patents to sue Intel into agreeing to license.
There are so many strategies, tactics, and battle maneuvers here that it's difficult to say in just one simple Hacker News posting what's going to happen.
The article mentions Cyrix as a "victim" of Intel patent defense; however, Cyrix not only won their lawsuits, but they also went after Intel for patent violations in the Pentium Pro and Pentium II processors.
Very true. Often a larger actor with even larger pockets can turn a winnable battle into a losing war of attrition.
In this case, it appears that Cyrix came out the other side of the battle in a better position for having won. It would seem it was their later acquisition by National Semiconductor that eventually snuffed them out.
Creative vs. Aureal Semiconductor is a good example. Creative sued them multiple times while at the same time stealing their patented technology! Creative lost every single lawsuit, but legal costs forced Aureal into bankruptcy, and Creative then bought its assets on the cheap.
Both can afford to spend hundreds of millions on a lawsuit. The lawsuit will be only a part of the budget for both and non-existential. So I don't think that the size matters here.
It would be different if the lawsuit is so expensive that one party could go bankrupt due to it.
This isn't about right and wrong. It's about money. Who has to pay how much to the other party? Both parties think they are better off if they fight. Both parties think they could win money in the end. It's a gamble in the courtroom.
The stakes could potentially present an existential risk to Intel. Imagine if Microsoft won and transitioned all Windows customers to ARM. That would be a huge blow to Intel's market!
Years ago, I spoke with an attorney with a CS background. He had once worked on a case like this. Sharp guy. He didn't tell me the parties involved, and I didn't ask, though I assume he wouldn't speak openly about it while it was ongoing. I therefore don't know how it turned out. It was many years ago, so I might be remembering wrong. I'm not a lawyer, this is not legal advice (neither mine nor his).
Basically, there are two approaches the plaintiff might take here. The simplest is to cite the doctrine of equivalents[1]. This is basically the notion that if you do the same thing in the same way for the same purpose, then it's the same process, even though you are using digital instructions instead of logic gates. The legal theory here is pretty well settled. The problem is that you'd need to justify that digital instructions are obviously equivalent to logic gates, and a skilled professional would have equated them at the time of the patent's filing.
The other approach is to argue that an emulator actually is a processor, and therefore fits the literal claims of the patent. The explanation for this is pretty well-established: it's literally the Church-Turing Thesis[2]. However, the viability of this argument depends on the language of the patent claims. Also, it's hard enough to explain the C-T Thesis to CS students. My undergrad had an entire 1-credit-equivalent course that basically just covered this and the decidability problem. Explaining it to a judge, who (while likely highly intelligent) probably has no CS background, over the course of litigation is likely to be really hard.
Now, Intel certainly has enough resources to do both of these things (and they may also have precedent to cite, that didn't exist back then or that wasn't relevant to that case). Don't take this as an opinion on any possible result, it's just information such as I remember it.
The argument is that anything with that equivalence is a processor, which means an emulator is a processor, which means that any patent thats mentions the word "processor" (or equivalent) covers the emulator if it would cover the emulated device. The argument is not that the patent covers all processors.
Wouldn't that make it essentially a mathematical patent, which is not allowed? Or at best a software patent, which is a gray area? Patenting an equivalence class of logical operations, without specificity about their implementation, is on a lot shakier footing than patenting a hardware invention, at least in U.S. patent law. It does lead to wider potential applicability, but at the risk of weaker foundations.
I guess you could also try to use the Church-Turing thesis in a claim that no instruction-level patents are enforceable, since they are all equivalent...
But this is logic bordering on philosophy, which isn't exactly what the courts love to argue about. They look at intents and damages.
My guess is that a simple 'I sell X using my patents, they also sell X but are not paying" is vastly more likely to succeed. "But, but, Church-Turing" will just piss them off.
You're right that it expands upon the C-T thesis a bit, so my use of the word "literally" was incorrect. It does depend on the C-T thesis though, because it relies on a definition of "processor" that references it. You'd still need to explain it to a judge.
Heh. They should go with the Strong CT Thesis and end up proving that every patent on a computer is a patent on the universe. As a follow up, they could petition the Supreme Court to grant personhood to any process that can pass the Turing Test. If money is speech in the US...
What if it doesn't emulate x86 on the fly, but reassembles the binary for the target architecture? This obviously won't work in many cases, but should be fine for run-of-the-mill business software, which is what Microsoft is mostly targeting.
How many suits against console emulators have gone to trial? I would think nobody who makes or sells console hardware actually wants to risk such a case going to trial, because the best case is that they win and can't recover any meaningful damages, but the worst case outcome is that they establish precedent that the emulator is legal.
Well, that's exactly it. It has gone to trial before and emulator devs win every time. The precedent has already been set. That's what spawned my comment, because I don't understand the differences between x86 emulation and, say, Cell processor emulation, from a legal standpoint.
Interesting. I wonder why there is such a disparity between CEMU (building a Wii U emulator) getting $28k/mo and Nekotekina getting $1.2k/mo (building a PS3 emulator).
Easy. Breath of the Wild. I bought the game so I could emulate it on my PC and it runs almost flawlessly. Between the launch of the game and now, their monthly donations have just been going up and up and up. Just a few months after release, it runs at a higher (more consistent, anyway, because it's still capped at 30) FPS and resolution than either the Wii U or Switch could provide. The PC BotW experience is actually already the definitive experience.
It will probably stay decently high for a while too while they hammer out compatibility for other Wii U games.
See, it's still worthwhile for me to pick up an old PS4 and amass a decent library while also still being able to buy new titles, but the Wii U was an absolute dud and there is 0% incentive to buy one now that it's already been axed. Emulation will be the only saving grace for its stellar exclusive titles. And since the Switch has its own share of problems handling BotW, it's really a no-brainer to invest a little in CEMU's future. The CEMU devs are absolute demons, on par with Dolphin's team.
As for RPCS3, the PS3 has a big enough library and is cheap enough to make it worth just buying one until the devs can catch up. AFAIK there isn't a single fully working commercial game on it yet.
If you copy the pictbook folder from your extracted game files into the appropriate memory card folder and make it read-only, when you snap pictures of objects it will register the default image for the item.
The only thing really broken, besides little graphical things and certain ambient shaders, is that you can't zoom in on the photographs of the memories. Not really an issue when you can just put your face closer to the TV. Oh, and yeah, you can't take custom photographs yet. And, of course, FMV only works when you download a third-party plugin, because H.264 decoding isn't high on the CEMU devs' list right now.
But in general it runs almost flawlessly, as far as being able to complete the game is concerned. Some people report predictable crashes after encountering Ganon or any of the Blight creatures, but I have not had this problem. Just occasional crashes here and there.
Sony sued the shit out of them and lost, and then bought VGS from Connectix. Sony bankrupted Bleem with legal costs and later hired Bleem's programmers :/
Most of the classic consoles didn't have patents covering their architecture. For example, the NES had a patent covering the security chip on every game cartridge, but since the chip was a purely hardware thing that didn't affect the operation of the game software, NES emulators happily ignore the whole thing.
> Explaining it to a judge, who (while likely highly intelligent) probably has no CS background, over the course of litigation is likely to be really hard.
Unless the case is presided by Judge Alsup, who learnt to write Java programs during Oracle v. Google.
Patents expire after 17 years and x86 is 39 years old, so any of the original patents must have expired twice over already.
They no doubt have been filing additional patents over the years. But I'm sure MS and Qualcomm have plenty of their own patents to bargain with.
Also their warning could backfire if it gives Microsoft one more reason to finally walk away from x86 compatibility... not that this is likely to happen anytime soon.
That's under the old law. Nowadays, for patents that issue from original applications filed on or after June 8, 1995, it's 20 years from the earliest filing date upon which priority is claimed (possibly extended to account for delays in the USPTO). [0]
AFAIK, most foreign countries follow the same rule — which is significant, because when one big company sues another for patent infringement, it will usually file parallel lawsuits in every country where (A) the plaintiff owns a patent and (B) the defendant sells the infringing product.
For the benefit of anyone still reading, I'll work through an example.
+ Intel released the 8086 on June 8, 1978 [0]. Tech companies typically file U.S. patent applications just before the first public disclosure of the new technology, so as to preserve any available rights under non-U.S. patent law [1]. So let's assume that Intel filed one of its 8086 patent applications on June 7, 1978.
+ Let's also assume that it took exactly two years for that patent application to be issued as a patent, on June 7, 1980. Under the transition provisions of the "new" law, that patent would have expired on the later of (i) the issue date plus 17 years, that is, June 7, 1997; or (ii) the earliest filing date plus 20 years, that is, June 7, 1998.
Intel implements a thing - say SSE and AVX, vector extensions of the kind pioneered by Cray 20+ years earlier. So arguably, they took technology whose patent protection had expired and applied it to x86. This is exactly how the patent system was designed to work.
I would argue that what they don't get is a de facto monopoly on the use of vector instructions for x86, even in that specific encoding, because it is now an issue of compatibility and interoperability. The history of closing off an instruction set through patents on a select few instructions is atrocious. Millions of entities have expressed solutions to their problems in x86; having to pay a tax to Intel in perpetuity because of that is bullshit.
Of these I think only SSE is really important. AVX is relatively new (2011) and most software likely can handle CPUs/emulators which lack it. TSX and SGX are Intel-only so any software that can run on AMD doesn't need them.
The point is interesting though Microsoft might be working carefully only to emulate older technology. It's not like they really have to support x86 as well as they support it on the x86 version of the OS. In fact they can get to market faster by not providing deep emulation.
Microsoft cares about backwards compatibility above anything else.
Yes and no. If you're buying a managed service - like Azure HDinsight say - why do you care or even need to know what's under it? The volume buyers of CPUs now are the big cloud operators. If you're buying a tablet and consuming "apps", then why do you care about compatibility with old Windows desktop applications?
> AMD made SSE2 a mandatory part of its 64-bit AMD64 extension, which means that virtually every chip that's been sold over the last decade or more will include SSE2 support. [...] That's a problem, because the SSE family is also new enough—the various SSE extensions were introduced between 1999 and 2007—that any patents covering it will still be in force.
AMD64 requires SSE2 which was introduced in 2001, right? So isn't it just 1 year until Microsoft can put in what's required for the AMD64 architecture?
Yeah, it's been mentioned in all the posts. I assume it was enough work just to get x86 working, and there's almost no 64-bit-only Windows software (certainly nothing you would want to run emulated).
Intel will not threaten Microsoft, not even indirectly, in my opinion. Rationale: once Apple starts shipping desktops and laptops with ARM chips, the only safe port for the expensive x86 chips would be Microsoft (desktop and server market) and big iron on Linux/Unix/hypervisors.
Patents may be selectively enforced; there's no forfeiture as there is with trademarks.
Intel had (and has) no issue with qemu or bochs emulating everything, as long as they were niche and/or promoting the Intel platform (and grudgingly accepting compatibles).
However a move to rid Microsoft's platform from Intel altogether without compromising compatibility is something worth fighting for.
I heard that ARM is rather similar in that aspect: emulators for development are a-ok, but trying to run ARM emulation on a consumer product with no ARM components inside will drive up the legal fees until some licensing agreement is set up.
I know that there are some laws preventing businesses with a market monopoly from abusing that monopoly through patents. This is one reason why patent trolling entities exist: you can sell your patents to a 3rd party who will then go after your competitors.
I would love for a real lawyer to explain this tortured logic.
> And Intel's business health continues to have a strong dependence on Microsoft's business, which has to make the chip firm a little wary of taking the software company (or its customers) to court.
I mean, Apple and Samsung had a billion dollar lawsuit while Samsung chips were still in iPhones. It's certainly precedented to sue a corporation you're actively doing business with.
They had contracts, so Samsung was bound, and Apple did start using their own chips - so it's not at all certain that the lawsuit didn't further disturb the business relationship.
AFAIK white box emulation has been around forever. I'd be surprised if it isn't already worked out in the courts, since this has been happening since before computers were around.
I don't think Intel wants to waste resources (and get bad PR) if it isn't a significant threat to their bottom line. Microsoft saying they are going to emulate x86 on ARM with low overhead (and thus making it possible to switch to ARM and still use tons of legacy software) is a much bigger danger to them.
Isn't there some notion that you have to actively defend a patent to enforce it? That is, something where selective enforcement puts you in a weaker legal position?
Edit: The legal term appears to be "the doctrine of laches"
Alright, I'll come out of retirement to hit this dead horse another lick.
"if WinARM can run Wintel software but still offer lower prices, better battery life, lower weight, or similar, Intel's dominance of the laptop space is no longer assured."
Peter. My man. I laughed. I cried.
For the millionth time, the ARM ISA does not magically confer any sort of performance or efficiency advantage, at least not one that matters in the billion+ transistor SoC regime. (I will include some relevant links to ancient articles of mine about magical ARM performance elves later.) ARM processors are more power efficient because they do less work per unit time. Once they're as performant as x86, they'll be operating in roughly the same power envelope. (Spare me the Geekbench scores... I can't even. I have ancient published rants about that, too.)
Anyway, given that all of this is the case, it is preposterous to imagine that an ARM processor that's running emulated(!!!) x86 code will be at anything but a serious performance/watt disadvantage over a comparable x86 part.
This brings me to another point: Transmeta didn't die because of patents. Transmeta died because "let's run x86 in emulation" is not a long-term business plan, for anybody. It sucks. I have ancient published rants on this topic, too, but the nutshell is that when you run code in emulation, you have to take up a bunch of cache space and bus bandwidth with the translated code, and those two things are extremely important for performance. You just can't be translating code and then stashing it in valuable close-to-the-decoder memory and/or shuffling it around the memory hierarchy without taking a major hit.
So to recap, x86 emulation on ARM is not a threat to Intel's performance/watt proposition -- not even a little teensy bit in any universe where the present laws of physics apply. To think otherwise is to believe untrue and magical things about ISAs.
HOWEVER, x86-on-ARM via emulation could still be a threat to Intel in a world where, despite its disadvantages, it's still Good Enough to be worth doing for systems integrators who would love to stop propping up Intel's fat fat fat margins and jump over to the much cheaper (i.e. non-monopoly) ARM world. Microsoft, Apple, and pretty much anybody who's sick of paying Intel's markup on CPUs (by which I mean, they'd rather charge the same price and pocket that money themselves) would like to be able to say sayonara to x86.
The ARM smart device world looks mighty good, because there are a bunch of places where you can buy ARM parts, and prices (and ARM vendor margins) are low. It's paradise compared to x86 land, from a unit cost perspective.
Finally, I'll end on a political note. It has been an eternity since there was a real anti-trust action taken against a major industry. Look at the amount of consolidation across various industries that has gone totally uncontested in the past 20 years. In our present political environment, an anti-trust action over x86 lock-in just isn't a realistic possibility, no matter how egregious the situation gets.
So Intel is very much in a position to fight as dirty as they need to in order to prevent systems integrators from moving to ARM and using emulation as a bridge. I read this blog post of theirs in that light -- they're putting everyone on notice that the old days of antitrust fears are long gone (for airlines, pharma, telecom... everybody, really), so they're going to move to protect their business accordingly.
You seem to be assuming that the scenario here is that the system will mostly run x86 code in emulation. But it's not the case - on mobile devices, most of the time it will run native ARM code, and most of the time that code will be the browser. Then, of course, most apps from Windows Store will also be native ARM. Emulation is there for that occasional desktop app that users need - and which made Windows RT non-viable - but which they don't actually use all the time. It's the 20% case, and if that 20% uses as much (or even more) power as a native Intel device, that's perfectly acceptable.
Nope, I'm not assuming that in the slightest. In fact, I'm assuming exactly what you're assuming -- some fraction of time way less than 50% spent running emulated code. I'm also coming to the same conclusion you are -- that emulation of legacy x86 code with Good Enough performance could be used as a bridge off of x86. Reread the post.
But let me address this specifically:
"It's the 20% case, and if that 20% uses as much (or even more) power as a native Intel device, that's perfectly acceptable."
There's no need to speculate, here: the emulated code either will use significantly more power, or it will perform significantly worse.
In fact, as for the native ARM code on the ARM chip, it also will either use more power or perform worse than comparable x86 code running natively on an Intel chip within the same power envelope, because Intel has thrown a massive amount of engineering at their microarchitectures and manufacturing process, they have vertical integration that they can use to their advantage, and their stuff is just very good.
Again, there are no magical ARM performance elves lurking under the hood -- the ARM ISA by itself doesn't confer any real advantages in performance (and hence performance/watt) in the billion+ transistor regime. I truly don't understand why this is so hard for people to accept, but I blame Apple for spreading years of FUD about x86 (before bailing on "RISC" for it, of course).
Back to the topic of the emulated code, though: what I can't say is whether "significantly worse" or "significantly more power" will still be Good Enough, but I'm assuming it will be for most apps people care about.
Again, it has been over 20 years since ISA has mattered for performance in a head-to-head matchup between comparable CPUs. Moore's Law has thoroughly done away with it as a factor in performance. Relative code size (and cache/memory space + bus bandwidth) of RISC vs CISC, the "x86 tax" of translating into micro-ops as a percentage of the die area, register file size, load/store, and everything else you can think of have all fallen away as real performance factors as transistor counts have soared and compilers have improved.
There are so many things that matter for performance now, and ARM vs. x86 ISA just isn't anywhere on that list, and hasn't been on it for a very long time.
Nobody said anything about performance/watt. It doesn't have to be more efficient. It only has to draw less power, most of the time. And most of the time you're staring at a computer screen, you're not asking it to do anything - you're reading, or thinking about what to type next. So idle power is the most relevant predictor of battery life for a mobile chipset. And ARM definitely has the advantage there.
As well as weighing less and being cheaper. So there's no call for laughing and crying.
I would try to engage with this but you don't seem to have even a basic grasp of sleep modes, background processes, idle power, power efficiency measurements, or pretty much anything that I wrote about in my comment. You haven't given me much to work with, here.
Logically this implies that I can't execute some i386 binary that I possess without infringing Intel patents.
I think this theory of infringement has to run into various thought-experiment problems such as : can I auto-translate that binary into some other instruction set, then execute the translated binary, without infringing Intel patents? (yes, surely) Is the translator now infringing Intel patents because it has to understand their ISA? (no, surely).
Now, can I incorporate that translator into my OS such that it can now execute i386 binaries by translating them to my new instruction set which I can execute either directly or by emulation? If so then I am now not infringing. Or did infringement suddenly manifest because I combined two non-infringing things (translator + emulator for my own translated ISA)?
Here's an insane idea--what if every Win10 ARM laptop also included a Pentium 4 or Athlon XP chip (from a junked PC) glued inside the case? Would the x86 patent rights from that chip cover an emulator running on the ARM?
(I'm pretty sure it's a no, but an Aereo-esque lawsuit arguing the opposite would be fun to watch)
It will be interesting to see how this strategy fares in the US, given the Alice ruling which made it much harder to patent methods that were purely software.
Intel's strategy of going after other hardware companies may not translate neatly to emulators.
How did I not already know Microsoft had a working x86 emulator.. this is a massive game changer for the laptop space if it's fast and reliable enough, as afaik ARM chips are so much more power efficient for similar perf
There are several existing and excellent x86 emulators floating around. In the open source space, QEMU is quite fantastic and well supported. I'm not surprised at all that Microsoft managed to either develop one in house, or acquire one.
x86 is an old and very well understood architecture at this point. The difficulty thus isn't in writing a working emulator, it's in figuring out which features you can support from a business perspective without treading on still active patents. Microsoft is one of the few companies that can probably absorb a patent fight here and come out on top, and if they succeed, it will counter-intuitively threaten the continued dominance of x86. Once they can release an ARM-based version of Windows that sports backwards compatibility (the primary missing feature that caused Windows RT to fail spectacularly) more mobile machines will be free to use ARM chips, and software developers will have incentives to natively support ARM targets for power efficiency reasons.
Since Intel banks on x86 continuing to be the dominant architecture in the desktop and laptop space, they must feel threatened by this move, so the suit doesn't surprise me. They'll now be fighting an uphill battle. On the one hand, Intel processors are still pretty much king in raw performance, but on the other hand, very few consumers actually need the kind of performance that you can only get on Intel chips anymore. A decent web browser runs on just about anything, so an architecture shift in the consumer space is quite plausible. Some could argue that it's already happened with tablets.
The huge tech thing here is that running Windows-on-ARM you can run x86 programs which can then call into the Windows-on-ARM OS. So no need to virtualize the OS
I thought the primary reason Windows RT sucked was its reliance on the horrendously terrible Windows Store? You couldn't even recompile popular apps if you wanted to; you had to port them to the store system. Unless you were MS - then you could grant yourself an exception and let stuff run directly (Office).
You make it sound so evil, while the fact is that the store is meant for more secure, sandboxed apps, so you can't just put any kind of app in it. The reason they let their own apps like Office in is that they're OK with vouching for their own products, while they don't want to discriminate among other app developers over who can and can't put an insecure app into the store.
In theory. In practise, the store was, and is, full of shit. Junk apps, because MS paid people to push apps out. Junk apps and scams, because MS does no review or policing. Even Netflix had to go to MS 3 times in order for them to block fake Netflix apps. Other companies selling well-known programs have told me they simply cannot get MS to respond.
Search for "Game of Thrones" right now on the Windows Store. Or WinRAR. Microsoft doesn't even verify their "publishers" are actual companies, have valid URLs for the store listing (most of these junk apps just go to "http://"). It's been and remains a joke.
Every time I read anything about Connectix, I become more convinced that they were run by some combination of time travelers and genetically modified super-hackers.
Keep in mind the most relevant instruction set is the X86-64 instruction set (32 bit code is not very relevant these days). The x86-64 ISA was created by AMD, not Intel. Intel was busy trying to milk the enterprise market with the Itanium, trying to reserve 64 bit as an enterprise feature.
That is not true on Windows, where most software is either 32-bit only, or both 32- and 64-bit. In fact, for a lot of software on Windows, the 64-bit port is considered a beta version.
> In fact, for a lot of software on Windows, the 64-bit port is considered a beta version.
That's probably due to Microsoft's choice of keeping both "int" and "long" as 32 bits while pointers increased to 64 bits, unlike everyone else which kept "int" as 32 bits and increased "long" to 64 bits. If any part of your program stored a pointer in a "long", it would break when the memory allocator gave you an address above 4G. You have to carefully comb your code to change the relevant variables to things like LONG_PTR (which isn't a "long" on 64-bit Windows) instead.
Storing a pointer in a long is probably common in 32-bit Windows, since window messages have a pair of parameters, WPARAM (which, despite its name, was an "int") and LPARAM (which was a "long"); pointers are often passed in LPARAM.
Most Windows apps don't deal directly with things like LONG_PTR and window procs - they just use a framework, and frameworks were updated to handle all this correctly a long time ago (the migration started in the early 00s, when Windows on IA-64 appeared).
The main reason these days is that there's simply no strong incentive to go 64-bit for most desktop software. The 2GB memory limit is a non-issue for most scenarios, and other than that, why bother? If you compile and test for 32-bit, it works for anyone who is still on 32-bit Windows and it works on 64-bit. And recompiling for 64-bit is usually easy, but it doubles your test matrix - so "here's a build, but there's no official support" is not an unpopular approach.
Another component of Microsoft getting off Intel is that the antitrust settlement only applied to x86 hardware, so MS getting off x86 would let them lock down the platform and do all their dirty tricks all over again.
The difference, as it pertains to the patent issue, is nothing technical - it's simply that Intel feels threatened by Microsoft's emulator and not by qemu. (Though if Intel has ever redistributed qemu, there may be an additional GPL wrinkle...)
Intel contributes to qemu (or at least, developers with @intel.com email addresses contribute). However AFAIK they don't contribute to the emulation code (TCG).
I quickly watched this, and they seem to be running a JIT with disk caching. Also, they had League of Legends on the desktop, so I'm guessing they support SSE instructions.
Market size. Intel can choose to sue or not sue. There's no obligation to chase after every infringement, so the ones they do chase are picked by bean counters.
That is odd. IBM bought Transitive and owns the tech (so no ongoing license fees to pay), and I would think that x86 binary support for POWER would still be a useful migration tool.
x86 is dominant for desktops/laptops. ARM is starting to make inroads with Chromebooks and covers the low end. RISC V is starting from scratch, both in terms of available hardware and software support. If RISC V can quickly prove popular in the embedded space then perhaps we'd see a desktop/laptop earlier than in a decade, but it's a competitive market.
To give a comparison, MIPS is popular in the embedded space, but how many MIPS-based laptops have there been? Very few.
but, lowRISC says they are going to crowdfund an SoC this year... let's say it takes them 2 years to have one I can buy - I should be able to have a RISC V based computer that can run Linux within 5 years! Am I dreaming?
lowRISC are currently on v0.4 of their design, and we're already halfway through the year. I'd be very surprised if they decided to commit to building an ASIC before they reached v1.0.
By the way, I'm not trying to stop you dreaming; I'm hinting that if you want to speed things along, you should get involved in the embedded space. If you're waiting as a passive consumer you'll more than likely be disappointed, but if you're actively contributing to the platform, you may find the wait more bearable since you're helping to speed it along.
I was just warning about this on another thread. It's not competition if it requires compatibility with a patent-protected ISA or microarchitecture. It's coercion.
So Intel is so scared of little ol' ARM (compare their revenues) that it's willing to use patents to take it out of the PC market, rather than compete on technical grounds?
Okay, got it. I'll make sure to account for that in my next CPU/device purchase.
AMD licenses x86 patents to Qualcomm/MS to make the x86 emulator more patent-troll-proof. In return, Qualcomm and AMD team up for better ARM-based server processors. MS can sell more Windows/Windows Server (sad).