An Apple ISA is coming (adriansampson.net)
278 points by samps on Sept 9, 2015 | 165 comments



This has been stated before: LLVM bitcode is architecture-specific. It depends on the architecture's ABI and can contain inline assembly. There are LLVM targets with corresponding ABIs that are nonspecific to architectures, like PNaCl, but Apple is not using them.

Things Apple can do with bitcode: Produce binaries optimized for different ARM processors. Reoptimize with new versions of LLVM as soon as they are available. Use a big database of LLVM IR to tweak optimizations. Keep some optimizations secret and not even publish binaries doing them.

The biggest argument IMO against an Apple ISA is that they would have to rewrite tons of hand-tuned assembly code.


Another thing they can do: it allows them to introduce new non-ARM-standard ops and immediately recompile the app store to take advantage of them.

Every now and then I see a new comparison of the Swift/ObjC dispatch assembly. Now imagine — that could be a single opcode. Or the dispatch table could get pinned to a particular cache. Or it could be other micro-optimizations that they are in a unique position to exploit with full control of the language, compiler and chip.
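To make that concrete, here is a rough C sketch of the kind of cached method lookup that objc_msgSend performs on every dynamic dispatch (not Apple's actual runtime code, just an illustration of the work a hypothetical dispatch opcode could fuse into one instruction):

    /* A simplified, hypothetical model of objc_msgSend-style dispatch:
     * hash the selector into the receiver's per-class method cache,
     * probe, and return the implementation pointer. */
    #include <stdint.h>

    typedef void (*IMP)(void);

    struct bucket { const char *sel; IMP imp; };

    struct cls {
        struct bucket *cache;   /* power-of-two sized method cache */
        uintptr_t mask;         /* cache size - 1 */
    };

    struct obj { struct cls *isa; };

    IMP lookup(struct obj *receiver, const char *sel) {
        struct cls *c = receiver->isa;
        uintptr_t i = ((uintptr_t)sel >> 3) & c->mask;  /* hash selector */
        while (c->cache[i].sel) {                       /* linear probe */
            if (c->cache[i].sel == sel)
                return c->cache[i].imp;                 /* cache hit */
            i = (i + 1) & c->mask;
        }
        return 0;  /* cache miss: slow path omitted */
    }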

I have no idea how feasible these things might be or how much of a gain it would create, but they can look through their corpus of LLVM-IR and identify hot-spots where a new opcode would speed everything up by some margin.


Good point, but remember Apple licenses the ARM architecture from ARM (the company), and they probably aren't allowed to deliberately ship partially-compliant ARM hardware.


Apple is much bigger than ARM and could easily change the rules.


They may be smaller, but they have IP Apple needs now. Apple can't just do whatever it wants because it's big.


They could buy ARM, or offer enough cash until they capitulate.


Since all modern CPUs have a ton of microcode, would it be hard to emulate the x86 when needed, and retain backwards compatibility like that?


Not particularly; in fact, Apple has basically done this exact thing twice in the past (https://en.wikipedia.org/wiki/Mac_68k_emulator, https://en.wikipedia.org/wiki/Rosetta_(software)).

But I believe we're (hypothetically) talking about ARM, not x86, code.


Aside: Apple's 68k-on-PowerPC emulator was an amazing piece of work. It didn't run 68k code in a separate environment; it allowed 68k code to run alongside PowerPC code, to nearly the same level of transparency. Applications could mix ISAs in libraries and code resources, and system extensions could patch PowerPC system functions using 68k code or vice versa. It was surprisingly efficient, too: the first PowerPC systems used system software consisting mostly of 68k code, and still ran faster than the native 68k systems they replaced.

I've never seen anything quite like it since. About the closest I've seen is the way that Thumb code can be integrated with native ARM code, and that's explicitly just a different instruction encoding, rather than a separate ISA.


A big part of the success with the 68k switchover was that the 68k line was getting long in the tooth. Clock for clock, the 601 would trounce the 68040 (even the 486 was faster per clock). Plus, the base 601 ran at 60MHz, while the fastest 040-based Mac was 40MHz (the Quadra 840AV, of which I picked up one a few years later for $40, and still have it).

So Apple was working with something north of 3x the raw performance; it's no wonder that most people with older Macs, which weren't anywhere near the performance of the 840AV, thought the Power Mac was incredible.


The problem with that, at least in the case of Rosetta, is that the QuickTransit technology underlying it got bought by IBM and disappeared from the open market, so doing that trick again would be...a moderate time+cost sink.

The Wikipedia article for QuickTransit suggests that a number of prominent people from Transitive hopped to ARM and Apple (which might be telling, given the claims of the original post), but has no citations.


Eh, JITs aren't the most complicated things on the planet. There's plenty of people who can do it outside of QuickTransit. Like, look inside most emulators.


Given that Apple is writing compilers and designing chips, on-the-fly recompilation shouldn't be a big problem for them.


> There are LLVM targets with corresponding ABIs that are nonspecific to architectures, like PNaCl, but Apple is not using them

They do indeed support LLVM IR target(s) that are, in fact, much more architecturally-nonspecific than PNaCl, namely the spir[64]-unknown-unknown target. See https://github.com/KhronosGroup/SPIR and /System/Library/Frameworks/OpenCL.framework/Versions/A/Libraries/openclc (on a Mac, since 10.7).

I don't know if bitcode with this target triple is necessarily suitable for the Apple Watch, however.


SPIR is basically portable GPU assembly. I wouldn't count it as an architecturally-nonspecific target.


It's really not. It's barely an abstraction over LLVM IR.


From what I understand (which is very little), RISC-V is very similar to ARM. Could it be possible that Apple is planning a move to RISC-V?


RISC-V is not that close to ARM, and it's not mature enough for Apple to move to it. RISC-V is interesting but I think we'll see it adopted in commercial embedded systems first.


To understand Apple today you have to look to the past. Apple ended up being stuck with Motorola's inability to deliver faster PowerPC chips. Whole product lines were delayed, or not possible. Effectively, they handed control of when they could ship new products to a third party.

10 years later they are now in the same position with Intel. If Intel delays the next version of its product line by six months, then Apple has to put things on hold. This is bad for a company like Apple, as it could cause them to miss out on potentially lucrative periods (back to school, the holiday season, etc.).

Ultimately I suspect in the very near term we will see Apple move off Intel, first for the laptops. LLVM IR would fit this strategy better than fat binaries as Apple would not have to wait until developers recompile. They can have the entire App Store available on day 1 of a product release.


> 10 years later they are now in the same position with Intel. If Intel delays the next version of its product line by six months, then Apple has to put things on hold. This is bad for a company like Apple, as it could cause them to miss out on potentially lucrative periods (back to school, the holiday season, etc.).

This article isn't about laptops, though. It's about ARM. Apple isn't dependent on anyone for ship dates in the ARM space -- they license the ISA, but they design their own chips based on the license. Yes, they're reliant on Samsung to fab the things, but Apple doesn't need their own ISA if they want to use their own fabs. I don't see how replacing ARM with a custom ISA helps Apple any.

(As for Intel -- it's not like Intel's other customers don't have the exact same sales periods that Apple does. So they're motivated. As others have said, if Intel slips deadlines like that, who's to say Apple has the ability to meet them where Intel couldn't?)


Sort of ignorant and at least a little tangential here, but the notion of 'licensing' an instruction set seems a lot like licensing an API (cf the Java API copyright controversy) - are there parallels/precedents here or is it an unrelated issue?


The Federal Circuit's idiocy in the Oracle case aside, it's well established that copyrights don't cover methods of operation such as APIs or instruction sets. It's also well established that patents do, and that's what ARM licenses. Also they bundle mask works and copyrightable HDL code.


Cool, thanks for responding - what is it about an ISA that makes it patentable where an interface might not be?


Interfaces are also patentable.


It's a curious thing in business. People will do all sorts of things to keep from losing a big customer to a competitor. Some big companies keep multiple vendors and play them off each other. If I don't like you this week I place an order with the other guys.

Intel's weak spot has always (or at least often) been MIPS per watt. Apple has a whole slew of products in the ultraportable niche and, given their focus, could easily have more.

If a custom ARM-like chip gives them the option to stop using Intel chips on everything but the MacBook Pro line, that would be worth a lot to them, even if they continued to use Intel chips on those devices. Just the threat gets them concessions.


Intel CPUs have the highest performance per watt when running performance-intensive code. The problem was that they could not scale down energy usage enough when performance requirements were very low, such as on mobile devices, and that's what they have been trying to fix for some years.


> Apple isn't dependent on anyone for ship dates in the ARM space -- they license the ISA, but they design their own chips based on the license. [...] I don't see how replacing ARM with a custom ISA helps Apple any.

Well, what if the time comes when ARM cannot deliver processor designs with necessary improvements to power consumption or processing speed? If they hit that wall, a new processor architecture might be the only way forward.

Given Apple's resources and position, that kind of contingency planning makes sense to me. You can't exactly spin up an architecture design team overnight.


Apple doesn't use ARM's processor designs. They just license the ISA and design their own CPUs.


True, but the basic point still stands. ISA heavily constrains the final design on-die. ARM might not be able to deliver an ISA that fits Apple's needs.

Apple already has a tweaked version of ARM (ARMv7s) [1]. There may come a day where tweaks no longer cut it. At the end of another excellent post, from the same author [2]:

> If compiler–architecture co-design is on the table, much more radical opportunities are available.

Apple has both compiler and architecture teams. And if it sees ties to a specific architecture hurting its product development goals again? It seems like exactly the kind of company that would drop the ARM ISA completely.

Or at least like the kind of company that wants to keep that option on the table.

[1] http://www.linleygroup.com/newsletters/newsletter_detail.php... [2] http://adriansampson.net/blog/macroscalar.html


And if a company with as much experience and expertise as Intel misses their ship dates, what on earth makes Apple think they can do better?

I could see an argument for cutting cost by eliminating the margin that Intel dictates... but even that assumes that you can find a third party to manufacture the chips at a price low enough to cover both the IP Intel has in their chips and the manufacturing cost itself.


Well, first of all, Apple could do it and fail. They've done that before. They're confident/arrogant/audacious enough to try things that they can't pull off.

Also, they could take a two-pronged approach -- cherry-pick talent from Intel, nVidia, etc. and offer them the opportunity to leapfrog baggage from the past. How much of the difficulty of moving the x86 platform forward is a result of the ludicrous amount of cruft? Grabbing a few of the smartest people and narrowing your focus to an easier problem gets you a long way.

Apple has managed to outpace the entire industry (including Intel) with its customized ARM cores (getting to 64-bit over a year ahead of everyone, and take a look at benchmarks between Apple's Ax CPUs and rivals usually running at far higher clocks with more RAM and sucking more power), and it got there by cherry-picking talent, omitting stuff it didn't need, focusing on design, and treating fabs as a commodity.

Looks like interesting times ahead.


Broadwell slipped by several months, meaning that Apple couldn't do their traditional pre-holiday refresh in 2014. I believe the Skylake chipsets that would allow Apple to switch to using USB-C for the Pros aren't available until next quarter, meaning it's possible that Apple could miss another fall refresh.

Given that the holiday quarter is Apple's best season, you can probably make bets that they are at least exploring their options.


> And if a company with as much experience and expertise as Intel misses their ship dates, what on earth makes Apple think they can do better?

Intel has other clients who may be just as noisy about their own ship dates. They make custom processors for AWS, as an example. An Apple-owned subsidiary/division would have Apple as its sole priority.


[deleted]


http://www.datacenterknowledge.com/archives/2014/11/13/intel...

> In June, Bryant said Intel had designed 15 custom CPUs in 2013 for different customers, including Facebook and eBay. More than double that amount was in the pipeline for 2014, she said.

https://gigaom.com/2014/11/13/intel-rolls-out-custom-chip-th...

> Intel announced an exclusive Haswell processor designed specifically for Amazon Thursday during AWS re:Invent 2014 conference in Las Vegas. Amazon said the new processors are the backbone of its new EC2 instances in Amazon Web Services.

> Intel’s senior vice president and general manager Diane Bryant took to the stage during Amazon vice president and CTO Werner Vogels’s keynote to drop the news. Bryant didn’t share a lot of details, but she said that both Intel and Amazon engineers collaborated on the chip to make sure its built to handle Amazon’s vast cloud infrastructure.


I'm wondering if your sources really refute OP. If Intel truly does build families of processors, cherry-picking processors that match Amazon's specs could still be considered collaboration (Amazon worked with Intel to determine the specs of the chips they need cherry-picked, for example). "Designed specifically for Amazon" sounds like marketing speak to me, but I've no evidence. Just wondering how much real engineering collaboration occurred when the information is pulled from a press release. (Update: I see OP has deleted the comment, so I am probably way off base.)


> I'm wondering if your sources really refute OP.

I honestly don't know. It's definitely possible it's marketing speak that just means they're doing the binning OP mentioned, but given the volume, it's also entirely possible AWS, Facebook, etc. get custom chips you can't buy on the market.

Regardless, in either situation, Apple being able to do their own chips in-house with only their priorities driving decisions is a possible boon to them.


I wonder what kind of agreements they have in place when the servers are EoL'd.

Sometimes you can find really great deals on obscure custom designs built for a large client (e.g. HP SE316M1 G6/SE1120).

I don't think Amazon runs too many data centers with old hardware; the electricity and cooling costs would not be economical. So what will happen to these supposedly custom chips manufactured for AWS by Intel?

Surely their value is too much to just turf them all in a landfill (err, I mean, responsibly recycled of course) when their service life is over. Some of them surely must end up on the refurbished server market...


I don't think Intel sees anyone on the PC space as a major threat, so they aren't in a hurry to make everything perfect all the time.


Except their bottom line. Anyone remember the cost to Intel for their faulty FPU in the P5?


> Except their bottom line. Anyone remember the cost to Intel for their faulty FPU in the P5?

I don't think that's as big of an issue with modern processors:

https://wiki.debian.org/Microcode:

> Processors from Intel and AMD may need updates to their microcode to operate correctly. These updates fix bugs/errata that can cause anything from incorrect processing, to code and data corruption, and system lockups.


Oh, there's still plenty of things beneath that level that can go wrong. Just look at Intel disabling HTM (through microcode, admittedly, but disabling a whole feature isn't fixing it to operate correctly) in Haswell/early Broadwell.


I was thinking that too. But I wonder if Apple sees an opportunity to become the Intel of ARM chips - mobile and desktop, at least.


I don't believe Apple has any inclination to sell their chips to other companies.


>And if a company with as much experience and expertise as Intel misses their ship dates, what on earth makes Apple think they can do better?

A decade of shipping millions of products that depend on very complicated supply chains on time and to great profit?


Yes, because Intel doesn't do the same thing. /s

Granted, Intel's supply chains probably aren't nearly as difficult as Apple's, but the supply chain is probably the least difficult part of shipping the next generation of a chip.


I wouldn't say that. From a materials and complexity standpoint, Apple has a much more difficult supply chain. But from a capital and planning standpoint, Intel has one of the most difficult supply chains out there. Every year they start building fabs that won't produce a single production chip for another 3 years, while planning on stuffing them full of capital-intensive technology that isn't technologically feasible yet and might not even work. Practically speaking, Intel has to accurately forecast what Apple's, Oracle's, HP's, and Dell's sales are going to be in 5 years, a good 3 years before those companies even start their strategic and capital plans. A five-year forecast for a yearly financial report at Apple might take a couple of economists a week to ballpark, whereas at Intel they probably have teams of economists, statisticians, etc. working year-round to improve their sales forecasts on horizons ranging from a week to a decade.


I think the point is that Apple's in the same league or better, and it doesn't have to compete with Intel in a fair fight. Intel has to work with every x86 program ever made (more or less) and in dozens of different environments. Apple only has to optimize its own stuff, and can change its mind on a whim.


There's a big difference between Motorola and Intel: This time around all their competition is also stuck with Intel.

Switching to Intel guaranteed that, in the worst case, they will always be on par with their competition. They'd lose that if they switched away to custom-built silicon.

Now if ARM would start to make serious inroads in the Laptop/Desktop market that might change. But then again, thanks to P.A. Semi Apple has more than enough ARM knowledge already.


If Apple wants to move off Intel for desktop CPUs in the near future, good luck to them - the laws of physics apply to Apple, too. Intel has invested tens of billions of dollars in process technology, owns key IP and assets, and (after a long delay) is pushing the power envelope of their CPUs down aggressively, with their latest generation providing desktop-grade performance at 4.5W and more to come. Their delays may affect aggressive product roadmaps, but have more to do with the fact that Intel is not one but two process shrinks ahead of the competition, and are dealing with problems nobody else has seen before.

Intel is better than ever at the power-performance game, despite suffering badly from ARM. The performance gap is huge. Their current position is the result of a continuous sequence of design success and R&D reinvestment that started with Pentium M more than 10 years ago.

If anything, I expect Intel to finally start breaking into the phone market as they catch up on SoCs.


Apple always wants to keep all options open – unlike in the past – but that doesn't necessarily mean Apple will move away from Intel. They probably have plans to do it and teams working on it, but as long as Intel can deliver what they need, they won't.

I think they are well prepared for every eventuality, but currently Intel’s and Apple’s interests are pretty well aligned.

Basically we are seeing a race towards a common goal from two directions here, though: Can Intel hit the low power goals (with sufficient performance) before Apple hits their performance goals (at low power) through some alternate route? Honestly, I don’t see Apple being able to touch Intel’s performance and with regards to power use Intel is getting better all the time. The space for ARM-like laptop chips is getting smaller by the day.

And Intel actually wants to hit that low power! It’s not like they are disinterested in that (like IBM being disinterested in making the CPUs Apple needs back in the day because they were making their money elsewhere and Apple was just a small, unimportant customer of them, not worth all the effort), they are working on that all the time.

That’s what I mean when I say their interests are aligned. I mean, I obviously think that Apple is always on the lookout for alternatives (and they have been through many such transitions by now, so I’m very confident that they would be able to pull it off), but I honestly think they would prefer their and Intel’s interests to remain aligned and Intel just making some kick-ass low-power CPUs, exactly what they need for their future retina resolution, light-weight, super-thin, all-day battery life, fanless MacBooks.


I really doubt Apple is going to do anything radical like that with its Intel-based products. To do so would be massively expensive, dwarfing the revenue from its Intel-based products, let alone any benefit. Apple's Intel-based products are a very small part of its revenue and have almost no growth.

If they do this, it will be for their ARM products: iPhone, iPad, iPod touch & Apple TV.


Yep, and a victim not just of the PowerPC nightmare but of two other minority architectures _before_ that, both the 65xx and the 680x0. Arguably they didn't care as much about the 6502/65816 because they were already moving to the Mac/68k across their whole platform (people often forget that the Apple II line was Apple's major product and revenue source for their first 10 years), but in the early 90s Apple, Atari, and Commodore were all left in the lurch by the decline of the 68000 architecture. The PowerPC was the nominated successor, but it dragged Apple through 10 years of transition, with buggy backwards compatibility, porting an OS that was never designed to be portable, and major sections of it running as emulated 68k for years after they stopped shipping 68k machines...


M68000 wasn't a minority architecture at the time Apple adopted it. It was a workstation workhorse. That's part of what justified the Lisa's $10,000 price in 1983. But a lot of it was margin, and hence a year later 68000 Macs could be sold for less than a quarter of that price but still offer incredible performance on a personal computer. By adopting a *nix-based OS, Apple reduced most hardware compatibility issues to recompilation, performance of course being another matter.

As for the dark days of Apple in the PC market and the deaths of Commodore and Atari, my recollection is that it had more to do with the stock market crash of 1987 tightening access to capital, and the S&L crisis that followed it creating a recession where downsizing was what corporations, not empty-nesters, did.


Not at the time they adopted it, but soon after. The moment the 386 became affordable, the 68k platform was in decline.


If the 65816 (or something very similar to it) was available in 1978, the computing world would probably be very different now. 1978 was the year that both the 8086 and 6809 were released, which were similar 8/16 bit chips. 1979 saw the release of the 68000, which was a much more powerful chip than 1983's 65c816.


The 68000 was more powerful for addressing large amounts of RAM that nobody could afford. In terms of actual workhorse performance, there's a good argument to be made that a similarly clocked 65816 could outperform a 68000, unless your benchmark is heavy on 32-bit integer calculations. Instructions per cycle and interrupt responsiveness are higher for the 65xx (and 6809).

I'd prefer a 6809 with a wider address bus over a 68000. I wish Motorola had improved on the 6809 rather than having two competing architectures. The 6809 is a joy to code for.


I understand the business logic, but as a non-hardware guy, I've often wondered at how this would look in reality:

- Apple has proven they can switch out hardware stacks, so that part seems straight forward enough. Especially given how mature XCode is.

- But: can they really get to a point where they compete with Intel directly? Would they try and use ARM or roll their own AMD64?


At this point do they HAVE to compete with Intel? The other thing Apple is REALLY good at is telling people what they want. Apple would likely be able to convince enough people that an ARM-based laptop is perfect for them.


Most Apple customers have no idea what ARM or ISA mean. Apple's not big on publishing detailed specs, so this doesn't even seem like a marketing issue. Apple will just put it in and people will buy it based on Apple's reputation alone.


Excellent point.


I wonder how many Apple customers were even aware of the PPC->X86 switch several years ago, let alone how many customers still remember it?


> At this point do they HAVE to compete with Intel? The other thing Apple is REALLY good at is telling people what they want. Apple would likely be able to convince enough people that an ARM-based laptop is perfect for them.

In my opinion it was mainly Steve Jobs who was brilliant at telling people what they want. So I personally doubt whether this strategy still works with Tim Cook as CEO.


And in all honesty, they are probably correct in telling people that.

People are already using incredibly low power/perf laptops (new MacBook).

I'm not convinced anyone doing content consumption would notice if the chip were swapped out for an ARM instead. After all, ARMs already power their content consumption devices (all of iOS).


That would be a bummer for me and a lot of other developers I'm sure. I know we're not the target market, but I like using Apple hardware and OS X, but for work I am always running multiple VMs with Windows and Linux.

I'd be sad if my upgrade path became some big, noisy, inelegant HP Xeon box.


I would add that the NeXT Computer was faced with similar problems, and had to switch between different processors several times.

Apple/NeXT has much experience with the technical challenges of porting entire platforms to different processors, and with the business challenges of being tied to critical technologies you do not control.


NeXT used the 68k from inception to purchase by Apple. While they tinkered with 88k and PPC, they never shipped those.


They shipped NeXTstep for x86, SPARC, and PA RISC too.


Very true, but the post I responded to claimed NeXT "had to switch between different processors several times", which isn't correct.


It is a common theme in computer architecture for someone to say, 'look, I have an awesome new architecture; all you need is a clever compiler to make it work'. None of them really pan out (other than in specific applications; you could argue GPGPU is an example, though that appeared more by accident than by design).

Apple would need a very good reason to produce their own ISA. Sure, they like to do many things in house, but they don't do everything themselves. The resources required to produce and support a whole new ISA are a major investment; they're only going to do it if in the long run it is cheaper than paying for an ARM architecture license. I just don't see a solid argument that a custom ISA and shiny new compiler would give them much (if indeed anything).

Whilst they may be shipping LLVM IR for the watch apps rather than ARM code, I think this is just so they can target the compile for a specific processor. Each one has its own performance quirks, and especially in such a power-sensitive environment it would make sense to do specific targeting.
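To illustrate what per-processor targeting from a single IR file looks like (a sketch, assuming an LLVM build whose ARM backend knows these CPU names; "swift" is the name LLVM's ARM backend uses for Apple's Swift core):

    /* tune.c -- one IR file, two per-processor builds:
     *
     *   clang -S -emit-llvm -target armv7-apple-ios tune.c -o tune.ll
     *   llc -mcpu=cortex-a8 tune.ll -o tune.a8.s
     *   llc -mcpu=swift     tune.ll -o tune.swift.s
     *
     * The two .s files can differ in scheduling and instruction
     * selection even though they come from identical IR. */
    float dot4(const float *a, const float *b) {
        float s = 0.0f;
        for (int i = 0; i < 4; i++)
            s += a[i] * b[i];
        return s;
    }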


I don't see why a new compiler would be necessary. It's unlikely they'd completely throw out the whole ARM ISA and replace it with something completely different in one go. More likely, they'll gradually add new instructions and mutate the characteristics of existing instructions over time. LLVM IR isn't completely ISA-agnostic (a lot of ISA-specific assumptions are still baked into it), but gradual iterative improvement is Apple's speciality. A couple of new instructions this year, deprecate a few old ones next, and in 5 years' time you've got a new ISA that's barely recognizable as ARM any more, yet each step along the way is just an incremental and largely backwards-compatible change.


> I don't see why a new compiler would be necessary. It's unlikely they'd completely throw out the whole ARM ISA and replace it with something completely different in one go. More likely, they'll gradually add new instructions and mutate the characteristics of existing instructions over time

Well, this is exactly the point. They cannot do this; the license will not permit them to alter the architecture. It's all or nothing (indeed, if the new ISA were too ARM-like, the lawyers would be sure to come knocking).

I'm sure they could reuse quite a lot of compiler technology, but the entire point of the article is that by doing a clean-slate ISA design you can do something radical and get gains. Whether this is true is unclear, but it would require some serious work on the compiler.

Producing their own conventional ISA would seem to be pointless as it wouldn't give them anything vs ARM (or indeed x86).


Apple has an architecture license with ARM, basically the broadest license that ARM sells. If you didn't know this, ARM was founded as a joint venture between Apple and Acorn. Their history goes back to the day the company was created. (ARM chips powered the first PDA, the Newton.)

Your assertion that their license will not permit them to alter the architecture is wrong. This is true of the vast majority of ARM licenses, but not Apple's.

They can take the ARM ISA and extend it in any way they want, and they can take ARM cores and adjust them, or design their own-- they have already done all of this (though to a small degree, not enough to be called a "new ISA".)


> Your assertion that their license will not permit them to alter the architecture is wrong. This is true of the vast majority of ARM licenses, but not Apple's.

> They can take the ARM ISA and extend it in any way they want, and they can take ARM cores and adjust them, or design their own-- they have already done all of this (though to a small degree, not enough to be called a "new ISA".)

What is your source for this? As far as I know, ARM do not permit modification of the designs they sell or alterations to the architecture. After all, allowing such things could lead to the erosion of their business (e.g. by letting Apple slowly slide to a non-ARM architecture).


Apple does not use ARM designs, they use certain principles present in the ARM architecture and the ISA.

Wikipedia:

"Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture."

https://en.wikipedia.org/wiki/ARM_architecture#Licensing


That's about the best description I could find, too. It all boils down to what 'comply fully' means.

Given the lego-like structure of the ARM instruction set (the 32-bit variant), with zillions of extensions (Jazelle, DSP instructions, Neon, Thumb, Thumb-2, various revisions of vector floating point instructions) and explicit support for "coprocessors" (https://en.wikipedia.org/wiki/ARM_architecture#Coprocessors), I suspect (based on common sense and nothing else) that the license allows expanding the instruction set and dropping whole modules.

But as I said elsewhere: concrete proof for that is lacking.


Which do you think makes more business sense: allowing a highly profitable and influential customer to gradually move away from your product over a period of many years, or forcing them to leave immediately?


Apple engineers were heavily involved in the design of the ARM 610. I can imagine they are equally involved now.


"the license will not permit them to alter the architecture it's all or nothing"

Can you give a reference for that? As far as I know, Apple has an architectural license that allows them to ship anything that passes a test suite. Depending on the way the test suite is specified, that may not rule out shipping a superset of ARM, for example with a few extra instructions.

One that I _think may_ be useful is one to load values from tagged pointers (caveat: I know little of CPU architecture).
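For illustration, here is a hypothetical C sketch of the test-and-branch sequence such an instruction could collapse into a single load; the tag layout is invented, not Apple's actual tagged-pointer format:

    /* tagged.c -- hypothetical scheme: low bit set means the payload
     * lives in the pointer itself rather than in memory. */
    #include <stdint.h>

    #define TAG_BIT 1u

    long load_value(uintptr_t p) {
        if (p & TAG_BIT)              /* tagged: decode inline payload */
            return (long)(p >> 1);
        return *(long *)p;            /* untagged: real memory load */
    }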


> Can you give a reference for that? As far as I know, Apple has an architectural license that allows them to ship anything that passes a test suite.

I can't find one, actually, so perhaps I am wrong. If it's strictly based upon the test suite, then maybe it's sufficiently thorough that interesting alterations are effectively impossible (e.g. it could test that you get invalid-instruction exceptions when you feed in unused bits of the instruction encoding space, preventing you from adding any).


Turns out I'm wrong. The A6 used a variant named ARMv7s: http://www.primatelabs.com/blog/2012/09/apple-a6/

It has two extra integer division instructions and some extra floating point. Though the blog says that the extra floating point instructions are also present in ARMv7 (in Xcode) but unused.

So perhaps they're allowed to add whatever they want? I suspect only the lawyers truly know...

What is pretty certain is that they couldn't break compatibility with the ARM architecture.


Those were the instructions (integer divide and VFPv4) added in Cortex-A15 chips.


You make some good points. I don't think licensing is likely to be an issue. ARM offer a wide range of licensing deals, and since Apple wouldn't be selling their chips, I don't see why ARM would feel the need to enforce compatibility. It's going to be interesting to see if this ever happens.


ARM's market cap is 20B. If Apple sees enough benefit to do this, they can buy their way out of any legal questions that might arise.


It's not the money that's the hurdle for that, buying ARM would be a massive antitrust headache for Apple.


Is owning the ISA worth 20 billion dollars? You could buy half the music industry for that


If WhatsApp was worth 19B, it's easy to see the ISA being worth at least that much.

edit: for what possible reason can this be downvoted? It's just a datapoint.


> Apple would need a very good reason to produce their own ISA

I think it's pretty much a no-brainer for them to at least try. They have so much cash available that spending a few billion dollars on a moonshot has little downside and crazy potential upside.

When they push Intel to advance on some front, those benefits are enjoyed by Apple's competitors as well. If Apple brings it all in-house, they can try to stay a generation ahead of Intel. With the amount of money they have to throw at the problem, they could starve Intel of some key talent, and hurting Intel hurts Apple's competitors.


They already did that in the 90s when they collaborated with ARM on their first chip as a company split off from Acorn. This chip was the ARM610 for the Newton. It is pretty much the predecessor to every ARM chip sold today, with the possible exception of the Cortex-Ms.


New 15-second Apple spot: We'll hurt everyone but ourselves.

Edit:

We'll hurt everyone, maybe even ourselves.


I think you are missing one thing. This would not be a general-purpose ISA designed for 3rd parties to develop and run things on; this would be an ISA specific to Apple.

Apple has absolute control over anything and everything that would run on this thing; I don't think it is about paying licenses to ARM. When you have almost infinite money, in the long term having a leaner/faster/more efficient custom ISA is better and more lucrative than depending on ARM or Intel for innovation. They can make it run existing applications with a translation layer (already done by Nvidia/Transmeta, as mentioned elsewhere in this thread). So, I don't see why not.


Leaner? Faster? More efficient? Hahaha, that's great! Really funny stuff...


Why don't you humor us and explain why it can not be?


Why don't you point out some of the inefficiencies they are hobbled by? I'll assert that there aren't any at the ISA layer, none that are measurable.


So, 40-year-old x86 and 30-year-old ARM are not full of legacy issues and layers upon layers of hacks? A fresh start always has possibilities; the real question is whether Apple is competent enough to pull it off.


64 bit ARM is a mostly new ISA.


x86 is rough, but ARM is still a fairly clean ISA.


What about ia32e x86?

Other than booting it, and EFI is addressing that, it's a fairly clean and nice architecture. Some of the registers have legacy names. Are there any really odd legacy behaviors?


Pretty much everything still exists in AMD64. You can still set up a 16-bit process while running under long mode.


x86 is a variable-length encoding, which is a good thing for cache use, but it doesn't help these days because the most common instructions are not the shortest, and the shortest ones are things like BCD and BOUND that nobody uses.

x86-32 doesn't have nearly enough register names, causing totally unnecessary memory spills, which the hardware was never good enough to hide, and forcing arguments to be passed on the stack. That's why x86-64 is faster than -32, even though 64-bit wastes so much cache space.

And some instructions are just randomly slow because of the overhead of handling the weird encoding; 16-bit math, for example, is slower than either 8-bit or 32-bit math.

Then there's eflags, but that's a minor complaint.


Need a good reason? OK, here's one: they need an order of magnitude better battery life for their watch, without sacrificing capabilities. Since Apple has their own chip designers, they have probably already investigated the effects a different ISA could have on power consumption.


> The resources required to produce and support a whole new ISA are a major investment

It's not that hard, really. With such powerful tools as LLVM, a new ISA can even be designed and implemented with all the tooling by a very small team.

What is hard in all this ISA business is the backward compatibility concerns. Once you're liberated from this, you're free to do whatever you like. They may want to experiment with a family of ISAs, maybe entirely incompatible, targeting different device classes (instead of a single ISA across the range).

Source: experience in the mobile GPUs design.


Apple is basically free of backwards compatibility concerns, since they use LLVM and Xcode. They have previously required all new apps to be recompiled for the current version.

It won't be too long before apps that have not been updated in 2 years start disappearing from the store -- that's all they'd have to do, and the app makers would comply.


And Apple has a long and reasonably successful history when it comes to forcing developers to adopt new approaches, e.g. PowerPC, Intel, 64-bit, LLVM, OS X.


> With such powerful tools as LLVM, a new ISA can even be designed and implemented with all the tooling by a very small team.

The Mill team defines their ISA and encoding manually, and then programmatically generates the compiler, assembler, linker, and entire toolchain.


And others are even generating hardware (plus a compiler toolchain) out of an ISA spec: https://en.wikipedia.org/wiki/LISA_(Language_for_Instruction...


They might want to merge ARM and x86 and create a super-ISA that allows them to run binaries from either platform in the new super-CPU, which might incidentally be fast enough to make emulating an OSX system quite feasible .. while letting us run iApps, &etc. Maybe, when Apple want to merge their products all into one base architecture, and create the SuperPad, it'll have a CPU that lets OSX+iOS apps run, side-by-side, with ease ..


I don't think this holds water at all.

The crux of the article is:

> ...one last gap remains in the middle of this stack of system exclusivity: Apple licenses the instruction set architecture for its mobile devices from ARM.

But Apple not only already designs its own SoCs independently, it also regularly adds its own opcodes to the ARM instruction sets it licenses, as it sees fit.

The alternative to licensing from ARM, even if they "invented their own ISA", would be to pay an exorbitant sum in royalties to ARM and every other patent-holder whose technology they might dare use in their chip. So paying ARM for their technology in one go just makes the most economical/legal sense.


It's widely disputed that LLVM bitcode is actually ISA-agnostic. There is a HN comment quoting Chris Lattner as saying that CPU independence isn't really the point of bitcode apps (I'd find it now but I'm on mobile). The thought is that it has a lot more to do with the ability to re-optimize.



Though the arguments are interesting, I'm not convinced

Sure, the semantic gap exists. But ARM and x86 have evolved and have overcome a lot of difficulties.

People like to bash x86 but it has a big advantage: it's compact. ARM Thumb is compact but not so much.

Also, remember how big a success the last 'new ISA' (Itanium) was?

Compiler and processor front-end beat "perfect ISA" today


The reason other ISAs have failed to get traction is compatibility. They couldn't run any existing binaries, and everyone wanting to use it would have to develop and adopt new tools. But Apple don't care about interoperability and completely control the entire software tools and distribution stack from top to bottom. They don't have to ask anyone else to do anything at all that they aren't doing already. No previous ISA vendor has even been in that position. Not even IBM with their mainframes. Furthermore the architecture of LLVM decouples it from particular processor architectures in ways that previous compiler architectures didn't, greatly easing the migration process at a technical level.

But what's the benefit? Maybe a few percent speed/power efficiency boost? We've already seen that Apple will go to extraordinary lengths for a few percent improved performance, particularly when it comes to power efficiency. A few percent here, a few more there and soon you're talking an hour+ extra battery life.

I think this is highly plausible. There just doesn't seem to be any particular reason why they wouldn't do it, and plenty of reasons why they would.


> Furthermore the architecture of LLVM decouples it from particular processor architectures in ways that previous compiler architectures didn't, greatly easing the migration process at a technical level.

How is LLVM IR (which I am told is arch specific) different from GCC with RTL?


> No previous ISA vendor has even been in that position.

Remember the transition from VAX to Alpha? And they kept the binary and source compatibility in quite an inventive way (with both binary translation and by making MACRO32 just another high level language).


Itanium is 20-year-old technology; a lot has changed since. I wouldn't be surprised if they came up with a completely new ISA and produced their own chips now. If they can squeeze a few more dollars per device out of such a move, with the added bonus of more efficient and faster operation, they will do it.

Margins are the only important thing for Apple; nothing else matters.


New ISAs keep appearing - in the GPU world nobody cares about any backward compatibility. Domain-specific ISAs make a lot of sense, and given the growing diversity of the Apple devices needs, they may really be ready to explore this possibility.


> People like to bash x86 but it has a big advantage: it's compact.

x86-64 isn't. I've measured this: the average instruction length is just about 4 bytes. REX prefixes add up quick.
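For anyone who wants to reproduce that kind of measurement, here is a sketch using the Capstone disassembler (an assumption on my part; the commenter didn't say how they measured). A real run would feed in a binary's .text section rather than a hardcoded buffer:

    /* insn_len.c -- average x86-64 instruction length via Capstone.
     * Build: cc insn_len.c -lcapstone */
    #include <capstone/capstone.h>
    #include <stdio.h>

    int main(void) {
        const uint8_t code[] = {
            0x48, 0x89, 0xd8,        /* mov rax, rbx (note the REX prefix 0x48) */
            0x48, 0x83, 0xc0, 0x2a,  /* add rax, 42 */
            0xc3                     /* ret */
        };

        csh handle;
        cs_insn *insn;
        if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
            return 1;

        size_t n = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
        size_t total = 0;
        for (size_t i = 0; i < n; i++)
            total += insn[i].size;   /* encoded length of each instruction */

        if (n)
            printf("%zu instructions, %.2f bytes average\n",
                   n, (double)total / n);

        cs_free(insn, n);
        cs_close(&handle);
        return 0;
    }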


Is that really the definitive metric though? Seems like you'd want to compare overall program-text size between equivalent binaries for x86[-64] and $otherarch (i.e. x86 may be able to get away with fewer of those ~same-size instructions due to each of them doing slightly more on average).


> a traditional von Neumann ISA like ARM incurs a semantic gap; the architecture wastes time and energy rediscovering facts that the compiler already knew.

So thought the designers of Itanium too; it turned out that the compiler doesn't know sufficiently much for an architecture like Itanium.

Were there any significant advances in compiler technology since then that would make it worthwhile to experiment with a new ISA?


This seems a bit like clickbait. It holds no substantial information. They might as well have summed up the article with 'One day, Apple will make their own processor.' That's all it says.


To be fair, it says a wee bit more: it says Apple has an LLVM IR layer and a semantic gap exists with ARM, so it'll be both easier and necessary to create their own ISA layer.

I'd really like to know the details behind the semantic gap, though. That was annoyingly vague and really is the premise of the entire argument.


Yeah, like "War And Peace" is about Russia in the times of Napoleon, and all "The Art of Computer Programming" says is "here is a bunch of algorithms".


"One day, apple will make their own processor." -> one day in the past. Apple already make their own processors.


I don't think they'll produce their own ISA. They have an architectural license from ARM, so why bother.

But Intel doesn't exactly produce chips that are helpful to Apple. Since Apple switched, Intel has gotten rid of third-party chipsets. This removed a lot of customization options and basically made life easier for Intel, since they produce a fixed set of chips and you have to take them. Also, Intel's market differentiation of chip features probably doesn't help.

Apple wants to provide a custom experience, and Apple building their own PC-class ARM chips will allow that.

[edit: also, Intel's paying people to produce MacBook Air clones probably didn't help]


It will never happen. There isn't an ARM chip out there that could handle the x86 emulation needed to move off of the platform. The only reason Apple was able to move off of PowerPC is that they were able to emulate the ISA on x86.


I don't think that will be the limiting factor. I can see Apple saying the new machines will run only newly compiled software.

On that note, I get the feeling Bootcamp for ARM Macs would be interesting since it would be an ARM version of Windows.


The last time Microsoft made an ARM version of interactive/desktop Windows (Windows RT for the Surface/Surface 2), it didn't support legacy Win32 apps (native or x86 emulated). It was a huge flop; Win32 apps are still the overriding reason why anyone runs Windows.

And now that Intel has gotten its act together with low-power SoCs, I don't see desktop Windows coming back to ARM any time soon.

(yes, there is Windows 10 for ARM but only for IoT platforms, it doesn't support graphical apps)


The Windows RT Surface was a big flop for a lot of reasons. Telling developers it had an iPad store model when the x86 version didn't was probably a bigger problem.


I read this wondering if the author had any remembrance or understanding of the original CHRP/PReP fiasco (https://en.wikipedia.org/wiki/AIM_alliance). That was a painful time for Apple. Given the nature of how ARM operates, and the fact that Apple has a full ARM license (so could, at their leisure, add special sauce to the instruction set if needed), a new ISA is nothing new for them: they already did x86-64 => ARM. The question for me is whether or not their desktop/laptop series moves that way. I've said for years a 12" MacBook Air running iOS would put a huge dent in the Chromebook market, and a 12" ARM-based Air running iOS? Well, that would be a pretty obvious move to me.


This article points to another bit of the technology stack that Apple doesn't own - LLVM IR.

LLVM is open source and therefore doesn't require licensing unlike the ARM Instruction Set, but it's another thing they don't perfectly control and they're happy with that.

Developing a new ISA would be extremely expensive, and they'd have to have a really good reason for doing it. The post doesn't suggest why it would be beneficial, merely extrapolates a pattern.


According to Wikipedia Apple hired Chris Lattner, one of the original authors of LLVM, in 2005. (https://en.wikipedia.org/wiki/LLVM)

So they don't own it, but they do employ some of the most knowledgeable developers of the project. They almost certainly have control over the direction it takes, and could easily develop features behind closed doors.


> This article points to another bit of the technology stack that Apple doesn't own - LLVM IR.

Hmm, I saw the "LLVM owned since ~2005". What exactly does the article mean by "owned"?

Perhaps, Apple owns it in the same way that all of us do, or perhaps the article just needs a little more work.


Apple can also fork LLVM if they want to assert more control.


Apple has owned part of ARM Holdings, as part of a joint venture between them and Acorn, for decades now. That was one of the first pieces of the widget they had. The joke is on everyone making phones and servers: you are paying Apple already.

From Wikipedia:

"The company was founded in November 1990 as Advanced RISC. Machines Ltd and structured as a joint venture between Acorn Computers, Apple Computer (now Apple Inc.) and VLSI Technology."


Apple sold their remaining stake in ARM back in 2003.

http://appleinsider.com/articles/13/08/12/iphone-patent-wars...


I can't see this as a definitive sign that Apple are going to introduce a new ISA. Instead, this gives them flexibility. They can switch ISAs (e.g. to a new ARM revision, it doesn't have to be an Apple-specific one though) without authors having to recompile their code for all platforms. It also allows Apple to support multiple ISAs without code bloat - the app store can download the correct binary to your phone, just like they now can send only the correct size images.

I don't know how abstract the LLVM IR is - can you take IR and compile it to two wildly different ISAs, (say) x86 and ARM, and get full optimisation on both? Or is it more limited, e.g. allowing you to compile to ARM version x and ARM version y (e.g. if version 'y' supports some new SIMD instructions).


LLVM converts your input language into the intermediate representation (LLVM IR) with a frontend. Then, a backend takes IR and spits out code for an ISA.

IR is ISA-agnostic. Compiling the same code to x86 and ARM will have the same IR; it's only the backend step that's different. Thus, of course you can take IR "meant for ARM" and compile it to "x86" and get full optimisation. The IR isn't meant for ARM anyway; IR is just IR.
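Mechanically, the split looks like this (a minimal sketch; the triples are just examples, and note the replies below, which point out that the IR the frontend emits is not actually target-neutral):

    /* pipeline.c -- the frontend/backend split in practice.
     *
     * Frontend: C -> LLVM IR
     *   clang -S -emit-llvm pipeline.c -o pipeline.ll
     *
     * Backend: the same IR -> two different ISAs
     *   llc -mtriple=x86_64-unknown-linux-gnu pipeline.ll -o pipeline.x86.s
     *   llc -mtriple=armv7-unknown-linux-gnueabihf pipeline.ll -o pipeline.arm.s
     */
    int add(int a, int b) {
        return a + b;
    }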


> Compiling the same code to x86 and ARM will have the same IR

This is incorrect. Clang has specific code paths (producing different IR) for x86 and ARM, specifically around calling conventions, and I'm sure there are more.
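One concrete place this shows up is struct returns, because the frontend lowers the target's calling convention into the IR (a sketch; the exact IR varies by clang version):

    /* abi.c -- the frontend bakes the target ABI into the IR.
     *
     *   clang -S -emit-llvm -target x86_64-apple-macosx abi.c -o abi.x86.ll
     *   clang -S -emit-llvm -target armv7-apple-ios     abi.c -o abi.arm.ll
     *
     * On x86-64 the pair typically comes back in registers (an IR
     * return type like "{ i64, i64 }"); on 32-bit ARM the IR instead
     * gains a hidden "sret" pointer parameter. Same C, different IR. */
    struct pair { long a, b; };

    struct pair make_pair(long a, long b) {
        struct pair p = { a, b };
        return p;
    }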


> IR is ISA agnostic

It's absolutely ISA-specific, and this target-independence pipe dream keeps being repeated on HN. Just search the LLVM documentation for "target specific."


I think the Mill team is a good acquisition target for Apple... Apple could swiftly switch their whole stack over and gain remarkable benefits. I can't think of another company that would even be able to switch to the Mill line.


Given the effort put into engineering support for bitcode in iOS 9, it's clear that the processor's instruction set is definitely going to change at some point. The only question is when. I wouldn't be surprised if these new processors were unveiled about 8 and a half hours from now.


>I wouldn't be surprised if these new processors were unveiled about 8 and a half hours from now.

I believe that we will see the new processor architecture in the next Apple Watch, not the next iPhone. I think the A9, which will be unveiled later today, will be an improvement over the A8, but it will not be a radical improvement and will not contain a new ISA. It will have improved ARM cores.

Apple also doesn't require bitcode for iOS apps right now. They suggest you add it but don't require it. It's required for watchOS apps.

Once Apple starts requiring bitcode for iOS, within a year or two you can expect some radical Apple CPUs for iPhone/iPad.


Apple doesn't want their own ISA. MIPS was going for a song just a couple of years ago and Apple decided not to buy. That was the best shot they'll have at their own ISA (they could go with RISC-V, but they wouldn't own it). It is my belief that they got what they wanted by forcing ARM to switch to the ARMv8 ISA.

New microarchitectures take 4 years or so to design. ARM announced the new ISA in 2011 and didn't have a shippable product until 2015, which is very typical. All the other implementers (e.g. Qualcomm) have also not been able to ship until now (Qualcomm's custom Kryo doesn't hit until later this year). Apple shipped a better product in 2013 than the A57 is today (ARM doesn't catch up until the A72 later this year). To my knowledge, a licensee had never shipped a new ISA before the ISA designer up to this point. How did they get a chip designed, validated, taped out, produced, integrated, and shipped in 2 years?

I believe that Apple looked into purchasing MIPS or designing a custom ISA, but was put off by the costs and headaches associated with moving ISAs (having already done this with the change from POWER to x86). Instead, they design an ISA that is incredibly close to MIPS and start implementing a microarchitecture. Once they reach the stage where they must make a decision about which ISA, they tell ARM to use their ISA or they will move to MIPS.

ARM is already somewhat threatened by Android having first-class support for MIPS. Having such a big player switch would be extremely threatening to them. The result would be an immediate caving. ARM would need to publish the ISA, but Apple would have a couple year head-start on implementing it (this head-start also puts Apple in a good competitive position relative to Android phone manufacturers). The rest is observable history.

This may not accurately represent what really caused this series of events, but it does explain why Apple got a good chip out before ARM could release a bad one (ARM couldn't even get a smaller, easier chip out the door). It also explains why all the other chip companies hint at their surprise at Apple's early launch.


Getting rid of accumulated cruft would simplify decode logic. Customize it for iOS. Siliconize common functions.

Apple doesn't have to worry about standards, backwards compatibility or adoption. If they have noticed unnecessary inefficiencies, they can fix them.


Very insightful little post. In theory Apple could start evolving their ISA over time alongside everything else because LLVM gives them an abstraction layer. (Everyone else could do this too of course.)


I'm pretty certain that lots of folks here are looking at this from an unproductive perspective.

Robert Colwell from DARPA presented a talk at HotChips 2013 which was focused on post 'Moore's Law' technologies and he brings up specialized ISAs.

Looking at the potential of Apple releasing chips with new ISAs from this perspective makes a lot more sense (to me at least).

https://www.youtube.com/watch?v=JpgV6rCn5-g


Thought provoking, but I disagree with pretty much every conclusion. If there is anything they really need they can likely get ARM or Intel to put it in there, so building an ISA from scratch isn't going to gain them much. It's the same reason why they aren't going to become a cell-phone carrier even though that's another part of the vertical stack that they could try to get into: the cost/benefit makes it not worth it.


This "get freedom from binary compatibility by shipping binaries as compiler IR" concept is a venerable one and has been done many times with VLIW machines:

See here, https://scholar.google.com/scholar?hl=en&q=A+Technique+for+O...

(Not to mention earlier examples like AS/400, P-code etc)

I think the Mill was doing this too?


The key sentence is: "the [current] architecture wastes time and energy rediscovering facts that the compiler already knew."

If Apple does something here it's going to be for the watch, not Macs. Pushing the envelope of efficiency for the watch is where it becomes worth it to make this kind of (otherwise insane) investment. It's also a relatively simpler stack, so more feasible.


In case you were wondering, like I was: ISA = instruction set architecture. http://google.com/search?q=what+is+an+isa


Trying to understand a bit more about the bitcode concept: since a developer submits bitcode for an Apple Watch app, does that mean he can't optimize his own app for performance?

From what I know, on Android you can still optimize your app at the assembly level, and I think that's what motivates developers at times. Remember the iPhone camera high-speed shot app? That was hand-coded to be fast on the iPhone.


Bitcode can still contain inline assembly. Additionally, bitcode is significantly more tied to the target architecture than the author is assuming.


We'll see Apple GPU IP in iPhones/iPads before a new ISA.


Why didn't Apple just ask authors to submit source code?


Many developers see Apple as a competitor, and Apple has had a long history of putting their platform's developers out of business. For example, iTunes destroyed the market for music library software.


Because some developers prefer their own build systems, which Apple really does not feel like setting up?


LLVM "owned" by Apple? Can't really "own" open-source.


To some extent, it does own it. Ever tried to push even a mildly controversial patch to LLVM or Clang upstream? If the Apple guys are against it, it's a full stop, no matter how many other parties are interested.

And their reasons to be against a particular patch are often as simple as that it'll break compatibility with their internal, unpublished changes to LLVM.

Learned it the hard way...


Chris Lattner is the "Apple guys" now, so..


What's the relevant definition of "owned" here? As long as Apple can modify LLVM however it wants, and as long as it's sure no corporate actor can restrict access rights to it in the future, it effectively owns it, although not exclusively.

Having exclusive rights to a strategic business brick is important when that brick is enough to give someone else a stranglehold on the whole vertical market. Although having a good compiler is one of the many requirements of Apple's business model, it's neither the defining feature, nor what competitors lack in order to threaten Apple.

So yes, in this context, Apple non-exclusively owns LLVM.


"Owns" is the wrong word. No need to redefine it to fit the sentence; they should have chosen a different word.


Perhaps the word is 'pwn'?


Of course it can.

Open source doesn't mean the process is collaborative. It just means the code is available for you to use, modify and fork. There are numerous open source projects (mostly those sponsored by companies) whose direction you would never be able to change.

Look at Node/IO for how things would need to work.


I reckon the author's point is that they control how they use it. They might not own it all (but they surely hold the copyright to much of it), but no other entity outside of their control is going to pull the rug from under their feet.


For Apple to develop an ISA they would have to develop a CPU, something they don't do in-house.

Up until now the CPUs have been designed by ARM. Cores like Swift for example were not developed at Apple, but rather at ARM.

The likelihood of ARM releasing a chip into the wild with a non-ARM ISA is not all that great, since the ARM ISA is what ARM makes all of its living from.

Until the day where Apple is capable of creating a CPU by themselves there will be no Apple ISA.


> For Apple to develop an ISA they would have to develop a CPU, something they don't do in-house.

Apple owns P.A. Semi, which employs some of the smartest and most experienced CPU designers in the business.



