Why Apple Will Switch to ARM-Based Macs (2014) (mattrichman.net)
221 points by ctippett on Aug 23, 2022 | 210 comments



I think it's more interesting to see HN's past discussion from 2015: https://news.ycombinator.com/item?id=9244240

Some key takeaways:

- A lot of people got it wrong by betting on Intel.

- Two people who said it would take five years were right.

- The suggestion that Apple would use Rosetta was right.

- The prediction of fanless MacBooks was right.

It's easy to doubt but it actually takes effort to form educated guesses about the future.


2015 was a pivotal year for signalling Apple's future direction. Here's why:

2015 is the year both the (Intel) 12” MacBook and the first iPad Pro were released.

These were also both entirely new form factors for Apple (and both roughly the same size).

On the Intel front, Apple saw how underpowered, short on battery life, and hot the 12” MacBook was.

Then they saw how performant, long-lasting on battery, and cool their ARM-based iPad Pro was.

This became an easy decision for Apple to ditch Intel when they saw how much better their own iPad Pro was relative to the 12” Intel MacBook.


>Apple saw how underpowered, short on battery life, and hot the 12” MacBook was

I sure hope they saw. They designed, built and sold the things.

Would be hard to miss those issues during their development and testing, wouldn't it?

It's not like Apple didn't know what they were selling.


And they could do what, given that Intel was their only option at the time? Not build it?

Not to mention tons of people love them...


They could have used lower power Intel chips. Nobody forced them to put 45W chips in a paper thin chassis, but their marketing guys really wanted to have the top of the line Intel chips in them despite the wattage.

What did you expect would happen?

It would be like Apple putting the chip from the Mac Studio in the Air.


They did use low-power chips. They had Core M CPUs (later rebranded as i5/i7, largely because of OEMs). These were 4.5W TDP chips, even the best i7 in the 2017 model (the i7-7Y75). The ONLY chips Intel had with lower power usage at the time were the significantly slower Atom and Atom derivatives.


>They could have used lower power Intel chips

They could, but those were crap.

>What did you expect would happen?

That it would be a niche model due to its size, but an otherwise much-beloved one, whose absence in 2022 is still lamented? (See comments below.)


Arguably, the customers forced Apple to use a reasonably performing chip rather than one that didn't even perform reasonably?


There's seeing at a superficial level and seeing at a deeper level. Plenty of companies were designing, building and selling hardware using the same components and making many of the same tradeoffs. The point is that Apple realised what this meant, and saw what it would take to actually address those issues.


I loved my 12" macbook and would buy an Mx one in a heartbeat. It was so small I'd forget if I had it in my bag and would have to look. I traveled all over the planet and it was super convenient. I even wrote code on it.

Yes it was slow but really, not terrible.


The 12” MacBook was amazing?

It ran Windows with better battery life than macOS. It was a solid form factor. Never overheated. Great for traveling with.

I miss mine.


The 12” will remain (in my personal memory) the best Apple product I have ever owned.


The 12” had an amazing form factor, but it was great only for native apps; web apps ran horribly on it. But it was beautiful for its time. Wasn't it the first MacBook to come in various colors? Once a lady at a coffee shop even approached me and asked what version of MacBook it was. No no... that's the end of the story :)


I almost bought one - I would have, if Apple had had one in stock when I tried to buy one. A truly fascinating device. Probably the smallest "full size" laptop, depending how you count, ever made.

For sure, its weakest point was the processor, which had to sacrifice too much performance to stay in the thermal budget. I guess the basic design decisions for the 12" MacBook were made when Intel still seemed to be on track for their 10nm process. But that wasn't the only caveat of its design. It of course had the horrible butterfly keyboard which would plague Apple laptops for years, and 12" is the absolute minimum screen size for doing work.

I think Apple has another laptop design under 13" coming, and it could be 12". The ARM processors pretty much solve the power/performance problem. The keyboard could actually be the biggest hurdle, as it increases the body thickness. But the keyboard of the new M2 Air is just wonderful, well worth whatever it adds to the device thickness. I wouldn't even be surprised if they made not an ultra-thin 12" MacBook but rather a 12" MB Pro, which might be slightly thicker than the Air.


Same. It certainly had its flaws, but it was ahead of its time in many ways. At the time I was traveling a lot and didn't need much power so it was perfect for me. It seems I got extremely lucky as I never had keyboard issues with mine.

Nowadays you could stick a (maybe throttled or fewer cores to get even more battery life) M1/M2 in there, add a second USB-C port, and it would be an excellent device without any of the downsides that the original version had.


I also never had keyboard issues, but mine was the 2015 Gen 1. I know a couple of people who got the 2016 or 2017 model, and their keyboards had issues with keys not registering.

That form factor with an M1/M2 would be awesome! Especially to use in bed, because the screen was so nice and it was so light.


It’s also when Apple’s laptop speaker game left the earth. To this day no one even came near Apple.


Yeah, the speakers on my 2015 12” are still better than those on my 2021 Lenovo Legion 7 and 2022 Dell XPS 15.

I kinda wanna get an M2 Pro but I personally dislike macOS. :(


The 12” MacBook (2015) was the last Apple laptop I bought before the current MacBook Air M2. I opted for a BTO with the i7 processor but it was still stultifying and slow, a constant exercise in frustration. Eventually the motherboard died requiring an out-of-warranty repair, which I conceded to, but shortly after that the batteries died… and then I set it aside. An absolutely stunning piece of design work but so unusably slow it put me off laptops for a full seven years. What did I use in the meantime? A 2012 Mac Mini upgraded to the hilt and more recently a Mac Studio, and various releases of iPads and iPad Pros throughout the years.


Macs always run Windows better than macOS, which I find hilarious


I'm talking about battery life. If there's a discrete GPU, it drains the battery so quickly because switching between integrated and discrete graphics sucks on Windows.


>>On the Intel front, Apple saw how underpowered, short on battery life, and hot the 12” MacBook was.

>>Then they saw how performant, long-lasting on battery, and cool their ARM-based iPad Pro was.

>>This became an easy decision for Apple to ditch Intel when they saw how much better their own iPad Pro was relative to the 12” Intel MacBook.

Those of us who have been using Macs since the PowerPC days have all heard this argument against x86 chips. As you all know, Apple actually had to switch from RISC (PowerPC) chips to Intel x86 chips before switching back to a RISC ISA now. I'm just saying that performance is not the main reason to switch chip architectures.


IBM's PowerPC chips were high-performance workstation chips, which means you're going to get performance and heat. In a desktop computer you can add robust cooling; in an ultra-portable laptop, you cannot.

Apple abandoned PPC after it became obvious that laptops were going to become more popular than desktops, and the cheese grater Power Mac G5 required liquid cooling for the dual-CPU versions.

Apple moved from Intel to ARM for the exact same reason. Intel no longer cares about pursuing high performance with a low power draw. They have returned to the Pentium 4 strategy of performance via clock speed and power increases.


I believe the specific reason (as far as Apple disclosed) was that lower performance / higher efficiency PowerPC CPUs just weren't on IBM's roadmap and whatever quantity of CPUs Apple was buying and/or willing to commit to buying wasn't enough for IBM to consider it. Intel was focusing on power efficiency after the whole Netburst disaster.


Have they really gone back to the Pentium 4 strategy, or are they behind in process node tech and can only compete with AMD's performance by pumping power into their CPUs? I think Intel's top priority right now is to catch up to TSMC's node tech, to get as close to performance-per-watt parity (or beyond) as they can against AMD (and Apple).


>Raptor Lake to Offer ‘Unlimited Power’ Mode for Those Who Don’t Care About Heat, Electric Bills

https://www.extremetech.com/computing/338748-raptor-lake-to-...

If that isn't a return to the Pentium 4 strategy, I don't know what is.


Stop feeding into hyperbole. The 13900K is for maybe 5% of the market, and the non-K will give crazy enough performance for most buyers (if they even exist at this point). Giving the 13900 a high-heat mode is just a chance to keep up with/beat AMD for bragging rights.

Intel isn't relying on an architecture that's about to run them into a wall like the Pentium 4's architecture was about to. What's keeping them in second place is being behind on their process, and beyond that, execution.


Sorry, but Intel is pretty obviously chasing performance via higher and higher clock speeds and ridiculously high power draws just like they did previously with their Pentium 4 strategy.

You can argue that it's not what they "want" to be doing, but it's certainly what they are doing.


I'm obviously not going to change your mind so you do you


>Intel Raptor Lake boosts performance, but the [power] requirements are staggering

https://www.digitaltrends.com/computing/intel-raptor-lake-ma...

What you can't change is the reality of Intel's actions.


> This became an easy decision [in 2015] for Apple to ditch Intel when they saw how much better their own iPad Pro was relative to the 12” Intel MacBook.

[1] implies that the M1 development started around 2008 - although you could also read it that the M1 was 5 years in development but that sounds a bit quick and also doesn't fit with [3] in 2014 - "Apple Testing ARM Based Mac Prototypes".

But there doesn't seem to be any other direct corroboration of the 2008 date that I can find.

[1] https://www.youtube.com/watch?v=4oDZyOf6CW4 (via [2])

[2] https://news.ycombinator.com/item?id=31778257

[3] https://www.macrumors.com/2014/05/25/arm-mac-magic-trackpad/


2008 is probably about the time Apple started serious work on in-house CPU cores, rather than specifically the M1? The A6 chip was the first with an Apple-designed core, and was released in 2012, so they would have been working on it for several years before that.


2008 is the year Apple bought PA Semi, an independent chip design house, which formed the core of their semiconductor design team for Apple Silicon.


You pretty much nailed it. They likely started work when they hired Johny Srouji.

https://www.apple.com/leadership/johny-srouji/


Also, it was around this time that the chips in iPad Pros were strangely getting close to Intel CPU performance in benchmarks while running in thin enclosures without fans.


> Also, it was around this time that the chips in iPad Pros were strangely getting close to Intel CPU performance

Around what time? 2015?

Maybe I could have been clearer in my original post, but 2015 was the year the iPad Pro launched.

How could "iPad Pros were getting close to Intel performance" happen with the very first version?

The very first iPad Pro was already more performant than the 12” MacBook (which also launched that same year).


Oh, I'm pretty sure that since the iPad Pro launched, or maybe the generation after, its scores in various benchmarks were getting scarily close to the Intel benchmarks. It might've started maybe 30% off, but with each generation the gap closed. It was increasingly clear that Apple's ARM chips weren't just toy chips limited to tablets, and tech media talked about how it was only a matter of time before there would be an ARM Mac or a Mac/iOS convergence.

Whenever Tim Cook was asked, pretty much up until maybe a few months before the Apple Silicon announcement, he said there weren't any plans to switch the Mac to ARM.


and yet today the wonderful 12" MacBook is dead


I personally like the 12" form factor, and I own one, but I saw approximately zero of them in the wild. It's significantly lighter, I can still get work done on it, it's great for travel... and yet I can't remember seeing other people using them.


I want a 12" netbook, but most options are garbage or overpriced. A 12" M1 OSX netbook would be a gamechanger.


you may not have seen them, but they are out there (raises hand)


It's hilarious how HN is consistently wrong, even on tech-heavy subjects. No bright minds popping up here tbh. You can glean some interesting stuff from this. The hive mind was wrong on dropbox, seemingly wrong on this, and they are likely wrong on blockchain today.


It's been over 10 years and we still don't have a decent use case for blockchain outside of crypto.


the global username system from the Ethereum Name Service (ENS)

see fallon.eth


there has been a huge use case the whole time: decentralized accounting / banking

some people see value in this, some people perhaps not


For most people this has little value. The centralised banking model is cost efficient and for most people trust isn't an issue. The modern world runs on trust.

Also, if I want to trade anonymously there is always cash (at least for now), which is a simple, well understood, near universally accepted mechanism.

There are niches where decentralized banking is useful - but they are niches - and many of them are associated with dubious activity.

On top of that, the current tech platforms simply don't scale - the cost of replacing simple trusted parties with technology is huge.


> For most people this has little value

For most people in the US, or in the world? I think banks everywhere rank amongst the most disliked institutions.


Blockchain asks the question: what if you couldn't regulate the banks through democracy?

Which would imply that people love banks and want them to be even more powerful.


ok


That doesn't make being "not a bank" valuable.


Not to mention NFTs, but it seems impossible to convince the HN crowd that there is a real art market in them, with real buyers and real artists using it. Any attempt to demonstrate this is met with wild leaps of logic - it's wash trading, it's not big enough, it's all a scam. I point to Beeple, Wes Cockx, DeeWay sales and all they tell me is it must all be a fraud. I show them markets with vibrant activity - Versum, FXhash, Foundation - and all I get is repetitions of memes around how NFTs are dead.


Successful scams have customers - the presence of people putting in money isn't a defining trait.

However, I do think it's a bit harsh to call these things a scam - I mean, take the art market itself - value is entirely subjective - that doesn't mean it's a scam (though scams do happen in the normal art market to try and inflate prices).

The question you have to ask yourself is:

Are you buying the NFT as an investment - because you think other people will value it, or are you spending the money because you are quite happy to own that thing forever and never sell - ie the value is to you.

I'd argue if you are doing the former you are more likely to be 'scammed' than the latter.


That NFTs are fungible in practice is the root of the issue.


I buy art for my art collection. They happen to be NFTs. I also purchase physical art.


It just shows that predicting the future is really difficult. It's also not logical why HN being wrong in one case means it will be wrong in another.


It shows that there is groupthink at play and that the overall HN commentary on subjects, even when they should know better (tech), is not particularly correct.

Being wrong once is one thing, but HN commentary seems consistently wrong on a whole litany of topics. The biggest source of bad hot takes seems to be something new/different. HN seems to be consistently conservative.

It's a shame because I used to think that I would gain some insight into future trends from the fact that a lot of the people who comment here are in tech, but now I'm not so convinced.


That's because it is hard to discern what is shilling and what is a real / expert opinion. Shilling happens here on HN, like it does on every social media platform / discussion forum. It doesn't help that we can't call out suspected shilling on HN, as it is against the rules. But that rule makes sense, because suspicion is not proof, and it would just bring down the quality of discussion here if every one of us accused the others of shilling :). (And as someone once pointed out to me, sometimes this shilling is not necessarily from the marketing team but from employees and shareholders here, who have a vested interest in seeing the company do well.)


> No bright minds popping up here tbh

I think that's harsh. Why would you even expect the aggregate sentiment of HN to indicate where to invest one's money? On the other hand, it might be worth pointing out that Y Combinator backs plenty of blockchain projects.


Also, up until 2015 Intel was on a great streak with their processors and didn't show signs of stagnation until around 2017/2018.

i5s/i7s were great at that time.


The 12” MacBook came out in 2015 and had terrible performance and a problem with overheating. Insider reports say that Intel had promised a lower-power chip with better performance that Apple designed the MacBook for, but then Intel killed that chip and Apple had to use another Intel mobile chip. Some people feel that was when the problem got real.


Apple would still have a lot of people who had lived through the same situation during the IBM days.

If you're getting hints that your chip vendor is not aligned then you better have a backup plan.


Not entirely true, people watching and in the field knew. Back in 2015 I made a massive bet on AMD because people on HN working in the field explained the arch shift. Similar with this move by Apple. There are people in those rooms, making the decisions, sharpening their ideas on HN — if we care to listen.


>Back in 2015 I made a massive bet on AMD because people on HN working in the field explained the arch shift.

I think you got really lucky. Zen1 didn't ship until 2017 and it lagged severely behind in single thread. You had no idea what AMD was going to have.

Even AMD would tell you that they were surprised that Intel fell so far behind. They've been quoted saying this a few times.

Intel's 10nm node (equivalent to TSMC 7nm) was supposed to ship in 2016! They didn't ship anything on the desktop using 10nm until Alder Lake in 2021. A five-year delay.

Intel would have been well ahead of Zen 2 in node technology. Instead, it was around 1.5 nodes behind.

If you made your bet purely on what was said inside AMD in 2015, you just got lucky. No one knew that Intel would be stuck on 14nm for 7 years when they were planning for 2 years.


There were a ton of Intel engineers at the time complaining about management in a different thread.

I'm sure luck was involved (so many things could go wrong). But I tend to make money on bets based on what I hear on the fringe. AMD, Bitcoin, etc.


The problem is how do you separate the gold from the cruft? Seems impossible. Also, so much cruft makes you miss the gold as well. :/


I actually wrote software to do that lol

Built this: https://insideropinion.com/

But use it for investments.

You can't completely remove risk, but I invest in areas where insiders discuss their work publicly. It provides insight that fundamentals often lack, leaving massive potential upside.


Stalking as a Service? How could that possibly go wrong?

I’m equal parts horrified and amazed. And curious what The Algorithm thinks about my ramblings.


From my understanding, it is way more costly to miss the gold than to get some cruft.


They had already plateaued in 2014/2015 with Haswell/Broadwell. They've basically been releasing that same CPU with minor tweaks to power consumption and codec support for 8 years now.

It was hard to notice at the time, but reviews absolutely noticed the minor CPU update (https://www.theverge.com/2015/4/9/8375735/apple-macbook-pro-..., search "Broadwell"). Another funny aspect of that review: it mentions 10+ hour battery life for the MBP as a nice, but hardly astonishing, spec. 9 hours 45 minutes with Chrome was the worst case. It's amazing to think how bad the 2016-2019 MBPs were in comparison, to the point where getting back to 10-hour battery life is an amazing Apple Silicon feature!


I don't think my 2019 MBP has ever lasted more than 3 hours on battery.

My M1 Max is amazing by comparison.


They were bad for thin-and-light laptops with good battery life (mostly Atom shit and underpowered Core CPUs like the one in the 12-inch MacBook).


It's funny to see how confident some OPs are in their claims, all while pointing out how the article is "speculating" about the future.


Nice catch.

Phew. I’m glad I didn’t comment on that thread.


> It's easy to doubt but it actually takes effort to form educated guesses about the future.

I didn't see this particular article but I would have agreed with it. I certainly have written many comments on similar articles (search the comments for alwillis ARM Mac for starters) explaining why such a switch to ARM was completely doable, having lived through the switch from PowerPC to Intel while working at MIT.


2030 predictions? Anyone?


- Apple will have created two additional "multitasking" systems for iPadOS on top of the failed Stage Manager. People will still be asking for a traditional Mac-like window management system and Apple will still be like "lol no".

- Apple Arcade and Apple TV will still be around, but Apple will still have no plan/vision for gaming.


Apple will have an AR or VR headset, and we will start ditching traditional displays for VR.

Meta will have recovered from their current issues and will be their main competitor.

I think Apple could have a big advantage because their processors could allow more powerful things in a standalone VR headset, compared to the current generation where you need an external PC for most CPU- and GPU-intensive tasks.


I don’t know - mobile phones were a huge huge success because they are first and foremost, practical. They fit in your pocket, can take stellar images, have access to literally everything on the internet and are fully capable general purpose computers in your hands - a sci-fi product turned into reality. And on top of that, we control them with our hands, which are arguably our most capable part for this job.

I don't see any practicality to VR outside some tiny niches. It makes a few games more fun, and some niche workloads can be done more efficiently, but headsets are cumbersome to put on and, first and foremost, they block your interactions with the real world. Sure, some futuristic contact-lens thingy could improve on this, but how would you control that? Voice control is slow and troublesome.

So I don't predict huge success for VR; it will be at most something akin to the Xbox's Kinect or some Wii accessory.


I see a huge VR market in misguided companies with too much money to burn on "team building" projects who see VR as a good way to extend middle management BS to WFH employees. Likely subsidized by MetaFacebook desperately pushing their crappy VR projects to keep them on life support.

As far as I can tell, we have a lot of progress to make with display resolution and GPU quality before VR becomes competitive for work environments with a modern hiDPI dual display setup. Maybe it's more appealing to folks with crappy home desk setups, or people who live in cities who don't want a full desk? Ergonomics still feels like an obstacle, though.


Rumors of the Apple car have died down (I think ...) but they may come back to the transportation market, maybe out of left field. Repurposing the iPod trademark?

(E.g. this for the XXI century: https://en.wikipedia.org/wiki/Isetta )


Apple has been focused on health lately. So we might see more sensors and apps related to that, to the point where it would monitor your overall health continuously.


Funny to see an old comment of mine there. Looks like I wasn't wildly off base, fewf :)


I think there were a good number of us who recognized that Intel was a bad fit for Apple. The lackadaisical progress of PowerPC simply forced the issue, and Intel was the only real option at the time.

Intel is like Mike Brady with his architectural designs that all look suspiciously like his own house.

They refused to compete with themselves and kept x86 as 32 bit so they could promote Itanic, and therefore lost the lead to AMD for years (it wasn't just 64 bit - actively REDUCING instructions per clock with Netburst was... well, legendary - just in a bad way).

It just so happened that the kick in the ass Intel got from AMD came a few years before Apple needed Intel, so Intel had finally started trying enough that they had a product line that would work for Apple.

But really, is Intel suitable for low power? Could anyone seriously imagine an x86-based phone? Their one-hit wonder is only barely keeping up with AMD and ARM when Intel throws hundreds of watts at their chips and turbo-clocks the heck out of them. Even though Ryzen has been showing up Intel for years, they've floundered so long that there was practically zero chance Apple could stay with them long term.

But even in 2005, Intel wasn't necessarily good - they just happened to be the least bad right then.


I think the move to Intel was critical for the revival of the Mac because it gave Windows users an off-ramp if they wanted to try out the Mac hardware without going all-in with OSX. Boot Camp made it safe for Windows users to switch, and many did.

It got me to buy my first Mac, at least (iMac). I figured if OSX didn’t work out I could just run Windows on it.

Maybe ironically, the new ARM Macs make me a little hesitant to buy a new MacBook because I’d be pretty much locked in to OSX (with all due respect to the Asahi folks who are doing great work - I fear Apple is going to pull the rug out from under them though.)


> I fear Apple is going to pull the rug out from under them though.

I don’t think so. There’s been support for “other OSs” from the start with the M line. I was actually pleasantly surprised by this.

I don’t think they will change their minds. It brings value to the platform while offering little threat.

I think we'll be seeing a Boot Camp version of ARM Windows as soon as Microsoft solves its licensing issues.


What support does Apple provide for other OSes? I thought that work was entirely community driven?


The people on the Asahi team have explained this in great depth, but basically everything before macOS (bootloader etc.) is capable of handling a different OS. However, I would dispute their and the OP's interpretation that Apple "supports" other OSes. It's more of a "fitted for, but not with", where Apple leaves the technical possibility open but doesn't do anything actively to help anyone trying to use it. The Asahi team has had to reverse engineer pretty much everything from scratch with pretty much no documentation, which is an amazing feat, and hats off to them. However, claiming Apple supports what they're doing is a stretch, and there's no reason why Apple wouldn't just pull the rug out from under them - it's not like they've said it's OK (like with Boot Camp) to run Linux on Macs.


https://twitter.com/marcan42/status/1554395184473190400

> "Okay, it's been over a year, and it's time to end the nonsense speculation."

> "I have heard from several Apple employees that:"

> "1. The boot method we use is for 3rd-party OSes, and Apple only use it to test that it works, because"

> "2. It is policy that it works."

> "Hacker News peanut gallery, you can drop the BS now. It's not an "assumption" that this stuff exists for 3rd-party OSes. It couldn't "be something internal Apple uses that could go away any minute". That is not how it works, it never was, and now I'm telling you it's official."

> "And this isn't even news because @XenoKovah (who invented and designed this entire Boot Policy 3rd party OS mechanism) already tweeted about this whole thing a long time ago, but apparently it needs to be restated."


This is conjecture with some wishful thinking. Apple providing the possibility of other OSes, and "inviting" Microsoft to port Windows, does not mean they want anyone to run any OS possible on MacBooks. And they don't do anything to help anyone write drivers for the Mac's numerous proprietary devices; everything has to be reverse engineered.

If Microsoft doesn't port Windows on ARM to Macs, Apple might decommission the "core technologies". Even if they do port it, there's nothing stopping Apple from changing their mind down the line, like they have already done on other topics, whatever the intentions of the developers who built them were.


Apple has implemented per-OS security, meaning that you could have a complete chain of trust with one OS and an untrusted second install of another OS. No PC has that, and it's the core idea that allows Linux to be ported. I don't see how such a useful technology for testing insecure versions of macOS would be removed. Apple backs few technologies, but they don't often change their mind when they do. The fact of the matter is that Apple benefits from the access it gives Linux. It won't remove it thoughtlessly.

Ultimately it leads to a discussion about the competition. When you buy an ARM based Surface, can you put Linux on it? Is Microsoft clear you can? Are they providing drivers?


This is literally from the horse's mouth. You are exactly who marcan was referring to in the "peanut gallery" comment.


The horse would be Apple putting out a public statement that they want to have Linux on Macs. A few comments from developers that they made it on purpose so that other OSes could be booted on Macs are similar, but not even close.


Marcan, in case you are not aware, is one of the people behind Asahi. The only speculation here is yours. Parties directly involved have said otherwise. Hector Martin appears to have given up posting here exactly because of this kind of bullshit. Besides, even if Apple did come out and say it, you'd still trot this nonsense out.


Marcan is directly involved in the reverse engineering. He is not involved in Apple's decision making on what they want to allow on their precious platform, which they tend to really lock down in every possible way.

> Besides, even if Apple did come out and say it, you'd still trot this nonsense out.

No. Apple saying, unofficially, that they welcome Microsoft, and Apple coming out and saying they love Linux and want it, would be different things. That wouldn't be any guarantee that they won't change their mind or aren't just hypocrites, but it would still be more meaningful than "technically it's possible, but there is nothing official" (only an official welcome to MS Windows - but sure, Apple absolutely wants Linux on the Mac to be a thing).


Nothing. It's just bullshit naivety: because apparently they haven't locked down some aspect of the Mac yet, like they did with the iDevices, we are supposed to believe that this magnanimity from Apple is in "support" of other OSes.


I owned the first gen white MacBook and the presence of that off ramp was a critical selling point for me. It turned out I never ended up using Boot Camp, and it wasn't until years later that I even installed Parallels. But knowing that I could if I needed to was key.


I've done this a few times.

Between my big Linux desktop and M1 MacBook Air I'm not even sure what to do in Windows anymore. I don't need to run any Windows applications per se, and everything I really do need runs on macOS. I still have a VirtualBox Win10 install on the Linux box, but honestly I'm not sure why, besides habit.


It also created the "Hackintosh" problem where people could take normal Intel PC hardware and put OSX on it, although Apple didn't try to do much against that except to sue a couple of companies providing dongles that supposedly made making Hackintoshes even easier than it was.


I think geeks overestimate how many Mac users actually care about Windows compatibility. Even then, as long as users had a browser, MS Office and Adobe products, Windows didn't matter.


> But even in 2005, Intel wasn't necessarily good - they just happened to be the least bad right then.

I don't think it's a coincidence that Intel Macs coincided with the release of the Core/Core 2 Duo CPUs. At the time there was nothing close to them by any metric. Remember, Intel enjoyed a generational lead in foundry tech for decades.


> Could anyone seriously imagine an x86-based phone?

You forget about Atom?


> Could anyone seriously imagine an x86-based phone?

> ... Atom?

Indeed! The Motorola RAZR i was Atom-powered.

https://en.wikipedia.org/wiki/Motorola_RAZR_i


I only remembered it because recently I pulled an old android tablet out of a junk drawer, installed CPU-Z on it, and scratched my head for a moment about why it said “x86” as the architecture…

I then remembered why it was in the junk drawer.


I was actually pleasantly surprised with the performance of a $100 x86 HP tablet (running Windows 8.1 of all things) I got back in 2014 or so. It booted and launched apps really fast thanks to the SSD, faster than my regular computer that had an HDD at the time. The Atom processor and single gig of RAM didn’t hamstring it too much for basic web browsing.

The strangest part of that setup was that it used a 32 bit UEFI and a 64 bit operating system, so I couldn’t use regular Linux images. But hey, you could configure the UEFI settings using only the touchscreen!

Naturally the battery life was terrible and it lasted fewer than 12 hours in sleep mode, so it spent little time outside the junk drawer too.


I had a similar experience with a Tesco (U.K. Supermarket) Hudl (their short lived tablet brand) bought in 2015. Not a bad experience and attractive price but awful battery life and got quite hot too!


I didn't forget about Atom. They give shit performance per watt, they have all sorts of issues (remember the Atom C2000 debacle?), and they're still quite expensive, relative to ARM.

Atom based phones exist, but they're not good at anything in particular.


The last few percentage points of performance take an insane amount of power. If you gave up 10% perf you'd probably halve power consumption.

I don't think there's any reason x86 has to use more power than ARM - it's simply not the focus of most implementations. As I understand it, most processors at this point are an interpreter on top of a bespoke core. Intel used to get quite a lot of praise for low power consumption back in 2012-2015 with Ivy Bridge and so on - rather coincidentally, that was also when they had a process advantage (rather like the one AMD and Apple enjoy today).
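
To make the "give up 10% perf, save a lot of power" intuition concrete, here is a rough back-of-envelope sketch (my own assumption, using the common dynamic-power approximation P ~ C * V^2 * f and assuming voltage has to scale roughly linearly with frequency near the top of the curve; the numbers are illustrative, not measured):

    # Toy model, not measured data: dynamic power ~ C * V^2 * f.
    # Near the top of the frequency range, voltage must rise roughly with
    # frequency, so power scales roughly with f^3 under this assumption.
    def relative_power(freq_fraction: float) -> float:
        voltage_fraction = freq_fraction  # crude V ~ f assumption
        return voltage_fraction ** 2 * freq_fraction

    print(relative_power(1.0))  # 1.0   -> full clock, full power
    print(relative_power(0.9))  # 0.729 -> ~27% power saved for 10% less clock

Even this crude cubic model gives back about a quarter of the power for a 10% clock cut; the very top of a real voltage/frequency curve tends to be steeper still, which is how you get closer to the "halve the power" figure.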


Yes and no. After the CISC vs. RISC war was over, I also thought ISAs were implementation details.

But from what I've read, having different-length instructions makes extracting parallelism way harder. That's why Apple can make such crazy wide machines.


Oh yeah, isn't ARM fixed-size instructions and x86_64 is variable-size? So decoding x86_64 requires clever pipelining, whereas ARM is just "Every X bytes is an instruction" and you can parallelize easily.
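
A toy sketch of the difference (my own illustration with made-up instruction lengths, not real ARM or x86 encodings): with a fixed width, every boundary is known up front, so many decoders can work independently; with variable lengths, each boundary depends on the instructions before it.

    # Illustrative only: the instruction lengths below are invented.

    def fixed_width_boundaries(stream: bytes, width: int = 4) -> list[int]:
        # Every boundary is known immediately, so wide parallel decode is easy.
        return list(range(0, len(stream), width))

    def instruction_length(stream: bytes, i: int) -> int:
        # Stand-in length decoder: pretend even first bytes start a 2-byte
        # instruction and odd first bytes start a 5-byte instruction.
        return 2 if stream[i] % 2 == 0 else 5

    def variable_width_boundaries(stream: bytes) -> list[int]:
        # Each boundary is only known after inspecting the previous instruction,
        # so boundary discovery is inherently sequential.
        boundaries, i = [], 0
        while i < len(stream):
            boundaries.append(i)
            i += instruction_length(stream, i)
        return boundaries

    print(fixed_width_boundaries(bytes(16)))            # [0, 4, 8, 12]
    print(variable_width_boundaries(bytes(range(8))))   # [0, 2, 4, 6]

Real x86 decoders mitigate this with tricks like predecoded length markers, but the fixed-width case is the one that scales trivially to very wide designs.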

I wonder if we'll see Intel or AMD try to make another somewhat-backwards-compatible ISA jump to keep up with ARM.

x86_32 --> x86_64 --> x86_512?


If I'm not mistaken, based on similar threads on HN, decoding is never the bottleneck, so I would be hesitant to write x86 off for mobile devices. It probably does make the transition to smaller scales harder, and that is where most efficiency wins happen.


We should never write x86 off when there are billions behind it and variable length instructions have their advantages as well, such as code density, which may come to play an important role again in the future.

But it is much easier to simply chop a stream of instructions at every X bytes than to evaluate a portion and decide what to do later, and that difference gets larger the wider you go.


> variable length instructions have their advantages as well, such as code density

Variable length instructions in general do have a code density advantage, but x86 is a particularly poor example. For historical reasons, it wastes short encodings with rarely used things like BCD adjustment instructions, and on 64 bits often requires an extra prefix byte. The RISC-V developers did a size comparison when designing their own compressed ISA, and the variable-length x86-64 used more space than the fixed-length 64-bit ARM; for 32 bits, ARM's variable-length Thumb2 was the winner (see page 14 of https://riscv.org/wp-content/uploads/2015/06/riscv-compresse...).


Nice, thanks. I didn’t know x86 was that bad in this regard.


For many years, Intel had quite a process advantage over the competition. That of course helped them a lot with making low power processors vs. what AMD could achieve. And the non-x86 competition had basically stopped making processors in this domain. However, there was a reason that RISC designs were used in most low power applications like embedded and of course smart phones.

Yes, with today's complexity and transistor budgets the disadvantages of x86 can be somewhat glossed over - otherwise it would have vanished from the market long ago - but they add a certain overhead which cannot be ignored when looking at low-power applications. The effort the CPU has to expend before it can execute instructions is higher, and x86 requires more optimization work done by the CPU than RISC designs do. Those today also contain a translation layer, but a much simpler one than x86's, as the assembly instructions match modern CPU structures better.

It is probably no coincidence that Intel, which had to work around the issues of executing CISC code on a modern CPU, chose the EPIC design for the Itanium, which goes beyond RISC in pushing complexity towards code generation rather than on-CPU optimization. Too bad it didn't work out - it might have, if AMD had not added 64-bit extensions to x86. While there were certainly a lot of technical challenges which were never completely solved, the processors seemed to perform quite well when run with well-optimized code. Perhaps they were just one or two process generations too early. While considered large for the time, their transistor count was small compared to a modern iPhone processor. I wonder how they would perform if simply ported to 7nm (the latest CPUs were 32nm).


Even though Intel is known for putting tremendous work and effort into their compilers, and therefore has compilers that put out excellent results (even on AMD), those compilers never delivered on the promises made for Itanic.

If you'd like to see some first-hand observations about modern-ish compilers on Itanic, check out this person on Twitter who does lots of development on Itanic:

https://twitter.com/jhamby


For me the pivotal point was when the iPhone XS outperformed the 7700K at number crunching. It was some weird benchmark and everyone could find issues with it. But it did show the tremendous progress in mobile chip performance and the stagnation on the desktop.

https://www.cs.utexas.edu/~bornholt/post/z3-iphone.html


This is a hidden gem for me. Thank you for sharing the article.


I think it's important to draw a distinction between 'can' and 'will' here.

With the A7 SoC and the A64 ISA it became clear that the ISA and Apple's silicon design capabilities were sufficient to build SoCs that would compete with the best that Intel could offer.

However, the costs of making a transition would still be significant, maybe not in cash terms for Apple, but certainly in manpower and focus.

I suspect that it was Intel's process stumbles that led to the decision being made in the end. How many times, I wonder, did Intel promise something to Apple behind the scenes, only to fail to deliver? With the success of TSMC the opportunity for Apple's management to take more control of their own destiny would have been too compelling.


When you look at the laser focus and clear talking points of Johny Srouji, compared to the way Intel leadership talk about themselves, it’s no wonder why Apple chose to deal with him and his team instead of Intel.


IIRC the whole relationship got off to a bad start when Intel failed to deliver 64-bit (x86-64) in time for the first MacBook Pros. Apple had to ship OS X as 32-bit only, then replace it a year later when the Core 2 Duo came out.

Given Tim Cook's known zeal for delivery I hesitate to think how bad things must have become later on.


Got any examples of comparison?


You could argue it was inevitably going to happen since Apple started designing their own silicon. iOS is macOS, more or less (modulo a few system libraries and apps), so since the beginning of the iPhone project, Apple has had the software in place to make an ARM-based Mac if they wanted to.


> iOS is macOS

Could you please expand on that? I heard they were quite distinct, e.g. sandboxing/security is done completely differently, with iOS having a much more modern approach.

While pedantically that can be included in your "modulo", are they really that similar?


What I mean is that iOS is a fork (or, I suspect, maybe just a build configuration) of macOS. Since it was a new platform, Apple could try some new things and enforce new restrictions (like sandboxing), but they're not completely different codebases in the way that Windows CE and Windows NT are. The core technologies are the same at most levels of the OS and, even if they're not maintained as a single codebase, clearly many components are.

Therefore I strongly suspect that Apple had macOS running on the iPhone and iPad from very early on. They just did not want to release it because the UI is not suited to a phone.


When presenting the original iPhone Steve Jobs announced that it ran OS X. I'm not sure how far apart they've grown but at that point they were probably very similar.


While we know that the current chips are ARM-based, Apple is very deliberately calling it "Apple Silicon," perhaps to avoid committing to any particular ISA. An interesting question is how far Apple Silicon will diverge from ARM64. We know there are extensions like AMX already.


I think Apple call it that simply for brand reasons. They may also avoid committing to any particular ISA as you said, but that's in parallel.


There are no extensions for developers to directly program against.


I think some of that may depend on external factors.

If AWS Graviton or Microsoft ARM grows significantly, it might force Apple to stay aligned with ARM64, just with their own extensions. One of the big blockers for developers during the transition to the M1 was having all of the CLI tools available, and they were only available on day one because of demand from AWS users etc.


Standard disclaimer: I work at AWS in Professional Services, all opinions are my own.

It really doesn't matter. Most applications are written in non-native languages like Node, C#, Java, and Python.

If you have native dependencies, even if you are on the same architecture, you're going to run into issues developing and building packages if it isn't the same operating system.

A simple case: building Lambdas on a non-Amazon Linux operating system is just a matter of adding --use-container when using the Serverless Application Model.

Other cases, I’ve had to build on Cloud 9 instances.


Lots of big applications including the Adobe Suite use assembly optimizations for performance.


I am curious: is that allowed as part of ARM's licensing? It would be surprising if ARM were okay with that, unless Apple bullied them into allowing them to call it and market it as Apple Silicon.


Qualcomm calls their ARM chips Snapdragon. Samsung calls theirs Exynos. MediaTek calls theirs Dimensity.

ARM doesn't care what you call them.


And since everyone calls them something other than ARM, there's not much brand value in calling something "ARM". Might as well call it "Apple silicon".


That's a good point. Sort of interesting.


Apple co-founded ARM, so they almost certainly have a custom deal/licensing terms.



Also hugged to death at the moment: https://archive.ph/NtapO



Since it seems hugged to death: https://archive.ph/6pjS1


> Somewhere on Apple’s campus, ARM-based Macs are already running OS X.

Okay.

NeXTSTEP originally ran on the Motorola 68030, then the 040. Then NeXT fully ported the operating system to run on the HP PA-RISC, Sun Sparc, Intel 486, Motorola 88000, and at least tentatively to the IBM RS/6000. This was as diverse a collection of processors as one could imagine.

When NeXTSTEP was transformed into MacOS X, it was just one more minor step to include PowerPC in that list. When iOS was developed on the iPhone, it was obvious that it was little more than a modified version of MacOS X, and primarily NeXTSTEP. Surely that's how they developed it: Apple just ported MacOS X to ARM and did development on top of it.

I guess what I'm saying is that it's not at all interesting or surprising that someone in 2014 prognosticated that MacOS X had been ported to ARM. It was clear as of 2006 that Apple had already done that. And given that NeXTSTEP had been ported to at least seven processor families prior to that, it was hardly a big deal.


The porting was obvious, the switching was not.


> The porting was obvious, the switching was not.

It was obvious to me, knowing that MacOS is based on NeXTStep and how that ran on a bunch of different architectures—68000, Intel, Sparc, etc.

I even wrote 3 years ago here on HN (before Apple announced their transition to ARM) that I wouldn't be surprised if there were ARM-based Macs in the lab [1].

And also knowing that Apple always wants to be in control of as much of the technology they rely on as possible. After Motorola/IBM dropped the ball with PowerPC and Intel couldn't deliver the performance per watt they needed, they weren't going to be fooled a third time.

[1]: https://news.ycombinator.com/item?id=21235236


No, the porting is obvious - I guarantee Apple has test rigs running Darwin on AMD processors and RISC-V, for example - but switching, as in, actually doing something with that portability, is less obvious.


Agreed, but that's not what I'm annoyed by here.


> I guess what I'm saying is that it's not at all interesting or surprising that someone in 2014 prognosticated that MacOS X had been ported to ARM.

I literally wrote the same thing about NeXTSTEP here on HN 3 years ago: https://news.ycombinator.com/item?id=21235236


> When iOS was developed on the iPhone, it was obvious that it was little more than a modified version of MacOS X

They said exactly this at the time. The OS on the original iPhone was explicitly “OS X”, it only gained its own name several iterations later.


What made NeXT (and then OS X) so portable? NeXT never sold a whole lot, but I would like to see a breakdown of sales by CPU. And if certain platforms were missing things, etc.

I've heard many answers over the years:

- Mach is a microkernel, they are easier to port

- NeXT never made their own hardware so they designed it that way from the start

- It contained 'very little assembly'


> NeXT never made their own hardware so they designed it that way from the start

When I think NeXT, I think the NeXTcube (1990-1993) [0], which apparently also had a predecessor in the NeXT Computer (1988-1991) [1]. NeXT made their own hardware for a solid chunk (nearly half) of their lifespan, and NeXTSTEP was originally released with the NeXT Computer [2]. It didn't even support CPU architectures other than the Motorola 68k series until 1993, around the demise of the NeXTcube! I think its ultimate portability probably has more to do with being a UNIX at its core (so a battle-tested design written in somewhat portable C) plus a user environment written in a higher-level, easier to port language (Objective-C).

[0]: https://en.wikipedia.org/wiki/NeXTcube [1]: https://en.wikipedia.org/wiki/NeXT_Computer [2]: https://en.wikipedia.org/wiki/NeXTSTEP


The "NeXT Cube" was essentially the same thing as the "NeXT Computer" with an updated CPU. Basically, they renamed it after they released the "NeXTstation"[0] to avoid confusion. Otherwise, they'd have had two NeXT "Computers." Heh.

Though fine in 1988, NeXT's own hardware was rather under-powered for the early 90's. Their main competitor, Sun, was transitioning to their own RISC architecture (SPARC) after originally being on 68K. DEC was coming out with the Alpha. HP had moved to PA RISC. Though I digress, it's too bad NeXT never ran on the Alpha! That would've been a killer 90's setup.

[0]: https://en.wikipedia.org/wiki/NeXTstation


I meant their own CPU


I don't think running on a bunch of CPU architectures is really unique to NeXT. Windows NT ran/runs on x86, Arm, Alpha, MIPS, Itanium, and PowerPC, all with a desktop as a full-blown product. Forms of desktop Linux will run on all of those and more.


Super good question. I owned and used NeXT slabs, 1992 - 1999.

The Mach-as-microkernel thing couldn't have helped portability. The Mach task, port, thread and pager abstractions seem like they'd be difficult to port, but I could be wrong.

I read at the time that NeXTStep the GUI ran faster on Solaris than on Mach, because pipes and sockets were faster than Mach ports. So I'll believe the 'very little assembly' claim.

I suspect that Mach-O file format had something to do with the portability, you can put arbitrary sections in it. Some NeXT games kept image and sound "files" in a single huge Mach-O executable.


Mach more than helped:

"Mach 3 (Figure B.1) moves the BSD code outside of the kernel, leaving a much smaller microkernel. This system implements only basic Mach features in the kernel; all UNIX-specific code has been evicted to run in user-mode servers. Excluding UNIX-specific code from the kernel allows replacement of BSD with another operating system or the simultaneous execution of multiple operating-system interfaces on top of the microkernel. In addition to BSD, user-mode implementations have been developed for DOS, the Macintosh operating system, and OSF/1. This approach has similarities to the virtual-machine concept, but the virtual machine is defined by software (the Mach kernel interface), rather than by hardware."

https://web.eecs.utk.edu/~qcao1/cs560/papers/mach.pdf


NeXT ran a Mach 2.0 variant. I forget which Mach features were absent, but NeXT unmistakably ran 2.0.

The BSD stuff was kind of bolted on. Very uneasy marriage there.

At one point DEC had a 2 Mach task system that could run VMS processes, but I'm pretty sure that was Mach 3.0 based.

That is to say, Mach 2.0 wasn't as advanced in portability, and the "portability" targeted was different.


Early in that timeline Solaris was heavily into being a "pure" System V r4. The pipes and sockets implementations were based on System V Streams -- the in joke was that streams were much more like sewers and they weren't pleasant. Numbers wise, they were perhaps 2 orders of magnitude slower than on SunOS (BSD based). At one state university, where a very "beefy" SPARCcenter 2000 was purchased to support 500 simultaneous users, Solaris on this system could barely support 40 users (streams performance was the reason - telnet sessions pushed several streams modules). Sun ended up having to provide 10 SPARCstation 10 systems running SunOS which could easily support 70 simultaneous users each in order to fulfill the contractual obligations. The SPARCcenter 2000 was turned into an NFS file server.

After the early days, Solaris did improve, but I think that was because ideas / pieces of BSD (from SunOS) were put back into the OS.

I don't think Solaris pipes / sockets performed better than Mach.


Early Solaris was definitely not very good. I briefly ran Solaris 2.4 on a SparcStation 5, and reinstalled SunOS 4.1.x because Solaris 2.4 was so bad. 2.5 was decent though!


The impression I always got from those abstractions were that they were precisely there to support portability. Pagers, for example, are quite flexible about what can back them and what the geometry of that backing can be.


Yes, but it was explicitly a different kind of portability: running MSDOS or VMS executables unchanged, for example.

The task/port/thread/pagers are primitives that let Mach run some other OS executables, not made it easy to port Mach to other hardware. There were papers about getting MSDOS and VMS executables to run in Mach tasks. Some startup was making a system to run (pre-OSX) Mac executables.


Portable compared to what?

« microkernels are easier to port » is not a thing.

NeXT did make their own hardware, it was 680x0-based and it was nicer to use than the x86 PC shitboxes it ran on when they stopped making hardware.

« very little assembly » is closest to the truth. NeXTstep and OpenStep were written almost entirely in C and Objective-C and compiled using the GNU toolchain. NeXT was always a hybrid of Unix userspace on top of a Mach microkernel. It’s just not that hard to port a Unixlike OS to another CPU architecture. It’s literally been done since the 1970’s.

However, it is much more difficult to port the OS and maintain binary compatibility with apps created for the previous CPU architecture, which has always been Apple’s special sauce since the original PowerPC Macs.


The hardware certainly looked nice (I have a "turbo" slab in my collection!), but the performance was "meh" at best. NeXTstep needed a lot of resources, and late 680x0 was no match for an early Pentium. It's no wonder NeXT ported to x86.


Business necessity. Something about your own hardware business failing makes you really eager to port to other hardware architectures. That and your processor manufacturer (Motorola) openly indicating that they're killing the ISA you built your platform on.

Obv writing in a portable language and on a portable microkernel helps.


Short answer: small teams of Really Smart People.


You vastly underestimate the effort involved in a port of something like an OS.

This comment left me shaking my head and facepalming all at once.


MacOS always ran on x86 chips. SJ admitted as much. There was always a skunkworks project to keep OS X compatible with x86.


The parent comment wasn't expressing frustration at that. He was expressing frustration at people thinking that running OS X on ARM way back when was some kind of feat.

At the time, it was.


This is a middlebrow dismissal. From what I can see this article played out exactly as predicted, and the idea that NeXTSTEP running on multiple architectures in the '90s made this obvious is just half-assed retconning. In 2014 the conventional wisdom was that Intel was best-in-class and would remain dominant.

Per the other thread, I'm not seeing the storyline that the author is arguing that Mac OS will run on ARM but still be first class on Intel architecture. We can assume that is not the case because Apple doesn't half-ass their transitions.

Please highlight what you think the prediction got wrong so we can talk about that specifically.


I'm not dismissing the prognostication that Macs would ultimately run on ARM (though to be honest that was obvious a long time ago). I'm dismissing the breathless way in which the author announced that somewhere, somehow, a Mac was already running ARM in the bowels of Apple in 2014. Of course it was. It was doing that a decade prior.


I am no fan of Apple and their digital golden jail, but I see that move as a way to prepare to jump to RISC-V, which does not have toxic IP tied to it, once we have performant RISC-V CPUs, of course.

This is a little drop of good in an ocean of bad.


Sorry this makes no sense:

1. Apple clearly has no problem with 'toxic' (not sure what that means) IP as long as they have access on acceptable terms - which they clearly do with Arm.

2. Apple was almost certainly one of the lead partners (maybe the only one) for Arm in developing the A64 ISA - they have had a lot more input into A64 than they have into RISC-V.

3. They design their own architecture - they could have built a RISC-V CPU for the Mac if they'd wanted - they don't have to wait for anyone else.


I don't think RISC-V was ready for prime time when Apple decided to switch away from Intel, which was likely many years before the first M1 products were announced. The RISC-V spec may be stable and fine but you need a whole ecosystem around it to put it to the kind of use Apple wants.


But:

- ARM64 was announced in October 2011. Apple was shipping SoCs based on ARM64 only two years later. Apple can do these things very quickly when it wants to.

- Apple had all the info it would need to base a decision on Arm or RISC-V at the time of the decision to leave Intel. It could have delayed a short while to allow the ecosystem to mature if it had wanted to go for RISC-V.

- It controls a large chunk of the ecosystem anyway (LLVM etc).

I find the idea that Apple - hardly the most open company in the world (to say the least) - would switch ISAs again, to RISC-V, for no apparent commercial advantage very, very implausible.


I agree Apple probably isn’t switching to RISC-V any time soon. I’m just not buying the story that they looked at it, could have done it, passed on it, and that’s that.

Even in 2022 it may not be wise to start a huge RISC-V project yet for a company like Apple. It’s not mature enough.

I also agree Apple is not an open friendly company. That’s why I think if they do it then it will likely be loaded with proprietary extensions. So it’s not a matter of simply swapping ISAs, it’s completely rethinking the entire stack. That takes time and expertise. The industry isn’t there yet, not at the scale to support what Apple (and others) would like to do.


Fair enough but in that case what’s missing and stopping them now?

Edit: rereading, it sounds like you think Apple might want their own (version of an) ISA, which is reasonable, except they already had huge input into A64 and are already adding their own extensions anyway.


It's possible Arm is the ISA for the next century, with all the others mere footnotes of history, the same way the 8-bit micros have become a dead end in computing history.


As it quotes, AnandTech had many benchmarks suggesting Apple's chips were going strong, and even speculated along similar lines.

Anand Shimpi, who founded AnandTech, stepped down and has been working at Apple in hardware for a few years now.


One thing: Intel produces chips. Apple produces hardware with bundled software.

Apple doesn't design chips, they design functionality. Intel designs chips that are as flexible as possible.


Now the clock begins on Apple switching to RISC-V


This seems very, very unlikely to me. RISC-V is a very similar architecture to AArch64; I haven't heard a reason to expect it to have improved performance or efficiency. (Its only distinguishing feature (that I know of) is compressed instructions -- which ARM used to have (THUMB mode) before dropping support in AArch64, so presumably it doesn't help on Apple's systems with their huge caches and high memory bandwidth.)

Rather, RISC-V's primary advantage over ARM is its openness; vendors can do whatever they want with the ISA without having to pay license fees or maintain compatibility. But this doesn't affect Apple at all; they co-founded ARM, and they have a huge amount of influence over the direction of the architecture along with some sort of special license that allows them to do things other CPU vendors aren't allowed to do (https://news.ycombinator.com/item?id=29782840, https://twitter.com/stuntpants/status/1346470705446092811).

Developing an entirely new high-end CPU microarchitecture takes many years and many billions of dollars. It's not something Apple's going to do unless they have a very, very good reason -- and RISC-V being really cool is unfortunately not a good enough reason.


I agree, it seems unlikely they will have another transition as they now control their own destiny.

The fact that Arm64 doesn't have compressed instructions is obviously a very deliberate choice, so I can't agree with the notion that RISC-V's compressed instructions could be a potential "advantage" over Arm64. Their absence is a massive advantage in many ways, chiefly that fixed-width instructions let a very wide decoder find instruction boundaries in parallel, as I have explained over and over on HN, and their inclusion is the single biggest stupidity in RISC-V for high-end perf (not microcontrollers).
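
A toy sketch in C of that decode-width point (my illustration, not something from the thread; the only real detail borrowed is the RISC-V rule that a 16-bit parcel whose low two bits are 0b11 starts a 32-bit instruction):

    /* With fixed 4-byte instructions every boundary is simply pc + 4*i, so a
     * very wide front end can locate all of its decode slots independently.
     * With mixed 2/4-byte encodings, each boundary depends on the length of
     * the previous instruction, so finding them is serial without extra
     * predecode hardware. */
    #include <stdint.h>
    #include <stdio.h>

    static unsigned fixed_boundary(unsigned pc, unsigned i) {
        return pc + 4u * i;               /* independent per decode lane */
    }

    static unsigned variable_boundary(const uint16_t *code, unsigned pc, unsigned i) {
        while (i--)                       /* must walk the parcels one by one */
            pc += ((code[pc / 2] & 0x3u) == 0x3u) ? 4u : 2u;
        return pc;
    }

    int main(void) {
        /* Pretend code stream: 16-bit, 32-bit (two parcels), 16-bit. */
        const uint16_t code[] = { 0x0001, 0x0003, 0x0000, 0x0001 };
        printf("fixed-width:    insn 3 starts at byte %u\n", fixed_boundary(0, 3));
        printf("variable-width: insn 3 starts at byte %u\n", variable_boundary(code, 0, 3));
        return 0;
    }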


ARM instruction set licensing is a drag, an inefficiency.

ARMv8 is a nice ISA, and the work that architects do is critical, and it's not easy to develop a precise, clean, extensible, and relatively bug free ISA. But there is relatively little real innovation in it. It's a pretty conventional RISC, warmed over for the modern era. In fact no ISA has any real magic, that's all in the silicon.

There's certainly not hundreds of millions of dollars per year worth. That money is only paid because of the proprietary lock-in and ecosystem around the ISA, which is similar to the proprietary software model, so at some point it would be easy to imagine large chunks of the industry deciding to break away and go to something more open.

It may not inevitably displace ARM entirely, and ARM Ltd might change their ways or open their ISA to prevent it. But it could easily happen too, in the next decade or so.


As you know Arm makes most of its money selling actual designs, not the ISA. I suspect Apple pays very small fees which are not material in the context of the Mac.

For that it probably gets access to the full range of Arm IP, including for example the IP associated with big.LITTLE - and there are likely to be others.

There are two areas where RISC-V, for example, does present a clear advantage for firms: where they want to innovate on top of an ISA (eg Tenstorrent) or where they want to shave cents off their BOM (eg WD).


ARM makes most of its money in royalties, not licensing. Under royalties, how much comes from chips that use their designs and how much from ones that don't, I don't know. Do they make that data available?

> I suspect Apple pays very small fees which are not material in the context of the Mac.

Why do you suspect they are very small?


There were at a minimum 1.5 billion A series based CPUs sold last year - possibly a lot more. Arm’s total royalty sales were $1.5bn. I think we can safely say that the fees Apple pays - with its architecture license - are not material in the context of the Mac.


> There were at a minimum 1.5 billion A series based CPUs sold last year - possibly a lot more. Arm’s total royalty sales were $1.5bn. I think we can safely say that the fees Apple pays - with its architecture license - are not material in the context of the Mac.

I don't see how that follows. You don't know what their royalty arrangement is. It's probably related to the value of the chip sold. If they sell 20 million Macs a year and pay ARM 2 bucks a chip, that's 40 million every year. If Macs have a gross profit of a couple of billion, that's not insignificant - around 2% of it. Not to mention several hundred million a year for iPhones. That's a lot of money to pay to be locked into a proprietary ISA when you almost entirely support your own ecosystem, compilers, OSes, etc. anyway.
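
To make the arithmetic explicit, a back-of-the-envelope sketch in C using the purely hypothetical figures above (20M Macs, $2 per chip, a couple of billion in gross profit - none of these are confirmed numbers):

    #include <stdio.h>

    int main(void) {
        double macs_per_year   = 20e6;  /* hypothetical unit volume        */
        double royalty_per_mac = 2.0;   /* hypothetical $ royalty per chip */
        double gross_profit    = 2e9;   /* hypothetical Mac gross profit   */

        double total = macs_per_year * royalty_per_mac;     /* $40M per year */
        printf("royalty: $%.0fM/yr = %.1f%% of gross profit\n",
               total / 1e6, 100.0 * total / gross_profit);  /* ~2.0% */
        return 0;
    }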


Apple’s net margin is around 25% so that’s around $300 per Mac sold so with your $2 it’s about 2/3% for which it probably gets access to all of Arm’s IP (eg big.LITTLE) as well as use of an ISA that Apple has long experience using and almost certainly helped to shape. As I said not material in this context.

Oh and they are clearly not locked in as they have just changed the ISA for Macs anyway.


> Apple’s net margin is around 25% so that’s around $300 per Mac sold so with your $2 it’s about 2/3%

That's a lot.

> for which it probably gets access to all of Arm’s IP (eg big.LITTLE) as well

That isn't how their licensing works. They license IP and charge royalties for cores. Apple would pay extra to license ARM Ltd cores they use.


You obviously think that Arm's ISA and related IP has very little value so not much point in debating further.


What? You're the one who is trying to say Apple is only paying a pittance to ARM for it! You're making no sense.


This certainly applies to random manufacturers, but not necessarily Apple. Do they pay royalties to ARM at all? And of course, they could create any ISA extension, if they wanted. But currently they seem to be heading more in the direction of creating additional compute units (like the neural processors) rather than adding new instructions to the CPUs.


> This certainly applies to random manufacturers, but not necessarily Apple. Do they pay royalties to ARM at all?

I don't know if there are any public details about license and royalty structure for Apple, but this is ARM Ltd's core business and it's how they make their money. So, probably.

> And of course, they could create any ISA extension, if they wanted.

I'm not sure about that. I think licensees are bound by certain requirements to implement the architecture faithfully. Now Apple obviously has a huge sway and could lobby ARM to make changes it wants. But it does not necessarily have final control of that either.

> But currently they seem to be heading more into the direction of creating additional compute units (like the neural processors) rather than adding new instructions to the CPUs.


As Apple is a co-founder of ARM and holds an architectural license, I think the chances are high that they pay either no royalties or no significant ones, and have a lot of freedom. That can be seen in the special switch making the memory semantics of Apple Silicon match those of x86, making Rosetta's job easier.
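
A minimal sketch in C of why that switch matters (the TSO mode itself is real; this particular message-passing litmus test is just my illustration): under x86's total store order, a reader that observes flag == 1 must also observe data == 1, whereas ARM's default weaker ordering allows the stale value unless barriers are inserted - roughly what Rosetta would otherwise have to do around translated loads and stores.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int data, flag;

    static void *writer(void *arg) {
        (void)arg;
        atomic_store_explicit(&data, 1, memory_order_relaxed);
        atomic_store_explicit(&flag, 1, memory_order_relaxed);
        return NULL;
    }

    static void *reader(void *arg) {
        (void)arg;
        while (atomic_load_explicit(&flag, memory_order_relaxed) == 0)
            ;  /* spin until the writer's flag becomes visible */
        /* On TSO hardware this always prints 1; on weakly ordered ARM
         * (without barriers or the TSO mode) a 0 can occasionally appear. */
        printf("data = %d\n", atomic_load_explicit(&data, memory_order_relaxed));
        return NULL;
    }

    int main(void) {
        pthread_t w, r;
        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
    }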


Apple holds an architectural license, but I've never heard it suggested they can avoid paying royalties to ARM before. Doesn't seem very likely to me.


>I'm not sure about that. I think licensees are bound by certain requirements to implement the architecture faithfully

Apple's designs already deviate from the ARM64 specification, e.g. by adding custom instructions: https://github.com/AsahiLinux/docs/wiki/HW:Apple-Instruction...


Apple co-founded ARM. I recall somewhere that they did have a very special license, but can't recall the details.


Apple will not have to pay ARM for licensing


Apple doesn't pay ARM for licensing.


Even if Apple does, it's probably peanuts.

Apple has invested billions to make a full transition to ARM.

I'd think that they would have to spend at least $50 billion to redesign all the Apple Silicon from the iPhone to the Mac Pro, port 5 or 6 operating systems, and beg developers to recompile their apps.

It'd be a nightmare.

I think Apple will stay on ARM for at least 20 years...


Why just 20 years? This direction gives full control of a silicon roadmap. They can make it whatever they want or need. My guess is they have a more specific roadmap than even Intel or AMD at this point, as they're not commodity processors, but now engineered for a very specific purpose.


A Tom's Hardware piece about the possibility: https://www.tomshardware.com/news/apple-looking-for-risc-v-p...

I speculate this is somewhat less likely now that the NVidia / ARM deal has fallen through but surely Apple wants the option at least.


It's not going to happen.

Apple probably already use RISC-V designs somewhere in their hardware stack. But the ISA will remain ARM for a long long time.

Even Apple Silicon has a ton of stock ARM cores to control various SoC tasks.


5 years?


Bad article. Totally missed that one. Apple released M series chips, not B series. Ha!


If true then can we expect macOS to be being compiled for RISC-V right now?


Death Hugged :)


Arm is a flash in the pan.

5 years from now it will all be RISC-V and high level bytecode. Arm's ISA victory is Pyrrhic.


ARM has already been around for decades and has a solid 100% market share in smartphone application processors. It also has a very high market share in anything embedded, though there PowerPC, MIPS, and others are still surviving.

Not having to pay royalties for the CPU architecture, and the architecture being quite extensible, has a lot of charm. That is why completely new CPU designs these days start on RISC-V, especially in research. On the other hand, there is plenty of ARM IP available, so unless you are designing your own processor cores, it is often far more efficient for the project to just license the ARM IP.

I can see RISC-V might make quite an impact on embedded systems short term, but smartphone vendors basically buy what Qualcomm produces. Until they produce an equally powerful RISC-V based design for less money, they will of course stick with the current architecture.


What would be their motivation to switch? They already control everything they need to with Arm. RISC-V would not open any doors for them that are not already open, as far as I can tell.

It is far easier for them to adjust their existing silicon designs than to adopt an entirely new ISA, given equal gains.

What we're going to see now is 20 years of solid innovation on Arm CPUs from Apple. I don't even like Apple at all, and even I can see that.


You base that confident statement on what exactly?


There's no reason for Apple to switch. The amount they might save on license fees wouldn't pay for the transition.



