Apple’s Mac Chip Switch Is Double Trouble for Intel (bloomberg.com)
93 points by ksec on June 12, 2020 | 116 comments



Doesn’t Apple already include an ARM chip in MacBooks with the Touch Bar?

I have three feelings:

1. Apple will just enable the existing chips “today” and developers can ship ARM code to a sizable audience, with instant battery benefits.

2. Cheaper Macs will start dropping x64 next year and run only ARM code, maybe emulating x64 with something like Rosetta.

3. Pros will keep having both processors for the foreseeable future, similarly to how they have “dual graphics”, except that the OS will always run on ARM.

We should keep in mind that Apple just dropped a swath of 32-bit software which means they aren’t afraid to do it again.


There is a legal obstacle to emulating x86 (and x86_64, I believe) because of Intel patents. That's why Microsoft's first attempt at ARM was such a disaster, with no compatibility with Windows' huge ecosystem. That's why there is no commercial toolsuite that offers x86 compatibility on other architectures.

But maybe the situation has evolved? Expired patents, or a deal between Apple and Intel?


It’s a nice theory, but the Touch Bar CPU specs are pretty well known at this point - if Apple had wanted to do this, they would have designed the system differently.

Supporting a different ISA on the same OS/runtime is relatively “easy” in 2020 and doesn’t need such a sophisticated strategy - for 99 percent of developers this is going to be a one-click recompile to a different build target in their IDE. Push new binaries to the App Store and done.

By “easy” I mean relative to how ISA transitions have been managed in the past. Apple are so big now that runtime mechanisms for x86 code, such as Rosetta during the PowerPC to x86 migration or the 68k emulator before that, just aren’t necessary this time around IMO. People will recompile. For sure we will lose some software that is too old to be recompiled by long-lost authors, but I don’t think Apple cares. They already just killed 32-bit app support with similar consequences.

Remember that today your “ARM” iOS apps work just fine in the x86-based iOS simulator in Xcode on an Intel Mac, so access to actual ARM hardware isn’t a significant barrier to development for most.


Indeed, this would be the third time they successfully switched architectures: they went from m68k to PPC, then to x86. Apple is in a pretty unique position in that they make both their hardware and their software ecosystem. It gives them a lot of flexibility that companies who just make generic Wintel systems lack.


(both of these are genuine questions, not rhetorical ones)

1. If x64 wasn't already dominant and had the same market share as ARM, would you choose x64? Why?

2. With x64's current dominance, would you buy an ARM system as your primary one, knowing that you'll have significant difficulties developing for the majority of the current market share?


I've heard from a fairly drunk Intel chip designer that CISC continues to make sense at gate counts where full OoO cores make sense.

1) You're almost certainly decoding into u-ops even if you chose RISC, because you'll have uarch features like a separate pipeline for the AGU and the load/store queues, atomics that have to wait to go all the way out to L2 for fairly arbitrary lengths of time, etc. You can see this in cores as simple as BOOM, and it's the opinion of the RISC-V community that macro-op fusion of prescribed instruction sequences is the way to go.

2) These decoders are a drop in the bucket when compared to OoO circuitry and power budget.

3) The complex addressing modes and memory RMW operands are effectively a way to address physical registers while consuming no architectural registers and very few bits of I$. Yes, x86 is ancient and isn't as optimal as it could be from a Huffman-encoding perspective (hlt is a single-byte opcode!), but it's pretty damn good overall. Better than AArch64 at code density and therefore I$ pressure. As an aside, I'm sort of curious what a CISC-V would look like, and whether it would set a new bar.


> isn't as optimal as it could be from a huffman encoding perspective

I've been toying with the idea of literally having decoding as decompression, where there's a special instruction to change the dictionary. I guess this'd be tantamount to implementing the decoder as an FPGA, but I'm hoping there's some reasonable version where a fairly non-dense "base encoding" becomes a pretty optimal bit stream.


I've actually played with that idea as well in the past, as a mechanism for emulating other architectures. Who wouldn't love a RISCV that you can turn into a passable x86 or M68K or whatever? In the past too, there were user programmable uCode machines intended to be a generic platform for pretty arbitrary ISAs, so it's not an entirely crazy idea. I eventually came to the conclusion that programmable fabrics in the critical path like instruction decoders on modern processors didn't make sense from a timing perspective, and a classic RISC (or VLIW like Transmeta/Denver) + JIT continued to make more sense. In hindsight I believe you can see this in x86 cores where the microcode ROM is pretty much only executed in already slow paths, and the patch RAM is even more anemic. I'd imagine you'd hit the same issues.

That being said, my experiments were hardly conclusive and I'd absolutely love to be proven wrong.


This is essentially what RISC-V does with its "Compressed" instruction set, except without the dictionary switching. They pulled a bunch of statistics from real-world machine code, ran it through compression, then reverse-engineered that compression into something a bit more sensible for a compiler writer. I think this will work out vastly better than the haphazard patching of, e.g., Thumb on ARM.

http://www.icsi.berkeley.edu/pubs/arch/EECS-2011-63.pdf


What is OoO?


Out of Order

More specifically, a core whose heart is sequenced by Tomasulo's algorithm, probably with a large bypass network linking the functional units together.


Out of Order [execution]


> If x64 wasn't already dominant and had the same market share as ARM, would you choose x64? Why?

I would pick the one which performed the best on my workload.

Assuming everything was equal - performance, performance per watt, software support, market share, price to performance - I would pick ARM because I think the competitive landscape is more diverse. x86 is effectively an AMD-Intel duopoly, especially with the cross-licensing agreements.

It's a lot easier to start working with ARM IP, which means there should be more competitors, which should translate to better future performance improvements. Even today, we see many more companies competing in this space (Ampere, Marvell, Qualcomm, Apple to a degree), not to mention startups looking to develop their own ARM chips.

[1] https://techcrunch.com/2020/04/29/arm-is-offering-early-stag...


Seeing that most popular languages - Java, C#, Python, JavaScript, etc. - run on top of a VM and you can develop on one architecture and deploy to another, why does it matter?

I develop on a Windows laptop and deploy to Linux all of the time. I have some Python programs that have native dependencies. I still develop on Windows, push and the CI/CD pipeline runs on Linux and packages a Windows build.

But even with my first job out of college back in the 90s, I was writing C code that I developed on Windows and was cross compiled for DEC VAX and Stratus VOS mainframes.


Even with languages like C, targeting bytecode as a distribution format is an option.

Mainframes do it, as a means to integrate C and C++ into their language environments.

Back in the early mobile OS wars, there was a company selling a J2ME-like stack, but using C and C++ instead.

Then there is the LLVM bitcode used by Apple on iOS and watchOS (which happens to be more platform-neutral than regular LLVM bitcode).

Oh, and WebAssembly and MSIL as well.


LLVM won’t allow that type of portability. That was a myth that was dispelled by none other than Chris Lattner during an interview on ATP. The transcript of the interview is now returning a 404 though.

https://atp.fm/205


For the open source version of LLVM, yes, you are correct.

Apple's own proprietary internal fork of LLVM, as used on iOS and watchOS, is another matter.

As a matter of fact, there is a WWDC talk about how it allowed the seamless 32-bit to 64-bit migration on watchOS.


Lattner also said later on Twitter that he was purposefully being vague during the interview. The 64-bit chip built for the Watch and the LLVM bitcode were designed in concert.

https://mobile.twitter.com/clattner_llvm/status/104696072464...


Yeah, so we get into these kinds of discussions because everyone just talks about the open source variant of LLVM.

Apple does whatever they feel like with their proprietary fork, to the point that Apple's clang even gets its own column on cppreference.

Just like Sony and Nintendo haven't contributed anything back to LLVM that would disclose any capability of their consoles.


I think Chris Lattner has just a little insight into the inner workings of LLVM and its capabilities - even the proprietary portions that Apple hasn't released in the open.

I tweeted @atp and let them know that the transcript was returning a 404. They have since fixed it.

https://atp.fm/205-chris-lattner-interview-transcript

John Siracusa: The same thing I would assume for architecture changes, especially if there was an endian difference, because endianness is visible from the C world, so you can’t target different endianness?

Chris Lattner: Yep. It’s not something that magically solves all portability problems, but it is very useful for specific problems that Apple’s faced in the past.
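To make that concrete, here is a tiny C sketch (mine, not from the interview) of why endianness is visible from C: the same in-memory bytes decode to different values depending on byte order, so code or bitcode written around one layout can't be retargeted to the other for free.

    /* Illustration only: inspect the object representation of a 32-bit value. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        uint32_t value = 0x11223344;
        unsigned char bytes[4];
        memcpy(bytes, &value, sizeof value);

        /* Little-endian (x86, and ARM64 as Apple runs it): 44 33 22 11
           Big-endian (e.g. classic PowerPC):               11 22 33 44 */
        printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);

        /* Any code that serializes structs byte-for-byte, or reads the "low
           byte" via a cast like *(unsigned char *)&value, bakes this order in. */
        return 0;
    }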


> I develop on a Windows laptop and deploy to Linux all of the time. I have some Python programs that have native dependencies. I still develop on Windows, push and the CI/CD pipeline runs on Linux and packages a Windows build.

Only peripherally related to your point, but I develop with Python on Windows and found that deploying to Linux, while possible, can be a hassle in practice.

For instance, I use turbodbc on Windows to access SQL Server databases via ODBC. This works fine.

However, when I tried to port the same code to ARM Linux (on a Raspberry Pi), I learned that ODBC has a dependency on the native ODBC driver, which doesn't exist on the ARM platform. So I had to jump through all kinds of hoops to compile FreeTDS and unixODBC, which isn't trivial. Not only that, I had to cross-compile on an x64 Linux VM for the ARM architecture because the Raspberry Pi itself didn't have enough disk space for gcc and such.

I think cross-architecture compilation isn't the issue -- it's cross-platform deployment, especially when there's no parity in dependency availability.


I still develop on Windows, push and the CI/CD pipeline runs on Linux and packages a Windows build.

Should be “packages a Linux build”

“ I was writing C code that I developed on Windows and was cross compiled for DEC VAX and Stratus VOS mainframes.”

Of course, “cross compiled” is not the correct terminology. We built the same code on the target machines.


Someone described ARM Macs as targeting the modern developer, i.e. one targeting mobile first. I wouldn't want a Mac incapable of running Windows, but I don't think I fit this demographic.


Windows itself runs on ARM64 just fine. Third-party software usually still requires Microsoft’s emulation layer. I’m not sure that Apple would still maintain Boot Camp, however. I doubt that matters as much as it used to.


To me, it's funny how the newer legacy software that's 64-bit only is going to be the problem in an ARM64 world -- the 32-bit x86 binaries will still be fine.


That might be a problem in the Windows world, but it is unlikely to be a problem for macOS. This is likely why Apple finally shut down 32-bit apps last year. Almost any app built using Apple's 64-bit libraries (both Objective-C and Swift) should be relatively easy to recompile for an ARM64 target.


My impression is that Microsoft are taking a very conservative interpretation of the x86 ISA patents (compared to e.g. qemu-user which has made this available for years now).

Patents have a 20-year lifespan; most of the x86_32 ones have expired, but x86_64 is just barely still covered (the first publication was in 2000, and the first commercial product was the AMD Opteron/Athlon 64 in 2003). So it's probably just a matter of time before the userland emulation stuff is extended further, although we won't get anything newer than SSE2 for the same reason.


The question isn't really x86 ( inclusive of every instruction currently on modern x86 ) vs ARM.

It is simply that the business model changes. The older I get, the more I am convinced it isn't the technical side that changes or determines the outcome; it is the value proposition.

Not to mention that no one "chooses" x86. They chose Intel, or an Intel CPU that still currently offers top performance below 10-16 cores.


I can buy AMD or Intel CPUs, AMD/Intel/NVidia GPUs, a vast array of motherboards and cards, storage and networking pretty much off the shelf, throw Windows or Linux standard builds on it and expect it to work great and perform great.

Try doing that with ARM and tell me under what limited circumstances you would buy an ARM laptop/desktop/workstation.

CPUs are at a stage where AMD and Intel are doing great in non-battery-constrained setups, and with battery constraints you will only get better battery life if you don't make the CPUs sweat. In server/workstation designs I would expect AMD and Intel to beat the crap out of anything ARM, just because of how optimized they are in that space owing to massive use.

So except for frothing-at-the-mouth idiots who keep droning on about this, there isn't much value for the normal customer in buying anything ARM and losing on all fronts - software availability, build flexibility, compatible hardware availability, and even performance in many cases. (Spare me the Geekbench, please.)

It does make sense for Apple to market their own CPUs to people who are into that type of thing - it's more control, more money, and less dependence for Apple - an all-round win for them if they can pull it off. But it's not going to be easy unless they have something that is hugely better and overcomes at least some of the disadvantages.


>I can buy AMD or Intel CPUs, AMD/Intel/NVidia GPUs, a vast array of motherboards and cards, storage and networking pretty much off the shelf, throw Windows or Linux standard builds on it and expect it to work great and perform great.

I think this might be one of the motivations for Apple to switch to its own CPUs instead of just switching to a different x86-64 CPU like AMD's Ryzen. Apple would be able to lock macOS to the hardware like iOS and the iPhone.


Absolutely it is, but it's not just to shut down the few Hackintosh users, although that would be a welcome consequence. It's only because they still have not figured out how the locked-down macOS will work that they have not announced anything so far.

Download apps only from the Mac App Store, make it easy for people 100% in the Apple ecosystem to do what they need to do (Xcode, browsing, store apps, multitasking), and they're good to go. They already charge a premium for the hardware - not having to pay Intel will fatten the margins, and with App Store revenue and maybe a ban on third-party browser engines they can get exactly what they have always wanted. As long as they provide some macOS features - terminal access (even if controlled), multitasking, etc. - people in the Apple ecosystem aren't going to care.

Of course, it may be gradual, and it may not be so restrictive - they may allow third-party app installation but make it harder, like they have already done in a way.


> I would expect AMD and Intel to beat the crap out of ARM anything

Anandtech:

"This year, the A13 has essentially matched best that AMD and Intel have to offer "

https://www.anandtech.com/show/14892/the-apple-iphone-11-pro...

All at, as far as I can tell, a small fraction of the power consumption / thermal load.

So imagine a featherweight 12" MacBook with roughly the CPU performance of a much larger/heavier Intel laptop and the same or better battery life.


Ugh. That SPEC2006 chart is an energy efficiency estimate; Intel and AMD CPUs with 45+W at their disposal and thermal headroom will look very different on server and workstation workloads - if it were possible to run meaningful server/workstation benchmarks against Apple's CPUs, that is. And the full AT comment is -

"This year, the A13 has essentially matched best that AMD and Intel have to offer – in SPECint2006 at least."

So yeah, your comment is business as usual for an Apple fan - taking comments and benchmarks out of context, comparing apples to oranges, and just being generally dense to support some narrative. I.e., nothing close to the reality of having ARM CPUs beat AMD or Intel if I wanted to build a server or workstation, or even a big powerful laptop with battery life not optimized just for email and browsing.


> Spec2006 is energy efficiency estimate

SPEC doesn’t agree with your characterization of their benchmark.

“The SPEC CPU® 2006 benchmark is SPEC's industry-standardized, CPU-intensive benchmark suite, stressing a system's processor, memory subsystem and compiler.”

https://www.spec.org/cpu2006/

Neither does Wikipedia:

“SPECint is a computer benchmark specification for CPU integer processing power. It is maintained by the Standard Performance Evaluation Corporation (SPEC). SPECint is the integer performance testing component of the SPEC test suite”

https://en.wikipedia.org/wiki/SPECint

Nothing about energy efficiency, and notice the definite article “the” in front of “integer performance testing component”.

So SPECint tests general compute tasks, such as compiling, XML processing and running Perl programs.

Of course you can combine the perf measurements of SPECint with power consumption measurements to arrive at an efficiency measure, if you so choose.

The other benchmark was SPECfp, which focuses on scientific computing tasks such as fluid dynamics, quantum chemistry, etc.

So, tasks you are unlikely to perform on... your iPhone. The A13 still does well, but not as well as on integer, presumably because Apple just didn't focus on putting otherwise-idle resources into their iPhone chip.

Increasing FP performance when you have great integer performance is straightforward, afaik: just add more FPU resources.


Only one of the benchmarks you linked to compares it with Intel/AMD - https://images.anandtech.com/doci/14892/spec2006-a13_575px.p... - and that one says at the top: SPEC2006 Energy Efficiency Estimate. There is no power usage shown for Skylake or Zen 2.

But leaving that aside, it is also a single-threaded performance measure, which is nice but not that big a deal for desktop/workstation/server workloads. It's not at all clear that they will have a 12-core part for workstations that I can use (leaving aside the fact that I won't be able to run what I want on it) and that will perform much better than AMD/Intel in real-world workloads, including virtualization etc. So I still maintain that narrowly focused performance benchmarks like Geekbench or SPECint mean very little in answering the OP's question of whether you would replace your primary system with ARM - the comparisons are also limited to what Apple's CPUs can run, which doesn't include Java workloads, for example.

ARM as a whole has had a lot of time to compete with others in the desktop/workstation/server market, and there is little evidence they have gotten anywhere big.


> knowing that you'll have significant difficulties developing for the majority of current market share?

I think this is a very small issue.

The reason why it's hard to develop for ARM today is that reasonably powerful (read: non-phone) ARM processors are not very common. The barrier for developing on ARM is that you have to explicitly purchase another ~$500 laptop in order to test on ARM. Also, libraries generally support x86 but not necessarily ARM. With small desktop market share, the expense is often not worth it.

On the other hand, x86 processors are ubiquitous, and if you're a developer, you can almost certainly find an old laptop or desktop with an x86 processor lying around to test your software on (especially if you're in the demographic that buys relatively expensive MacBooks). Also, since most people use x86, it makes sense to invest in that market.

TL;DR: ARM -> x86 barrier to entry is much lower because x86 processors are so common, x86 opportunity is much more lucrative, and most libraries already support x86.


There is a plethora of powerful ARM processors at dirt-cheap prices. The market is flooded with ~$50 64-bit multicore ARM boards that run Linux. In fact, there are no x86 equivalents, so you're basically describing the opposite of what's actually the case.


Raspberry Pis? Pine64s? I have many of these devices. They are fun to play around with, but they are no replacement for a modern x86-64 system.


A replacement for a modern x86-64 system is definitely not what the comment I replied to described, even though a lot of these ARM boards certainly qualify, depending on the subset of tasks needed.

Here is one example of what you get for 50 dollars:

https://www.hardkernel.com/shop/odroid-c4/

This is a quadcore 2GHz 64bit board that can run Android / Ubuntu 20 / Wayland / WebGL / Linux Kernel 5.4 / Vulkan.


The comment you're replying to was asking for something better than a smartphone; this is worse. It has four of the lower-power and lower-performance Cortex-A55 cores; typically in a smartphone those would be paired with several higher-performance Cortex-A75 cores, but the board in question doesn't have those. This is fairly typical for Pi-style boards.


Yes, this is exactly what I mean. When I said "modern x86-64", "modern" meant anything in the past 4 to 6 years. The average developer would be better off buying a used laptop with an older Core i7, and putting a new SSD into it.


Noob here. My thoughts: 1. ARM is more power-efficient than x64? 2. Emulation, plus a shift to remote development environments?


One theory behind ARM's power efficiency is that you don't have to waste power and die area converting x86 into uops. But this is a roughly constant overhead. It matters a lot on 5-10W TDP mobile processors, but much less on 45W TDP laptop processors.


Hasn't it been a while since 45W TDP processors were common in laptops? My X395 uses a 15W TDP processor


No. They are different use cases. Personally I prefer the 45W chips with higher base clocks at the expense of battery life. 15W is unsuitable for some workloads, though the latest 15W "U" chips from AMD are making some waves there.


The i9-9980H(K) found in high-end laptops has a 45W TDP. Granted, this is the most power hungry laptop processor you can find.


Is the emulation environment really there? Maybe for Node.js (and everything else not low-level) you can seamlessly develop on ARM and ship on x64?


I spent a week recently working from a Pinebook Pro after spilling water on my primary laptop; my workflow was already fairly SSH-centric, but I had fairly few problems with software compatibility that weren't fixed by binfmt_misc + qemu. (That week was spent mostly developing in OCaml and Lisp; I suspect if I were doing C++ or Haskell, I would've had a harder time.)


What I don’t quite understand in the discussions here is the hardware focus.

No offense, but the libraries Intel is putting out there (IPP, MKL) for efficient parallel computing are really outstanding. It has been a few years since I was in the end-consumer high-performance market, but AMD chips would easily run 20% slower if you optimized the code with the Intel libraries, and OpenCL would not get anywhere near.

A similar thing happened with Nvidia and Apple many years back, i.e., Apple dropping Nvidia; a fair amount of rumors went around that Apple did that so Adobe would be less competitive if they had to rewrite their rendering engine (which they had just ported to CUDA), and Apple could sell their video editing solutions with an edge (those had been hand-tuned for a while, of course...).

Apple dropping Nvidia was such a “non-customer-focused” bullshit decision. Not because the AMD hardware is bad - but OpenCL has simply been nowhere near CUDA. Same with Intel IPP versus OpenCL. But maybe that has changed.

The differences in performance, ease of use, etc. were mind-blowing back in the day for CUDA and IPP. Hardware alone isn’t going to cut it. The machines are built to run software, after all...

Maybe that has changed, or maybe nobody needs parallelized, compute-intensive applications on a Mac (research, anyone? image and video editing, anyone?).

That's aside from the fact that major software companies like Adobe et al. will probably have to staff entire departments for rewrites...


In my data science org, virtually NO ONE joining new prefers MacBook Pros, because of the lack of CUDA. Yes, production workflows run in the cloud and you can spin up notebooks in the cloud, but the value of having CUDA on your laptop for dev work is just really high. People who joined with Macs are trading them in for more powerful Windows machines.


Maybe this rumor is only half right and it isn't what everyone thinks it will be. Instead of replacing x86, what if Apple just adds their ARM chip to the laptop? Basically, take the current MacBook and stick an iPad ARM chip in there. And maybe make the screen a touchscreen too. It would make iOS development much closer to the real thing. And for non-developers, they could run iOS apps natively. So you get a Mac that can run both x86 and ARM natively. But if you mostly use iOS (ARM) apps, you get really long laptop battery life, since those are optimized for that. And x86 is there when you need to run more power-hungry applications. Seems like the best of both worlds in a single laptop if they do this.


Adding significant BOM cost, growing physical volume, reducing battery life, and creating lots of complexity to get certain apps to run on one CPU or the other, plus the associated cache coherency issues? So that iOS developers can test their apps on a native CPU (but still with plenty of differences: screen size, no cell radio, no GPS, no accelerometers, etc.)? Seems dubious.


All Macs already have an ARM chip inside them in the form of the T2 chip.


But that’s specifically for Secure Enclave work (disk encryption, biometric data). I don’t think they’d want to risk someone running arbitrary code there and breaking the sandbox. (The SEP is also separate from the A-series chip in the iPhone for a similar reason, IIRC).


In addition to the secure enclave, they already have an A series core on there. That's what runs the touch bar.

There's a full XNU based OS on there called "BridgeOS" that's in the iOS/tvOS/watchOS family.


Open System Information → USB on a Mac with a T2 chip and check out all the stuff attached to it.


So do all AMD Ryzen CPUs in the form of TrustZone/PSP, usually a Cortex-A5. It's equally as user-inaccessible as the T2.


There are probably a dozen ARM chips, e.g. WiFi module, battery controller, touch bar, etc


I seem to remember Dell doing something like this a number of years back -- an x86/ARM windows/Linux hybrid. The idea was that you could boot instantly into a Linux/Arm system, do things like take notes and email and get good battery life, and boot into Windows only when needed. I don't recall if it was just a concept or if it was eventually sold.



People thought Apple might do the same thing when converting from 68K to PPC and from PPC to x86. Apple would never do it.

However, there were PPC upgrade cards for 68K Macs but you had to reboot to switch from 68K to PPC.


And what about this other rumor (which is much more plausible)? They could put everything - the T2, GPU, and x86 - on the same platform, bringing the cost down:

https://www.engadget.com/2020/02/07/apple-may-testing-amd-pr...


I've been saying for a while that a semi-custom x86 (probably from AMD) makes way more sense. The 8 Zen 2 cores with 16GB and a very passable GPU in the next-gen consoles show that it can be done for a remarkably low cost, all things considered. Yes, Sony/Microsoft are probably taking a hit on those, and yes, Apple would have to spend even more to get it running in a laptop form factor/TDP, but they also have the MSRP headroom to make that happen with a decent margin.

An even more out-there idea (with literally zero proof; it's just a good idea, IMO): Apple buys Centaur and gets an x86 licence.

* Apple gets to have custom power efficient cores augmented by all the fabless firms they've acquired over the years.

* Mac stays x86, so Intel wins a minor victory when the alternative was a major customer switching to ARM.

* Intel wins a bigger victory because an x86 licence is effectively removed from the open market. The great equalizer that is the end of Moore's law makes that licence sitting out there a long-term existential risk for Intel.

* Apple doesn't have a costly transition with the tail end of Moore's law meaning they don't have the same perf gains expected from the other transitions.

* Apple also gets its hands on Centaur's newer inference accelerator IP.

* Centaur's parent, which has been keeping them on life support, gets a payout.

Everyone wins except AMD (which is another win for Intel).


Tim Cook has said it: We believe that we need to own and control the primary technologies behind the products we make.

So it would make no sense to trade Intel's poor schedule and performance for AMD's uncertain future performance.

And aren't all x86 licensees prevented from keeping that license if they are bought?


> Tim Cook has said it: We believe that we need to own and control the primary technologies behind the products we make.

Tim Cook was talking about a context nearly a decade old.

> So it would make no sense to trade Intel's poor schedule and performance for AMD's uncertain future performance.

x86 is just going to become more of a commodity as time goes on. And right now, Zen 2 is hands down the best perf/watt combo.

> And aren't all x86 licensee prevented from keeping that license if they are bought?

That was the rumor, but Centaur has already been bought and kept its license, so at a minimum there's some fine print to that clause. And my experience with B2B is that clauses like that are ultimately a product of the circumstances from when they were written. If circumstances change, those clauses can change. The most indelible ink is the most likely to have new semantics later.


Not having x86 at all is even lower cost.


For my work, Macs have never been really useful to develop on without additions. The whole "I can run VirtualBox" thing and having a good out-of-the-box Unixy setup is what has kept me coming back. Linux/others are not a realistic option as a daily driver for what I do.

No x86, no VirtualBox. No VirtualBox, and I'd rather just get (with much sighing, complaining, and general ill will) a Windows laptop. I mean, the terminal is becoming slightly more usable in Windows, right? And sometimes you just need to run a Windows VM, so you need x86 virtualization.

There are downstream effects to losing the "nerd" base, and I suspect this move is a bad idea, but Apple has a track record of pulling rabbits out of hats and knowing what really matters. Maybe this customer segment just doesn't matter.


The thing I like about macOS over Windows when it comes to the terminal is that it's a native part of macOS and I don't have to monkey around with as many things. On my Macs, things aren't that much different compared to when I'm working on my Ubuntu server.

I don't feel the same using WSL. It's not a seamless experience. Also, I've never found a terminal emulator I enjoy using on Windows. Maybe it's still way too early, but even the new Windows Terminal left me less than impressed.


what do you find lacking in the new Terminal?


There seem to be a lot of people just assuming that ARM Macs won't be able to run x86 virtual machines. It's a reasonable assumption, but since they're designing the chips themselves, and it's highly likely that they'll do a Rosetta-like translation system to maintain backward compatibility, it seems almost inevitable that they'll have some built-in hardware support for x86 virtualization. Maybe not as fast as native x86, but it might not be far from it. The only problem is that it might take quite a while for software like VirtualBox to be compatible with it, if Apple even makes the instructions or APIs publicly available.


Windows has an ARM version.


Yep, but windows apps are typically distributed as binaries, so the OS doesn't help when you need OS+architecture to make things run and folks build for x86.


It does come with a slow-ish emulator.


Just read the other piece about replacing a Mac with a Raspberry Pi. The potential is there. And with Apple controlling the chip... it would be brutal.


My MacBook has 2000 battery cycles, and I'm thinking about acquiring a Pi.



How is the developer ecosystem going to fare? I would imagine a lot of tools don't work on ARM (or aren't regularly tested on ARM). Moreover, I feel like it's going to be a big problem that the architecture you're deploying your code to is different from the one you are developing on. Who knows what kind of crazy performance differences/bugs there are between ARM and x86_64? Also, what about the audio/video/modeling software? I can't imagine that there's much support for ARM at the moment in that space.

Also, is the Mac Pro going to switch to ARM? I'm not aware of an ARM chip that can compete with the super-high-end Xeons in the Pro. Having laptops run ARM and Pros run x86_64 doesn't seem like the best idea (it also sounds like a lot of work on Apple's part).

Of course, maybe this switch is going to create a high-end ARM space, allowing ARM to make inroads into HEDT and the server market.

A lot seems unclear at the moment, but one thing is clear (to me at least): there's going to be a huge fight over the next 10 years, x86_64 vs ARM. No one can possibly know who will win, but it's exciting, to say the least. I think we've all been a little tired of the x86_64 monoculture since the end of PPC.


> Who knows what kind of crazy performance differences/bugs there are between ARM and x86_64.

A lot of code for demanding applications is often enhanced with SSE/AVX and JIT recompilation techniques. Those are inherently unportable, and I'm not sure how many developers will be willing or able to port that code over to Neon and AArch64, especially for a small 2-4% of the market.

Even if they do, it's quite a cognitive burden to have to master and maintain two separate SIMD and recompilation implementations for the same application.
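For a sense of what that looks like in practice, here is a minimal sketch (my own; the wrapper name add4 is made up) of the same 4-wide float add maintained once with SSE intrinsics and once with NEON intrinsics:

    #include <stddef.h>

    #if defined(__x86_64__) || defined(__i386__)
      #include <immintrin.h>
      /* SSE path: 4 floats per iteration (tail handling omitted for brevity) */
      static void add4(float *dst, const float *a, const float *b, size_t n) {
          for (size_t i = 0; i + 4 <= n; i += 4) {
              __m128 va = _mm_loadu_ps(a + i);
              __m128 vb = _mm_loadu_ps(b + i);
              _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
          }
      }
    #elif defined(__aarch64__) || defined(__arm64__)
      #include <arm_neon.h>
      /* NEON path: same operation, different types and intrinsics */
      static void add4(float *dst, const float *a, const float *b, size_t n) {
          for (size_t i = 0; i + 4 <= n; i += 4) {
              float32x4_t va = vld1q_f32(a + i);
              float32x4_t vb = vld1q_f32(b + i);
              vst1q_f32(dst + i, vaddq_f32(va, vb));
          }
      }
    #else
      /* Scalar fallback for everything else */
      static void add4(float *dst, const float *a, const float *b, size_t n) {
          for (size_t i = 0; i < n; i++) dst[i] = a[i] + b[i];
      }
    #endif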

This will probably further drive high-end gaming away from Macs, on the heels of the OpenGL deprecation and the Mac-only Metal API. Combined with Cocoa and Swift, I imagine we'll end up seeing fewer and fewer applications that run natively on both Macs and Windows/Linux after the move.


> Those are inherently unportable

There is sse2neon (https://github.com/jratcliff63367/sse2neon). For the intrinsics it supports, you only need to add a header. There is also simde (https://github.com/nemequ/simde); it is a larger project and may be more complete.
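A hedged sketch of that "just add a header" approach: keep the SSE-intrinsic source and let sse2neon supply NEON-backed definitions of the same _mm_* names when building for AArch64 (the include path here is illustrative).

    #if defined(__aarch64__) || defined(__arm64__)
      #include "sse2neon.h"    /* drop-in _mm_* implementations built on NEON */
    #else
      #include <xmmintrin.h>   /* real SSE intrinsics on x86 */
    #endif

    /* The SIMD code itself stays written once, against the SSE API. */
    static void add4(float *dst, const float *a, const float *b) {
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(dst, _mm_add_ps(va, vb));
    }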


> This will probably further drive high-end gaming away from Macs

Agree. I guess people will end up using Stadia/GFN/PS Now, etc.


Android is much more than 2-4% of the market, though.


Not in the desktop application space.


I can't speak for others, but for me it should make no difference as long as they don't nerf the MacOS operating system.

The vast majority of well-written C and C++ code will compile and just work on ARM64, as long as it doesn't depend on things that are undefined in the C/C++ spec but that x64 lets you get away with. The big bugaboos are unaligned memory access, assembly or x64 intrinsics, and reckless casting. Vector code will need porting (assuming there's not already a NEON version), but that's generally only found in graphics, audio, machine learning, and cryptography applications. Most apps don't have any of that. (Auto-vectorization is irrelevant, since the compiler handles that.)
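As an illustration of the unaligned-access bugaboo, a small sketch of mine (not from any particular codebase): the cast version happens to work on x64, is undefined behavior everywhere, and can trap or silently misbehave elsewhere; memcpy is the portable fix and still compiles down to a single load where unaligned loads are cheap.

    #include <stdint.h>
    #include <string.h>

    /* Read a u32 (native byte order) from an arbitrary, possibly unaligned
       offset inside a byte buffer, e.g. a packet or file header. */

    /* Risky: dereferences a pointer that may not be 4-byte aligned (UB). */
    static uint32_t read_u32_risky(const unsigned char *buf, size_t off) {
        return *(const uint32_t *)(buf + off);
    }

    /* Portable: memcpy has no alignment requirement. */
    static uint32_t read_u32_safe(const unsigned char *buf, size_t off) {
        uint32_t v;
        memcpy(&v, buf + off, sizeof v);
        return v;
    }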

Code in higher level or newer languages is generally even less worrisome. Rust, Go, Java, C#, and any dynamic language will just work.

I do expect that these ARM64 chips are going to have lower per-core single-threaded performance but higher parallel performance than X64 due to more cores. That means that some applications may need refactoring or partial redesigns to be more parallel to take full advantage of the chip. But that's something that needs to happen anyway since all architectures are going many-core due to the end of big easy single core performance gains. It's been a long time since huge gains in single threaded performance were a thing.

As long as they don't screw it up by e.g. nerfing the OS I'm looking forward to better battery life and better overall performance due to many cores and lack of X64 instruction decode bottlenecks.

I'm curious about how they'll do it, though. I predict instruction translation (x64 → ARM64), but also the return of fat binaries. It's also possible that apps distributed through the App Store will automatically be delivered only for your host architecture. I think they were making some noise about that a while ago, and that may be prep for this.
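For anyone who hasn't poked at fat binaries before, a rough macOS-only sketch (mine, not Apple's tooling; lipo -info and file already report the same thing) of what they are on disk: a small big-endian header listing one complete executable slice per architecture.

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>     /* ntohl: the fat header is stored big-endian */
    #include <mach-o/fat.h>    /* struct fat_header, struct fat_arch, FAT_MAGIC */
    #include <mach/machine.h>  /* CPU_TYPE_X86_64, CPU_TYPE_ARM64 */

    int main(int argc, char **argv) {
        if (argc < 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        struct fat_header fh;
        if (fread(&fh, sizeof fh, 1, f) != 1) { fclose(f); return 1; }
        if (ntohl(fh.magic) != FAT_MAGIC) {
            printf("not a fat binary (thin Mach-O or other file)\n");
            fclose(f);
            return 0;
        }

        uint32_t n = ntohl(fh.nfat_arch);
        printf("fat binary with %u slice(s):\n", n);
        for (uint32_t i = 0; i < n; i++) {
            struct fat_arch fa;
            if (fread(&fa, sizeof fa, 1, f) != 1) break;
            cpu_type_t cputype = (cpu_type_t)ntohl((uint32_t)fa.cputype);
            const char *name = cputype == CPU_TYPE_X86_64 ? "x86_64"
                             : cputype == CPU_TYPE_ARM64  ? "arm64"
                             : "other";
            printf("  slice %u: %s, %u bytes at offset %u\n",
                   i, name, ntohl(fa.size), ntohl(fa.offset));
        }
        fclose(f);
        return 0;
    }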


I think you are overestimating the change. IMO the OS changes things a lot more than the processor architecture.

Almost nobody (except imgix and a few others) run macos on a prod server, yet many devs run macos. For example: when they run stuff via docker they run it via a VM (whether they know it or not).

Any dev (again, except imgix and a few others) that actually cares about server performance is already not running their benchmarks/perftests/tests on a mac, so that should not make a difference.


> Moreover, I feel like it's going to be a big problem that the architecture you're deploying your code to is different than the one you are developing on.

iOS apps—which run on ARM chips inside iPhones—have all been developed on Intel-based Macs.


macOS has an ARM simulator though. Will they have a “legacy Mac” simulator too?


If you're speaking of the iOS simulator, that runs on x86, it doesn't emulate an ARM chip.


Where?


> I would imagine a lot of tools don't work on ARM

A lot of open source code works well on ARM. But will we start to see some newly-discovered-but-latent arch-specific bugs, compiler bugs, undefined-behavior bugs-but-worked-on-x86_64 bugs? Yes, sure.
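One concrete example of that latent-bug class, as a contrived sketch of mine (not from the thread): a message-passing pattern with no ordering guarantees. It tends to appear fine on x86 because the hardware keeps stores ordered with respect to other stores and loads with respect to other loads (TSO), but ARM's weaker memory model - or an aggressive compiler - is free to reorder it, so the consumer can see ready == 1 while data is still stale. The fix is a release store paired with an acquire load.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int data;
    static atomic_int ready;

    static void *producer(void *arg) {
        (void)arg;
        data = 42;
        /* Bug: should be memory_order_release to publish `data`. */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        /* Bug: should be memory_order_acquire to pair with the release. */
        while (atomic_load_explicit(&ready, memory_order_relaxed) == 0)
            ;  /* spin */
        printf("data = %d\n", data);  /* may print 0 under a weak memory model */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }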

The cool thing is that Win/ARM and Linux on arm are still very much the same OS as their x86_64 ports. Presumably macOS is/will be the same way.

ARM gets less love precisely because they're not as popular for developer native workstations. But I wouldn't be surprised if that changes over the next decade.


> Who knows what kind of crazy performance differences/bugs there are between ARM and x86_64.

iOS developers?


Aren’t iPad Pro devices faster than some laptops nowadays?


Faster is kind of a hard metric to quantify.

Couple of benchmarks show intel clearly winning in the multi-core world but it looks like single core performance is a bit more of a tossup.

https://gadgetversus.com/processor/apple-vs-intel-core-i9-99...


In certain ways, yes.


Apple isn't the only company pushing ARM - Microsoft already released an ARM-based Surface, AWS has ARM instances, and I doubt it will take long for Google and Microsoft to add their own ARM-flavored CPUs.


Chromebooks have been running ARM forever, so Google already has their platform ARM-ready. In fact, I'd even say ChromeOS is more of a complete desktop suite than any ARM project Microsoft has undertaken as of yet, mostly because of the web-first style of development Google follows.


Wouldn't it be similar to the PowerPC -> Intel transition or is there something that makes this more complicated? That one worked pretty well with the Rosetta emulation layer.


That transition was greatly assisted by the huge performance difference between Intel and PowerPC especially for the laptop chips that most users were using.

It took developers a long time to have stuff fully ported, but it wasn't a huge deal - if you switch to a chip that's twice as fast and take a 50% hit on the emulation layer, you haven't lost much.

I'm very curious to see how stuff that isn't first- or second-party, already-natively-recompiled software works out. There are a lot of quality-of-life Mac-native apps that I like that aren't backed by a bunch of dev resources.


There is just the opposite precedent from the 68K to PPC transition. My 6100/60 (PPC 601 at 60MHz) was about the speed of a 25MHz 68030 Mac when running emulated software. It was actually slower than my upgraded 40MHz 68030 Mac when running emulated code. My Mac was less than half the speed of the top-end 68K Macs.

The PPC Macs couldn’t emulate a 68K floating point unit at all.


Last time, Apple switched to the "industry standard" architecture with all the benefits that entails; now they're switching away from the (current) "standard".


All iPhones have been ARM-based for a long time already, so it's not something that's completely new to them, and by number of devices it can probably be called a current industry standard too.


That's exactly the difference - iPhones define the industry (co-)standard. Macs do not. They simply don't have the market share.


They don’t need to. It’s all about building the platform that is simple for developers.

They will be able to hit desktop -> mobile in one shot.


My guess is, Apple feels that in the near future "industry standard" → ARM, and that's one of the reasons <insert hockey puck analogy here> they are moving...


the architecture you're deploying your code to is different than the one you are developing on

One thesis of the article is that people will start to deploy on ARM (e.g. Graviton), so that it will be the same architecture.


I have to say, I'm not sure how Apple can pull this off. Mac commercial software is pretty sparse and crappy as it is. I mean, if I can't run the apps I use on a daily basis there (Adobe stuff mostly), it's a complete non-starter. Introducing another arch is not going to help matters there. If they try to pull in software from e.g. iPad, that's also pretty crappy. How many note-taking apps does anyone really need? And besides what few good apps iPad has, they won't work all that well in the desktop context.

I guess we'll see soon enough. I doubt they're so naive as to believe that they can pull off another massive arch switch without Jobs' reality distortion field, and without their current arch severely lagging (like it was in the PPC→x86 transition).


Well Adobe will definitely support the new architecture, it would be stupid for them not to.

And additionally, with the latest iPad becoming much closer to a "Mac" in terms of the browsing experience (e.g. with a touchpad), it makes total sense for Apple to start making some of their apps truly cross-platform. Does the Spotify desktop app really need to be substantially different from the iPad app?


It took them quite some time to start supporting Intel Mac last time.


I just bought a new 16" MBP earlier this year and I can't decide if it was the very best or worst time to do that.


Apple is "preparing to announce" the switch, but haven't we heard these rumors for, well, years now? How much more reliable are these rumors now compared to a couple years ago?

I'm a little surprised to see Bloomberg basically cribbing MacRumors.

EDIT: Instantly downvoted? LOL


You don't need to rely on the rumors. Anybody who has seen the trend of Apple's hardware can see that they are moving in this direction. iPads have touted "desktop-class" CPUs for years. Macs already have in-house supervisor chips (the T2 in the MacBook). Apple is a smart behemoth, an expert at vertical integration, with a lot to gain by cutting out Intel. They will certainly attempt it. It's only a question of how and when.


MacRumors is the one doing the cribbing this time around, as there are extremely strong supply chain rumors for a new ARM chip that would support a Mac lineup.


I always take these rumours with a pinch of salt, but I don't think that years of rumours discredit them. There were Apple tablet rumours for years too, which turned out to be because they were working on a tablet for about 10 years before they released it (they ended up releasing the iPhone first as part of the same project, and I believe that was 7 years in).


When I worked at Apple I ported FileMaker to a Macintosh Tablet computer almost 20 years before the iPad.


Some analysts (including Gurman) have pretty consistently called out 2020 as “the” year for a couple years now: https://www.bloomberg.com/news/articles/2018-04-02/apple-is-...


'Rumors' like this don't have exact dates like June 22nd.


The rumors were valid a couple years ago, too; any change like this would definitely take years of planning.




