The M4 is the highest single-core performance CPU in the world and it's in a ridiculously thin tablet without cooling. I don't understand where this rhetoric that Apple has lost most of its chip design talent and is in trouble is coming from. Apple still makes the world's most advanced and efficient consumer chips, years after the M1.
>The M4 is the highest single-core performance CPU in the world and it's in a ridiculously thin tablet
That's kind of the problem. The world's most powerful CPU is put in the world's most expensive and thinnest Netflix machine lol. Was the previous "thicker" M3 iPad holding anyone back?
All that power and I can't use it to compile the Linux kernel, I can't use it to play the latest Steam/GOG games, or run CAD simulations, because it needs to be behind the restrictive iPad OS AppStore walled garden where only code blessed by Apple can run.
Until it's put in a machine that can run Linux, it's more of a benchmarking flex than actually increasing productivity compared to previous M generations or the ARM/X86 competition that allows you to run any OS.
Do note that the M4 is built on TSMC's 3nm process while the 9950X is built on the relatively older 4nm process. This has been the case for the earlier Apple ARM processors too, as Apple made deals with TSMC to get priority access to the newer processes. In the end, as a user you get a slightly faster machine, but that doesn't mean it's all thanks to the CPU architecture.
Apple also has lower memory latencies by virtue of being an SoC design. Memory speed is increasingly becoming the bottleneck compared to computing power. However, this comes at the cost of other features, like replaceable/upgradable RAM and CPUs, which other desktop processors support. This is not to say that one is better than the other, but there are surely tradeoffs involved.
> Do note that the M4 is built on TSMC's 3nm process while the 9950X is built on the relatively older 4nm process.
That helps (perhaps a lot, hard to say), but note that moving your design from 4 nm to 3 nm pitch isn’t as simple as recompiling your Verilog. So this is still a significant engineering achievement.
Apple is chasing a different design point from the desktop folks: low power and optimized more for single core workloads. Works for them but not of course for everyone.
The Mx machines aren't SoCs in the sense a MediaTek part is: the main chip has a lot of integrated functional units, yes, but so do the Intel and AMD designs. They do have multiple dice in a single package though.
Personally I don't lose any sleep over the non-upgradability of RAM or mass storage. I started computing when you could add instructions by adding wires to the backplane, and later by writing your own microcode; you could build your own IO devices and wire them into the machine. Gradually we have had more and more integration, and RAM and mass storage are just the latest to go…and in exchange we get better performance and reliability.
I don’t hear people complaining that they can’t upgrade their disk drive’s write cache, or CPU’s cache for that matter. This is the way of the world.
Upgradability of RAM isn't the issue, it's the price of getting access to that RAM. On PC you get it for cheap, you just pay for the RAM modules. On Apple, you pay an arm and a leg, as you don't pay for the modules, but for the privilege of using their higher-end model with enough RAM.
The low end models are so cheap that Apple is definitely subsidizing them with revenue from the higher end models. If RAM upgrades were priced based on RAM stick costs, the base model would have to be much more expensive and fewer people would have access to it.
Professional workstations have always cost an arm and a leg, both arms and both legs usually; for example, an SGI workstation used to cost 50K dollars! I think it's great that Apple also produces a subsidized low-cost model so more people can get access to it.
> The low end models are so cheap that Apple is definitely subsidizing
Depends what you mean by "subsidizing"... Obviously their margins on entry-level models are lower, but I would still bet that they are much higher, if not several times higher, than the industry standard. They are certainly not selling them at or below cost; they could probably upgrade the base config to 16GB and still make more per unit sold than Dell/HP/etc.
Even MS got rid of 8GB with the new Surface: 13.8", Snapdragon, 16GB RAM, 256GB SSD is $1000; the equivalent 13.6" M3 Air is $1,299 (8GB is $1,100). So if Apple is subsidizing their lower-end Macs, what on earth is MS doing? The Surface is already a premium(ish) tier device and I'd assume they are paying more to Qualcomm than it costs Apple to make an M3 chip.
> the base model would have to be much more expensive
Why? I don't really get this logic. They are pricing their base models at what the market will bear, it wouldn't make sense to sell them at a discount just because they have more expensive models, they'd hike their prices even more if they could.
What they are making from memory/storage upgrades is basically free money on top of already presumably very reasonable margins (by industry standards).
> The low end models are so cheap that Apple is definitely subsidizing them
Source? I find that quite unlikely, considering their brand has strong recognition and demand, and design+manufacturing costs are probably not wildly different from other laptop makers.
They could be trying to get more people on board via lower prices, but the prices I've seen, although accessible to many people, seem similar to other brands, and seems quite compatible with making a profit.
They might be subsidising low-end models like that with the M-series, but as they were definitely overcharging for RAM way back when it was user-installable sticks… my gut feeling is RAM* is mostly a differential pricing strategy
* for laptops and desktops, storage pricing tiers also give me this feeling; however in the case of tablets and phones, the way they're used — for most people they are the primary computing device in their lives — less so.
Any (even the slimmest) evidence that this might be the case? Because it seems like an extremely far-fetched claim... And yeah, RAM/storage upgrades seem like a clear example of market segmentation.
When I chose "might be" rather than "are indeed", that was to say it's not impossible rather than to outright agree. Other companies do have loss-leaders, I cannot rule out the possibility that Apple also does exactly what was asserted in the comment I was replying to.
I'm not sure even if that was the case it would fit the definition of a "loss-leader" unless we assume that Apple makes back the loss and more through the App Store and other services which seems extremely unlikely.
Otherwise, how would selling laptops at a loss increase the sales of higher-end laptops? Most people don't buy both.
> given they make more from services than from macs:
Almost all of that comes from iPhones/iPad/etc. services. Also IIRC 1/4 of their services revenue is just the $20 billion Google is paying them every year.
> but it would be mild surprise rather than shock.
Alright, you might be right. However, while I'm not shocked, I'm still more than mildly surprised that there are people who think this might be possible.
You don't, though. Unless you only care about single core performance and/or power usage.
Apple's chips are not even close to AMD/Intel value-wise when it comes to MT performance on desktop. The $6,999 Mac Pro somehow loses to the $400 14700K (of course you still need a GPU/etc., but unless you care about niche use cases, i.e. very high amounts of VRAM, you can get a GPU equivalent to the M2 Ultra for another $300-400 or so).
Well... it's rather hard to run most other benchmarks on any M4 device. Anyway, I don't think the exact % matters too much to get the point across.
But if we compare the difference between M3 and the 9950X in Cinebench and Geekbench and assume the ratio stays comparable it would still be slightly faster
It's not just a benchmark flex. I have been using Linux since 1992, for many or most of those years on both desktop and server. I am currently more productive while on an Apple Silicon Mac.
I would venture to guess that for most people, they would be more productive on a Mac or an iPad versus Linux.
The question from me was whether the M4 makes iPad users more productive vs the M3 iPad or whatever the last chip was, not whether the iPad itself makes you more productive than Linux or Windows.
And since people keep bringing up the M4 as the yardstick in discussions on generic X86 and Qualcomm chips, my productivity metric for comparison was also meant in a global, generic way. General-purpose compute chips like x86 and Nvidia's have unlocked new innovations and improvements to our lives over decades, from specialty aerospace, CAD, and earthquake prediction to protein folding for vaccines and medical use, because you could run anything on those chips, from YouTube to Fortnite to mainframes and supercomputers. What similar improvements to humanity does the M4 iPad bring when it can only run apps off the App Store compared to the M3, the most used iPad apps being YouTube and Netflix?
As long as the most cutting-edge Mx chips are restricted to running only Apple-approved App Store apps, because Apple is addicted to the 30% App Store tax that they can't charge on their laptops running MacOS/Linux with the same chips, they're relatively useless chips for humanity in the grand scheme of things compared with x86 and ARM, which make the world go round and power research and innovation because they can run anything you can think of, despite scoring a bit lower in benchmarks.
> The question was whether the M4 makes you more productive on the iPad vs the M3 iPad or whatever the last chip was
This same question applies to any other computing platform upgrade. So far, the hardware for most common platforms far out-scales the majority of use cases.
Nonetheless, tech must advance and is advancing regardless. Every platform is releasing newer and faster versions, and only a tiny fraction will make use of that power year over year.
As to the rest of your comment, I see what you're saying, but those are your opinions. However, the vast majority of the user base would probably disagree with you, because they are not technical people.
They are productive on these closed platforms. They have different workflows than you or I. I'm not very productive on these platforms; these are consumption devices for me. I need things like a development toolchain and a command-line interface to be productive.
By and large, non-technical people are Apple's target audience, not technical people. In raw numbers, these people outnumber technical people by an order of magnitude.
This dinosaur (me) recognizes that what constitutes a computer has evolved and shifted away from what I think of as a computer. And this shift will further continue.
> This same question applies to any other computing platform upgrade
Hardly. Or rather to a much lesser extent, at least pro/power users benefit a lot more from performance improvements on open general purpose platforms.. since well.. you can actually do stuff on them. What (performance sensitive) use cases does the iPad even have? I guess video/image editing to an extent but pretty much all of those apps on iOS are severely crippled and there are other limitations (storage and extremely low memory capacity).
I never said otherwise, I asked what's the point of commenters bringing up the M4 performance in comparisons with X86/Qualcomm when due to the open vs closed nature of the platforms they're not directly comparable because the M4 is much more restricted in the iPad vs the other chips.
That's like comparing a Ferrari to a van and saying how much faster Ferraris are. Sure, a Ferrari will always be faster than a van, but you can do a lot more things with a van, and just like M4 iPad Pros, Ferraris are a lot less relevant to the functioning of society than vans, which deliver your food, medicine, kids to school, etc. Is the M4 good for you and an improvement for your own workflow? Good for you, just don't compare it to x86 until it can run the same apps as those chips.
Like you said, it's mostly a consumption device, and as such, the M4 is mostly wasted there until they bring it into a device with a more open OS that can run the same SW as the other x86/ARM platforms, which Apple delays intentionally because they're trying to nudge users off the open Mac platform towards the closed iPadOS platform for that sweet 30% App Store cut they can't get on their devices running MacOS/Linux.
I run Linux in a VM on my M1 Mac all day long. That VM was the fastest Linux instance I had anywhere. Faster than my 5950 and way faster than anything in the cloud. That you can't use it is your choice.
Sir, in the comment you're replying to, I was talking about the iPad, not the Mac, where you can indeed use most of the M chips' potential, while on the iPad you can't do that.
Was your VM running arm64 Linux on both Apple and AMD? On AMD cpu, performance of arm64 VM is expected to be poor, because it is a different architecture and "emulation" has to happen. Or do you actually compare to VM running amd64 Linux on the AMD processor?
I don't have recent experience with Docker Desktop. Around the time they went commercial we still had lots of stability issues with it (eating memory and needing to be restarted), so we told their sales people to take a hike. We tried Rancher Desktop for a while but something with their docker-cli implementation didn't play nice with dev containers, so our remaining Mac users jumped to colima. For my part I'd had enough of faking docker/containerd on a non-native platform, so I'm daily driving Linux (Fedora on a Framework 13). No more VMs for me :)
To a degree, that makes sense. Due to the end of Dennard scaling, most high-end raw compute makes more sense as more but simpler cores, and has for a long time. For instance, the Blue Gene supercomputers were made of tons of individually pretty anemic PowerPC 4xx cores.
For battery powered devices, race to sleep is the current meta, where once you have some bit of heavy compute work, you power up a big core, run it fast to get through all of the work as quickly as possible, and get back to low power as quickly as possible.
Because clockspeed ramps power exponentially, there’s a limit to how high you can clock before the cost of racing outweighs the savings of running a short time.
I believe I’ve read that Apple’s chips run under 3GHz unless their job runs longer than 100-150ms. I suspect that’s their peak race to sleep range.
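To make that concrete with made-up numbers (a rough sketch, not actual Apple figures): the win comes from trading a short, high-power burst for a long stretch at near-idle power.

    # Back-of-envelope "race to sleep" math; every number here is hypothetical.
    # Energy over a fixed window = burst power * burst time + idle power * remaining time.

    def total_energy(active_power_w, active_time_s, idle_power_w, window_s):
        return active_power_w * active_time_s + idle_power_w * (window_s - active_time_s)

    WINDOW_S = 1.0   # look at a 1-second window
    IDLE_W = 0.05    # assumed near-idle SoC power

    # Assumed big core at full clock: 5 W, job done in 100 ms
    race = total_energy(5.0, 0.100, IDLE_W, WINDOW_S)
    # Assumed low-clock run: 1.5 W, but the same job takes 450 ms
    crawl = total_energy(1.5, 0.450, IDLE_W, WINDOW_S)

    print(f"race to sleep: {race:.3f} J, run slow: {crawl:.3f} J")
    # Racing only wins while the extra watts grow more slowly than the time saved,
    # which is why there is a clock beyond which it stops paying off.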
It's not exponential; P = C V^2 f, so power is something between O(f) and O(f^3), depending on how voltage scaling works. It follows that efficiency (energy per instruction) is something between O(1) and O(f^2).
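Rough illustration of those regimes (the voltage/frequency pairs below are made up, not a real DVFS table):

    # Dynamic power P = C * V^2 * f; a fixed piece of work takes time ~ 1/f,
    # so energy per unit of work ~ C * V^2. Values below are illustrative only.
    C = 1.0  # fold capacitance and activity into one arbitrary constant

    dvfs_points = [
        (2.0, 0.70),  # (GHz, volts) -- assumed
        (3.0, 0.85),
        (4.0, 1.10),
    ]

    for f, v in dvfs_points:
        power = C * v**2 * f         # arbitrary units
        energy_per_work = power / f  # = C * V^2
        print(f"{f:.1f} GHz: power ~{power:.2f}, energy/work ~{energy_per_work:.2f}")

    # Flat voltage would give flat energy/work (the O(1) case); since voltage has to
    # rise with frequency, energy/work climbs roughly with V^2 (toward the O(f^2) case).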
Clocks aren't the only way to race. Powering up the massive core with its over a dozen pipelines, massive reorder buffer, etc. is another way. This is the way Apple has chosen to focus on.
That’s a very narrow definition of “useful” and one I’d say is rather focused on yourself?
Why is Linux the arbiter of what is “useful”? Why would it still be a benchmark flex if it was on macOS, an Os where millions of people do professional work everyday?
And why is the iPad just a “Netflix machine” when tons of people use the iPad for professional creative use cases as well?
I'll attempt the best interpretation of the comment. Installing Linux would allow for general purpose use of the device (in a freedom sense). This increases the "utility" of the device and lowers the bar for extending its functionality.
Why is Apple the arbiter of what is “useful”? Why would it still be a benchmark flex if it was on Linux, an Os where millions of people do professional work everyday?
It doesn’t cut both ways because I’m not making that argument at all. You interjected your own argument that you’re now arguing against.
I’m making the argument that “usefulness” extends outside of Linux use. I’m not saying it only extends to a specific group. Point me to where that is even implied.
>And why is the iPad just a “Netflix machine” when tons of people use the iPad for professional creative use cases as well?
I never said people can't use iPads for professional applications, I was asking what professional tasks people are doing with the iPad that necessitated the M4 being in that closed platform vs the M3 or M2, instead of it being put in a more open one like the Mac, where it can be put to better use at more generic compute tasks similar to what x86 and Nvidia chips are used for, instead of being stuck in a very restricted platform mostly targeted towards content consumption.
And so far nobody has provided an answer to that question. All the answers are either repeating Apple's marketing or vague ones like "you can use the iPad for professional applications too you know, it's not just a Netflix machine". OK, but what exactly are those professional iPad applications that mandate the M4 being in the iPad instead of the Mac?
I'm bringing this up, because commenters bring up the M4 as the holy grail of chips in discussions about the latest X86 and Qualcomm chips, so if you compare a chip that can currently only run AppStore apps vs chips that can run most SW ever then we're comparing Apples and Oranges.
No, people have provided answers. One of those people is me. You just don't like them because you seem to have a problem imagining other people have different uses than your own.
Edit: this person keeps ignoring any responses that don’t align with their world view and acting like they don’t exist. When challenged they have called multiple people names instead.
I’m not responding to them further to prevent dragging this out further.
Sorry but your answer didn't actually answer what I asked for though. Which is fine, just don't claim you did and then switch to gaslighting me that "I just don't like your answer". Please keep a mature attitude and not act like a toddler.
And it's not just "my use case" that I'm referring to, since I don't do any of those; it's generic use cases that I'm talking about in examples, since x86/Qualcomm are also generic chips being used for a shit tonne of use cases and not restricted to certain app stores like the M4 is on the iPad. So if you bring the M4 into comparison with those generic chips, then you'd better provide arguments for how they're comparable given the current SW platform restrictions, and so far you haven't.
> And why is the iPad just a “Netflix machine” when tons of people use the iPad for professional creative use cases as well?
For my family, the iPhone and other Apple devices are the ultimate productivity machines. The App Store, fantastic.
Why? Because they are (rightly) terrified of installing apps on Windows. Or any “computer.” They’ve been burned too many times, warned too many times. Unless it’s Microsoft Office, it doesn’t happen. “Programs” are a threat.
But apps? Apps don’t hurt you. iPhones and iPads turn on every day when you want them to, they act predictably, it feels consistent in a way a laptop is not. As for the speed of the chip only being useful for “consumption” - technology advances. That A10 Fusion powered iPad from 6 years ago stings to use now, even though it was plenty comfortable at the time. 5 years from now, nobody will regret an M4.
Do you really think actual people ask for and want "ultimate productivity machines"? Btw this term reveals a true kool-aid drinker. I always wanted a machine that is reliable, supported by the software I want to use, and fast enough, and affordable. I never cared for "ultimate productivity" or winning in some bench. I accept that you do; that is something e.g. managers may talk or care about. But most of them do not seem to agree with you that Apple is the ultimate best, because most workers using computers are still on windows machines.
> That A10 Fusion powered iPad from 6 years ago stings to use now, even though it was plenty comfortable at the time. 5 years from now, nobody will regret an M4.
Sorry, I've been around for too long to believe this. :)
No matter how big of a milestone, a jump in performance anything ever brought, it never stopped software from eventually using up all the resources available. Just another layer of abstraction, just another framework, just another useless ui gimmick and you'll see that the M4 isn't immune to this either.
How are "programs evil" and "apps good"? Without any convincing arguments to back up such claims, they do read like astro-turfing and just regurgitating Tim Cook's keynote speeches. This board is called Hacker News last time I checked, no?
I believe the point that was being made is that the perception, on average, is that iOS apps are carefully manicured and will not require troubleshooting by the typical user, as opposed to windows programs. There have been enough instances where this is true that for a certain slice of the population it is globally true. There is a different expectation of debugging from a consumer not versed in technology from those of us who work with/on it every day.
> Was the previous "thicker" M3 iPad holding anyone back?
Since I carry a laptop and an ipad, the lighter the better (I use them for different things, and often in concert). That MacBook Air is so powerful and so light that I sometimes have to check my bag to make sure I haven’t forgotten one or both.
There's a lot of long tail issues with peripherals that still need to be worked out, to be fair. Sound wasn't enabled until fairly recently because they were still testing to ensure the drivers couldn't damage the hardware.
To be fully fair, that's the status quo for Linux on hardware that didn't ship with Linux support as a selling point. I had issues with Intel Wifi 6 hardware compatibility on a Lenovo laptop within the last year or so. On 3 different x86 laptops with fingerprint readers, I've never been able to get any of them to function in Linux. On one of my laptops, the sound works but it's substantially degraded compared to the drivers for Windows on the same hardware. Another supports the USB4 port for data but won't switch to DP alt mode with it, though it works in Windows. On the other hand, my Steam Deck shipped with Linux and everything works great.
Linux not supporting your hardware perfectly is just the nature of the beast. Asahi on Apple hardware meets [very low linux user] expectations.
No year will ever be "the year of Linux on the desktop", despite that phrase being so old that if I'd conceived a child when I first heard it, I'd be a grandparent already.
The fact that you can't imagine a device being useful until it can compile the Linux kernel etc., that you dismiss it as a "Netflix machine", says more about you than about Apple.
AI inference. Editing, not merely watching, video. Ditto audio. Gets more done before the battery runs out.
I cannot emphasise strongly enough how niche "compile the Linux kernel" is: to most people, once you've explained what those words mean, this is as mad as saying you don't like a Tesla because you recreationally make your own batteries and Tesla cars don't have a warranty-not-void method to let you just stick those in.
So Linux kernel compiling is niche but not editing videos and audio on the iPad? (Source: I have friends making money in the audio and video industry and none of them use iPads professionally enough for the M4 to matter; they all use MacBooks or Mac Studios even though they tried iPad Pros.)
And you haven't answered my question. For video and audio editing, were the M3 IPads being held back compared to M4 for it to unlock new possibilities that couldn't have been done before and convince new users to switch to iPads?
>How many million youtube channels are there, and how exactly do you think that stuff gets made? Magic?
Since you asked, they're using Macs and PCs mostly, sometimes Linux too. Rarely iPads though. That's more of a strawman you're building here.
And I'm bringing in arguments that x86 and ARM improvements are more important than the M4 chips, because they power the world's innovations, and you're bringing up editing YouTube videos on iPads as an argument for why the M4 is such a big deal. I rest my case.
>Disingenuous. "Held back" implies Apple could have made M4s a year sooner.
I never said or meant such a thing, you're just making stuff up at this point to stoke the fire, and that's why I'll end the conversation with you here.
> Using Macs and PCs mostly, sometimes Linux too. Rarely iPads. That's more of a strawman you're building here.
You're arguing as if everyone changes production mode in one go when new hardware arrives. This chip was only released 50 days ago.
Your example of "power the world innovations" included "compile Linux kernel" and "play the latest Steam/GOG games".
YouTube is more valuable to the world than endlessly recompiling the Linux kernel.
Let me rephrase via metaphorical narrative:
--
"This internal combustion engine is very good, isn't it. I don't know why people keep saying they're a dead-end."
"That's kind of the problem. The world's most powerful engine is put in the world's most expensive and thinnest carriage, lol. Was the previous horse-drawn carriage holding anyone back? Can't use car exhaust as manure!"
"Obviously it held people back, just look at all the things horseless carriages let people do faster, how the delivery of goods has been improved. And honestly, most people have moved on from organic manure."
"What, manure is niche compared to trucks?"
"Very much so. I mean, how do you think we get all the stuff in our shops?"
"But most of the stuff is delivered by horses! And also, you're talking about trivial things like 'shopping', when horses power important things like 'cavalry'. I rest my case."
--
> I never said or meant such a thing, you're just making stuff up at this point to stoke the fire, and that's why I'll end the conversation with you here.
I copy-pasted from your own previous comment, and at the time of writing this comment, the words "were the M3 IPads being held back" are still present. I cannot see how they could mean something else. Just in case you edit that comment (you've edited a few others, that's fine, I do that too), here's the whole paragraph:
> And you haven't answered my question. For video and audio editing, were the M3 IPads being held back compared to M4 for it to unlock new possibilities that couldn't have been done before and convince new users to switch to iPads?
> example of "power the world innovations" included "compile Linux kernel"
This is a bad faith argument. You should assume compile Linux kernel = doing any software development if you expect to have a rational discussion with anyone.
Since your comments are deviating in the bad faith direction with you trying to score gotchas off of interpretations of various words in my comments, instead of sticking to the chips comparison topic at hand, I will have to stop replying to you as we can't have an objective and sane debate at this point. Peace.
I think that even software development in general is pretty niche; about 29 million of us worldwide, putting us significantly behind sex workers (40 million or so), which I get the impression most regard as (at the very least) an unusual profession?
I want to say it's also way behind the number of YouTube channels, but I can't find a citation for the 3rd party claim of "114 million" active channels, though obviously most are not professionals and there's not a 1-1 requirement between channels and people.
I doubt there are more professional video editors than software developers (or people working in related areas/fields).
Even if we focus on video editing, the apps available on the iPad are crippled by having very low amounts of memory, which also makes "professional" usage rather difficult. The iPad Pro is mainly a luxury product for people who just want the "best" iPad and don't really care about the cost.
> "114 million" active channels
You should be comparing this to the number of active Github accounts or something like that which seems to be about the same.
There was no M3 iPad so the question is incorrect.
But if you’re talking compared to the M2, it has a number of updates
1. Significant performance and efficiency gains
2. New GPU with raytracing and mesh shading, and much higher performance.
3. AV1 decode
4. New display controllers required for the new tandem OLED
5. Huge upgrade to the neural engine
I’m sure I’m missing stuff, but the M4 iPad Pro is a legitimate step up from the M2 for capabilities. Unless you fall in the camp of it being just a media consumption device
Sure, those are nice improvements, no question about it, but none of those change the iPad's fundamentals or unlock new possibilities for it: it wasn't previously compute-limited but limited by iPadOS. It's also not just my opinion but that of almost all users who reviewed the M4 iPad Pro, like MKBHD, calling it an overpriced Netflix machine.
I mean, if your question is "what new capabilities did it unlock?" then doesn't that apply to the whole CPU market? Has any of the stuff you mentioned you do fundamentally changed on desktop in the last decade-plus?
People like faster and snappier things. Along with that, the new hardware unlocks the use of tandem OLEDs, which is a big change for HDR creation and color accuracy. They go hand in hand.
A lot of people create on iPads. I used to work in film and almost all my concept art friends have shifted over to iPad. A lot of my on set friends use it for on set management of content, visualization and rushes.
The reviewers you mentioned don’t use the device that way. Would I similarly be right in taking their opinion about how niche it is to run Linux?
Like this argument you’re proposing just boils down to “it doesn’t solve my needs and therefore I can’t imagine it solving other people’s needs”
>Like this argument you’re proposing just boils down to “it doesn’t solve my needs and therefore I can’t imagine it solving other people’s needs”
I never said that. I asked what needs the M4 solves that the M3/M2 couldn't. I asked that because people keep bringing up the M4 in discussions, arguing against x86/Qualcomm chips and how they're slower than Apple's latest M4 chips, and to that I counter with the fact that for a lot of cases the M4's extra performance over x86/Qualcomm is irrelevant, since x86/Qualcomm chips solve different and a lot more diverse problems than the highly restrictive and niche problems the iPad solves.
And it's not about me, because those are not my needs; I don't compile the Linux kernel or do CAD/CAE or microbiology simulations, but to me (and to society and humanity) those are still more important than movie writers having a slightly faster iPad for drafts, since it's not like that was the reason most movies suck nowadays.
You're arguing multiple axes and this argument feels really nonsensical to me as a result.
So first of all, your entire argument hinges on YOUR belief that the iPad is just a consumption device. So you don't believe the M4 is a significant jump over the M2, even when I give you reasons that it is.
Then your argument hinges on the comparison to Qualcomm SoCs, but isn't the use of the iPad irrelevant unless you also believe it'll not make its way to other devices? Which feels unfounded.
Those are two distinct arguments that IMHO have no bearing on each other unless you also make the two assumptions that I think you're erroneously making.
> I never said that. I asked what needs the M4 solves that the M3/M2 couldn't. I asked that because people keep bringing up the M4 in discussions, arguing against x86/Qualcomm chips and how they're slower than Apple's latest M4 chips, and to that I counter with the fact that for a lot of cases the M4's extra performance over x86/Qualcomm is irrelevant, since x86/Qualcomm chips solve different and a lot more diverse problems than the highly restrictive and niche problems the iPad solves.
If you think one brand of "chips solve different and a lot more diverse problems" than another, it sounds like you don't know what "Turing machine" means.
All chips can always do what other chips can do — eventually.
M4 is faster. That's it. That's the whole selling point.
Faster at what exactly? Where can I buy these M4 chips to upgrade my PC with to make it faster as you claim? Oh, it's only shipped as part of a very locked-down tablet OS and restricted ecosystem with totally different apps than those running on the x86/generic ARM chips, which can run anything you write for them? OK, fine, but then what's the point of it being faster than those other chips if it can't run the same SW?
Like I said, you're comparing a Ferrari to a van. It's faster yes, but totally different use cases. And the world runs mostly on people driving vans/trucks, not on people driving Ferraris.
Your complaint is more like saying a people carrier is an overpriced sports car because of its inability to function as a backhoe, while ignoring evidence not only from all the people who use people carriers, but also disregarding the usage and real-world value evidence in the form of the particular company behind this people carrier managing to be wildly popular despite above-average prices for every single model.
It’s not a game changer, never meant to be. Apple updates the hardware, shows you the possibilities and charges you an arm and a leg for it. What you do with it is your own business *
Another thing is you’re comparing apples to oranges. iPads aren’t meant to be used in that way and if you want to do it anyway you have to hack your way there. You should be perfectly capable of compiling the Linux kernel on their more general purpose machine - the Mac.
Correct, and if they're only meant to be used within the restrictive limitations of the App Store, then who cares about them, other than the small market of iPadOS App Store users, most of whom don't even use the full potential of the M3/M2 on their iPads, let alone need the M4?
Chips from Intel, AMD, Nvidia, etc. are big news because they're generic compute chips, so they unlock new use cases and research that can improve or even change the world, not just run iOS apps a bit faster.
For example, do you think those Apple EEs are using iPads to design the M4 chips or X86/Mac computers?
Most of the stuff WSL 2 allows for was already available with VirtualBox and VMware Workstation, which I have been using since 2010.
The nice part is that it comes in the box.
In any case, I do agree, Linux on the Desktop will stay packaged in a VM, without the hurdles of dual boot, or lack of the hardware support OEMs will never care about.
Even on devices that ship the Linux kernel like Chromebooks, running the Crostini GNU/Linux distribution is done inside of a VM, WSL2 style, and to this day there are several issues regarding hardware support that Google never bothered to improve.
Eh, yes and no. The Windows kernel plus its backend features for security, emulation and virtualization, which enable things like WSL2 and backwards compatibility, are great, but they're hampered by a crap front end with news, ads, web search in the start menu, and dark patterns everywhere being forced down your throat, like OneDrive holding your files ransom in the cloud if you don't pay attention, or the failed push for the Recall feature with unencrypted screenshots, or Windows Explorer being slow as shit due to it now running JavaScript code for some reason.
All in all, I'm moving away from it to linux, as I don't like the direction Microsoft is taking, and learning to fix the rough edges of Linux will serve me better in my career than trying to keep up with and fight the dark pattern frog that Microsoft keeps boiling slowly.
Mutahar on YouTube did a review of a leaked copy of the Windows 11 Chinese Government edition, which is Windows 11 Enterprise with everything non-essential stripped out of it: no AI, no telemetry, no OneDrive, no ads, no news, no Edge, no media player, no web search in the start menu, no Defender, nothing, just the kernel, drivers, window manager and Explorer, that's it, kind of like lightweight Linux distros. If Microsoft would sell that to us consumers I would buy it in a heartbeat. But no, we get the adware and spyware version.
I have used Linux since 1995, and have used multiple UNIXes, most of the well-known commercial ones, since 1993.
Unless we are talking about installing it on a classical desktop PC tower, or some embedded device like Raspberry Pi-like boards with the OEM's custom distribution, I am not bothering with it; it goes into a desktop VM and that is it.
Too many wasted weekends throughout the years, always sorting out the same issues.
So on my main PC I have Windows 11 and NixOS. I don't remember the last time I've rebooted to linux. Simply because, thanks to home-manager, my terminal experience is identical between nixos in WSL2 and nixos on bare metal.
But yeah, Windows 11 was a bit of a downgrade compared to Windows 10 in terms of how much needless shit has been added to it.
> learning to fix the rough edges of Linux will serve me better in my career than trying to keep up with and fight the dark pattern frog that Microsoft keeps boiling slowly.
Yes, but pretty much everything desktop related is completely useless in non-desktop use-cases. I'd rather go back to macOS.
"fun story": couple weeks ago I had to do something that was not possible in WSL2 due to how esp32 devboard I had worked. I've decided to add linux vm to my freebsd server, and passthrough usb controller that conveniently was plugged in (unused). Simply because I knew that the moment I reboot to linux I might get sidetracked into fixing my linux on desktop instead of working.
The idea that Apple lost their most important talent came from SemiAnalysis. It’s a saucy idea so it spread from there without much backing.
They’re a tech news blog mixed with heavy doses of dramatized conjecture.
The primary author has written multiple times that Apple has lost lots of their key talent but has never been able to back it up beyond “I keep tabs on LinkedIn”.
End of the day, tech folks like drama as much as the next person. Sites like that are the equivalent to celebrity focused tabloids.
I don't understand what prompted you to bring up the M4 in a meticulously researched deep-dive on the Oryon architecture. The original article doesn't mention it once. Is this just flame bait?
Thank you for saying it out loud. 100%. Seen a handful of very strange top comments this week that you usually wouldn't see on HN, I assume because of the holidays.
i.e. more practicing rhetoric than contributing content, and/or, leaning on rhetorical strategies to do a more traditional definition of trolling circa early 2000s Slashdot. i.e. generate tons of replies by introducing a tangent that'll generate conversation.
The article does reference Firestorm cores like those in M-series chips. It's also an obvious comparison, because Oryon is the only other desktop-class ARM core not designed by ARM the company. This chip is what Microsoft will try to use to compete with M-series chips, of which the latest is the M4. Seems like the obvious competition.
> Apple has lost some or many of their chip design talent. Not All or Most.
Did you read this somewhere? I tried to Google for it, but I cannot find anything. Or do you have an inside source?
The last that I knew/read, Johny Srouji was the key senior executive in the Apple CPU division. It looks like they continue to expand his R&D labs in Israel (according to a Wiki source).
It is no secret that some of the chip design team left over the years. From the Nuvia / Qualcomm Oryon team to a few RISC-V CPU design IP companies, many have ex-Apple design team members. And this is specific to the CPU only. There are many other moving parts in the whole SoC design, which, yes, Apple is also expanding.
Basically, some media make a big deal out of some people leaving Apple's CPU design team.
> I don't understand where this rhetoric that Apple has lost most of its chip design talent and is in trouble is coming from.
Why are you saying this? The article doesn't seem to imply that?
Apple isn't in any kind of trouble. But the gap between Apple and the competition does appear to be closing. Right now Apple's biggest advantage is they buy up all of TSMC supply for the latest node. They're always a little faster because they're always a node ahead.
Qualcomm Snapdragon X Elite is reasonably impressive from a CPU and x86 emulation perspective. The GPU hardware seems kinda sorta ok. But their GPU drivers are dog poop. Which is why they suck for Windows gaming.
I hope AMD and Nvidia start to release ARM SoC designs for laptops/desktops next year. That could get interesting fast. All hail competition!
The first 4 minute mile was revolutionary. It’s not ordinary today, but isn’t super surprising either.
Apple bet big that we hadn't hit the limits of how wide a machine can go. They created ARM64 to push this idea, as it tries to eliminate things that make this hard.
Everyone wrote off iPhone performance as mobile only and not really representative of performance on a “real” computer.
M1 changed that and set the clock ticking. Once everyone realized it was possible, they needed to adjust future plans and start work on their own copy (with companies like Nuvia having a head start in accepting the new way of things due to leadership who believed in M1 performance).
In the next few years, very wide machines will just be the way things are done and while they won’t be hyper common, they won’t be surprising either.
> I hope AMD and Nvidia start to release ARM SoC designs for laptops/desktops next year
Or AMD / Intel could just make more power efficient x86 core? What would they gain by switching to ARM?
Also developing a competitive ARM core in less than a year is pretty much impossible, it took Qualcomm several years to catch up with ARM's cores (hence the title...). They even had to buy another company to accomplish that.
> Or AMD / Intel could just make more power efficient x86 core? What would they gain by switching to ARM?
I mixed some thoughts in rewrites. I hope Nvidia releases a laptop/desktop SoC. AMD is getting better at x86 mobile; the Steam Deck is pretty decent. I hope they keep getting better.
I'd like to see high-end integrated GPU on a SoC from AMD for laptops/desktops. That doesn't exist yet. It requires a discrete GPU and there's a kajillion issues that stem from having two GPU paths. Just give me one SoC with shared memory and a competitive GPU. I don't care if it's ARM or x86.
> Also developing a competitive ARM core in less than a year is pretty much impossible
What makes you think they'd just be starting? Nvidia has been shipping ARM cores for years. Nintendo Switch is an Nvidia Tegra.
There is a difference between designing and developing a CPU core like the Oryon core here, and Nvidia shipping an ARM SoC where the CPU design itself is from ARM, likely the Cortex-X5.
Exactly. And that was nearly 10 years ago. As with all custom ARM CPU designs, it always ends up the same way: Arm's Cortex designs will catch up and always prove to provide better performance per dollar than owning your own design team. Oryon would have been great had it been launched in 2020 or 2021, but it will be competing against the Cortex-X5 soon.
Not that I don't want Nvidia to have their own CPU design again. But judging from the current trend and trajectory, and very likely from their perspective, it doesn't make any sense. The future isn't in the CPU but in the GPU or HSA.
if you tried to say "Apple still makes the world’s most advanced and efficient chips" instead of qualifying it with "consumer" who would you be cutting out?
I think "consumer" literally means people who consume things, as opposed to people who create things. Before detouring into "content creators": people who create things are frequently engineers and scientists, whom Apple does not target.
Unfortunately, I think apple doesn't do general purpose computing. sigh.
It comes from the idea that apple didn’t advance significantly in M2 and M3, which are equally untrue, but equally pervasive.
People were absolutely sure that Apple made basically no real advancement and was just running up the TDPs, and that's where all their M2 and M3 gains came from. That was the narrative for the last 2 years. But then you look at Geekerwan and Apple is making 20-30% steps every gen, with perf/W climbing upwards every gen too. Mainstream sites just didn't want to do the diligence, plus there's a weird persistent bias against the idea of Apple being good. It's gotta just be the node… or the accelerators… or the OS…
Reminder that we sit here 3 years later and even giving AMD a node advantage (7940HS vs M2/M3 family) they’re still pulling >20W core-only single-thread power (even ignoring the x86 platform power problems!) to compete with a 5W M2 thread. And yes, you can limit it but then they lose on performance instead.
But yeah, anyway, that’s where it came from. People completely dismissed M2 and M3 as having any worthwhile advancement (even with laptops they could objectively analyze!) and were in the process of repeating this for M4 yet again. So why wouldn’t you think that three generations of stagnation indicates apple has a problem? The problem is that apple hasn’t actually stagnated - there is an epistemic closure issue and a lot of people won’t admit it or aren’t exposed to that information, because it’s being proxied through this lens of 25% of the tech community being devout apple anti-fanboys.
It's a problem with every single apple/android thread too. People will admit with the ML stuff that apple does a much better job handling PII (keeping it on-device, offering e2e encryption between devices, using anonymizing tokens when it needs to go to a service), and people intellectually understand they use the same approaches and techniques in other areas too, but suggest that maybe apple isn't quite as bad as the literal adtech company and you'll get the works. People don't want to think of themselves as fanboys but... there is a large contingent that does all the things fanboys would do and says all the things fanboys would say, and acts how fanboys would act, and nevertheless thinks they're the rock of neutrality standing between two equally-bad choices. False balance is very intellectually comforting.
I think people generally expected larger improvements between iterations. Intel and AMD continue to deliver sizeable performance and efficiency gains every 1-2 years while it feels like the Apple M-series isn't getting comparable gains. It definitely seems like Apple has suffered significant enough brain drain in recent years that they're finding it difficult to iterate on the M1.
Thank you for the Google Spreadsheets. I did my comparison in Apple Numbers; don't know why it never occurred to me I could do it in Google Spreadsheets and share it out on the web.
For M1 - M3 the IPC improvements were minimal. Absolute performance came from node and clock speed improvements. And for the M4, most of the single-core improvement actually came from floating point improvements. If we look into each test result, in terms of integer I remember last time I checked it was less than 5%. In total, M1 to M4 has less than 10% IPC improvement in terms of integer performance.
This is not to say I am disappointed or surprised like all the others. A lot of the CPU improvement came between the A12 and the M1 / A14, which was barely showcased by Apple's developer transition kit during the ARM transition. The ultra-wide design, which Apple had been perfecting for many years before it reached the M1, has reached a plateau.
I am now wondering if there could be some other Integer IPC improvement, but judging from M4 we will likely have to wait until 2026 before we see anything new in terms of CPU uArch design.
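If anyone wants to do the same decomposition themselves, a crude way is to divide score by clock; the numbers below are placeholders just to show the arithmetic, not actual M-series measurements:

    # "Perf per clock" proxy = benchmark score / peak clock. Placeholder values only.
    chips = {
        "gen N":     (1000, 3.0),  # (single-core score, peak GHz) -- made up
        "gen N + 3": (1300, 3.6),
    }

    base_score, base_clock = chips["gen N"]
    base_ppc = base_score / base_clock

    for name, (score, clock) in chips.items():
        ppc = score / clock
        print(f"{name}: score/GHz = {ppc:.0f} ({(ppc / base_ppc - 1) * 100:+.1f}% vs gen N)")

    # Here the headline score is +30% but score/GHz is only ~+8%,
    # i.e. most of the gain came from clock (and the node enabling it), not IPC.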
"Up to 20-30% faster" or "20-30% faster than <couple generation old version you might upgrade from". Comparing the "plain" M1 model vs M4 model (not pro/max as those aren't out yet, though follow nearly identical trends so far) there has been a total ~56% single core and ~59% multi-core performance uplift, or an average of ~16% per M* generation.
The M1 released in November 2020. In almost the exact same time frame, AMD went from Zen 3 -> Zen 5 (so one less generation) with a total gain of ~34% single-core and ~59% multi-core. Intel went from the 10900K to the 14900K for a total of 76% single-core and 128% multi-core gain (i.e. more than double).
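(The per-generation figures are just the compound rate implied by each total; quick sketch reusing the rounded totals above:)

    # Average per-generation rate implied by a total uplift over N generations.
    def per_gen(total_uplift, generations):
        return (1 + total_uplift) ** (1 / generations) - 1

    print(f"M1 -> M4 single-core, +56% over 3 gens:        ~{per_gen(0.56, 3):.1%}/gen")
    print(f"Zen 3 -> Zen 5 multi-core, +59% over 2 gens:    ~{per_gen(0.59, 2):.1%}/gen")
    print(f"10900K -> 14900K multi-core, +128% over 4 gens: ~{per_gen(1.28, 4):.1%}/gen")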
Two disclaimers before the conclusion: This is just from one lazy benchmark (Geekbench 6) and of the high performing model. That doesn't necessarily tell the whole story on both accounts, but it at least gives some context around general trends. E.g. passmark is going to say the differences on Intel are a little smaller, comparisons between Max and non-Max generations confuse multi core growth, and changes in low-mid market chips may follow a different slope than the high end products. Also there are other considerations like "and what about integrated graphics performance, which eats up a huge amount of die space and power budget?"
Anyways, my conclusion is that Apple is doing reasonably well with generational improvements vs the competition. Maybe not the absolute best, but not the worst. Being on top in the single core realm with a mobile passively cooled device makes equivalent gains all the more difficult but they're still doing it. Apple may be a victim of its own success in that the M1 from 2020 is still such a performance powerhouse that a 50% gain 4 generations later just isn't that interesting whereas with Intel a catchup of ~doubling multicore performance in the same time seems more impressive especially when M*'s story isn't as tantalizing on that half of things.
20-30% per year may (I'm not sure) be better than what AMD and Intel have managed since the M1 came out, but it's definitely not as good as the 40-60% per year improvements that I remember when I was a kid.
I'm anchored on those historical performance gains, so I'm naturally (and, I recognise, unreasonably) disappointed by "only" 20-30% improvements from Apple.
I've been waiting for an upgrade to my M1 and I still haven't seen one worth spending that much money on. I'd rather just sink that into upgrading my Windows tower.
CPUs got good enough for most applications a decade ago, it is hard to talk about upgrades being “worth spending money on” without specific workload info.
Single core performance went from 2300 with the M1 to 3800 with the M4. That is a huge improvement for my workflow (large TypeScript mono repositories) which is dependent a lot of single core performance (even with parallel builds, because one rarely re-builds everything, rather hot reloads.)
I picked the Ars article because they showed real-world performance like encoding. Geekbench scores are difficult to impossible to equate to real-world results. There are ways to measure it properly, but most sites seem to just do Geekbench or something else like it and call it a day. Single-core performance isn't this universal thing. What's your actual workload like? I'm all over the Ryzen x3D CPUs because they have proven massive performance improvements for things I care about like Factorio. Some site reporting "yeah, single and multi-core scores are 20% better" doesn't mean anything. 20% better at what exactly?
I will say that my M1 Mac runs Factorio like an absolute dream. There is an ARM native port now and it’s really good. It something like doubled UPS over the old Intel binary.
One of the big advantages the M chips have is the insanely fast integrated memory, since it's all right on the package. It's much closer to ultra-high-spec GPU RAM than PC RAM.
My M1 studio has 200GB/sec of memory bandwidth. Extant DDR5 modules are under 50GB/sec
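Most of that gap is just bus width and data rate; rough peak-bandwidth arithmetic below (the configurations are assumptions for illustration, not the exact specs of any particular product):

    # Theoretical peak bandwidth = transfers per second * bus width in bytes.
    def peak_gb_s(mega_transfers_per_s, bus_bits):
        return mega_transfers_per_s * 1e6 * (bus_bits / 8) / 1e9

    print(f"DDR5-4800, one 64-bit module:           ~{peak_gb_s(4800, 64):.0f} GB/s")
    print(f"DDR5-4800, dual channel (128-bit):      ~{peak_gb_s(4800, 128):.0f} GB/s")
    print(f"LPDDR5-6400 on a 256-bit bus (assumed): ~{peak_gb_s(6400, 256):.0f} GB/s")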
The X3D chips have an absolutely ridiculous amount of L3 cache, so when your workload fits largely in L3 they have far, far more memory bandwidth than any other option on the market. That's the case pretty often, especially for games, hence the workload specificity. The X3D cache system means some workloads are 50% faster and some are 20% slower, so workload really does matter.
These statements are orthogonal though, i.e. 2300 -> 3800 is still less than 20% per generation (17% per generation if you use the exact single-core numbers for the M1 vs M4 iPad). That might be meaningful for your workload, but it also means 20-30 percent per generation is quite a bit off.
That wasn't the question; they did improve tremendously over the last 4 years because they were basically not doing anything in the 4 years prior to that...
To be fair desktop Macs these days are just laptops without a screen so it's not that surprising. Of course they are significantly more power efficient but also much slower than high end AMD/Intel chips (if you don't care about heat/power usage that much like a lot of desktop users).
Also even on mobile 165H seems to be not that far from M3 e.g. 10-20% worse battery life, slightly slower single core but faster multi-core. Not ideal but considering where Intel was when M1 came out not that bad either.
Apple gives 115 W as the peak draw of the lowest-power Mac Studio model (10-core M1 Max, https://support.apple.com/en-us/10202) and 295 W for the highest-power one (M2 Ultra 24-core).
I believe 38 W is what you see under your personal "full utilization workload", but that's just not comparable to "peak workload" numbers. To be honest, the peak numbers are relatively useless anyway; you'll never hit them unless you do something akin to running a purpose-written CPU benchmark and a GPU benchmark at the same time, specifically designed to utilize all of the hardware's capabilities at once rather than do something useful with them. The idle and typical wattage numbers are much more useful and much lower.
Also keep in mind some CPU models (even of the same family) are pushed to e.g. double the wattage for a 25% multicore gain or the like. It doesn't mean the CPU family is complete shit for power efficiency it just means there was a market for a SKU which wasn't very concerned with power consumption.
All that said, the M* line definitely still has better efficiency/watt, but I concur with qwytw: the difference between Intel and Apple now vs 4 years ago is massively improved and the gap isn't anywhere near as big as you've been listing.
I am sorry I have to say this out loud, but seriously? We expect 20% per generation? When the word generation used to mean 2 years in CPU terms, because that was what Moore's law or Intel's tick-tock dictated.
Even ignoring the word generation, in the past 20 years how many times did we see a successful uArch bringing a 20% IPC improvement over the current leading IPC?
Where is that 20-30% per gen coming from?
The worst part about Apple and its media coverage is that it seems we have a whole generation of people who were never interested in CPU performance suddenly coming up with these expectations and figures.
On the contrary, _not_ expecting at least 20% per year (more for generations farther apart) is actually only a recent take for CPUs [1][2] which formed in what I like to call "the great stagnation" of the mid 2010s where AMD bombed, ARM was still a low performance mobile play, and Intel didn't have any stiff competition. After that things have started to pick up a little again now that there is innovation on said fronts [3]
I remember there was even a period when Moore's law was commonly conflated with "a doubling in performance every 2 years" (which implies people being used to an increase of at least 41% per year) instead of a doubling in transistor count. Because Intel's tick-tock model started ~5 years prior to the great stagnation, people commonly claimed it was the end of Moore's law and that's why we have tick-tocks now. Of course these days it's common knowledge that Moore's law (the actual transistor count version) has been holding steady all these years and that the performance lull wasn't related to hitting any innate technical scaling barriers.
Marketing departments are definitely going to be marketing departments but they aren't the origin of the idea CPUs can have more than minor increments in performance each year.
Are those numbers in absolute performance, inclusive of clock speed improvements, or IPC (instructions per clock)? I am inclined to think [1] and [2] are the former and not the latter.
>Finally, Snapdragon X Elite devices are too expensive. Phoenix and Meteor Lake laptops often cost less, even when equipped with more RAM and larger SSDs. Convincing consumers to pay more for lower specifications is already a tough sell. Compatibility issues make it even tougher. Qualcomm needs to work with OEMs to deliver competitive prices.
Qualcomm can start by working with their accountants and reducing the price they charge for their SoCs. Rumors indicate the Snapdragon 8 Gen 4 mobile SoC, with Oryon cores, will cost between $220-$240 USD.
From the article it seems like it clones a lot of the Apple M1 technology. So smart acquisition on Qualcomm's part to get competitive again.
Apple engineers who worked on the Apple M4 should go start another company so Qualcomm can acquire it again for $1B+. :) Or better yet, get acquired by ARM themselves.
It is likely that as this tech makes it to the smartphone market, Android phones are going to get a major speed boost. They have been so uncompetitive against Apple for a while now.
>Android phones are going to get a major speed boost. They have been so uncompetitive against Apple for a while now.
Uncompetitive? These numbers indicate otherwise while being on a previous generation TSMC 4nm node. Just imagine if they were on the same 3nm node as the A17.
According to Geekerwan (who is one of the best mobile SoC reviewers) Apple does not destroy the competition. In fact, aside from the A17 single core score (which comes at the cost of increased thermals and aggressive throttling) Apple is the one destroyed in multi-core and GPU performance. All the while being on an inferior TSMC node.
>Snapdragon 8 Gen 3 comparison with A17. Apple's multicore lead is gone, GPU lead is gone, and the single core lead hasn't been smaller in a decade and has worse thermals.
This is because the Adreno ~830 excels at mobile benchmarks and falls significantly behind in “desktop” oriented benchmarks.
You can see this borne out in all the Snapdragon X GPU benches where they share the same GPU architectures respectively.
Apple have the better GPU for more modern and demanding workloads. Qualcomm have the better GPU for more dated but still highly relevant mobile gaming needs.
Get proper Linux support and the RISC-V variant they seemed to be working on and I'll buy the laptop ;-) I have no interest in Windows and even less so on ARM.
Jim Keller is confident that Ascalon will have performance close to Zen 5 when it is finished later this year. Chips in hand could be sooner than a lot of people seem to think.
> The predictions put Tenstorrent's upcoming CPU core comfortably ahead of Intel's Sapphire Rapids (7.45 points), Nvidia's Grace (7.44 points), and AMD's Zen 4 (6.80 points). Yet, AMD's Zen 5 is projected to hit 8.84 points, making it the absolute integer performance champion in 2024 – 2025.
I absolutely do not believe it. Until independent reviewers can test the new Ascalon processor, these numbers are just fantasy.