If you look at the specs of the regular, Pro and Max chips on the M1 and M3 generations, it's easier to see M1 Pro as a sort of "Max Lite" chip. You got identical CPU core counts on both the Pro and Max variants in the M1 and M2 generations, but that's no longer true with the M3.
Apple seems to have realized that people with only CPU-heavy workloads won't buy the Max variant, so they've weakened the Pro line in the M3 generation to push more people toward the Max.
Yeah, the changes they made to the M3 Pro were all about making it a less attractive option for people with CPU-heavy workloads who didn’t need/want the GPU horsepower. Unfortunately, it seems that in order to chase this product stratification goal, they actually made the newest generation chip perform worse than the previous generations.
I think it “makes sense” already. The previous generations of Pro were too tempting and Apple probably feels that hobbling them will push people to spend more on Max chips. I also bet the new Pro chips are less costly (node issues aside) since they have many fewer P cores. It’s just sad that the new M3 Pro is a sidegrade at best. I’m hoping M4 shows an actual performance increase for Pro chips now that they’re (hopefully) done cutting the P cores.
The core counts listed don't make sense: for M1 Pro they're quoting the fully-enabled 8p2e config but for M2 Pro and M3 Pro they're quoting the cut-down configs (6p4e and 5p6e) that are only available in the 14" MacBook Pro (and Mac Mini, for M2 Pro). For context, the M1 Pro's cut-down config was 6p2e.
At best, none of these conclusions apply to the 16" MacBook Pro, and they're the wrong comparisons to make for the 14".
Yeah, so? The M2 Pro and Max also had the same CPU config for their respective fully-enabled configurations. But I don't see how the degree of similarity between Pro and Max chips is relevant to this failed attempt to compare three generations of just the Pro chips.
I was going to trade in my M1 Pro for an M3 Pro (I love the black), but I'm seriously getting second thoughts. I guess you can make a product that's just too good, after all.
I jumped on the M1 the moment it came out to ditch the god-awful Intel MacBooks. Now I don't see the need to upgrade for a while; the gain is minimal for most use cases.
Despite its 'anti-fingerprint' anodization, the black still shows smudges more than silver/space gray, FWIW. Do you really need to upgrade a ~2-year-old laptop?
> but the efficiency cores are slower in the M3 vs the M1,
I think that's only true when comparing M3 Pro vs M1 Pro (rather than the base M chips that each have 4 efficiency cores), and only when running threads with a background-priority QoS setting. Testing like [1] has shown that macOS runs background threads on M1 Pro/Max efficiency cores at higher clock speeds to compensate for only having two of those efficiency cores compared to 4-6 for all the other chips. But when normal-priority threads spill onto efficiency cores because the performance cores are full, the efficiency cores run at full speed on all chips.
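For anyone wondering what that looks like from the application side: QoS is essentially the only placement hint an app can give, since macOS has no public thread-affinity API. A minimal Swift sketch (the dummy workload and queue choices are mine, not from the linked test):

```swift
import Foundation

// Placeholder workload, purely illustrative.
func dummyDSPWork() {
    var acc = 0.0
    for i in 1...5_000_000 { acc += sin(Double(i)) }
    _ = acc
}

let group = DispatchGroup()

// Background QoS: the scheduler steers this onto the E-cores
// (reportedly at raised clocks on M1 Pro/Max to compensate for
// there being only two of them).
DispatchQueue.global(qos: .background).async(group: group) {
    dummyDSPWork()
}

// Normal-priority QoS: fills the P-cores first and only spills
// onto the E-cores (running at full speed) once the P-cores are busy.
DispatchQueue.global(qos: .userInitiated).async(group: group) {
    dummyDSPWork()
}

group.wait()
```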
I had to buy a new computer when I fled my country. I compared the M1 and M3 MacBook Pros, and it made zero sense to get the M3 right now: the extra money just isn't worth the slight performance gain. The M1 Pro is a beast.
> We can only speculate why the cores are used differently depending on the DAW. Since even Apple’s own DAW Logic performs worse with M3 Pro, the question arises as to whether it is due to the software itself or the operating system. However, this test is a clear indication for anyone who is still considering Apple’s new systems to take a closer look when making a purchase.
I didn't get an answer to the main question I had (sounds like they don't have one yet): is this because the DAW software has been optimized for M1 but not M3, and once they update it these scores will change?
He is testing by running many, many tracks at once -- different DAWs have very different performance profiles when running more than 50 tracks... and some DAWs (I'm looking at you, Ableton) have utterly bizarre performance, CPU-wise, that most of the time doesn't correlate with the CPU's capability so much as with their really old and idiosyncratic code bases.
I've no need to upgrade from the M1. Apple really needs to push out their own discrete GPUs as an upsell, which will eventually happen if they're serious about AR/VR.
Surprising, but you really don't even need an M1 for those programs. You want fast external SSDs, though. I use the last Intel Mac mini for audio things, and it's fine.
Only if you're mixing audio tracks with minimal processing or using lots of sample libraries. Modern virtual analog synthesizers or guitar amp emulations like the Archetype Nolly mentioned in the article use a ton of CPU and require almost no disk I/O.
For example, he mentions FL Studio can handle 70 tracks of Nolly. The latest FL Studio version's tutorial project has 98 tracks and can run on an HDD if your CPU is fast enough. The effects being used are lighter than the Nolly guitar amp effect, but they're still more CPU-heavy than I/O-heavy.
Oh, interesting. I'd probably never use something like Nolly. So, out of my range. I do use the synth stuff, and I can get the Intel mini to like 50% with some effort, but it's never been a real problem.
The SSD matters for picking the sounds, in my experience. So you're going through listening to a bunch of drum hits, for example. If the drive is slow, this stalls.
Plus, isn't he simultaneously playing 70+ tracks with a CPU-intensive filter on each one? That's not a realistic workload.
> Plus, isn't he simultaneously playing 70+ tracks with a CPU-intensive filter on each one? That's not a realistic workload.
I linked this in another comment, but here's a real video of a Skrillex single and it has 180 tracks loaded with plugins.
He mostly uses virtual synthesizers, but you'll notice the track is mostly composed of audio clips rather than MIDI driving a live synthesizer, because he does the synthesis in a different project and renders the audio out: it would be impossible for a project like this to run in realtime with all the synthesizers being calculated.
M3 Ultra will be a monster. M1 Ultra is a 16/4 chip with 48 or 64 GPU cores. Based on the M3 Max, the M3 Ultra will be a 24/8 CPU with (up to, depending on binning) 80 GPU cores. Each P core is up to 30% faster, each GPU core up to 60% faster (even more for ray tracing). Overall we're probably looking at nearly double the multicore CPU performance (M1 Ultra is at ~18k on GB6, M3 Max is ~21k, so M3 Ultra could be as high as ~40k, depending on how well it scales).
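Back-of-envelope version of that estimate, with the scores and the scaling factor as stated assumptions rather than measurements:

```swift
// Assumed Geekbench 6 multicore scores (from the comment above, not measured):
let m1UltraGB6 = 18_000.0
let m3MaxGB6 = 21_000.0
// Assumed UltraFusion multicore scaling factor for doubling the die.
let scaling = 0.95

let m3UltraEstimate = m3MaxGB6 * 2 * scaling
print(m3UltraEstimate)  // 39900.0 -- i.e. "as high as ~40k",
                        // a bit over double the M1 Ultra's ~18k
```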
If only my M1 Max's speakers didn't randomly crackle whenever AU plug-ins consumed even as little as 20% of the CPU. Thanks, Core Audio! It's a bizarre state of affairs when I'm forced to swap to my Windows 10 machine and MOTU interface with ASIO to make music without becoming absolutely infuriated.
The methodology used in these tests is utter nonsense.
Yes, there is something to be learned from his results, but at the very least 50% of it has to do with DAW internals and how each DAW uses the CPU, rather than the CPUs themselves.
He used large numbers of tracks to test, which isn't a valid measure of CPU optimization across different DAW platforms. There are widely varying CPU overhead issues among the different DAWs. Running 300 tracks in Ableton is going to push on a bunch of different bottlenecks and inefficiencies... not just the CPU.
Clickbait. I'm not saying that the M3 Pro is better or worse, or trying to defend Apple; I'm just pointing out, as someone with a lot of experience in DAW-land, that this testing methodology is bunk both as a CPU test and as a DAW performance test.
Showing that the efficiency cores aren't fully utilized by the DAWs is appropriate for the audience. It shows less specialized users that they'll need to examine their tools and their use of efficiency cores to determine whether an upgrade is worthwhile.
I'm not sure if it's quite accurate to say that the efficiency cores aren't fully utilized; it looks more like some of these apps are second-guessing the OS and going out of their way to avoid using the efficiency cores (probably by spawning only one thread per P core, since macOS doesn't let apps directly control thread affinity).
Most of the time, that is probably the wrong approach for an app to take, but maybe DAWs have latency requirements that can't be met when E cores are in the mix. But that's something the DAW vendors should have to document and justify, because it's just as likely that they're making assumptions about E cores that don't hold up over time across the full range of chips.
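As a point of reference, here's roughly how an app could size a one-thread-per-P-core pool. The `hw.perflevel` sysctl keys are real on Apple silicon (macOS 12+), but the pool-sizing logic is my speculation about what these DAWs might be doing:

```swift
import Foundation

// Reads an integer sysctl by name; returns nil if the key is missing.
func sysctlInt(_ name: String) -> Int? {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname(name, &value, &size, nil, 0) == 0 else { return nil }
    return Int(value)
}

// macOS 12+ exposes the core split on Apple silicon:
let pCores = sysctlInt("hw.perflevel0.logicalcpu") ?? 0  // performance cores
let eCores = sysctlInt("hw.perflevel1.logicalcpu") ?? 0  // efficiency cores

// A DAW that sizes its audio worker pool like this would show 0% E-core
// utilization -- fine on an 8P+2E M1 Pro, but it idles half of a 6P+6E M3 Pro.
let workerCount = pCores
print("P: \(pCores), E: \(eCores), workers: \(workerCount)")
```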
> I'm not sure if it's quite accurate to say that the efficiency cores aren't fully utilized
The graphs in the source video[0] display 0% utilization on E-cores for some apps.
Given that it only occurs for some DAWs, it's very likely the wrong approach: 2/3 of the apps that do use the E-cores show better performance characteristics than the ones that don't.
Testing how a DAW handles 300 identical, low-CPU-consumption tracks is not at all appropriate for a music CPU workload assessment. No one except Charlie Clauser or Hollywood cinema post-production uses projects with 300+ tracks -- it is a really idiosyncratic test that exposes the optimization liabilities of the DAW rather than the CPU.
A better test would be to set up an intensive, real-world 20-track project, assess utilization and overhead, and then find a real-world way to increment that stress upward to the breaking point (see the sketch below).
Most computer music forum wonks use some kind of test based on softsynths or reverbs, or a combination, stacked high on a smaller number of tracks.
I mean... 300 tracks? You're basically testing the DAW's thread management at that point, not the CPU as it would be used.
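To make the "increment stress upward to the breaking point" idea concrete, here's a toy Swift harness; the biquad coefficients, buffer size, and single-threaded processing loop are all my assumptions, not a model of how any real DAW works:

```swift
import Foundation

let sampleRate = 48_000.0
let bufferFrames = 512
// A real-time audio callback must finish within the buffer duration.
let budget = Double(bufferFrames) / sampleRate   // ~10.7 ms

// One "track" of work: a fixed low-pass biquad over the buffer.
func processTrack(_ buf: inout [Float]) {
    var x1: Float = 0, x2: Float = 0, y1: Float = 0, y2: Float = 0
    let (b0, b1, b2, a1, a2): (Float, Float, Float, Float, Float) =
        (0.2929, 0.5858, 0.2929, 0.0, 0.1716)
    for i in buf.indices {
        let x = buf[i]
        let y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2 = x1; x1 = x
        y2 = y1; y1 = y
        buf[i] = y
    }
}

var tracks = 8
while true {
    var buffers = Array(repeating: [Float](repeating: 0.5, count: bufferFrames),
                        count: tracks)
    let start = DispatchTime.now()
    for t in 0..<tracks { processTrack(&buffers[t]) }   // one "callback"
    let elapsed = Double(DispatchTime.now().uptimeNanoseconds
                         - start.uptimeNanoseconds) / 1e9
    if elapsed > budget {
        print("Breaking point: \(tracks) tracks " +
              "(\(elapsed * 1000) ms > \(budget * 1000) ms budget)")
        break
    }
    tracks += 8   // ramp the stress upward
}
```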
> I mean... 300 tracks? You're basically testing the DAW's thread management at that point, not the CPU as it would be used.
Any realistic test is going to have more tracks than there are CPU cores, so thread management would seem to be an unavoidable aspect of the test. The test results were in the 60–100 track range, not 300, so not really as excessive as you are claiming. And while testing with too many tracks could certainly be problematic for introducing excessive context switching overhead, the problematic behavior that was actually revealed is much more serious and less subtle and applies to any scenario with more than 5–8 threads.