For those of you who aren't planning to run LLMs locally but are picking the M4 Pro/Max over the regular M4 (high) with 32GB of RAM, what's your reason for doing so, other than perhaps future-proofing with the higher RAM options?
Personally, doing boring old software development and some random everyday stuff, I really can't see why I'd pick anything more powerful, but I'd love to know if I'm missing something.
Parallelisation of compilation for large codebases. For example, an M2 Max with 96GiB of RAM takes about 12 minutes to compile the Waterfox (aka Firefox) codebase from scratch.
Any reduction in that time saves a lot of my life over a long period of time.
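To put a rough number on it (only the 12-minute baseline is real; the build frequency and hypothetical speedup are assumptions):

```python
# Back-of-the-envelope for how build-time reductions add up.
# Only the 12-minute baseline comes from above; the rest are assumptions.
baseline_min = 12        # M2 Max from-scratch compile time (from the comment)
faster_min = 9           # hypothetical faster machine
builds_per_week = 10     # assumed: roughly two upstream pulls per working day
weeks_per_year = 48

saved_hours = (baseline_min - faster_min) * builds_per_week * weeks_per_year / 60
print(f"~{saved_hours:.0f} hours saved per year")  # ~24 hours
```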
Any idea what that compile time might look like for each of the M4 configurations? Also, how often do you have to compile a codebase of that size from scratch and why?
> Any idea what that compile time might look like for each of the M4 configurations?
Unfortunately not, but I'd hope to see a reduction to under 10 minutes with the M4 Pro at the very least.
> Also, how often do you have to compile a codebase of that size from scratch and why?
Very often; since Waterfox's changes are always rebased on top of Firefox, every time we pull from upstream, the build system will do a from-scratch compile.
A full release build takes about 1 hr (due to monolithic LTO and PGO, which requires 2 builds and about 15 minutes of app runtime to profile).
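In case it helps anyone picture why that's two builds: here's a minimal sketch of a generic clang PGO + LTO pipeline. It is not Waterfox's actual build setup (that all goes through mach/mozconfig); it just shows the shape of the process.

```python
# A minimal sketch of a generic two-pass PGO + LTO pipeline with clang.
# NOT Waterfox's real build system; it only illustrates why a release build
# needs two full compiles plus a profiling run in between.
import subprocess

SRC = ["main.c"]  # placeholder source list

# Pass 1: build with instrumentation so the binary records execution counts.
subprocess.run(
    ["clang", "-O2", "-flto", "-fprofile-instr-generate", *SRC, "-o", "app_instrumented"],
    check=True,
)

# Profile: run the instrumented app on a representative workload
# (the ~15 minutes of app runtime mentioned above). Writes default.profraw.
subprocess.run(["./app_instrumented"], check=True)

# Merge the raw profile into a format the compiler can consume.
subprocess.run(
    ["llvm-profdata", "merge", "-output=app.profdata", "default.profraw"],
    check=True,
)

# Pass 2: rebuild from scratch, letting the optimizer use the profile data.
subprocess.run(
    ["clang", "-O2", "-flto", "-fprofile-instr-use=app.profdata", *SRC, "-o", "app_release"],
    check=True,
)
```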
I will gladly spend $2,500 more of my employer's money (or even my own money) if it can make me 5% faster at my job. If you're paid $100,000, it's already well worth it.
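Spelled out, using only the figures above plus the assumption that the productivity gain converts straight into salary-equivalent value:

```python
# Break-even check for a pricier machine, using the numbers above.
salary = 100_000     # annual pay (from the comment)
speedup = 0.05       # 5% faster at the job
extra_cost = 2_500   # price premium for the higher-end machine

value_per_year = salary * speedup                   # $5,000 per year
payback_months = extra_cost / (value_per_year / 12)

print(f"Extra value per year: ${value_per_year:,.0f}")
print(f"Payback period: ~{payback_months:.0f} months")  # ~6 months
```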
For sure but that reduces my question to choosing a model if they all cost the same.
I'm going to order a Mac Mini for personal use (which I'd say is 50% software development) and would like to make a cost-efficient choice, given that I'd mostly do web development, data analytics and everyday stuff. Currently I'm using an M2 with 16GB and rarely ever do I feel like I need more RAM, so I'd like to think that the 32GB model would be sufficient for at least a few more years, unless I end up needing to run LLMs locally, but I don't expect that.
If you’re happy with 32gb, then be happy! For me there’s a big difference between “rarely” wanting more performance and “never” wanting more performance.
I pre-ordered a 16” M4 Max MacBook Pro with 128gb of RAM. I’m going to give 16gb of heap to my compiler, my IDE, my back-end application, and my web browser / front-end application. I’ll have plenty of RAM left over to keep all the code on my machine in RAM cache.
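Roughly budgeted (the OS/"everything else" allowance is my guess):

```python
# Rough RAM budget for the 128 GB configuration described above.
total_gb = 128
heaps_gb = {"compiler": 16, "IDE": 16, "back-end app": 16, "browser/front-end": 16}
os_and_other_gb = 16  # assumed allowance for macOS and everything else

committed = sum(heaps_gb.values()) + os_and_other_gb
print(f"Committed: {committed} GB; left for file cache: {total_gb - committed} GB")  # 80 GB / 48 GB
```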
As far as I know, there is no official statement from Apple regarding the M4 Ultra or the next Mac Studio. If it follows the same cadence as the first generation, an announcement in March would be likely, though the M-series SoCs have often had a somewhat unusual release cadence (e.g., the M4 launching on the iPad), so using past releases as a predictor is likely unreliable.
Also, as a customer, I have become somewhat annoyed by Apple's naming scheme for the M-series SoCs. Having multiple different variants within each tier (M4 low vs. high, M4 Pro low vs. high, M4 Max low vs. high) can be quite confusing and makes comparisons with past generations even more confusing. I am fully aware that every competitor in the space has a naming scheme that is equally bad or far worse, though.
I wish someone would establish a naming scheme similar to Nissan's engine designations. As an example, without knowing anything but a few simple rules, VR38DETT tells you everything about the engine: it is part of the VR line of engines, has a capacity of 3.8L, dual overhead cams, electronic fuel injection, and twin turbos. Easy to understand at a glance.
Something like that for SOCs would make things far more streamlined. I know that it isn't great for marketing though, so this will remain wishful thinking.
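To show how little you'd need to memorize, here's a toy decoder for that scheme. It's a sketch that only covers the handful of codes mentioned in this thread, not Nissan's full designation system:

```python
# Toy decoder for Nissan-style engine designations, e.g. VR38DETT or SR20DET.
# Hypothetical sketch: it only knows the codes mentioned in this thread.
import re

SUFFIXES = {
    "D": "dual overhead cams",
    "E": "electronic fuel injection",
    "T": "turbocharger",
    "TT": "twin turbochargers",
}

def decode(designation: str) -> str:
    m = re.fullmatch(r"([A-Z]{2})(\d{2})([A-Z]+)", designation)
    if not m:
        raise ValueError(f"unrecognised designation: {designation}")
    family, disp, rest = m.groups()
    parts = [f"{family} engine family", f"{int(disp) / 10:.1f}L displacement"]
    i = 0
    while i < len(rest):
        # Prefer the two-letter "TT" code over a single "T".
        code = rest[i:i + 2] if rest[i:i + 2] in SUFFIXES else rest[i]
        parts.append(SUFFIXES.get(code, f"unknown code '{code}'"))
        i += len(code)
    return ", ".join(parts)

print(decode("VR38DETT"))  # VR engine family, 3.8L, DOHC, EFI, twin turbos
print(decode("SR20DET"))   # SR engine family, 2.0L, DOHC, EFI, single turbo
```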
Lastly, I would have liked Ars to include the M1 series as well, though honestly, for many, even upgrading from that may be hard to justify. As a previous M1 Max low (24 GPU core variant) owner, I am particularly interested in how the M4 Pro high (20 GPU core variant) stacks up in comparison, especially as the lack of 64-bit atomics made some UE5 features hard to reliably implement on the M1 generation, regardless of performance.
VR38DETT is unlikely to be useful to the average consumer.
I don't know why they are using low/high in the article. It's just tiered like yesteryear's i5 vs. i7 for mobile chips; but using i5 vs. i7 was highly misleading to consumers because it would lead people to think there is some great gap between the two chips (when they were the same chip at different clock speeds).
I suspect they are using low/high in the article because there are massive differences within the respective tiers. Look at the M4 Pro: there is a variant with 8P+4E CPU cores and 16 GPU cores, as well as one with 10P+4E and 20 GPU cores. That is a very significant difference, and to compare it to i5s and i7s (or Ultra 5 and Ultra 7), remember that Intel had designations within those tiers. It was never just an i5; it was a 3570K or a 9900K, and while not perfect, that was still easier to look up than saying "the less or more powerful M4 Pro."
Essentially, these are six very different SOCs sharing only three names. If they aren't clearly separated by Apple, then the media must find some way to communicate that.
I fear, though, that rather than things becoming clearer, the path taken most notably by Nvidia, mixing and matching tiers with completely different silicon, will win out; trying to use 4080 branding for both AD103 and AD104 parts certainly didn't harm their long-term prospects.
They’re 3 SoC designs and a binned variant of each.
That's different from six very different SoCs, as your comment says. Each tier has different capabilities, i.e., the Pro is not a binned version of the Max, and the base is not a binned version of the Pro.
Yes, they are binned differently; I could have phrased that more clearly (whether bins of the same silicon should be considered different SoCs or not, I can see arguments for either, though I lean more toward the former), but that does not affect the point I made.
Let's look at Nvidia again. AD102 is used in the RTX 4090 and RTX 4090 D, as well as a less common variant of the RTX 4070 Ti Super. Each uses the same underlying silicon, just differently binned with certain parts fused off, similar to Apple.
Yet, and this was the point I made, what Apple currently does would be equivalent to Nvidia just calling all of them RTX 4090 (they are the same underlying design, after all), with reviewers and customers left to hunt down the specific core counts and differences between them.
And as mentioned, Nvidia tried something even more egregious with the 4080 12GB, though (this time) it was faced with such a backlash that it pulled the card back. Whether with Apple, Qualcomm, Intel, Nvidia or AMD, every time these practices aren't pointed out by the media, we get closer to a world where a 4080 12GB will be pushed onto consumers who assume launch-day reviews of the "proper" 4080 show equivalent performance.
Personally, I'm partial to the SR20DET; that hatch was hot. And again, you don't have to look anything up to know everything from capacity to fuel injection and turbo count.
I don't pay attention to supply-chain rumors, but a podcast that I listen to confirmed that all the expectations from leaks were spot on about this past week's announcements. Those same rumors apparently suggest that all Macs will be on M4s by sometime in 2025. It stands to reason that an Ultra chip will arrive next year.