Great investigation and data. While the M3 Pro looked pretty disappointing at first because of the loss in P core count, it looks like huge strides have been made in E core performance, so it's not the downgrade one might think.
I own a 16" M1 Pro for work and a 16" M3 Pro for play, and wow I can really tell a difference in both performance and battery life. It's a nice upgrade.
I just ordered an M3 Max MacBook Pro; I currently own an M1 MacBook Air. The one thing I found the Air lacking for is running a VM with Windows 11, which I need for work. All those translation and virtualization layers add up, and the CPU temp sometimes rises to 95 °C, visibly slowing things down. I decided I need a laptop with active cooling.
If you have an M1 Max, and a fan, then there's probably no point in upgrading.
Apple M1 does not support hardware assisted nested virtualization. If someone was trying to run WSL from Windows, that would be horribly inefficient on M1 vs M3.
Is there actually a way to use nested virtualization on the M2 or M3 yet? As far as I can tell, both Parallels and VMware still say they don't have that feature.
No, it needs hardware support for that, namely another level of virtualized MMU, which Apple didn’t build in. That might arrive only with the M4 generation.
> will cause the CPU to pause the guest hypervisor, and let the host hypervisor running at EL2 decide whether and how to proceed. That is a big part in enabling nested virtualisation. There are other details, for example related to memory and interrupt management.
And they don’t elaborate on whether these "further details" were implemented or not; I would guess they’re not implemented, or else Apple would have mentioned it at a WWDC workshop. I might be wrong here though.
There is always Windows on some VDI platform like Win365, Amazon, Citrix, etc., so you're not tying up your device's resources. The downside is it's absolutely not available offline and can carry a monthly expense.
The P core drop is apparently kind of a downgrade for audio production. Some DAWs only use the performance cores and Logic is one of them. This means you can utilize more tracks with an M1 Pro.
You can set the number of cores utilized by default in the settings, yes. But Logic will only fully utilize the performance cores. I made a mistake and overstated earlier by saying it would only use the performance cores. The conclusion isn't different, though: the M1 Pro will currently handle more tracks.
Is that possible? You can set priority to background and your process will be restricted to E cores, but last I checked other processes will favor P cores but will be spilled to E if everything is loaded. I also don’t remember an API to query the core layout (to only spawn as many workers as there are P cores).
IIRC there are sysctls that list the number of cores of each type, and their cache sizes. So an application can check how many P cores there are, and choose to only spawn that many threads.
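A minimal sketch of that check, assuming the `hw.perflevel*` sysctl names Apple exposes on M-series Macs (perflevel0 being the performance cluster and perflevel1 the efficiency cluster are assumptions here):

```python
import subprocess

# Sketch: query P and E core counts on Apple silicon via sysctl.
# The hw.perflevel* key names are assumptions: perflevel0 = P cluster,
# perflevel1 = E cluster on M-series Macs.
def core_counts(raw=None):
    """Return (p_cores, e_cores). `raw` lets callers inject sample sysctl
    output for testing; by default the real sysctl is queried (macOS only)."""
    if raw is None:
        raw = subprocess.run(
            ["sysctl", "hw.perflevel0.logicalcpu", "hw.perflevel1.logicalcpu"],
            capture_output=True, text=True, check=True,
        ).stdout
    counts = {}
    for line in raw.splitlines():
        key, _, value = line.partition(":")
        counts[key.strip()] = int(value)
    return counts["hw.perflevel0.logicalcpu"], counts["hw.perflevel1.logicalcpu"]

# Example: on an M1 Pro (8P + 2E) sysctl would print something like this:
sample = "hw.perflevel0.logicalcpu: 8\nhw.perflevel1.logicalcpu: 2"
print(core_counts(sample))  # (8, 2)
```

An app could then spawn exactly `p_cores` worker threads, as described above.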
Yeah, but it just means that software needs to tweak some parameters on M3 to slow spilling to E cores. In fact, a good system would always spill to E cores and just let completion rates drive acceptance of work stealing.
Battery life must be hard to compare, because the battery on the M1 Pro has presumably degraded by about 10% by now, which is quite a significant difference in capacity compared to a brand new 16" MacBook.
I have an M1 Air, I got it new and use it heavily. I see "battery health" is "normal" and "max capacity" is 90%, but I don't see a count of charge cycles.
You can check the cycle count using the System Information app in the power section. Option-click on the Apple logo is a convenient shortcut to open System Information.
There is also the third-party Coconut Battery app that gives you other nice statistics and keeps track of battery health if you open it occasionally.
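For the terminal-inclined, here is a hedged sketch that pulls the cycle count straight from the IOKit registry via `ioreg` (the `CycleCount` key name is an assumption based on the AppleSmartBattery registry entry):

```python
import re
import subprocess

# Sketch: read the battery cycle count from the IOKit registry on macOS.
# The "CycleCount" key is assumed from the AppleSmartBattery entry.
def cycle_count(raw=None):
    """`raw` lets callers inject sample ioreg output for testing;
    by default the real registry is queried (macOS only)."""
    if raw is None:
        raw = subprocess.run(
            ["ioreg", "-r", "-c", "AppleSmartBattery"],
            capture_output=True, text=True, check=True,
        ).stdout
    match = re.search(r'"CycleCount"\s*=\s*(\d+)', raw)
    return int(match.group(1)) if match else None

print(cycle_count('"CycleCount" = 134'))  # 134
```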
I can't recommend Al Dente enough. Apparently I'm only at 134 cycles with 92% capacity*. My M1 Max is 26 months old. I have used a work computer for large portions of time, obviously.
* TBH the capacity seems a bit low for my cycle count. It's possible I've let it sit around 30% for too long at points. Such is the scourge of living in San Diego with the highest kWh rates in the country and only trying to charge at night or on weekends. I cry inside when I see people quoting 12¢/kWh. Try 50¢-$1.20 depending on time of day/year.
The built-in option is a step in the right direction, but doesn't provide any way for a user to specify that the battery doesn't need to be charged to 100% if the OS thinks it needs to be. In my experience, it usually ends up charging the battery 100% every night, then letting it discharge to 80% during the day.
I’ve had it never charge, and sit reliably at 80% for weeks. Then when I unplugged it the schedule was ruined and it reverted to charging at night. Overcomplicated for no reason lol
I have the original M1 Air that I got the day it was released in 2020. In a typical week, I will let my battery discharge to less than 10% twice and recharge it to 100%. I've logged 458 cycles and lost 11% of my capacity. Not too bad.
How old was the Thinkpad? I used to expect a laptop battery to be borderline useless after 4 years or so, but my 2016 MBP is coming up on 8, and the battery is doing surprisingly okay. It reports 70% capacity, and that feels about right.
I can't say exactly, but it degraded remarkably quickly. Maybe someone else can chime in with a different experience, but I doubt it would be anywhere near as good as what I read from MacBook users, especially M-series.
I got the ThinkPad brand new, and it was a relatively recent model.
Most recent MacBooks should last for 1000 full cycles, though I'm pretty sure you can brick your battery quickly by keeping it at 100% charge all the time or by using it in a very hot environment.
- E cores can go faster, to reduce the need to move a workload to a P core
- P cores go faster overall, as expected for a new chip
Left untested is the real new feature in the M3 chips, dynamic memory resizing.
In earlier M chips, memory allocations were fixed per app, and now they can change. Net result should be slightly more overhead for dynamic indirection, but better usage of available memory (i.e., fewer stories about how many apps you have open when you compile). This is where user-level scenarios might be more illuminating than micro-benchmarks. Whether this stops people complaining about paying for more memory remains to be seen.
It's a real achievement that we're back to paying $2-3K for high-end machines for serious work.
> In earlier M chips, memory allocations were fixed per app, and now they can change.
This isn’t correct. You’re confusing the CPU and very specific GPU behaviour.
Memory allocations on the CPU side have always been dynamic. Memory allocations on the GPU are also semi-dynamic even on earlier GPU generations.
What is new is the ability to:
1. Dynamically adjust GPU memory caches to different purposes, whereas every other GPU (not just Apple) will have specific caches for different GPU functionality.
2. Dynamically adjust register usage on the GPU cores, whereas other GPUs will use the maximal number of registers a shader might need even if they go unused in a branch.
I think the memory reallocation feature you're talking about is a GPU feature, not anything to do with the CPU cores. And probably not anything to do with DRAM at all but rather caches or registers within the GPU.
I'm really hung up on why Apple won't do what Intel has done with their freaky Intel-Radeon NUC or new Xeon chips, which is to put some high-bandwidth RAM on-package and allow expansion to DDR5 as a slower RAM tier.
We are stuck on Apple with apps like Chrome that want 8GB-plus, which then want to GPU-accelerate, eating more of the RAM. I don't get how you can buy anything configured like this with less than 32GB of memory and expect long life out of it.
This is one way to see things. The way I see it is that M3 performance cores are faster in some situations, but not that much faster. The efficiency cores are almost as fast as M1 cores, but the conditions to get there are specific.
Since they actually removed performance cores, the overall real-world performance is not very enticing. It is somewhat better in some cases but mostly equivalent and sometimes even worse, which is just ridiculous considering the price increase.
All this is happening while they significantly increased frequency across the board and benefit from a state-of-the-art fab process. The M3 E cores run at 85% of the frequency of the M1 P cores; it is absolutely nothing special that they deliver about 85% of their performance.
Similarly, the M3 P cores run at 20% higher frequency, so their "performance increase" isn't really something to write home about.
In fact, what we can see is that already at the third generation, their chips draw enough power at such density that they have become challenging to cool. Plenty of M3 users are complaining about heat and noise, just like in the Intel days.
Their absurd focus on efficiency cores also makes their desktop computers even less valuable than they had already become under Apple silicon. They were already overpriced, but you are now paying even more for dumb, compromised efficiency cores that have no reason to exist on a computer plugged into the wall.
Their GPUs are still stupidly underpowered for the asking price, and their architecture is different enough that porting stuff is more trouble than it's worth for most developers (as we have recently seen with the cancellation of various software on macOS).
Meanwhile, Intel just released their first try at an efficient mobile chip, and not only are they pretty close, but thanks to their GPU investment during COVID they even managed to make entry-level chips that make low-end Apple silicon SKUs look like a pretty bad deal, especially if you are even remotely interested in gaming.
There are laptops now shipping at the asking price of a base MacBook Air with not only twice the memory/storage but also double the gaming performance (and every other workload close to that) at close enough efficiency and a very similar form factor.
It seems that we are now in the post-Apple-silicon hype era, where the situation feels a lot like the PowerPC days. They will pretend they are faster but in most cases do much worse at twice the price.
Apple needs to revise their commercial strategy or they are in trouble...
I think some cold water on the hype is good; I just feel this is a bit too negative. It seems that there are many areas where the M3 is doing a good job of increasing both performance in general and performance per watt:
https://www.notebookcheck.net/Apple-M3-SoC-analyzed-Increase...
I also haven't gotten the same impression as you with regard to temperatures. Mostly, people seem to think it's very good, but a bit harder to cool the 14" versus the 16", and this seems to be backed up by actual battery life when unplugged.
Example: https://youtu.be/E9IJ7nOAZyM?si=xI6vR2Y8f0sr8dgo&t=743
I know that you didn't say that it's twice as bad as its competitors when it comes to performance and temperature (you were making an analogy to the PowerPC), but right now it actually seems to be the other way around: almost every other hardware architecture uses twice as much power for the same performance compared to Apple silicon (on PC at least; it seems a lot closer on mobile, where Qualcomm exists).
I think you can choose to view the GPU pricing from many angles. For instance, the value in relation to tokens per second in large language model inference seems to be high on the M3 compared to the equivalent hardware needed to run it using any Nvidia GPUs. Unless, of course, you also think that Nvidia GPUs are stupidly overpriced.
Am I understanding you correctly that you feel the jump from M2 to M3 is not bringing much to the table? Or do you think that the M3 is overpraised compared to other CPU/GPU architectures? Or both? I would love it if you have some links to people that review it with a bit more critical eye on performance per watt and such, as I don't think an assertion that "People are complaining about temperature" can be refuted, so it might be better to focus on actual measurements and temperatures compared to previous generations or computer models.
I wouldn't be surprised if E cores running at their max frequency had worse performance per watt than P cores running normally, so such ramp up before use of P cores wouldn't make sense.
The M series MacBooks are amazing machines. I don't want to take away from them at all, but I recently realized I can get the same power that a $3K MacBook Pro has for about $1K when I build my own PC (excluding the screen), and with full upgradeability, making for much better bang for the buck and illustrating how ridiculous MacBook prices are. I mean, they charge $400 for an extra terabyte of storage, when I can get around 6TB for the same price.
But, unfortunately you can't build your own notebooks that would remain compact and decent, so now I just use a cheap laptop as a thin client that remotely connects to my PC.
What you pay for is the form factor, the silence, the nice and cool temperatures, the battery life, and the ability to actually use the damn thing without being plugged in. An added bonus as a dev and a creator is the unified memory, hardware decoding for modern codecs like H.265 (something that is weirdly missing from Nvidia GPUs), and of course the beautiful screen. If this stuff doesn't matter to you, then I agree it's a waste of money.
I recently bought my first MacBook ever (M3 Max) and all I can say is that this machine is liberating. I've been jumping from PC to PC every couple of years, from a Dell XPS to a Razer Blade (a thermonuclear piece of shit) to a customized 10kg Clevo brick that almost took off from the table running too many tabs in Chrome. All I can say is that these new M chip machines are on another level, truly amazing.
But at least it runs fast enough, and it's portable too.
To run the same llm on other laptops is basically impossible without an external graphics card, or you have to give up the laptop for a desktop. And that still sounds like an airplane taking off.
But that's an argumentum ad absurdum considering the price. Yes, it can do that, but you can also do that much better with a cheaper combo that gets you more actual hardware...
Which is exactly what the OP is saying. And I agree, laptops are generally poor value no matter the brand/side. But you can mitigate that by buying a good enough not too expensive notebook and associate a "real" desktop computer with it.
At current Apple prices, the laptop convenience factor over an almost equivalent workstation is not really worth it for the vast majority of people (at least those who have to pay with their own money...).
Fair point. My Zephyrus G14 4090 64+16GB has the same issues: it can run smaller LLaMA 2 models much faster than a 128GB M3 Max (3x), but the 70B one is much slower due to the CPU doing half of the layers (10x). So in the end I ended up with both.
My Clevo’s cooling fan is also super loud, and it ramps up too frequently, e.g. with many open tabs or when playing video and lightweight games. Unfortunately I forgot to look into the noise levels when buying this most recent laptop.
Oddly I haven't yet needed to work offline in my 12 year career as a software dev. But then again it's not common for internet to go away where I live.
Indeed, the only time I'd use my laptop is because I'm offline or out and about. There's no need to use a laptop if I'm at home, since, like you, I have a desktop.
It just makes sense. You can build a desktop plus a cheap laptop for half the price. I don't have to worry about how what I'm doing affects battery life, because the laptop just acts as a thin client. It's cheap to replace if it gets lost/stolen/broken. It's cheaper to upgrade as needed, and I don't have to replace the whole machine when I do.
I get all the portability of a laptop and all the power of a desktop, and money to spare.
For people wanting to do LLM work at home, a fully loaded MacBook has more memory available to it than anything else a consumer can buy. All in a laptop form factor (versus Nvidia GPUs).
A small market, sure, but one they have on lock down.
Not many people use compute 24/7, so if your use cases can be ephemeral, then spot/savings plans can yield good savings, and using the cloud allows for opex rather than capex controls and taxes.
It will be interesting to see how much further it will be possible for Apple to push their platform.
I think the Jobs/Ive legacy of pushing thermals ahead of raw compute has given them a massive head start in the LLM age, for both notebooks (obviously) and desktops (less obviously).
With the size of a normal 4090 heat sink the ATX platform is showing its age. The biggest element of any system is a series of huge heat sinks. The graphics card ends up being massive. You have support struts coming out. Giant power supplies. Bespoke connectors. Paying for RAM twice.
Apple have a form factor people are comfortable with and haven’t set the bar too high with raw performance. They aren’t hindered by existing designs. The thermals are good.
The integrated GPUs Intel and AMD are shipping don't really have enough performance to be worth the trouble, and the configs that support 100+ GB of RAM and have integrated GPUs don't have much memory bandwidth (running at speeds like 6400MT/s is a lot harder with multiple ranks of memory).
Intel's Arc A770M is a discrete GPU with only 16GB of DRAM. It is not to be confused with the "UHD Graphics 770" found in their top desktop processors. The discrete graphics is on the order of 16x larger and faster than the desktop integrated graphics, and the latter is the only one that can plausibly be said to have access to 100+ GB of RAM.
Tflops aren't a reliable way to measure GPU performance anymore. For example, Nvidia's Tflop numbers for their new GPUs are 4-5x those of the 2080 Ti, but in actual performance it's only about 2x.
Normal DRAM (i.e. not GDDR) is far too slow to facilitate this, you're basically limited by the CPU's cache if you want to reach a level of performance even remotely comparable to a GPU that has GDDR or HBM.
GDDR isn't as special as you think, and it's special in exactly the opposite way that HBM is special. HBM achieves high bandwidth by using extremely large bus widths running at fairly slow speeds per pin. GDDR achieves high bandwidth by running at very high speeds per pin and using medium to large bus widths.
Desktop CPUs using DDR5 have low memory bandwidth because they have narrow buses (128 bits total) at low speeds. Server CPUs use the same memory at the same per-pin speeds but with much larger bus widths to achieve total bandwidth comparable to a mid-range GPU, while supporting orders of magnitude more memory capacity. Apple's high-end chips likewise use significantly larger bus widths than desktop CPUs, combined with slightly higher per-pin speeds (LPDDR rather than DDR), to also reach GPU-like total bandwidth.
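The arithmetic behind these comparisons is simple: peak bandwidth is bus width times transfer rate. A quick sketch (the width and speed figures below are illustrative round numbers, not exact product specs):

```python
# Peak theoretical DRAM bandwidth = bus width (in bytes) * transfer rate.
def peak_bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Return peak bandwidth in GB/s for a `bus_bits`-wide bus running
    at `mt_per_s` megatransfers per second."""
    return bus_bits / 8 * mt_per_s / 1000

# Illustrative configurations (approximate, for comparison only):
print(peak_bandwidth_gbs(128, 6400))   # dual-channel desktop DDR5: 102.4 GB/s
print(peak_bandwidth_gbs(512, 6400))   # Max-class LPDDR5 bus: 409.6 GB/s
print(peak_bandwidth_gbs(384, 21000))  # GDDR6X on a 384-bit card: 1008.0 GB/s
```

The same formula explains why server CPUs with 8-12 channels land in GPU territory on bandwidth despite using ordinary DDR5 per-pin speeds.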
I don't think it's accurate to single out any one of the above technologies as being the "normal" DRAM. GDDR, LPDDR, and DDR are all fairly different from each other and all quite mainstream.
GDDR is "special" because it's far closer to the GPU die. System memory isn't that close to the CPU, so any integrated GPU won't be able to make as good use of it as a dedicated GPU with co-located VRAM.
> GDDR is "special" because it's far closer to the GPU die.
LPDDR is always at least as close to the processor, and is often in-package so significantly closer. HBM is always in-package.
But all of that is a red herring, because the distance is only loosely correlated with performance and is a completely irrelevant metric if you've already accounted for bus width and clock rate.
It would be pretty weird to ignore the existence of smartphones when discussing LPDDR, since they probably account for most of the volume of LPDDR sold. Lots of smartphone SoCs, all of Apple's laptop SoCs, and that one Intel custom job for Asus last year have LPDDR in-package, while the rest of the x86 systems with LPDDR have it on the motherboard at similar distances from the processor as GDDR.
Doesn't fit in a backpack. Can't take it to a coffee shop. Heck it involves building an entire custom PC around it.
Sure the m3 with 128GB of RAM costs $7k, which is an absurd number, but that will run high quality LLMs that can have sci-fi level intelligent conversations.
Not a bad price for a laptop capable of doing things that was literally "haha that'll never happen" levels of unimaginable 5 years ago.
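A back-of-the-envelope check on why the RAM matters here: LLM weight storage scales with parameter count times precision (the 70B model size is just an example):

```python
# Approximate memory needed just for model weights, in GB:
# parameters (in billions) * bits per weight / 8 bits per byte.
def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * bits_per_weight / 8

print(weights_gb(70, 16))  # fp16 70B model: 140.0 GB -- exceeds 128 GB
print(weights_gb(70, 4))   # 4-bit quantized: 35.0 GB -- fits with room to spare
```

(Actual usage is somewhat higher once you add the KV cache and activations, but weights dominate.)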
Agreed. Estimates put the H100 BOM at around $3k. If they do offer that much memory, they're going to implement some way to make it unattractive, to hold onto that sweet 800% margin [1].
I’d want a better source than someone on Twitter summarizing an unspecified report by a financial analyst before repeating that margin claim. That sounds like the people who claim an iPhone has 400% markup because they added up the bulk prices for the major components and then assumed R&D, assembly, testing, distribution, software development, and support all cost $0.
> and then assumed R&D, assembly, testing, distribution, software development, and support all cost $0.
That's literally what BOM means: bill of materials. It's not supposed to include any of those other costs. BOM is not always the right kind of cost to be talking about, but in a case like this where you're talking about just swapping out the memory on an existing product, BOM accounts for all the costs that change significantly.
Yes, now can I direct your attention to the term “margin” used in the comment I was replying to? It is common for people who are looking for the sweet upvotes from a “this company is ripping everyone off” spin to present this as if “retail sales price - BOM = UNJUST PROFIT!!!!!!”. The source is a blog post based on a tweet by one person who does not work in the industry who is repeating another unverifiable claim by someone who does not work in the industry. The term “BOM” is not mentioned anywhere in that, and none of this has any details which could be reviewed for accuracy.
The blog post really highlights how effective those headlines are at getting clicks but not informing readers: it leads with “nearly 1,000%” because that sounds even more rage-baity than the already high “823%”, and that's clearly the impression people take away from the story, even though starting with the second paragraph it makes clear that this is a very fuzzy claim which is likely significantly understating the true cost for Nvidia to make the hardware as sold, or the other factors I also mentioned.
Finally, “swapping out the memory on an existing product” can mean “we replaced part A with larger part B” or it can mean “we upgraded a controller, added additional channels, and redesigned the physical form factor to provide better mechanical and thermal capacity for the extra chips”. That's why those extra details are so important since it's otherwise easy for someone to make a tweet which sounds like they know what they're talking about but is in fact leaving out a lot of important context.
Definitely a nice GPU, but considering airflow, power use, heat generated and the like it's more like a GPU with a PC attached, not so much a PC with a GPU attached.
This gets brought up every time someone talks about macOS and MacBooks.
Another thing people don’t take into account is the time and thus money saved not dealing with this crap.
Building PCs from raw parts is annoying for heaps of people. Warranties are annoying, making sure all the parts fit together is annoying. You’ll spend the extra cost in short order just making sure you have the right parts.
Also, the resale value of MacBooks is very, very good. When it’s time to upgrade, you typically lose 20-30% of the purchase price at around the 2-3 year mark, whereas PCs are hard to give away at times.
Everyone has their own needs, but Apple's machines for me are a work tool. I need a laptop, and nobody comes even close to the quality per dollar spent in this form factor.
I'm mostly interested in upgradeability and the cost of parts. With Macs I'll just have to buy totally new Macs every time to upgrade, but with a PC I can swap out RAM, SSD, GPU, etc however I want for a fraction of the price that it costs to upgrade Macs.
The thing is, though, that you can actually resell your old Mac when buying a new one, unlike a PC that's basically worthless the moment you take it out of the store. The upgrading thing is kind of a myth, since new hardware coming out will often be incompatible with the old hardware you have.
It's most definitely not a myth. I upgrade a component or two every year around Black Friday. This same machine has been evolving for a decade and many spare parts have been handed down to family members.
Some stuff swaps out a little at a time, but something like the massive AM5 swap has you changing 3 of the 4 most expensive parts (CPU, RAM, and Motherboard) of your machine at the same time.
Yes I just made that swap. But it's been almost a decade since the last time I had to swap all 3. I did however get to keep my graphics card, case, power supply, cooling system (liquid), and SSDs (3 x 4TB).
People don't trust potentially overclocked parts which in turn means they aren't worth very much on the used market. This is one area where an efuse indicating if a chip were overclocked could be useful and not anti-consumer.
Then, don't overclock or buy overclockable parts, you're the only one talking about overclocking in this thread. I've found great success buying and selling on online fora like r/hardwareswap.
I think you're misunderstanding why Mac products are more resellable than Windows ones, it has nothing to do with being "abused" or overclocked (that's such a niche thing even among gamers that it bears almost zero relation to reselling). It's because macOS itself has value to consumers, especially as only Apple hardware can run macOS (legally speaking at least, and practically, increasingly more so as Hackintoshes can't yet run on ARM hardware, so eventually they will be phased out as Apple removes features from its Intel builds over time, as they have already started doing).
It’s a fraction of the price if you don’t need to track the latest specs. Most of the people I know who do this end up needing to upgrade motherboards, RAM, etc. at the same time which cuts into those savings somewhat notably. There’s nothing wrong with it if you enjoy the hobby but most people don’t really save money that way because chasing top performance isn’t a poor man’s hobby.
While true, desktops are often difficult to carry around and use while mobile. They also don't have the benefits of macOS. I can get Linux desktops to be decent, but Mac feels and looks like a finished Linux desktop OS.
Apple charges big prices for MacBook pro's because they know other corps will foot the bill, they're work equipment like a tractor. The significantly less expensive MacBook air is the choice if this isn't the case.
I still love building PCs, but honestly their main use now is either server work or graphically heavy gaming.
Anyway, mostly agree otherwise. Actually, macOS looks hideous and confusing to me, but the level of integration with the hardware is something I envy from Linux-laptop-land (imo it works fine on the desktop, but Linux on laptops could be better), the build quality seems quite nice, and the ecosystem seems neat.
"Always use the correct Apple product names with the correct capitalization as shown on the Apple Trademark List. Always use Apple product names in singular form. Modifiers such as model, device, or collection can be plural or possessive. Never typeset Apple product names using all uppercase letters."
("MacBook Pro" is one of the trademarks in the trademark l... oops, my apologies, Trademark List.)
So I suppose the correct plural is probably "MacBook Pro devices", but Apple's branding department has no power over us as individuals posting here so we can say what we like.
OS X is quite literally Unix, and it's extremely polished compared to Linux desktop environments out of the box. (Yes, I know some people have no issues with their Linux desktops, but I've used it for over 15 years and it's NOT nearly as seamless as OS X.)
They're also extremely well sandboxed and have become increasingly more focused on security the last few years (in some ways annoyingly so).
> Also what do changes to chrome have to do with apple/osx?
Maybe parent meant chrome as in "OSX is focused on continuously updating its shiny UI"?
I've been using an M2 MacBook and an M1 Mini for a while now, and I can't confirm those statements about polish. Things I've hit within a bit over one year of use:

- The dock's icon scaling animation under the mouse cursor freezes in place at least once every few days.
- To this day, Stage Manager has a bug where pulling a minimized window to the front can throw it across the screen, with most of the window landing outside the visible area.
- For a few months I could reliably freeze the screenshot app by clicking 2-3 of the UI's buttons in an order that occurred during normal use.
- If I open more than ~10-15 images at once in the preinstalled Preview app, it often opens two or three windows instead of one and randomly distributes the pictures across them. More than ~50 image files can cause an error message and a partial failure to open the files.
- In the first month after getting my MacBook, the "Open Anyway" button in the settings always crashed the Settings app the first time after trying to start a new program.
- Turning off the second connected screen left windows on that screen inaccessible; I couldn't disable/remove the screen in the settings, and I could still move the cursor to the second screen while it was turned off.
- Clicking an 8GB video file in the file selection dialog spawned a ThumbnailExtension process that allocated 25GB of memory within seconds, and if not killed, the process kept running for a minute after the file chooser was already closed... on a system with 8GB RAM, so it swapped 20GB.
- The file saving dialog freezes as long as any connected HDD hasn't yet spun up, no matter whether I want to save the file on that drive or not.

No, I don't think macOS's desktop is more polished than the best current Linux distros, though the OS stayed stable through upgrades, which I can't say about Windows 10 and some Linux systems. Imho the M1-3 MacBooks are still the best laptops currently available. But the desktop UI is clearly not as stable as iOS has been for me.
Well, for one, the keybindings have been updated from the IBM PC. Doing that on Linux isn't impossible, but it would require an enormous effort across many codebases.
I have tried for many years to match the productivity of macOS on Linux, and I can't. The Mac (and I suspect the iPhone) is just too predictable and reliable to abandon.
Number of times my MBPs have crashed on wake from sleep: 0
The number of times my last PC crashed on wake from sleep: nearly every time it went to sleep.
I ain’t got time to troubleshoot stuff like this. I just need my stuff to work. I also need something portable. I’ve also been using modern macOS for its entire existence and find it vastly superior to Windows (which I have been using since 3.1), and Linux (again, I ain’t got time to troubleshoot hardware).
You can make your pc sleep? Bro I haven’t gotten my windows 10 machine to successfully sleep in years. It just wakes itself up (or maybe never goes to sleep) and becomes a toaster oven in my bag.
and the battery, and the better speakers, and the quiet cooling, and USB-C/Thunderbolt 4 that just works (compared to most PC implementations) and of course, macOS
Sure, sure. Some pathetic 1080p monitor, or a monitor with ridiculous pixel density and an awful TN matrix. Some oven-like CPU and “turn off your heater” GPU. And, because you are talking about laptops (you are not comparing laptops to desktop computers, surely), it will be a 3-5kg monster. But cheaper, hurray!
Yeah, if you ignore many of the benefits the Macbook provides, it is "overpriced" compared to a homebuilt PC.
On the other hand, if I consider the benefits you excluded of the Macbook, your $1000 PC is literally worth $0 to me. And by that measure, the Macbook is worth infinitely more than your PC.
16GB isn’t enough for Windows. macOS is surprisingly more efficient - it’s not just swapping to their faster storage, either: lots of attention to detail everywhere means that you see less memory pressure across the board. Even using VSC, Teams, and Podman heavily, 16GB has plenty of headroom.
The reason you can’t update the RAM on Apple silicon is the engineering tradeoff they made to have it be faster. That required work on the software side – e.g. they added a CPU extension to compress memory pages and updated the OS to use it effectively – but they have the advantage of not needing to coordinate multiple internal feuding factions and outside vendors the way that Microsoft has to.
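The effect of page compression on memory pressure can be illustrated with a toy software analogue. This is only a sketch: zlib stands in for the hardware-assisted compressor, and the page size, data patterns, and compression level are illustrative assumptions, not Apple's actual scheme.

```python
# Toy illustration of why compressing idle memory pages eases memory
# pressure. NOT Apple's real mechanism (which is hardware-assisted);
# zlib here just demonstrates the underlying idea.
import os
import zlib

PAGE_SIZE = 16 * 1024  # 16 KB, a common page size on Apple silicon

# A "cold" page often holds highly repetitive data (zeroed buffers,
# sparse structures) and compresses extremely well.
cold_page = bytes(PAGE_SIZE)
# Random bytes model a worst-case, incompressible page.
busy_page = os.urandom(PAGE_SIZE)

for name, page in [("cold", cold_page), ("busy", busy_page)]:
    compressed = zlib.compress(page, level=1)
    ratio = len(compressed) / len(page)
    print(f"{name} page: {len(page)} -> {len(compressed)} bytes "
          f"({ratio:.1%} of original)")
```

The cold page shrinks to a tiny fraction of its size, so keeping compressed copies of idle pages in RAM can stand in for a lot of swap traffic; incompressible pages gain nothing, which is why the benefit depends on workload.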
I have a 16GB MacBook Pro (M2 Pro chip), and my Linux PC (I don't use Windows) runs a lot smoother than my MacBook for programming tasks such as running IntelliJ / Android Studio / a bunch of Docker VMs.
Gamers like me have known this for ages. The reason I am in IT is that I save money and build my own desktops from parts. Even now it's waaaaaay better in price-to-performance, you can freely use any peripherals you want, any x86 OS, etc...
We're very close to $2200 if we go with solid, midrange stuff and don't include a monitor.
The cost to Apple self-repair customers for a 16" mini-LED display is $670, with Apple offering around $100 back if they turn in the old screen.
If we add in $600 for the monitor cost, we're up to $2800 while the 16" MBP we compared to is actually $4000. The savings of a desktop are there, but they aren't what you make them out to be.
If we look at a comparable laptop with good build quality like a Lenovo ThinkPad, we're going to pay as much or more. For the same price, we'll have terrible battery life and our performance when not plugged in will drop off a cliff.
You need to compare likes as much as is reasonable. I literally priced everything out on Newegg.
1. You certainly can, but that's the Corsair with the crappy components rather than one with good guts. If I were being completely fair, I'd be spending well over $300 to buy a GaN PSU.
2. Maybe you're different, but I'm not spending almost $500 on a high-TDP CPU only to cripple it with a crappy cooler.
3. I went with a midrange case. In truth, you can't even buy a case on par with the MacBook Pro's. Even trying to get close would put you in the $400+ boutique range.
4. I never mentioned what work was going to be done. If I were going to compare for LLM work and be competitive with Apple, I'd have to go WAY more expensive on the GPU.
B650E and X670E are for overclocking, and you'd pay WAY more to get them over their non-E counterparts. X670 offers quite a bit more IO. Maybe that's more IO than Apple offers, but the cost difference for a decent B650 isn't much (maybe 3-4% of the build cost). Certainly less than upgrading the other things you mentioned.
1. I see nothing wrong with the CX750M PSU [0]. What do you consider good guts? Compared to the MacBook charger, which gets fairly hot under load, I see it as more capable.
2. Cooler cost has very little to do with performance. Thermalright units compete with liquid coolers for a fraction of the cost[1].
3. What does a MacBook case offer? Holes for ports and the keyboard, and a hinge for the LCD. The components themselves do the work. What does a ~$50 case [2] not do? It provides plenty of cooling and mounts all your components.
4. For llama.cpp text generation, an M3 Pro does ~31 tokens per second with the Q4_0 profile [3]. A 3070 does ~34 tokens per second [4].
WRT the motherboard, I'm not sure what I suggested upgrading; I recommended cheaper parts than the original post.
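Those throughput figures in point 4 are close enough that the gap is easy to quantify with quick back-of-envelope arithmetic (the generation length below is a hypothetical, not from either benchmark):

```python
# Back-of-envelope comparison using the llama.cpp Q4_0 text-generation
# throughput figures quoted above (sources [3] and [4]).
m3_pro_tps = 31.0    # tokens/sec, M3 Pro
rtx3070_tps = 34.0   # tokens/sec, RTX 3070

tokens = 1000  # hypothetical generation length
print(f"M3 Pro:   {tokens / m3_pro_tps:.1f} s")
print(f"RTX 3070: {tokens / rtx3070_tps:.1f} s")
print(f"3070 advantage: {rtx3070_tps / m3_pro_tps - 1:.1%}")
```

That works out to roughly a 10% edge for the 3070, so for this workload the two are in the same ballpark rather than an order of magnitude apart.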
But this build is not merely equivalent to an M3 MacBook - it's probably 2x+ as powerful in CPU and GPU. There are cheaper ways to build an equivalent machine.
Laptop performance is often thermally limited due to less airflow. Desktop PCs have big fans and coolers and can handle sustained load better. Bursty loads show less of a disparity, though.
But of course, the tradeoff is that it's not portable like a laptop.
Very common on PC laptops, doubly so with discrete GPUs. Mac laptops, however, seem pretty good about not thermally throttling, and are especially good at delivering the same performance on battery as when plugged in.
PC Laptops with discrete GPUs often show huge differences when on battery vs wall power.
Macs are notorious for thermal throttling, partly due to inadequate cooling and partly a policy of only turning on the fan once it's already throttling. This hasn't changed with Apple Silicon.
Every review I've seen for the Apple Silicon Macs shows they do quite well, especially compared to similar PC laptops. So not zero impact, but MUCH better than the competition. I'm not basing this on impressions: the published performance metrics show performance over time as well as performance on wall power vs performance on battery.
The first part of your comment is correct. This part is not true for any benchmark I’ve seen - it’s possible to detect thermal throttling on the Air models but it requires uncommonly heavy workloads and takes much longer to kick in than it did with the Intel processors. In practice, a software developer who isn’t running LLMs locally will likely never hear the fan on their MacBook Pro or experience throttling on an Air.
I chose the 12-core because it's very close to the M3 Max in overall performance.
There are cheaper ways to build most things. A high-end ThinkPad is $3000-4000, and you can buy a terrible gaming laptop with the same components for $1000-1500. The ThinkPad is going to be more reliable and last longer than the gaming laptop because the build quality is much higher.
Apple uses good components that cost more, so comparing to bottom-of-the-barrel components isn't an apples-to-apples comparison.
I agree and that's why I added the comment about a Thinkpad at the end. My point was that the difference is closer to 25% rather than the 300% that was claimed.
I own an M3 Max laptop, so suffice it to say that I think the 25% markup is quite reasonable for being able to stuff a very nice desktop into my bag.