Alder Lake Core i9 processor is faster than the M1 Max (macworld.com)
225 points by kofejnik on Jan 27, 2022 | 357 comments



3.5%-5% increase in performance at more than double the wattage, for a chip that is not in consumers' hands, against a chip that basically hasn't changed and was released 14 months ago.

This is not the victory that the title implies.
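Back-of-the-envelope, with illustrative numbers picked from the ranges above (not measured):

    # Hypothetical perf-per-watt check: ~1.04x the performance at ~2.2x the power.
    print(1.04 / 2.2)   # ≈ 0.47 -- less than half the perf-per-watt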

It's also a _little_ bit funny that the i9 laptop was double the price, given people usually rant loudly about how expensive Apple computers are.


I don’t think this is going to happen, but: It would be extremely funny if Apple strapped a heavy copper heatsink and a turbo fan to an M1 and just overclocked the ever-lasting crap out of it.

The crazy thing about using a Macbook Pro (with an M1 Pro or M1 Max CPU) is that, in single-threaded performance, it's really no faster than a dippy little Macbook Air. And at the end of the day, single-thread performance is more effective than multicore! TBH I think this tradeoff is the right one (that is, not overclocking Macbook Pros), but when the iMac Pro and the Mac Pro roll around… why not? Just go for it.


Overclocking isn't just limited by heat dissipation, and dealing with the additional heat generation is one of the easier issues to overcome. Much more limiting is the fact that the electronic components in the hardware (everything from RAM to the more "integrated" components on the PCBs) have certain amounts of time which they need to take in order to tie voltages high or low, to either read or output a bit. This amount of time is typically very small as compared to the amount of time that a voltage stays tied high or low, and so isn't a problem, but while you can shorten the amount of time that a voltage needs to stay high or low when overclocking, generally the transition time isn't as negotiable. Increasing clock speeds to the point where these transition times dominate means you're going to get garbage reads and garbage writes out of a component.

This is part of why if you're overclocking your computer, you end up experiencing errors or unexpected crashes even when temperature doesn't go too high. Past reasonable tolerances, you can't just cool it more and expect it to work.
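A toy numerical sketch of that limit, with made-up delays (real timing closure involves far more than this):

    # Toy timing model: a cycle must fit the slowest path's propagation delay
    # plus the register setup time, or the latch captures garbage.
    logic_delay_ns = 0.9   # combinational (transition/propagation) delay, made up
    setup_time_ns  = 0.1   # register setup time, made up

    f_max_ghz = 1.0 / (logic_delay_ns + setup_time_ns)
    print(f"max stable clock ≈ {f_max_ghz:.2f} GHz")   # ≈ 1.00 GHz

    # Overclocking to a 0.8 ns period (1.25 GHz) leaves only 0.7 ns for the
    # 0.9 ns logic path: the register samples before the signal has settled.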


Transition times decrease as voltage increases. The amount of power being dissipated is roughly proportional to V^2 f, and V is roughly proportional to f, so the tradeoff kinda sucks (double frequency = 8x power consumption), but you can eke out some additional performance if you are willing to dissipate more heat, within limits.
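A quick sketch of that cubic relationship, assuming the idealized P ∝ V²·f model with V scaled linearly with f (real silicon deviates from this):

    # Toy dynamic-power model: P ~ C * V^2 * f, with V scaled linearly with f.
    def relative_power(freq_scale: float) -> float:
        voltage_scale = freq_scale              # assume V must rise in proportion to f
        return voltage_scale ** 2 * freq_scale  # P ∝ V^2 * f  =>  ∝ f^3

    for scale in (1.0, 1.2, 1.5, 2.0):
        print(f"{scale:.1f}x clock -> {relative_power(scale):.2f}x power")
    # 1.0x clock -> 1.00x power
    # 1.2x clock -> 1.73x power
    # 1.5x clock -> 3.38x power
    # 2.0x clock -> 8.00x power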


> have certain amounts of time which they need to take in order to tie voltages high or low, to either read or output a bit.

When you overclock you also add voltage, which helps with this and goes a long way. You would often hit thermal limits first, so heat dissipation is a major factor.


This was not my experience in the old Athlon XP days. Even when you gave the extra voltage and enough cooling, the CPU would start to encounter many internal errors and need to re-run a lot of instructions to get sane results.

This causes heat and load spikes, and while you get no errors, the increased frequency doesn't give you any real-world performance gains. Returning to a slower configuration actually gives a much snappier and more performant system.


Athlon was 20 years ago.

It feels unfair to generalize across time and architectures


It's not about architecture. It's about physics. Once you pass a certain threshold, electron/current leaks and escapes start to wreak havoc inside any silicon. So beyond voltage, current and temperature, you're limited by the silicon itself.

I stopped overclocking systems after Athlon XP. This is why I gave that example.

Even without overclocking and overheating, I've seen (and still see) partially cooked processors which shut down half of their FPU pipelines to stay reliable, albeit with orders of magnitude lower performance.

CPUs are more complex than ever, with MCE and more advanced microcode structures, and there's much more to them than meets the eye.


> This is part of why if you're overclocking your computer, you end up experiencing errors or unexpected crashes even when temperature doesn't go too high. Past reasonable tolerances, you can't just cool it more and expect it to work.

People push CPUs to absurd limits with LN2 cooling, so that's just not true.


It's not like Apple would ever do it, but the M1 should have heaps of OC headroom. Given that the RAM is already on package, all they need to do is turn up the CPU multiplier.

AMD chips are doing similar clocks to Intel on TSMC N7, so Apple could (but won't) have a chip running way higher than the clocks they are currently shipping with.

Also, it's kinda inaccurate to imply any overclocked setup will crash; there's plenty of room unless they come turned up to the max from stock like the 12900K.


Overclocking headroom means you have timing slack, and timing slack means you have faster ~= leakier circuits than necessary, or stages which aren't filled with work which is also an inefficiency.

I expect Apple has an extreme focus on power efficiency and especially idle / leakage power, much more than Intel, considering the core is basically the same as the one they use in their phones. They also have a different approach to turbo / DVFS. So I would expect M1 to actually be a lot tighter than Intel and not have so much OC headroom.

Obviously you can buy timing with voltage to some degree, so there would be something there probably. Modern nodes are running into more problems with voltage induced breakdown though so the OC limit looks very different to what you can ship in a product. Has anyone measured M1's VDD?

> AMD chips are doing similar clocks to Intel on TSMC N7, so Apple could (but won't) have a chip running way higher than the clocks they are currently shipping with.

Not their existing microarchitecture though. They do nearly 2x the work per clock as AMD chips which necessitates more logic per stage. Getting a microarchitectural edge means making less logic do more work and it's very possible Apple have some edge there, it just wouldn't be near 2x IMO.

The silicon technology of course plays into it, but when you look at how fast individual transistors and the shortest poly to connect them can switch, speeds over 100GHz have been possible on 90nm. Today's cutting edge is probably over 200GHz (e.g., search ring oscillator). So it's not a fundamental switching speed limit of the tech that gets you.

I would say Apple could probably redo the physical design and synthesis work, with minimal logic changes, to target a faster and leakier device that's not suitable for phones but might be a little fairer comparison. It wouldn't put it at a 5-6GHz frequency, but could easily be enough to re-take these benchmarks and still be ahead on efficiency.


> They do nearly 2x the work per clock as AMD chips which necessitates more logic per stage

Not all work is created equal. Decoders on Arm are definitely parallel (= don't have more serial logic for wider decoder) compared to the variable length decode x86 is stuck with. And your backend ports are also parallel (although maybe scheduling isn't?). The only places where wider always means more logic per stage are caches - register file, L1/L2/L3, BTB. For example, Apple managed to work some magic with a 3 cycle 192kB L1. AMD and Intel are at 4 and 5 cycles for much smaller L1's. Part of the reason for that is probably because Apple doesn't need to hit 5GHz and can afford more logic per stage.

And in any case, it's very likely you could just shove more voltage through the chip and get it to clock higher, since the current 3.2GHz is very far from what we know TSMC N7/5 can do. I don't think you'd need a rework unless Apple wanted to target 4.5+GHz.


> Not all work is created equal.

> And in any case, it's very likely you could just shove more voltage through the chip and get it to clock higher,

Yes.

> since the current 3.2GHz is very far from what we know TSMC N7/5 can do.

N7/N5 can "do" 200GHz. 90nm could do 100GHz. The limit a device can do depends most highly on the logic.

> I don't think you'd need a rework unless Apple wanted to target 4.5+GHz.

3.2->4.4? I doubt it with any reasonable voltage that could actually ship in a device. Very hard to predict these things unless you've at least got basic shmoo plots and things like that in front of you.


> Decoders on Arm are definitely parallel (= don't have more serial logic for wider decoder) compared to the variable length decode x86 is stuck with

x86 decode is parallel too


It is parallel in the sense that you can decode 4-6 instructions in parallel, yes. It is not parallel in the sense that variable length decoding requires each of your decoders to talk to the other ones to coordinate on instruction length boundaries, which means there is going to be a lot of serial logic in your decoder circuit.
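A toy sketch of why that boundary-finding is a serial chain (using a made-up 1-byte-header encoding, not real x86 decoding):

    # Toy variable-length encoding: the low 2 bits of the first byte + 1 give the
    # instruction length (1-4 bytes). Purely illustrative, not real x86.
    def length_of(code: bytes, pc: int) -> int:
        return (code[pc] & 0b11) + 1

    def find_boundaries(code: bytes) -> list[int]:
        boundaries, pc = [], 0
        while pc < len(code):
            boundaries.append(pc)
            pc += length_of(code, pc)   # next start depends on this decode: serial
        return boundaries

    def fixed_boundaries(code: bytes, width: int = 4) -> list[int]:
        return list(range(0, len(code), width))   # every start known up front: parallel

    print(find_boundaries(bytes([0b01, 0xAA, 0b00, 0b11, 1, 2, 3])))  # [0, 2, 3]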


It doesn't, if your L1$ predecodes at fill time and stores instruction length.

Something, somewhere does have to do a serial length decoding of course. But when you look at the L2 access latency and throughput (which is the minimum L1 fill latency), it's clear you could afford to do that part of the decode over more cycles.

New designs are not just predecoding lengths but entire uops into the first-level instruction cache, which is the same concept; they just call it an L0 and L1 rather than L1 and L2.


The M1 family already can only sustain 3.2GHz when just one core per cluster is active, so there is some bottleneck there, either due to power dissipation/density or power delivery. If the latter, upping the voltage would give you more oomph, but that comes with diminishing returns. I wouldn't expect Apple to give the chips more than a trivial clock bump (3.4?) this generation, if that.


cryogenic cooling tho?


Wouldn't solve the problem. Basically you need the integrated components to have shorter transition times. These are an inherent property of the component: speeding up the transition time basically means "replace this component with a different component that has a shorter transition time"


Overvolting the component and sending more current through the transistors tends to speed them up, though at the cost of considerably more power.

Overclocking is often a combination of overvolting (leading to more amps) and increasing clock speeds.


As they say, "the candle that burns twice as bright burns half as long", or, in this case, "the transistor that runs at thrice the voltage runs for about 1.3 seconds"


Overvolting is how this is solved and works with adequate cooling.


Google propagation delay: your pipeline's length and the delay of devices within that pipe determine the max frequency your circuit can run at (a very high-level overview).


How is pipeline length related to max frequency?


It’s related indirectly. A longer pipeline tends to result in shorter stages within the pipeline. (This is the whole point of pipelining.)
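A toy model of that relationship, with made-up delays and ignoring hazards: splitting one long logic path into more stages shrinks the per-stage delay that sets the clock period, while each stage adds register overhead and lengthens the mispredict penalty.

    # Toy model: clock period = worst per-stage logic delay + register overhead.
    total_logic_ns   = 5.0    # one long combinational path, made up
    register_over_ns = 0.1    # clk->q + setup per stage, made up

    for stages in (1, 2, 5, 10):
        period = total_logic_ns / stages + register_over_ns
        print(f"{stages:2d} stages -> {1.0/period:.2f} GHz, "
              f"mispredict costs {stages} cycles")
    #  1 stages -> 0.20 GHz, mispredict costs 1 cycles
    #  2 stages -> 0.38 GHz, mispredict costs 2 cycles
    #  5 stages -> 0.91 GHz, mispredict costs 5 cycles
    # 10 stages -> 1.67 GHz, mispredict costs 10 cycles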


OK, yes there's often correlation but that's it. A longer pipeline doesn't affect max circuit frequency.

The other sibling comment regarding distance isn't true, regardless of how insane your feedbacks are. Physical placement tools have always handled this in my experience (as a chip designer for 10+ years).


There is no point in lengthening the pipeline if you don't get higher frequency from it. It would just add complexity and a higher branch mispredict penalty.


Physical length the signal has to travel.


Apple (if they were willing to expose a more wild side) should make an "M1 Pro Max Unlocked" board (or something like that) and just sell it as a bare, microATX board. It could cost exactly the same as the Mac mini for all I care, but if PC builders could use it in crazy cases and with liquid (or even liquid nitrogen) cooling, that would be so cool.


This will never happen.


It would not surprise me to see some future Mac pro use the M1 in a desktop form with exotic high wattage but near silent cooling (liquid, air or otherwise). These chips are fantastic for video and media work and really need to get put into more workstation-like form factors.


Nope, it won't, Apple will never be willing to show a more wild side. I'm just dreaming.


I remember hell freezing over with iTunes for Windows.


iTunes for Windows was not unimaginable. Now, iMessage for Android ...


I bet Woz would totally do it, and he has the clout to do it too - as a retro-mod calling back to the Apple I or II


Just rip the board from Mac Mini?


Would love a liquid cooled mac.


They have already done it, almost twenty years ago now: https://www.computerworld.com/article/2565453/new-power-mac-...


> I don’t think this is going to happen, but: It would be extremely funny if Apple strapped a heavy copper heatsink and a turbo fan to an M1 and just overclocked the ever-lasting crap out of it.

If this was released and allowed for relatively straightforward overclocking, user replaceable RAM and a full size GPU of one’s choosing, what could they charge?

The M1 Mad Macs edition would be loved by all 9 of us.


Sadly, I don't think the M1 family will clock any higher than it's already going. The rumored M1 Max Duo and M1 Max Quadro will have more cores but they'll still trail Intel in single-thread performance (although not enough for most people to notice).


I still maintain the M1 Max Quadro is a figment of media imagination. The M1 Max die is designed for precisely two-die usage. Not four. The IRQ controller has registers for two dies. The drivers reference two dies. The whole thing screams two dies everywhere.

4-die Apple Silicon machines will surely come, but not with the M1 Max die. It'll be a new design.


What about a dual CPU Mac Pro?

Edit: thinking about this for a few more seconds, I realise this is a completely separate discussion, but I was just trying to think how Apple could accomplish the Quad branding even if internally it's separate CPUs.


How would you connect the CPUs together then? There's no mechanism for that. You need a bus, IRQ routing, cache coherency, and more. Those features are there in the Max with the d2d interconnect, for up to exactly 2 dies.


Precisely. ARM has traditionally had issues scaling its clock speed even when given ample cooling and power; history suggests that the only way they'll significantly improve performance is with new fab technology, engineering a better design, or expanding horizontally (e.g. adding more cores).


Apple has an architectural license, which is why they are so insistent on calling them "Apple Silicon" in branding instead of ARM.

The only thing the Apple M1 has in common with ARM is the instruction set. The actual core design? Completely Apple. No Cortex X1s or A73s in there. Apple does use the stock ARM M3 in a few places IIRC but that's a very-low-power ~100mhz part for managing some internals.

Also, as for who can push the limits of ARM, Apple appears to know ARM better than ARM knows themselves. Both based on Apple's higher-performing core designs than ARM stock, and also because Apple cofounded ARM with VLSI and Acorn, and an Apple VP was the first CEO.

Edit: On that note, it's kind of ironic how every Android phone has the DNA of a company Apple cofounded and initially led.


> Edit: On that note, it's kind of ironic how every Android phone has the DNA of a company Apple cofounded and initially led.

You do realize that the ARM architecture existed for almost a decade before Apple was involved, right? From 1981-1990.

Apple was involved with ARM Ltd, the company that was formed when the chip division was rolled out of Acorn; not in the design of the original chips. Apple wasn’t significantly involved until the ARM6. Considering Microsoft was as involved with the StrongARM project, most of which has long since been integrated into the mainline ARM architecture, you could argue their DNA is just as integrated. By your logic, at least.

> Both based on Apple's higher-performing core designs than ARM stock

ARM’s vanilla cores aren’t intended to be high performance cores. They never sought that out until the recently started X-series, which even still isn’t designed to operate in the same envelope as Apple Silicon. Their designs have always been primarily focused on power efficiency with a secondary focus on drop-in generality.


Also, for every Android commentator talking about ARM, consider that Apple even named the company "Advanced RISC Machines," because they opposed having the name of a former competitor in there (because it was originally going to be "Acorn RISC Machine"). Apple also provided almost all the funding and initial leadership for the venture, while Acorn supplied most of the engineers and VLSI the silicon tooling.


That's making it sound like Apple had significant design expertise that went into ARM. But it was because they didn't have that expertise that they cofounded ARM: they wanted a processor for the Newton, and Acorn had the expertise. Moreover, when Steve Jobs came back he sold off all of Apple's investment in ARM; Apple hasn't owned part of ARM for a long time.


It's touching that you reduce me to "Android commentator", but my point is that Apple has a lot of history to spite with their second-gen chips. Whether or not they're engineering their own cores or decided the name of ARM is irrelevant, they simply haven't proven that they can increase their desktop chip performance without packing more cores onto a die and calling it a day. If they manage to make significant single-core gains with their next-generation chips, I'd be impressed. The odds are against them, and their competitors are starting to wake up.


No. They have in fact increased the performance of their cores with every single generation since the A4.

I think the point about Android commentators is that they may not have been following Apple’s silicon development closely enough to know this, especially if they think that the M1 is the first in the line.


> may not have been following Apple’s silicon development closely enough to know this, especially if they think that the M1 is the first in the line.

It's not just Android commentators. A surprising amount of people have the view that Qualcomm/Google/<whoever> will surely come up with better CPUs in a couple of years. Apple has been doing this for more than 10 years. They probably spent 4 or 5 years designing A4 before it came out. And A4 came out in 2010


Apple does not just have that history, but also has the benefit of scale that others lack. The M1? It was literally just the A14X, the sequel to the A12X used in their iPad Pros, but Apple rebranded it.

Qualcomm is not going to make a better ARM Windows processor anytime soon. They had five years of exclusivity on Windows on ARM. Every single device they released was massively overpriced, had crappy performance, and until very recently, could not run 64-bit x86 programs at all (only 32-bit was translatable).

Someone is of course going to remind me that they bought Nuvia, and so Nuvia will change the boat around. I'm a skeptic. Judging by previous pricing, I believe that Qualcomm's chips and royalties result in disproportionately expensive hardware (every Windows on ARM machine to date, the cost of devices with Snapdragon 888+), and may not be competitive cost-wise as much as some would hope.

Also, Nuvia was designed for server processors, and is being retooled into a mobile processor. I'm going to bet that it's better than Qualcomm's current lot, but I doubt the single-core performance will be very strong compared to Apple. Qualcomm claims it will be "competitive with M-series," which considering M2 is going to be coming out before Nuvia isn't good news. Also, Qualcomm famously claimed that their Windows on ARM Snapdragon device was "competitive" with a mobile Core i5, but that was quickly laughed off the table when it went to reviews.


Apple has sold more than two billion devices using custom ARM-based silicon in just the last decade. There's a world of difference between the first ARM-based A4 and the current M1 line of products, which is no less impressive than what Intel or AMD have accomplished in the same period.


> If they manage to make significant single-core gains with their next-generation chips, I'd be impressed.

Depending on what you mean by significant, they have made those every year since the iPhone 4 came out.


It appears that you doctored my quote somewhat, hm?


I literally copied and pasted a complete sentence, from beginning to end. Your sentence and my quote do not diverge one bit. You're off your rocker.


My understanding is that Apple isn't using ARM IP cores (i.e. ARM Cortex) but just implements the ARM instruction set architecture using their own proprietary CPU core design.

Edit: Since the A6 era for CPU cores and the A10/11 for the GPU cores.


> The crazy thing about using a Macbook Pro (with an M1 Pro or M1 Max CPU) is that, in single-threaded performance, it's really no faster than a dippy little Macbook Air.

That “dippy little M1 MacBook Air” is damn fast. Even for single core tasks. I’ve owned one for a year now, and recently got the new M1 Pro with eight of the same cores, and single thread performance is not low by any sensible measure.


> I don’t think this is going to happen, but: It would be extremely funny if Apple strapped a heavy copper heatsink and a turbo fan to an M1 and just overclocked the ever-lasting crap out of it.

I fully expect this is what the new Mac Pro will look like, probably with multiple M1s.


M1 Max has a cross-die interconnect they photoshopped out of the marketing die shots, so I fully expect Apple to ship dual-Max configs on the Pro.

(Also, RIP one of the most tinkerer-friendly Mac configs in a very long time.)


The internal design is 2-die max, so that'll be a somewhat puny Pro by Mac Pro standards. Maybe a Mac Mini Pro?

I expect them to come up with something much crazier with a newer design, perhaps in the M2 generation. Doesn't mean much, but the IRQ controller architecture and driver in these machines claim to scale to 8 dies, even though the existing implementation in the M1 Max is sized for 2. That's no guarantee they're looking at a product like that, but they might be.


They won't overclock it to any major extent; that's not their game and it's not designed to scale to insanity. But word is the cooling system in the new Mac Mini is pretty nice. At the very least I'd expect it to ship with an M1 Max with both neural engines active (the laptops have one disabled for some reason) and wouldn't put it past them throwing in the rumored (and all but confirmed to exist based on what I know about the architecture) M1 Max Duo.


It is likely that 3.2GHz is close to the practical ceiling for these CPU cores for some technical reason. It's not about thermal headroom, but the very ability of the circuitry to run at faster speeds. It has been speculated that one way Apple achieves their impressive perf/watt is by using logic gate arrangements that can only run at low frequency but bring massive wins in power consumption.


> in single-threaded performance, it's really no faster than a dippy little Macbook Air.

Maybe in terms of raw FLOPS or something, but more importantly it has enormous levels of memory and storage bandwidth, which is what's actually important for most user workloads these days.

That's not even accounting for the performance of the GPU cores which can be used for a ton of multimedia workloads.


> more importantly it has enormous levels of memory and storage bandwidth, which is what's actually important for most user workloads these days.

I've heard this tossed around a lot, but is there any evidence that it's actually true? I struggle to imagine a regular workload that benefits from 32 GB/s of bandwidth, much less a scenario where the average layman could saturate a 200 GB/s or 400 GB/s bus.


> I struggle to imagine a regular workload that benefits from 32 GB/s of bandwidth, much less a scenario where the average layman could saturate a 200 GB/s or 400 GB/s bus.

Since this is HN, many of us like… compiling, you know, stuff. It is a regular workload for many of us on here.

The main beneficiary of the M1 Max wide memory architecture appears to be the Haskell GHC compiler, which is very, very voracious when it comes to memory consumption and data bus width. Regular Haskell builds shift data around at 35-40 GB/s on average, with the Haskell optimiser hovering around 50 GB/s and easily peaking at 65 GB/s on medium-size projects. Since cabal (the Haskell build toolchain) for some reason can't fully utilise the «-j10» parameter, roughly half of the CPU cores are not utilised. Which means we can kick off another large Haskell build in parallel without affecting the memory transfer speeds of the first build.

Rust release builds are a runner-up, with 30-35 GB/s transfer speeds on average. Static type analysis and type inference in Haskell and Rust are hard and complex, require a lot of fast memory, and benefit vastly from large CPU caches.

I have not attempted a Rust LTO build yet, but I would expect LTO builds to like the wide memory bus, like, A LOT. In general, an LTO build (e.g. a large C++ build) is heavily memory bound and shifts data blocks, small and large, around extensively.

But an application is not very useful without a dataset, so many of us then like training our data models. Data model training can easily saturate 200 GB/s per M1 Max CPU cluster.

Then the application needs to load the data in and usually do something with it. For example, issue complex queries to a graph database. Large CPU caches coupled with a wide data bus is a winner here.

All of the above are examples of real, practical and regular workloads.
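If anyone wants to sanity-check numbers like these on their own machine, a crude copy test gives a rough lower bound on sustainable bandwidth — a sketch, not a proper benchmark like STREAM, and a single thread generally won't saturate the full bus the way a whole cluster can:

    import time
    import numpy as np

    size_bytes = 1 << 30                      # 1 GiB buffer
    src = np.ones(size_bytes, dtype=np.uint8)
    dst = np.empty_like(src)

    t0 = time.perf_counter()
    np.copyto(dst, src)                       # reads src + writes dst
    dt = time.perf_counter() - t0

    print(f"~{2 * size_bytes / dt / 1e9:.1f} GB/s (one thread, one copy pass)")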


I'd be inclined to agree, but my 16 GB/s i5 520M compiles Rust just fine. Small and medium-sized desktop applications aren't constrained by bandwidth as much as they are CPU-bound.

I could easily imagine AI model training blowing up a memory bus, but every machine that isn't an M1 Pro or M1 Max is going to delegate that to a dGPU that is bound by PCIe bandwidth, not memory. Even then, properly-optimized CUDA programs will do most of the work on your GPU itself, constraining you to the 300-something GB/s that your GPU is capable of.

Regardless, neither of these workloads is something that I'd imagine the average Macbook user doing on a regular basis. The only "real world" workload I can imagine that would exercise it is insanely high-res video encoding, which again, is only really going to be taken advantage of by a handful of users.


Discussing needs of an average Macbook user is a moot point as they would not purchase a maxed out M1 Max. I was referring to specific use cases of a specific target group of users with specific examples who can benefit from specific hardware architecture solutions and about being able to put them into a practical use today.

> I'd be inclined to agree, but my 16 GB/s i5 520M compiles Rust just fine. Small and medium-sized desktop applications aren't constrained by bandwidth as much as they are CPU-bound.

GB/s refers to memory throughput, not to memory capacity. A Haswell-era laptop with 16 GB of RAM can compile Rust applications just fine; the difference is the time. Compile times are the single most important delineating factor for software engineers.

> Even then, properly-optimized CUDA programs will do most of the work on your GPU itself, constraining you to the 300-something GB/s that your GPU is capable of.

With all due respect, it is exceptionally unusual to train ML models directly in CUDA, which is a low-level proprietary Nvidia API – one of several (ROCm, OpenCL, CoreML, Metal, etc.) that get swapped for one another depending on what the underlying hardware actually is. ML training is done via high-level ML frameworks (TensorFlow, NumPy, SpaCy, Pandas, Spark and others), not in CUDA directly – that practice almost does not exist.

Lastly, whilst ML training undertaken on a highly performant and exorbitantly expensive dGPU would be substantially faster than on 32 M1 Max GPU cores, one would still be constrained by PCIe data transfer speeds: ~32 GB/s for PCIe 4.0 in the x16 lane setup (the high end of commodity PC hardware today), and ~64 GB/s for PCIe 5.0 in the x16 lane setup (not even available for purchase today in PC laptops and desktops). Both are a far cry from the 200 GB/s of a single M1 Max CPU cluster.


Working with media files requires lots of bandwidth. Not so much for productivity.


marcan seems to have a pretty good handle on the clock tree. There's probably enough breadcrumbs in asahi to void some warranties on a Mac mini, go nuts with cooling, and see how high it can go. The pmic might not be configurable though.


We actually know very little about the clock tree; we know where the clock mux registers are but not how they connect to each other, and that's mostly about peripherals, not the CPUs. Also, the PLL registers are locked out and read only. The CPU DVFS stuff is all done automatically (you just select a performance level in one register), and I wouldn't be surprised if the underlying tables and configs for that are also locked post boot. The PMIC is accessible to some extent, but runs its own firmware anyway which might lock things out, and I don't think CLVRs (final voltage regulator stage, the one you care about) are directly accessible (on the Pro/Max) and that should all be controlled by the SMC or hardware logic, which again has a good chance of being locked off.

Basically, all of this is configured at boot and much of it locked off. Apple have done a good job making it difficult to nuke these things from the OS level, for better or for worse :). I once overwrote all the GPIO registers and didn't even break anything (though the Type C controllers did crash, but a hard shutdown and boot fixed it - this was on an M1 Pro MacBook). It's in one of my streams somewhere...


Oh, word. Thanks for the correction. I guess I extrapolated too much from the combo of you knowing enough of the clock inputs to the peripherals to make them work and a statement of yours about finding a PLL bypass signal. I want to say the main point of your statements on the PLL bypass was that the cores must be mainly static logic because they ran stable at whatever rate the crystal is, but I'm half remembering and could be conflating that with something else.


Ah, the PLL bypass was just p-state 0, but that was still using the high level interface. macOS never uses that mode, but it's there.


copper body macbook pro?


>copper body macbook pro?

Apple Marketing: This year Apple is going green in an all new way!


> was released 14 months ago

The M1 Max was only released 3 months ago...

(Your other points stand though)


The CPU cores on the Max are identical to those of the original M1, though.


Same processor really just a yield thing.


There are 3 M1 series chips, the M1, the M1 Pro, and the M1 Max. The M1 is a different die to the M1 Pro and M1 Max.

I believe the Pro and Max are the same die binned differently as you alluded to.


They are all different dies. However, the M1 Pro is just an M1 Max design with a chunk lopped off the bottom. Not at the manufacturing stage; at the design stage. They are otherwise identical; the device tree representation of the M1 Max is a superset of the M1 Pro, down to memory addresses and everything.

Binning comes in with the lower-core-count variants of each chip.


Again, yield; I'm not sure how many people understand how big a factor yield is, even for Apple. At some yield you can't support the demand. None of what was written above makes a lot of sense unless Apple is just burning fab cycles. They might be, but I kind of doubt it, since it would seriously impact device cost and would take up slots in the fab, leading to longer lead times. Not even Apple can dictate that all their chips be P1 at TSMC.

In actuality it's much more likely that the difference is in metal connectivity for cores. For low-core designs it's likely the lower number of cores are yield fallout and they use an e-fuse to disable them. The metal difference would be faster and allow them to respond faster to changes in demand even while wafers are in the line. The cost of all these masks and SKUs would be a factor even for Apple, because we aren't talking about some old technology node here. They are on the bleeding edge and I'm sure they have capacity constraints just like everyone else.


I'm not guessing. They are different dies. There is no need to write a hypothetical wall of text and get it wrong; you can just look up die shots or chip package shots or device trees and know they're different dies. This information is well known. There's a massive difference in size and aspect ratio between all 3 dies. It would not be cost efficient to fab M1 Max dies and sell them as M1 Pro. The M1 Max is explicitly designed as a logical and layout superset of the M1 Pro, but not for binning purposes, just because that way they don't need to do two completely different layouts.

SoC codes T8103 (M1), T6000 (M1 Pro), T6001 (M1 Max).

Yes, they do use binning and e-fuses for the lower core count versions within each die/SoC type.


I tried finding die shots of the M1.

Techinsights have a die shot of the original M1[0].

But for the pro and max, I could only find the die shots from the apple press release.

The pro just looks like a cropped max, I wasn't sure if it was just cropped for illustration purposes or actually a different die.

Are there any publicly available die shots of pro/max from a third party source?

[0]: https://www.techinsights.com/blog/two-new-apple-socs-two-mar...


The Pro isn't a flat out cropped Max; you can tell because the bottom edge looks like a proper die edge, and there are subtle places where the break line wouldn't be straight (compare the DDR channels with the rest of the blocks, they don't line up slightly). The Max is a cropped real Max because they tried to hide the die-to-die interconnect (meant for a 2-die Max machine) at the bottom. You can tell because the die abruptly stops at that edge hard, without the expected features of an IC edge. Clearly if they'd gone as far as photoshopping that on for the Pro, they wouldn't have screwed it up on the Max ;-)

I'm not aware of any other public die shots of the Pro, but there is one of the Max that went around on Twitter and it matches the marketing shot, plus an extra strip on the bottom edge as we'd suspected. We do know the M1 Pro has a completely different and smaller package, however, and the M1 Max die would not fit on it. That one was leaked months before the announcement as part of the board layout and schematics for that machine. So the Max die simply wouldn't fit on Pro machines, and this is from engineering drawings.


If the metal isn't there it won't show up in the device tree, because it physically isn't connected on the die. So the device tree, as you keep pointing out, cannot tell you if it's the same die or not. Also, a device can be there and not be in the device tree, meaning the device's memory location was omitted when the device tree blob was built.

It may well be that they have different die. But that’s usually an expensive way to do things. Usually you want downgrade paths for devices that don’t yield.

Source: 23 years in semiconductor industry.


They do have downgrade paths for chips that don't yield: disabling some CPU or GPU cores, but not half the chip. That's why Apple sells those variants. Also, the M1 Max die physically doesn't fit on the substrate used for the M1 Pro.

Source: over one year working on reverse engineering this precise platform almost full time.

Also, if you've spent 23 years in the semiconductor industry, I have no idea where you're getting the "not connecting metal" story. Nobody does that. How would you even do that for a yield issue? That doesn't make any sense. Chips that have failed bits get the broken parts marked bad via eFuses after production and the initialization logic or bootloader will then read the fuses and power/clock gate those bits and lock them in that state, via existing isolation/gate logic. That's how the entire industry does it. Metal patches are for fixing design bugs in a respin and stuff like that, not for turning things off in a finished chip. And you certainly wouldn't make a change in the line to the metal to disable half a chip from the get go. That's just throwing silicon away for no reason, why wouldn't you try building the full thing first and seeing what works? The entire concept makes no sense.


M1, M1 Pro, and M1 Max are all distinct dies, with M1 Max being quite a bit larger, with a larger cache, and more I/O compared to its M1 Pro sibling. The 24-core M1 Max, however, is a binned die of the 32-core M1 Max. There are lower core bins of M1 Pro that are only available on the cheaper 14-inch MacBook Pro. M1 is also available in two forms, one with a 7-core and the other with 8-core GPU.


Specifically, the M1 Max has twice the memory controllers, and each one comes with a cache block, and hence has twice the cache. It also has twice the GPU cores, an extra ProRes core, two extra display controller cores, an extra AVE (video encoder) block, an extra scaler block, an extra PMGR (power manager, including some PLLs and such) block and extra bus fabric branches to drive it all, and two things Apple won't talk about: an extra ANE (Neural Engine) (disabled on all shipping machines) and die-to-die interconnect.

This is all in the lower chunk of the die, such that they could basically cut the design on the dotted line for the M1 Pro (not literally, but almost; they definitely didn't redo much if any P&R, as the shared portion is identical by all visual and logical appearances).


The M1 Max doesn't have more I/O than the M1 Pro. All I/O blocks are localized on the top part of the two dies.

And as Hector said, the M1 Max is identical to the M1 Pro but with a bottom extension which provides more RAM interfaces, caches, GPU, media and neural engines, PLLs and probably all the logic for a die-to-die connection.

But this die-to-die area has been cropped out of the Apple picture presented in the keynote.

I don't think the M1 Pro has such die-to-die logic, but I haven't seen a real picture of the M1 Pro die.


By "I/O" I am including memory bus, not merely devices.


Not only that but the Apple GPU is around double the strength of the Intel integrated GPU.

Obviously the 3080 smashes the Apple GPU but that comes at a huge power cost.


Seriously why can’t Intel make a decent igpu? My understanding is that the M1 GPU is just “ok” and isn’t competing with AMD or nvidia?


Because for the target audience the Intel iGPU is "good enough".

For now: and for the cases where it isn't, Intel wants you to go for a dedicated graphics card. (There is a cost to stuffing more onto the same CPU package; chiplets reduce that cost by quite a bit, but are a new technology you have to integrate into your design flow, tooling, etc.)

In the future: similar, but due to some shifts in their approach Intel might reach for chiplets. Similarly, there are dedicated Intel graphics cards they want to sell to you.


The M-series GPUs are pretty great for what they are. I haven't tried a ton of games on it, but my 16" M1 Pro handles WoW about as well as a desktop with an RTX 2070 or so, and does it without the machine turning into a jet turbine. It's not destroying discrete video cards or anything, but it's universes beyond your typical integrated offering.


M-series GPUs feature a lot of ASIC and media encoder/decoders.

My M1 Air absolutely DESTROYS my RTX 3090 in h265/422 video editing, which is what my camera (Sony A7S iii) shoots.

My 3090 would stutter and play my 60fps footage at 11fps using 350W while my passively cooled M1 doesn't miss a frame.


this just means the M1 has a fixed function block for your codec or whatever else you're doing and the 3090 doesn't. or that your video software is crazy broken.

it obviously doesn't matter for your workload, ultimately the M1 is faster and that's all that matters for you. just highlighting that this isn't magic, just hardware matched to use case.


It also means that your hardware has a tendency to "strangely become super slow" when video codecs change over time.

But then, they don't change that often.

You'll probably change your Apple laptop more often.

And given that codecs are pretty much the only "high performance" use case Apple's target audience has to any great degree, putting special hardware in to make them go fast is the right way to go.


That must be new with the M1s, on the exact same project my 2018 Mac struggles (<5 FPS) while my 1080 rolls through ProRes 422 without any issue


I believe they are somewhere between equal to and a quantum leap over what Apple was shipping before depending on the task/machine.

But as we all know what Apple is/was shipping and the best you can get in a PC laptop are very different things.


Because of memory bandwidth limitations. There are EIGHT memory channels in the M1 Max. To pull off that amount of bandwidth you would need to do a lot of crazy engineering and get not that much in return, because this approach doesn't scale well beyond M1 Max performance. You can't just add more and more memory channels. Even if they slap two chips together and use the memory controllers of both, total bandwidth would still be less than in one 3090. And it will, most likely, lose to the 3090 in all tasks other than video encoding. And you wouldn't be able to put a lot of these chips together; too much overhead. But on the other hand, you can put 4x 3090s into one PC. In the end it is just cheaper to use a dedicated GPU past a certain level of performance, especially if you don't make your own hardware AND software.
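Back-of-the-envelope on that comparison, using the commonly cited specs (LPDDR5-6400 on a 512-bit bus for the M1 Max, 19.5 GT/s GDDR6X on a 384-bit bus for the 3090):

    # Peak theoretical bandwidth = bus width in bytes * transfer rate in GT/s.
    def peak_gbs(bus_bits: int, gts: float) -> float:
        return bus_bits / 8 * gts

    print(peak_gbs(512, 6.4))    # M1 Max: 512-bit LPDDR5-6400  -> 409.6 GB/s
    print(peak_gbs(384, 19.5))   # RTX 3090: 384-bit GDDR6X     -> 936.0 GB/s
    # Two M1 Max dies glued together: ~819 GB/s, still below a single 3090.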


The 10th and 11th gen Intel CPUs each brought a fairly significant boost to integrated graphics performance. I played Halo Reach and a bunch of other games on a 10th gen iGPU.

But it had stagnated since the 4th gen. Intel's 10th gen graphics were meant to go out with their first 10 nanometer chips, but that took them an extra few years.

All that said, yeah, I'd love to see even better integrated graphics. As others have mentioned, the RAM is a significant limitation.


Your understanding is correct, for the most part. Supposedly the new Xe graphics aren't all that bad, but I haven't used them enough to know for sure. AMD's still the king for iGPU performance, and Nvidia (as usual) takes the dGPU performance crown.


Yeah, Xe isn’t bad but it’s not anything close to what Apple is doing on the M1 Max or what AMD and Nvidia are doing on the discrete side for laptops.


Intel refuses to go beyond 128-bit memory which starves the iGPU.


Don’t they have their own new GPU that people expected them to announce around now but got “delayed”?


I don't expect Arc Alchemist to be that different than a GeForce 3070 or 3070 Ti. It's still going to be ~90W vs. a 60W M1 Max.


For one, the only power numbers in this article are from a benchmark where the i9-12900hk has a 29% performance advantage (and the power numbers weren't even measured under controlled conditions using consistent methodology). Second, you can't extrapolate from the numbers in the article to a fair comparison, since performance scaling with extra power is non-linear. You must compare the systems at iso-power or iso-performance, or even compare them over the whole power/performance range.

Probably a more fair comparison of the architectures will be between the M1X and the i7-1280p, since they have similar power specs. It wouldn't surprise me if the M1X comes out on top since it's on a superior process node (TSMC 5nm vs Intel 7) and on-chip memory is faster, but I think they'll definitely be comparable.
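A sketch of what an iso-power comparison looks like in practice, with hypothetical made-up sample points just to illustrate the method:

    import numpy as np

    # Hypothetical (power W, score) samples for two chips -- made-up numbers.
    chip_a = [(15, 900), (30, 1400), (45, 1700), (60, 1850)]
    chip_b = [(20, 1100), (40, 1600), (60, 1900), (90, 2050)]

    def score_at(samples, watts):
        p, s = zip(*samples)
        return float(np.interp(watts, p, s))   # piecewise-linear interpolation

    for w in (30, 45, 60):
        print(f"{w} W: A={score_at(chip_a, w):.0f}  B={score_at(chip_b, w):.0f}")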


As someone who used to despise Apple-- the M1 chip has been such a pleasure to use.

Completely silent, cold, and I haven't had any issues in my personal workflow.


I ordered an M1 Max at launch at work and use an M1 Pro, and I've been so impressed with the M1 Air that I bought... a $150 Chromebook. It's complicated.

My next mobile purchase (or build, I'm into cyberdecks lately) will have a different form factor display, like 4:3 or 3:2, and I hope Apple makes it. I really thought the Framework might be it until I used one, but it felt like they're only making laptops as an experiment in opex minimalism, and that they're planning to sell off as soon as they establish the value of the "overpriced semi-open eco-conscious DIY laptop" market.

I've talked two people into buying M1 Airs and one M1 Mac Mini from Costco on sale. I haven't recommended anyone buy a Mac since 2014, but these things are fantastic.


yes, the M1 Air is the best laptop since... the original Air? it might be the best laptop in terms of value for price ever. I've never had a mac before, but this thing is objectively the best there is for the price.


my experience with any intel mac is that they are louder than a jet engine and hotter than motorcycle exhaust pipes in texas

it's a very strange feeling, they actually burn so much that they itch


This article is benchmarking the CPU alone but measuring the wattage of the whole machine, including a dedicated 3080ti which is obviously not relevant to a CPU benchmark. It would make more sense to do this benchmark on a laptop with an integrated GPU.


Knowing Intel and PC manufacturers, you most likely won't find this CPU in a laptop without a dedicated GPU. I gave up the last time I was looking for a new laptop (I ended up with a MacBook Air), but it was surprisingly hard to find a laptop that had a beefy CPU but no discrete GPU. Lenovo had a couple of ThinkPads with some offerings, but they would make you jump to the Intel CPU with vPro, and that came with a huge price increase.


Yeah - I think it's a question of what the different target markets are willing to pay.

A lot of people want gaming laptops, so systems with big GPUs and big CPUs and not so great size/weight/battery life can be had relatively cheap.

But most people that want big CPUs and good battery life but don't care about the GPU so much are people who need the system for their job, so manufacturers can get away with pricing those systems higher.


Out of curiosity what jobs need a good CPU but also can't take advantage of a good GPU?

It seems like most of the high performance numerical computing software today is GPU-accelerated. If performance for your job is super important, why isn't your software GPU-accelerated?

I might just be naive but it's honestly hard for me to imagine why anybody would need a good CPU and not also want a good GPU.


I work on static analysis (points-to analysis) that does not use any GPU at all and is mostly single threaded.


Static code analysis on 100s of MB of code. Pins my M1 Max for 3 min, my 2015 for 15-20ish.


Because here in the real world software isn’t GPU optimized in many cases.


Since this is HN... compiling.


Good point, not sure how I didn't see that, I'm a software engineer too.


Compiling is IO/RAM bound. CPU has impact but not much for most cases.


Nah, I only see minimal difference when building on a ram disk (source tree copied there too) compared to a nvme SSD.


You clearly aren't writing c++ :D


Perhaps that isn’t available yet, or maybe no one will ship the high end Intel chip w/o a discrete GPU because they don’t think anyone would want that combo.


The charts show the idle power draw though (20W); couldn't you just subtract that?


the fact is, apple hardware is not _that_ expensive relative to what you get for the money.

Their supply chain is super optimized, and it's hard to beat them on price if a competitor decides to try (at the same quality that is).


What gives people the perception of Apple being expensive is that other OEMs play tricks to boost numbers on a spec sheet that the common person checks while cheaping out massively on parts that don’t show on the spec sheet.

I know someone who got an MSI laptop recently and it’s complete garbage. The screen casing cracked from the force of the hinge within months, the trackpad is borderline unusable, wifi is spotty, the paint on the keyboard started flaking off within months, and it’s a general piece of junk.

But on the store page it looks like an extreme bargain because it has the same RAM, GPU and CPU as much more expensive laptops.


> What gives people the perception of Apple being expensive is that other OEMs play tricks to boost numbers on a spec sheet that the common person checks while cheaping out massively on parts that don’t show on the spec sheet.

This is very real. I noticed that the Lenovo T14s has a noticeably better trackpad than the Lenovo T14, but you can't tell unless you try it out on both Notebooks as both are listed as "Mylar® surface multi-touch touchpad" on the spec sheet.


> What gives people the perception of Apple being expensive is that other OEMs play tricks to boost numbers on a spec sheet that the common person checks while cheaping out massively on parts that don’t show on the spec sheet.

Yep! Like with dell: non-standard power supply size, non-standard motherboard size. And with most of the companies: a motherboard that uses whatever the cheapest USB, sound, SATA chipsets were in the bin that week.

The boards are designed and reworked constantly by the factories...as opposed to Apple who pour enormous engineering effort into refining, testing, etc each revision.


This is very true!

At the same time, this can be important or not.

When I do my pc builds I pick the cheapest case as well, and I'm fine with it. Similarly I took non-Apple laptops all my life (up to 5 years ago, when I started coasting on employers' hardware) without many problems. I just couldn't justify spending 3k to have a laptop which performs as well as one for 1k, just for the better quality of the materials.


It's really the cost of capability and add-ons. I'm not sure about that MSI model, but you're able to get a ton of storage and a good GPU for under what Apple charges.

If you want one or two TB of storage and decent RAM you're looking at $3,000+ after taxes.


I think this still ties in to the point. People laugh at the apple user mentality of “I just know it’s going to work if I buy Apple”. But then you have windows OEMs where you have to do careful research because the product line is full of scams and traps.


As a user of a 15" 2018 MBP, Apple has had their fair share of scams and traps. They had no business stuffing the chips they did in that chassis. Especially the i9s. The cooling is woefully inadequate even for my base model i7 and I have no doubt that a $1000 gaming laptop with its "cheap" build with the same CPU would have better sustained performance.

Apple has also repeatedly sold several year old computers at brand new prices, in particular the 13" 2012 MBP that was only discontinued mid-2016, the 2015 MBA that they "updated" in 2017 and sold for another few years, the 2014 mini, the 2013 Mac Pro. What's worse is most of these were computers at the bottom of their price range. I had to steer so many people away from the attractive seeming base model MBP and convince them that yes, it worth spending $200 extra to get a machine that's 3 years newer. What other OEM literally sells several year old machines for new prices?

EDIT: Just remembered they still sell the 2018 i5 Mac Mini for the same price.

EDIT 2: Oh yeah and I just remembered they'd hide what generation Intel CPU you were getting and just say Dual Core 2.5Ghz i5.


That's true for all products these days, including Apple's.


If the exact combination of specs that you want is one of their handful of available configurations, apple hardware pricing is fine. But if you start from what you want to do, most of the time the cheapest mac that does what you want (which will probably be vastly overpowered in some area you don't care about) will be much more expensive than the cheapest PC that does what you want.


I think one thing that creates that impression is the cost of beefier options in the Apple store. Every time I configure a Mac and add RAM I feel ripped off. The price difference between the upgraded Mac and what you'd pay on Newegg is just so large. I'm still a happy customer, since I still get so much lasting value, but in the moment it certainly hurts.


It's Apple's way of price discrimination. They want to offer computers at every price point, but they don't want to build a dozen different models.

So their entry level machines are a great deal -- you get the quality of a 5000€ machine for just 2500€ if you can live with limited storage.

But if you want enough storage, you're paying the full price...

I'm not a fan of their policy, because the result is that lots of people get computers that are artificially crippled by low storage. But there's no way they could keep their entry level prices as low if they charged a fair price for upgrades.


> So their entry level machines are a great deal -- you get the quality of a 5000€ machine for just 2500€ if you can live with limited storage.

It sounds really weird to talk about 2500 EUR laptop as "entry level".

> They want to offer computers at every price point

They don't offer anything under 1000 EUR.


It's the entry level configuration of that model.

Apple is doing the same as car companies: Make the base price of each model as low as possible and then charge ridiculous sums for extras.


Apple charges what the market will bear (and they're guessing, mostly accurately, that people who want an extra 16 GB of memory would rather pay an extra $400 than switch to another laptop manufacturer). Claiming that they are selling their entry-level models at a discount is just silly; Apple has considerably higher margins than other manufacturers. I'm not sure what their exact margins on laptops are, but the 2500€ machine is closer to being a 1300€ machine than a 5000€ one (and since their margins on storage and memory upgrades are almost definitely above 100%, this gets worse the more upgrades you get).

Not that I'm trying to blame Apple; they've done a great job, and it would be irrational for a company not to take the free money. I'd do the same, and it's not as if their competitors are better: high-end PC laptops have similar margins, it's just that fewer people want them, so Dell, Lenovo etc. tend to sell them at massive ~50% discounts eventually (same with Android phones), which Apple never does, since they prefer to sell at relatively low volumes but with huge margins. However, their prices for RAM/storage are objectively predatory, and the only reason they can get away with it is because there is no viable competition; it has nothing to do with "subsidizing" entry-level Macs. If they wanted to, they could easily cut their prices by 500-1500€ across the entire 14/16" MBP range and still remain more profitable than all the other laptop manufacturers.


Average margins on Macs are high because they charge huge amounts for upgrades. But I really doubt that the margin on the 2500€ Macbook Pro is 50% as you claim. Maybe the high end has big margins like that (since they overcharge for RAM/SSD), but I really doubt the low-end machines have margins like that.

I own one of the new Macbook Pros, and the build quality of them is fantastic. It's the sturdiest computer I've ever seen. It's a huge jump from their previous models, it's even better than the 2015 models, which were already very good. I've opened it up to look at the internals, and there's so much material everywhere. No wonder it's heavier. Even the base plate is a lot stiffer than in the previous models.

Now, I must concede that I'm not too familiar with windows laptops, but I've never seen anything remotely comparable to this. If you could point me to a 1300€ windows laptop that's even close to the Macbook Pro in build quality, I'd love to hear about it.

And that's not even mentioning how fantastic the display and the speakers are, because they are unbelievable.

The only thing that's cheap about the new Macbook Pros is the camera, which unfortunately is still a piece of crap, just with some machine learning algorithms applied (they don't help).


My point is that Apple is not subsidizing lower end models by selling overpriced ram, if they were fine with the margins other laptop manufacturers have, they could sell their products much cheaper (they could still cut their prices significantly and have considerably higher margins than others) but they don't, because they don't have to (their products are good, there is a huge demand for them and they don't have as much competition as PC makers).

Obviously I don't know Apple's exact margins; they are probably lower than 50%, but not by much. However, "But there's no way they could keep their entry level prices as low if they charged a fair price for upgrades." is simply not true: they obviously could do that if any of the conditions I mentioned changed, they just don't, because like any other company Apple seeks to maximize its profits.

The Dell XPS, ThinkPad X1, Razer Blade and Surface Laptop lines have pretty decent build quality; of course they all have inferior CPUs which use more power and generate way more heat compared to the Mac. (I didn't say you can get one for 1300€, just that other manufacturers try to price their premium laptop lines at a similar level to Apple but usually end up having to sell them at significant discounts.)


Note that the PC that pulled off these numbers cost almost double what the Mac did. $2500 vs $4000. It includes the GPU (not cheap these days) but still a big difference.


The PC comes with 32GB of RAM and a 4TB SSD. A similarly specced MacBook Pro costs $4100, not $2500


Realistically, I think that the argument of Apple being way more expensive is simply now outdated. It was true in the past, but I don't think it's the case anymore.


It's still true outside the US.


Apple's base spec is super price competitive -- no $1000 laptop is even in the same universe of quality as a MacBook Air. But it's when you start upgrading the memory and storage that Apple really takes you to the cleaners.


Yeah but it's locked down, unupgradable future e-waste.


I would argue the opposite, still using my 2012 15” MacBook Pro… a decade later. Go to Walmart and look at the laptops on display, and tell me any of those are capable of lasting nearly a decade.


That’s not true of the current Macs though.

I still have a 2011-era MacBook that I can use, and it was very pleasant to use until an OS update a couple of years ago.

But that was because I was able to upgrade the RAM, the SSD, and replace the battery.

None of those things are possible with macs from the last few years, so their longevity is much more likely to resemble phones than older macs.


I'm not arguing for or against, but we've hit a fairly stable period of diminishing returns on specs.

We've solved the storage latency and throughput bottlenecks, scheduling, and CPU core counts, so we don't really need beaucoups of RAM for a functional laptop.

RAM bandwidth doubles with each generation and throughput and latency improve, so less is needed for the same performance.

Batteries are relatively inexpensive to replace through the vendor and have improved in energy density and cycle life, so there's not a big advantage to buying aftermarket even though you can.

For 2014-2016 Macs with flash storage, they're still pretty great to excellent machines. The greatest detriment to the longevity of 2017-2019 Macs is that they contain Intel components, and that every release of macOS since High Sierra has had to be designed adversarially around their security flaws. Costco was practically giving away Intel iMacs recently, offering them almost half off.

Also, I remember benchmarking on apfs post-mitigations and seeing up to a 40% performance loss on some syscalls, which translated to an unusable computer on dual core Intels or anything with mechanical storage. How would upgrading components have helped then?


> RAM bandwidth doubles with each generation and throughput and latency improve

RAM bandwidth is improving a lot faster than its latency. (The latter is almost as stagnant as max CPU clock frequencies.)


Apple is rather good at planned obsolescence. Intel handed them a few years of unusable machines on a platter (thanks Intel) but I’m sure they’ll find new reasons for MacOS updates to degrade performance on old machines regardless of hardware capability.


They lead the industry in how long they provide OS update support for phones/tablets and desktops/laptops.

They downclock the CPU on mobile devices as the battery ages, to keep the CPU from browning out so your device doesn't shut down unexpectedly.

My system, made in 2013, won't be supported by the next OS release this coming fall, but they usually continue security updates at least one release behind. That's basically ten years of hardware support.

Keep in mind that Windows 10 came out in 2015. A ton of manufacturers simply did not support Windows 10 on hardware sold before it, because Win10 coming out helps them move new PCs.


Linux and Windows both support most hardware for longer than Apple does. It may not have a manufacturer stamp of approval, but Windows 10 supports CPUs back to some Pentium 4s with full security updates. That's a decade farther back than Apple.


Linux leads the industry in support, and additionally, in openness.

Apple leads the industry in using slave labour to create disposable fashion items.

https://9to5mac.com/2021/05/10/seven-apple-suppliers-alleged...


The secondary market simply doesn't agree with the weird obsession with calling Apple's hardware future e-waste just because you can't change out an SSD.

Mac hardware holds value because it's well built, supported for a long time, has specs that are enough for a wide range of users...

-

For every power user running through battery cycles every day, there are people who just need a solid, well-built machine and won't feel the need to upgrade just because their machine gets less battery life, or a given IDE needs more RAM.

Those people don't replace hardware because it's broken, they replace it because it feels old. Design wise, materials wise, etc. Not having crummy bloatware slowly accreting gigabytes of junk and a dozen extra startup items.

Compare a sub $1000 M1 Air to any other sub $1000 laptop coming out today. You'd be incredibly hard pressed to find anything even half as well designed... that's how you get people to keep their hardware.


I bought a Dell 3410 for $600 and upgraded it for another $200 to 16 GB RAM / 500 GB SSD. It's a perfect little beast. An Air with those specs would be around $2000 in my country.


I bought my 2013 MacBook Pro with a 1TB SSD and 32GB of RAM, with the highest specced i7 at the time.

I sold it last month to pick up a new 2021 MacBook Pro M1 Max. I bought it with 32GB RAM and a 1TB SSD because I don't need any more, 8 years later. And the i7 I had still beat out last year's base model on Geekbench.


Not to take anything away from how long you got to use your computer, but you couldn’t get 32GB of RAM in an Apple laptop until 2018. You likely had 16GB of RAM in 2013 if you had a 15” retina MacBook Pro.


You're correct, just checked my receipt, it was 16GB. I upped it to 32GB with my new laptop.


Still using a Note 8. This phone is awesome. I forget how many years...

I would be OK with my M1 Air performing in a similar way.


This is a decent example of what folks are talking about in terms of supported lifespan. Your Note 8 was EOL'd in October 2021. It came out in September of 2017. That's four years of support.

My iPhone came out around the same time and Apple is planning to support devices several generations older than mine with iOS 16...


I am very likely to use this phone another three years.

Just saying.

Otherwise, yeah! My wife has an older iPhone still nicely supported, and older than my Note.

My last Mac was a 2012 i7. I used it 6, 7 years? It is still useful, but it got Coke on the keyboard! I fixed it myself last time. Not doing it again.


What if there is a debilitating security problem with Android within those three years?

I wouldn't choose that kind of uncertainty.


Depends on how one uses the phone for starters. I may ditch it, I may not.


I used a 2011 Core i7 Asus N53SV till last year - full HD and 4 DDR3 slots. I was able to run the latest Linux and the latest Windows on it. Apple meanwhile has dropped support for 2014 MacBooks. There are apps which won't run on them today, and Apple will stop supporting Catalina this year.


To act like your experience extrapolates out to the wider market is disingenuous at best.


hmmm thinkpad?


Completely false.

If you do any volume purchasing, the resale value of old Acer or Asus laptops is so bad they really do become e-waste quickly. Folks with Macs use them forever. And the resale market is MUCH better for Apple.


With the CPU change I suspect OS support will be dropped in a couple of years, and resale values will drop. My friend had a pretty expensive G5 PowerPC that wasn’t worth much and had an end of life much sooner than the hardware should have had.


I'm curious how Apple's competitor Microsoft is handling ARM - I actually always thought Apple did a MUCH better transition / management of these types of tech variations.

Can I really run everything Windows runs on Windows ARM offerings? Be good if MS has solved things this time around. Some of us still remember Microsoft's "Plays for Sure" campaign against Apple.


Someone will buy an old MacBook, good luck ebaying your old Lenovo…


You're underestimating the hobbyist Thinkpad community.


I’m not, I’m sure for every modder that puts a new motherboard and mech keyboard in, there are 100000 that just get pitched


They're also not purchased at the volumes that Macbooks are, I do think you're underestimating how well they hold their value. You picked an unfortunate laptop line to make your point, especially since this is in no way a recent phenomenon. I'm sure people chuck away their Macbooks instead of reselling them too.


More like put it in a drawer. I question the notion that laptops are thrown away as opposed to put in storage. Especially when they’re used for sensitive information and not normally wiped if they’re not used for resale.


Both my wife and I had 2011 MBPs that were only replaced last year. I upgraded the RAM & replaced the HDDs with SSDs over the years, which is reasonable imo.

I'd say 10 years is a pretty good run for a laptop. YMMV


Intel know Apple has the ability to strap two M1 Max dies together, right? Right? It's all over the architecture and drivers. It's designed for that. The damn interrupt controller has an unused second half. Rumors are it'll be used on the iMac Pro, but who knows, they can throw it into a 16" laptop if they really want to, it'd still be quieter and lower power than the old Intel MacBooks.

Do that, same wattage, less than twice the price, twice the performance. Now what, Intel?


uhm, twice the chip twice the power usage if used for twice the performance...


>3.5%-5% increase in performance at more than double the wattage

The 100 watts mentioned in the article is the whole laptop power. Google tells[0] me the power consumption of the i9-12900HK is 45W with boosts up to 86W.

Intel's 12th gen i9 is built on 7nm, Apple's M1 Max is built on 5nm. The rule of thumb is about a 30% power saving at the same performance for a new node.

So assuming the M1 Max draws 40W at max performance and the i9-12900HK draws 86W, adjusted for process node we are looking at 40W vs 60W.

[0]https://wccftech.com/intel-core-i9-12900hk-alder-lake-is-a-p...
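Spelled out as a minimal Python sketch (both wattages and the 30% factor are the rough figures above, not measurements):

    # Back-of-the-envelope node adjustment; all inputs are rough estimates
    m1_max_watts = 40         # assumed max CPU power for the M1 Max
    i9_12900hk_watts = 86     # reported boost power for the i9-12900HK
    node_saving = 0.30        # rule-of-thumb saving per node shrink at equal performance

    adjusted_i9 = i9_12900hk_watts * (1 - node_saving)
    print(f"{m1_max_watts} W vs ~{adjusted_i9:.0f} W")  # 40 W vs ~60 W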


That's just marketing fluff in terms of measurements. Most agree that in terms of gate size, Intel's 7nm is similar to TSMC's 5nm. So you could say that Intel moving to 5 would give them a 30% boost, but then TSMC goes to 3nm.


Intel renamed their nodes: "Intel 7nm" is what was previously called "Intel 10nm". So the node numbers are now (semi) comparable between Intel and TSMC.


To be clear, the node formerly known as Intel 10nm is now called Intel 7. There's no unit attached, which is probably for the best, as what was being measured in nanometers previously was super subjective and varied per manufacturer.


I was reading a lot of reviews of these Alder Lake chips; either those reviewers are paid or simply Intel fanboys, because they all seem to downplay the power consumption and give Intel glowing reviews (and quickly skim over the power issue). I would wait until year end and see how 13th gen vs Ryzen 4 battle it out.


>3.5%-5% increase in performance at more than double the wattage

Which would be throttled to death at that point on a nimble laptop anyway!

Why do they even bother?

>It's also a _little_ bit funny that the i9 laptop was double the price, given people usually rant loudly about how expensive Apple computers are.

They usually don't do an apples-to-apples comparison, else the price differential for similar features, construction, etc., plus some uniques that you only get from Apple, is either zero or around $100-$200 when you pile in the extra options.

Instead they check things like how an extra 1TB of SSD costs X on the Apple Store but half or less on the street...


Not to mention that the M1 can run at those speeds on battery. The Intel certainly would need to be plugged in to beat it by 5%.


Yeah, even AMD with their 5800X3D can't keep up.


At least this CPU can run more than a single OS.


Give it a couple more months and Linux will boot on Apple Silicon. Give it a couple of years and Microsoft will wisen up and start licensing Windows for Apple Silicon too. Then we'll be back to where we were before the architecture switch.


Linux already boots on Apple Silicon and has for almost a year; give it a couple more months (if that) and we'll have a reasonably user friendly installer for it and enough hardware working to be worth using ;)

(Basically I'm going to make battery stats work, polish up the install process, re-do the CPU frequency scaling driver, and ship it)


It’s obviously not valid for all use cases, but Windows already runs great in Parallels. If I give it half the CPU cores and RAM of my M1/16GB Air, it benchmarks around the same as an Intel Quad Core 13” Pro.


Intel is a process node behind

When you normalize for that, isn't the power usage roughly equal?


What is your point? Intel failed to execute on their own Fabs? That intel needs to use TSMC's fabs to get parity? That Intel can't get access to TSMC's best fabs? The end result is the same. Intel has chips that are 3 years behind in power efficiency.


Well the point is that they're neck and neck while being a fab behind. If the fab catches up they might be greatly ahead. It's a big if, but it adds context.


The power consumption is a critical issue on a laptop, unless you're one of those laptop users that never works on battery in which case you could get a lot more bang for your buck with a desktop.


LISTEN YOU GEN Z PUNK, YOU DONT KNOW ABOUT THE REAL TIK TOK

https://en.wikipedia.org/wiki/Tick%E2%80%93tock_model


Too bad Intel dropped the tick-tock model years ago.


What reputable chip analyst expects Intel's fabs to catch up to TSMC in the next 3 years?


The point is obvious.

M1 is not more powerful due to a better design, but simply due to more modern manufacturing process.

Otherwise you would see a much larger gap in performance.


So it's merely equally as good as the previous market leader's flagship product in every way except lithography, where it exceeds them?

Are you implying that's not something to be impressed about just because lithography is not a property of the architecture?


Your comment is unclear.

Apple's contribution to M1 was the design. TSMC built the next gen fabs, and gets credit for the manufacturing.

You can't compare viability or success of the processor design when you compare across fab generations. It's apples to oranges.

You can state that M1 is better, which is true, but it doesn't follow at all that it's due to the design component.

It could very well be that moving Intel's design to TSMC's fab would lead to it running circles around M1

I mean this is pretty obvious stuff, but these threads always end up full of people that don't understand hardware very well


> It could very well be that moving Intel's design to TSMC's fab would lead to it running circles around M1

Or not. There is no reason to believe Apple design has not been a major contribution to the performance. Sure TSMC process is a major component but to quote yourself it does not follow that that explains all of it. It could very well be that a hypothetical Apple design to Intel fab would still run circles around current Intel chips (or not). What we do know is system architecture is sufficiently different, and that it is likely to have a material impact on performance.


That’s just not true. The RISC instruction set has a growing advantage in performance per watt. There’s a reason no one is producing low-power x86 chips.


Surely it's a shrinking advantage? x86 CPUs have been microcode based for a long time, and the hardware to convert x86 instructions to micro ops is a relatively fixed cost that diminishes over time with process shrinks?


Can you really separate the “design” from how it’s manufactured?


Intel famously followed a "tick-tock" development cycle where they would move their existing designs to their next gen fabs, before updating the design.

So yes, you absolutely can separate them, and that's how it was historically done. Though Intel fell behind on this philosophy.


And those transitions were not seamless. Intel controlled the whole stack so they made it seem like it was. You obviously can’t separate design and manufacture at this level and scale, to even suggest it displays a massive lack of understanding of how this space operates.

If you don’t know how the space operates, why are you commenting?



Why would you normalize for that? The subject at hand is the product performance. Could Intel's design turn out better in future generations? Perhaps. I think to consider weighing which process node was used for the CPU changes the subject to Intel's prospects as a CPU designer/manufacturer.

If we normalize for Moore's law then historic processors would compare much more favorably.

In fact it's a bit unfair to Apple to compare this sample unit to last year's production model.


As a consumer, I would not normalize for that.


I don’t know about equal. It would certainly be better.

But the problem is that’s hypothetical. The article is comparing products available today. And that’s what Intel has to compete with.

Intel touted this as their “answer” to the M1. They may have overplayed their hand. Maybe should have held that marketing point for next year.


Who cares? Better is better.

I don’t care why something is better as long as it doesn’t require human sacrifice, just that it’s better. And the M1 beats the snot out of anything x86 without qualification. Add on that you can get it in a Mac Mini for $600 and ding ding ding Apple wins.


But the Intel chip is a fab generation behind. Not apples to apples


Ok, it's Apples to Intels then. :)


Go and try to build your own rig with the M1. I'll wait right here for your report on its performance and power usage. (Not picking on anyone, just annoyed at the incessant power usage comparisons between apples and oranges IMO)


I have a M1 in an entry-level Macbook Air I bought for $1000. Is Intel really having to compare its top of the line i9 to this chip now? Wow, I knew they were a bit behind but this is incredible. Price-wise, like you said, i9's are expensive.

Worse, the Microsoft ARM SQ1 in the Surface Pro X delivers about 1/3rd the benchmark performance of the M1. I know synthetic benchmarks aren't very meaningful, but we have some X's at work and I've played with them, and they are all unusually and annoyingly slow while my Air feels like I'm using a high-wattage desktop.

The value of the M1 Macbook Air is pretty wild right now. The only thing keeping Intel's stock price above water is that Apple will never license this out to Windows laptop makers and Windows has monopolistic lock-ins so not everyone can just switch to Apple, especially businesses, so Intel is safe in the Windows world for now.


No, we're talking about Intel i9 vs M1 Pro/Max which is much more expensive.


> During the Cinebench R23 multi-core test, the Alder Lake laptop was consistently in the 100-watt range, [...] M1 Max’s power draw was 39.7 watts

Ouch.

> MSI GE76 Raider got 6 hours of offline video playback, a far cry from the MacBook Pro’s 17 hours.

Double ouch. That's a huge gap, and 6 is low even in absolute terms.

It's still a nice beefy chip, but still a far cry from the engineering marvel that Apple pulled off with the M1


100 watts, so basically I have no reason to use this over a desktop. The battery life on the m1 macbooks is a game changer. I don't need more power out of my laptop I need it to be cool to the touch and last at least 9 hours with normal usage.


With those laptops, we're back to the days of the luggables. It's a desktop you can carry, with some caveats in terms of performance related to the choice of parts.

Apple is truly playing a different game.


Apple in-housing their chip design was a brilliant move. The A-series processors were a good indication what was coming up, but still the M1 blew me away.

Still waiting to see what they'll do with Mac Pro + M-series. Do they go for a M2 or just slap a dozen M1s in parallel =)


With an RTX 3080 Ti though. So yeah - I don't think it can match Apple M1 efficiency, but reviews of P-series and U-series Alder Lake mobile CPUs are going to be interesting, and that's what I am looking forward to.


> During the Cinebench R23 multi-core test, the Alder Lake laptop was consistently in the 100-watt range, with spikes between 130 and 140 watts. We haven’t tested the power draw of the M1 Pro/Max ourselves, but AnandTech did using Cinebench R23 and found that the M1 Max’s power draw was 39.7 watts versus over 100 for the 11th-gen MSI GE76 Raider.

Cinebench is a pure CPU test though, so I feel the comparison is more or less apples-to-apples.


I was talking about video playback tests. Unless they disabled discrete GPUs during the test, a laptop using a 3080 Ti for video playback is not going to be efficient.


This article is about the Intel comparison but isn’t that one Nvidia’s fault?

We all know GPUs burn watts when being heavily used. But when a laptop GPU is just showing a video shouldn’t it be able to scale down pretty far?

If I’m just surfing the web or working in Word I wouldn’t want my laptop GPU using 15-20 watts.

Surely it can get down to like 2 watts right? Or less?


2 watts total? I doubt it. The display is a major drain. I went down this rabbit hole years ago and on a 2012-vintage X220 with an i7 I could get it down to just under 6W with the display on minimum brightness, the wifi and Bluetooth off, and a host of configuration options including but not limited to what's available through Powertop and TLP.

I'm sure it's better than 5W and change nowadays but the display is by far the biggest draw and I don't think you'll get down to 2W.


Ah yeah, maybe GPU only. Don't the MacBooks with the onboard discrete GPU have an integrated Intel GPU as well? I'd assume those ones physically power off the big daddy card when it's not needed.


They used to. That’s how they worked but it ended up more trouble than it’s worth. I think they stopped trying to switch it on/off during the Intel era (not sure) but there are no discrete GPUs in the M1 era so far.


Oh, I meant GPU only. Sorry.

Yeah a well lit display can be a huge drain, but of course I can control that. I would hope WiFi/BT don’t use much relative to CPU/display, but I can’t claim to know.


It would’ve been nice if they specified but it was probably using the Intel iGPU for video playback. Those results would line up with last year’s laptops in integrated mode. Even with comparable hardware the MacBook is literally 3x as efficient.

In my experience if the dGPU is on at all it won’t get 6 hours of anything.


That's fair, I missed that. In that case it sounds like the comparison itself was flawed by comparing two devices in such different perf envelopes.

Nevertheless, it's not a great first showing.


It’s an attempt to show top of the line CPU performance. To get that you need a very power hungry setup on the Intel side and they only win by single digit percentages.

Intel’s lower power chips may be able to match/beat the M1 range’s power draw, but one would assume they can’t keep up that level of performance while doing it.

Apple seems to have the best of both worlds at the moment. It can perform very well or go really low power with a single chip.


It honestly depends on the benchmarks you're using. Dave2D posted multicore Cinebench scores[0] the other day that compared M1 Max to this same laptop, and it was really a blowout.

Apple's foray into ARM has been going better than it could have gone, but recommending an ARM chip still comes with several asterisks, which make it kinda hard to recommend for people who don't make a living out of reading/writing words on a screen or highly specific creative work that may or may not benefit from the architecture of the chip. Regardless, they've got their work cut out for them, and it should be interesting to see how Apple responds.

[0] https://youtu.be/VHUF8A2vpos?t=423


> Double ouch. That's a huge gap [6 vs. 17 hours of "offline video playback"], and 6 is low even in absolute terms.

That test is measuring backlight consumption, not SOC efficiency. It's a huge display tuned for gaming vs. Apple's famous system power optimization on a panel about half the size. All CPUs on both systems are effectively at idle here, it says nothing beyond the fact that MSI isn't going to (or capable of, frankly) spend their time optimizing idle power draw on a gaming rig.


The MSI GE76 Raider's 3080 Ti is nothing Apple can match, so there are other factors than just wattage and battery life.


You’re right. It was what, 2x what the M1 Mac did? If you need GPU you can’t get it on the Mac side.

On the other hand it’s needed. The Intel integrated GPU was 1/2 what the M1 Max did. Apple could theoretically add a discrete GPU too, but they won’t.


About twice the price too.


A lot of people are misinterpreting the results here (understandably, the article is pretty bad).

They're benchmarking the CPU, but measuring the power draw of a whole laptop - including a dedicated 3080ti GPU which will still be drawing a decent amount of power even at idle.

This benchmark is useful for measuring the speed and power draw of /this specific laptop/ vs an m1 MacBook, but not so much this new i9 in general.

Laptop CPU performance also varies massively even across the exact same chip depending on thermal design, so this chip on average could potentially be either much slower or much faster than this benchmark suggests.


Can confirm those video cards draw a lot of power. I have a System76 with an Nvidia 1060 (or some such). I can turn it off and my battery life improves quite substantially. It doesn't matter what applications are running, it draws a lot. Maybe it's a Linux problem.

On the plus side with that card and the machine plugged in it actually plays steam games quite well.


I’m guessing this is the first laptop Anandtech could get their hands on with this new chip.

That said, I've been out of the PC space for quite a while. Do mobile discrete GPUs draw much power when mostly idle (such as general desktop work)? Did others give up on switching the discrete GPU on and off as needed, as Apple used to do (and gave up on, IIRC)?


I believe there are laptops you can buy that do that well, but it's not universal.

Software support in Windows and Linux for swapping between a low power integrated GPU and a dedicated one on demand was rather spotty last time I had a system like that, but things might have changed since then (~6 years ago).


I had one of the Apple ones and while it worked well the software was spotty there too. It tried to guess when the discrete was needed but turned it on far too often. Which quickly made it hot and killed the battery.

Since I rarely gamed I ended up using a 3rd party utility to force the discrete GPU off 98% of the time and the laptop was much better for it.

I understand why Apple did that (power draw when GPU on) and why they gave up (didn’t work well at all).


>That said, I've been out of the PC space for quite a while. Do mobile discrete GPUs draw much power when mostly idle (such as general desktop work)? Did others give up on switching the discrete GPU on and off as needed, as Apple used to do (and gave up on, IIRC)?

I recently purchased and returned a Lenovo P1 Extreme Gen 4. It had an 11th generation i9 and a 3080.

With no external monitor attached, the 3080 power consumption in HWinfo was 0 watts doing basic tasks (e.g. surfing HN). This was plugged in or on battery.

If I plugged in an external 4K monitor, the 3080 power consumption was ~23 watts. A few watts less with a 1080 monitor. I think 27W if both were plugged in.

I think a lot of laptops can basically turn off the discrete GPU when it's not being used. Not sure about gaming laptops. There's some quirkiness to this, with the GPU not always turning off when you unplug an external monitor - a reboot is required.


Thanks for the numbers. I'm glad to hear it can turn off, but I will say I'm surprised its minimum draw is so high when it's on.


In hindsight, one of the thing I didn't check was the GPU draw at idle (with a monitor plugged in) when the laptop was in Balanced Power mode.

Typically if I plugged a monitor in I also had the power cord in. And with the Power plugged in, I had the laptop set to highest performance mode in Windows.

There is a balanced mode which is more conservative with power consumption. I wonder if the GPU idle draw would have been lower.

The laptop I used also had the external TB4/USB-C and HDMI ports wired to the NVIDIA GPU. This means that external monitor = GPU on. I wonder if there are some laptops that can use the iGPU for basic tasks, and only switch to GPU as needed.

There are also laptops with less powerful discrete GPUs that might draw less power at idle.


Based on The Verge's article, this is the laptop that Intel is providing to everyone.


I hadn’t seen that. Thanks, makes sense.


> They're benchmarking the CPU, but measuring the power draw of a whole laptop - including a dedicated 3080ti GPU which will still be drawing a decent amount of power even at idle

The charts in the PCWorld article show the idle power draw at 20W, so couldn't you just subtract that?


I imagine the GPU also draws lower power during idle, so they would both scale at undefined rates when benchmarking


I don't have the data for 3080TI but my GTX1650 consumes between 1-2 W in idle mode. Of course TI is a totally different beast.


I switched about half a year ago from a MacBook Pro with an Intel processor to MacBook Pro with an M1 processor. My overall impression was "when the hell will it run out of power"? I spent the whole working day (without a charger) and it was still at 60%, while I was constantly stressed before when I had to do something without a charger for more than 3-4 hours.

I don't care what any test says; tripling battery life with no visible performance degradation is a huge win in my book.


I replaced an older MBP with the M1 air.

It ran circles around the old machine despite being the lowest spec. Battery is light years ahead. And it doesn’t even have a fan. So it’s dead silent. Never gets hot unlike the old Intel. There isn’t any mechanical component to fail except the hinge and the keyboard.

I was worried I might regret not waiting for the higher spec M1s. Now that just seems totally unnecessary for me.

PS: Work later gave me a high spec 2019 Intel MBP. It’s slower, louder, and much hotter than my Air. And at least 2.5x the price. Amazing.


I had a 2019 Intel MBP. I hated that machine with a vengeance. It was always hot, and because of that always throttled. I live in a hot place (Israel) and during the summer of 2021 the heat caused a massive expansion of the battery which destroyed the laptop.

I went for an M1 in late 2021 after using a Windows machine for a few months and hating every moment of it.

The M1 is simply incomparable. Fast. Silent. It feels futuristic.


Part of it might be new battery effect too. I just popped a new battery in my 2012 and just doing light dev work I could stretch it to 8 hours or so. This thing was doing like 3 hours on a good day before.


Certainly that plays a part. But my M1 Air is now over a year old and I can still easily edit 4K video in iMovie for a few hours and see the battery only drop from 100% to 70%.


Pretty huge! Don't expect miracles long term though. Batteries are still batteries, M1 or not, and if you do 4K video editing every day, you will notice wear over time. What sucks about the new Macs is you need to take it in to replace a battery, whereas you used to be able to do it yourself in like 2 mins or less.


I don’t find the same. Chrome seems to eat the battery a lot in my experience.


Yeah, both Chrome and Firefox will easily reduce the battery life of my M1 Pro mac by ~30% compared to Safari. I've "solved" this by sticking to Safari when battery life matters (ex, when I'm not at home) and using FF when I'm docked or close enough to one. With Safari, even my 14" gets ridiculous battery life: I spent almost two continuous days with ~6-8h of SoT each day without charging it once throughout.


I deeply regret getting only 16gb ram in my air for exactly this reason.


I got an 8GB Mac mini. I was broke at the time but needed a replacement in a hurry, and even though my two previous machines had 16GB, 8GB should be fine for web dev work, right?

Yeah, I regret it. I like to have YouTube videos on in the background, and usually it's fine, but for some reason live streams in particular just gobble up RAM until there's sometimes skipping audio and noticeable waits when switching apps.

Interestingly hiding the chat seems to help; I wonder if the YouTube "app" isn't flushing those DOM nodes corresponding to chat messages off the page after a certain amount of time or something. When worst comes to worst, Streamlink comes to the rescue: https://streamlink.github.io

Oh well. Making do for now. And as others have said, the performance (when not RAM-constrained) and noise (or lack thereof) have been blissful.


Can you elaborate?

I don't have a MacBook (but have been considering getting one), but I can't imagine that you need more that 16 GB of RAM to run a browser, no matter how memory-hungry Chrome might be.

Is this really an issue? Maybe you're trolling?

I don't understand how adding more memory is going to fix your performance. It's not like you're spinning up a HDD to swap.

What do you regret?


It depends on the number of tabs you have open. I have 106 tabs open in FF and it takes up RAM. Activity Monitor lists it at 3.28GB though.

Chrome has separate processes and fewer tabs on my machine. Still summing them up seems to reach 3gb.

IntelliJ takes 6GB on my machine, so it's the biggest individual user of RAM.

I think parent post might be exaggerating but I have the 64gb M1 Max.


The system really starts to bog down with 15-20 tabs open for things like Jira, Roam, email, etc. CPU usage will be very low, but when swapping is in effect the system slowdown is really noticeable. I don't see the same issue with Safari, but of course Safari does not have the extensions I rely on.

I love the Air but I'm excited for my 64GB MacBook Pro arriving in a month.


On my MacBook with an M1 Pro I was able to work for ~11 hours without charging, and on a ThinkPad P14s Gen 2 AMD (5850U CPU) with a 4K display I can work for ~7 hours. So the difference is not that big TBH.


An additional 4 hours is massive.


I know everyone is going to jump the gun and declare how mediocre Intel's offerings are in terms of power efficiency, but let's not forget that this laptop is also running an RTX 3080 Ti, and historically squeezing 5% more performance out of Alder Lake CPUs requires way more power. The same thing happened during the 12900K reviews.

IMO it would be interesting to see benchmarks of P and U series laptops. Honestly I do not think they are going to beat the Apple M1 yet - but having a 14 or 10 core CPU in a <45W envelope will be interesting.

I do not think Intel is even after Apple yet but they look to have caught up and surpassed AMD with this release for sure.

For the enthusiast in me - who likes a laptop that can run Linux without problems - an all-Intel 14-core laptop with a <45W power budget would be great.


There is no Intel 14-core laptop with a <45W power budget at the frequencies this benchmark was performed at.

i9-12900HK consumes 115W of Turbo Power (PL2) [1]

At 45W it can only do 2.5Ghz ( literally half ) on P cores and 1.8Ghz on E cores.

It would be really interesting to see how this processor performs with Turbo Boost disabled, which is a really easy thing to do.

[1] https://www.intel.com/content/www/us/en/products/docs/proces...


> There is no Intel 14-core laptop with a <45W power budget at the frequencies this benchmark was performed at.

I didn't say anything about frequencies. :-)

But there is https://www.notebookcheck.net/Intel-Core-i7-1280P-Processor-... which is a 14-core P-series model with a power envelope of 28W. I would personally take 5-10% lower performance if it results in decent efficiency.


> I do not think Intel is even after Apple yet but they look to have caught up and surpassed AMD with this release for sure.

Such a beautiful sentence to read. Love the actual competition these days.


> I do not think Intel is even after Apple yet but they look to have caught up and surpassed AMD with this release for sure.

Hmmm, as I've said before on other reviews around 12th gen, it feels like they've just pulled some levers around the power/thermal envelope being targeted (at least on i9s?). Not really much more they can squeeze out of it that way...

I'd love to see some competition, but this feels like such a short term way to get "back in the game".


Whether you're an Intel fan, an Apple fan, or just anti-Intel (or anti-Apple), I feel like there's not much effort here to see the complete picture.

First, yes Apple Silicon M1 (any variety) is, with 100% certainty more efficient than Intel's Alder Lake 12th generation CPUs. And the performance is often better than Intel's best for specific workloads.

Second, you can look at just a few benchmarks and pick winners. You can look at maximum power draw and pick losers.

Or you can look at a lot of other things. For one, Alder Lake is a big improvement over their previous "10nm" aka "Intel 7" CPUs, both in performance, and in efficiency. But they were coming from a pretty big deficit. Second, Alder Lake seems to be able to draw a lot of power out of the box. But, it also performs almost as well when limited somewhat. For more interesting benchmarks, see how it does limited to 75W[0].

If you want great battery life, and MacOS works for you, there's no reason to join this debate. You win, you get the M1 and all the goodness that comes with it. If you've got a very specific job that requires maximum performance, you really need to see how the performance is on that task, and whether you can afford to trade off on battery life, or just need limited portability.

Of course MacWorld is not incentivized to color Intel in the best light, and neither any Mac faithful. But it's also weird to ignore the massive improvements here over Intel's 11th gen. It's technology and engineering, and it's a better showing than we saw out of Intel for quite some time. They spent the early part of the previous decade making tiny improvements to their same 4 core CPUs over and over. Then they spent the second half making mistakes in their process nodes. Alder Lake isn't perfect, but I appreciate it for what it is.

[0] https://youtu.be/Ur3Y2vxpTWo?t=345


Yea the old narrative of "ARM is power efficient, but doesn't actually run real desktop-level stuff at that power level" is pretty dead at this point.. it now runs desktop-level stuff at 3x lower power than Intel's latest chips.. and the potential for Apple's chips to keep improving at that same power envelope is definitely there.. so...


Yeah, ARM is fine, as long as Mac meets your needs. No other ARM is close, and as a PC hardware platform, it’s still a giant pain compared to x86.


Seems like Graviton may be there, but of course Amazon isn’t going to sell those to anyone.


No but Graviton is a modified Neoverse which others do sell - eg Ampere


If there was a way to put Altra Max into PC, it would rip and tear threadripper apart.


The ARM ISA and ARM as implemented by a team with decades of low-power chip design expertise are possibly not the same thing. There probably is a quantifiably larger (than for raw performance) perf/watt tax with x86, but on the same process I'm not convinced it's all that huge.

Intel were resting on their laurels for way too long. They've risen to the challenge rather than started panic-selling (give up on process), so exciting times are ahead I hope.


everybody loves the standard ARM firmware, BIOS and bootloader. right? right...?


Hold on - it's 2022, and an extremely high end $4000 laptop with the latest Intel CPU and an NVIDIA 3080Ti comes with a display that's... 1080 resolution?

My 2008 17" MacBook Pro has a screen that's 1920 x 1200. My iPhone 6+ is 1920 x 1080.

What am I missing? Is the non-Apple world really fine with stagnant resolutions?


"1080p display with a 360Hz refresh rate" as they are for distinct use.


> Is the non-Apple world really fine with stagnant resolutions?

Exactly one specific version of one exact laptop has 1080p, and you jump to that conclusion? You are missing 99.999% of the picture here!


It’s one laptop. But it’s not a $350 laptop. It’s $4000. It has the top of the line Intel chip and (what I believe) is the top of the line GPU out now.

If any laptop was going to have a good screen I’d think it was this one.

Nope. $1100 Air has a better screen.


> If any laptop was going to have a good screen I’d think it was this one.

Well no... they put a very fast (high refresh) gaming screen in a gaming laptop.

If you want QHD or 4K screens, they are out there. That's not... more than a Google away. So it's a really bad assumption to make.

Here, first google result for "Intel 4K laptop", it's $1500.

https://www.bestbuy.com/site/asus-zenbook-flip-15-q538ei-15-...

Your faulty logic was disproved.


According to The Verge there is another configuration of this machine with double the RAM and a 4K 240Hz for the same price.

Why in the world would you want a 1080p display just to get an extra 120 FPS? Even if you can perceive that difference (which I kind of doubt) you’re not going to get rates that high with your settings up high anyway. In The Verge’s tests only the old CS:GO got above 150 (it got over 400!).

“ In conclusion, putting a 1080p screen on this system is like sticking an MLB player in a Little League game. Sure, he’s gonna look impressive, but what is the point? These are 4K chips, and I’m not just referring to the price of this unit.”


There are people who want extremely high framerates, and they are the people this product is for. They play games that are not CPU-bound below the framerates they want, and they adjust the graphics settings until they achieve those framerates. The fact that they are throwing expensive modern GPU hardware at graphics that may be relatively primitive on a per-frame basis doesn't matter; for them, it's the framerate that matters.


Can't you just set the game to FHD on a 4K screen and get the same frame rate? Or does the higher resolution still "consume performance" from the GPU?


I don't think 4K@360Hz is a thing yet.

Beyond that, one imagines there may be some performance and power overhead, and many hardcore high-FPS players will probably prefer a display that natively matches their target resolution. It eliminates unnecessary variables that might affect performance.


Current GPUs can't handle 4K 120Hz on the latest games at higher settings, and 4K isn't strongly needed for gaming on a laptop. 1440p 240Hz would be another sweet spot, but 1080p 360Hz is for those who prefer the top refresh rate.


Professional CS:GO is played mostly at 1280x720 and 300-400FPS+ There is definitely a market need for this type of setup, whether you think it's a dumb idea or not.


You don't play on the highest settings in competitive shooters, even when you are CPU bound and your machine outputs the same number of frames regardless. Higher settings mean more distractions for them.


The $1100 Air has a 60Hz screen, which is equal to the oldest LCDs found in laptops. That's not a good screen, that's horrible.

Resolution isn't everything.


The original PCWorld article linked in the Macworld article compares three different laptops (Asus AMD, MSi Intel 11th gen, MSi Intel 12th gen). All three have 1080p resolution displays.

How are three high end laptops from two vendors "exactly one specific version of one exact laptop"?


It's a gaming laptop. The refresh rate is 360hz.

My 7 year old laptop is 1440p, and 4k laptops have been a thing for a long time. Every manufacturer has one.


Yes basically 95% of PC monitors are < 100 DPI.


That's just sad. Personally, I can't go back to displays with low DPI. Too much strain for my eyes.


You would if you used Windows. DPI scaling is _horrible_. macOS figured this out years ago. It'll take a decade to solve this problem on Windows, and it'll never be 100% solved, because legacy apps don't support any kind of scaling at all.


I've used Windows 10 on a 4K monitor at 200% for a couple years and I didn't have any issue. macOS' scaling works best, but Windows has been HiDPI ready for a while, and only a small handful of old apps need you to enable the application-scaling option manually. Especially since Windows still runs 15 years old applications that use crappy old APIs, while macOS doesn't.

It's not fantastic by any means, but it's a disservice to call it _horrible_.


200% is easy mode since legacy apps look sort of OK. I'm on a 4K monitor with 150% scaling right now and e.g. Task Manager (hardly a legacy app) looks absolutely awful.


For gaming refresh rate is generally preferred over resolution.

And this panel has 6 times more frames per second than any of the apple devices you listed.


Best for high refresh rate gaming.


This focus on raw performance makes sense when comparing desktop workstations but for laptops it misses the point and I hope it goes out of fashion.

My MacBook Air runs JetBrains IDEs + a couple of containers just fine for almost a couple of work days on battery. Is this i9 faster? Yes, but at a 6-hour battery life and 2.9 kg of weight it might as well be a desktop (just save money and buy/build a better desktop, or get a console until PC parts are cheaper again).

e: a cursory look suggests that laptop is also around $800 (AUD) more expensive than the (complete overkill for most) M1 it's comparing to, this is absolutely not a win.


> This focus on raw performance makes sense when comparing desktop workstations but for laptops it misses the point and I hope it goes out of fashion.

> a cursory look suggests that laptop is also around $800 (AUD) more expensive than the (complete overkill for most) M1 it's comparing to, this is absolutely not a win.

Regarding both points, the i9 laptop they tested has a 3080Ti in it, which would account for both the price difference and power draw just on its own. Even in CPU-only tests just having the discrete GPU on will chew power.

The article is making a poor comparison here: The customer of the M1 Max is not going to be the same target customer of the MSI Raider series. These are not in any way comparable laptops. We'll have to see what happens when someone gets to test a business-targeted Dell or Lenovo system with the Alder Lake CPUs. The M1 will no doubt win the iGPU tests but power consumption will be a more interesting measure.


Since there is an integrated GPU in the CPU as well, wouldn't CPU-only tests generally keep the discrete GPU turned off? If this thing doesn't effectively use an iGPU to keep power usage down, that's just as valid a criticism -- why spend so much wattage on a component you're not even using significantly?


I am also in the "it may as well be a desktop" camp, though I am willing to accept that there may be people out there for whom this product makes sense. They may number in the low dozens, but surely they do exist.

This is really Apple's own fault for talking up the performance of the M1 Pro/Max so much, but ultimately I don't think it will matter as long as they can keep pace with Intel and AMD. I expect I could count on one hand the number of people who were going to buy an M1 MBP but pivoted to one of these beasts after the 12-series benchmarks dropped.

From my perspective, I could not be more pleased that my (relatively) light and svelte 14" M1 Max, which can go all day and more on a single charge, is even in the same league performance-wise as creaking behemoths like this.


My M1 Max has insane battery life! I already take it for granted but I was just thinking about it the other day.


When I first got my M1 air I had it charged all the way and then had to put it aside for a few days. It wasn’t plugged in.

I came back and it had lost like 3%. My Intel Mac it replaced would have used up most if not all of its battery.

And that’s to say nothing of the crazy battery life in use.


1. The M1 / M1 Pro / Max single core maxes out at around 5W, on TSMC 5nm, at 3.2GHz.

2. The Alder Lake i9-12900HK is on Intel 7nm (or Intel 7) and runs at a maximum of 5GHz. My guess is the single-core power on this chip is anywhere from 15W to 20W.

3. The single-core performance gap between the two is about ~5%, up to 20% if you are SIMD heavy.
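Taking those three estimates at face value (the 15-20W figure and the 5-20% gap are guesses, not measurements), the implied single-core perf/W difference works out roughly like this:

    # Rough single-core perf/W comparison using the estimates above
    m1_watts = 5.0                       # point 1: ~5 W per M1 performance core
    for i9_watts in (15.0, 20.0):        # point 2: guessed single-core power for the i9
        for speed_ratio in (1.05, 1.20): # point 3: i9 single-core speed relative to M1
            rel = (speed_ratio / i9_watts) / (1.0 / m1_watts)
            print(f"i9 at {i9_watts:.0f} W, {speed_ratio:.2f}x speed -> "
                  f"{rel:.2f}x the M1's perf/W")

In other words, under those assumptions the i9 core delivers roughly a quarter to 40% of the M1 core's performance per watt.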

There are lots of people in the thread and in the linked article comparing system vs system (the two laptops) and CPU vs CPU (the two chips). I feel both comparisons are misguided. At the system level it completely ignores thermal dissipation, which is the limiting factor: if you had to fit an RTX 3080 within a 20W cooling allowance, the M1 Max would win every GPU benchmark. At the CPU level, comparing the TDP of the two SoCs completely neglects the design choices and components inside each SoC. The 100W M1 Max includes NPU, Media Engine, 8-channel (or 16, depending how you want to count them) memory controller, SSD Controller, FPGA and GPU. It doesn't make sense to compare to the i9-12900HK only on CPU and then use the total chip TDP.

And then there is the often-repeated multicore performance (both on Android, and on PC with AMD CPUs). It is a useless number without knowing the power and core count. MSM likes to throw this number out without giving any context.


The single core power for ADL at 5Ghz will probably be closer to 25-30W (based on the desktop ADL tests), otherwise couldn't agree more.


> The 100W M1 Max includes NPU, Media Engine, 8 Channel ( Or 16 depending how you want to count them ) memory controller, SSD Controller and GPU.

It maxes out at 40W, not 100W (according to the Anandtech measurement).


I am sure the 40W is CPU only, excluding other parts of the SoC. The M1 Max GPU alone could push to 60W.


No, they measure it at the wall minus idle usage: https://www.anandtech.com/show/17024/apple-m1-max-performanc...

But you’re right that this is not with the GPU loaded.


It is measured wall minus idle usage during CPU load, which is pretty much CPU load?


They should change the title to "slightly faster at massively larger TDP"


Peak performance is still interesting, no matter what power it draws. Keep in mind that if Apple could make their thing faster by cranking up the TDP, they absolutely would do so, at least in the MacBook Pro, Mac mini, and iMac where it hardly matters, if not in the Air. And perhaps they will do so in a future revision. We're due for new parts from Apple pretty soon.


The original title has an asterisk on the end. It seems to have been stripped at some point.


Hopefully this turns into a situation where consumers and technologists are the winners. Pat Gelsinger was pretty clear on how he felt about the situation with Apple when he joined Intel, and it seems clear we'll see Intel continue to improve. Ultimately this competition should produce better products and better prices.


It’s always great to have someone come in and give the market leader a good kick in the ass. Causes things to improve so much faster than they would have otherwise.

Remember what AMD releasing the Hammer/Athlon 64 architecture did to Intel? Got them off the P4 and eventually led to the far faster and more efficient Core 2 series.


The Ryzen series seems to have done that for Intel too recently. I'm hopeful that this new split-core design Intel is going with pays dividends for them. As a consumer it's good to have robust competition.


Imagine going back 10 or 15 years and telling everyone that in 2022 it is newsworthy when an Intel laptop chip is actually faster than an Apple Mac chip. I feel like most folks would have thought it sounded like bad Apple fan fiction.


The Alder Lake monster review on AnandTech is fun. A desktop replacement laptop: ten pounds! 150 Watts! nVidia RTX! Whips and chains!

https://www.anandtech.com/show/17223/intel-alder-lake-h-core...

It's almost exactly three times as fast in Cinebench as my MacBook Air, costs about 3x as much, and weighs about 3x as much. A useless comparison, but kind of funny to imagine rolling in there with three MacBook Airs, setting up some distributed load network, and claiming victory.

The Alder Lake box has 100% more CUDA.

(I couldn't figure out the idle power draw from the review, AnandTech publishes their raw data, so maybe it's in there somewhere. https://www.anandtech.com/bench )


For sure, it's not honorable to beat the M1 Max (or M1 Pro, as they share exactly the same CPU part) at high wattage. I am genuinely curious if someone can verify the 35W case (see https://www.zdnet.com/article/ces-2022-intel-says-it-has-a-m...).

Also, it may derail: it seems that we could never have all good things at the same time:

1. Apple and NVIDIA: previously, what do you expect when two control freaks meet? And now that Apple has its in-house GPU, we will never see an all-around Mac anyway.

2. Even if Intel makes M*-like chips, its customers (OEMs) won't pay for it anyway: 200 GB/s memory bandwidth for a CPU is simply server grade (e.g., the memory bandwidth is like Xeon Sapphire Rapids'), too expensive for consumer use.

(edited for formatting)


FTA:

> In the end, though, comparing Alder Lake and the M1 ends up being a simple academic exercise that doesn’t amount to much.

There you have it.


Dumb question, but what's the engineering principle behind higher performance when increasing power usage?

Is it because pipelines are clocked faster?


For transistors, power = frequency x voltage^2 x capacitance, and since voltage is roughly proportional to frequency, power scales roughly as frequency^3. That's why Alder Lake is quite efficient at low clocks but crazy power hungry when you let it turbo.
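A quick sketch of what that cubic scaling implies (the linear voltage-frequency relationship is only an approximation, and only over a limited range):

    # Rough dynamic-power scaling: P ~ C * V^2 * f, with V assumed proportional to f
    def relative_power(freq_ratio):
        voltage_ratio = freq_ratio              # assumption: V scales linearly with f
        return freq_ratio * voltage_ratio ** 2  # so P scales as f^3

    print(relative_power(1.2))  # +20% clock -> ~1.73x the power
    print(relative_power(2.0))  # 2x clock   -> 8x the power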


Not sure at what level you want the answer and others are answering at other layers so I'll cover this one. Ultimately, the faster clock means you're discharging and charging things at the gate level more often. Changing this state expends energy. More changing, more energy used per unit time, more power necessary.


Higher clocks of the same part always draw more power, more than proportionally to the increase in clock speed because the voltage in the core also needs to increase to reach the higher clock.


I'm not sure if I'm reading your question correctly, but the general idea behind running really hot and fast is to get tasks done more quickly and return to a lower power state. The "big" cores of chips like the M1 are meant to run "bursty" workloads, where as the the "little" cores are better lightweight longer running tasks (and also take up much less die space, which is another major factor).


I would guess it is some combination of:

- longer pipelines
- more silicon dedicated to speculative execution for more Instruction Level Parallelism
- bigger caches
- more analysis of the incoming instruction stream to get more ILP and out-of-order execution

But yeah, more clock is more power is more heat.


An MSI GE76 is significantly fatter, heavier and hotter than a 14-inch MacBook Pro.

It's not a fair comparison at all. This is comparing a thin/light laptop with a designed-for-purpose gaming laptop.

Here's an image showing the large heat exhaust vents.

https://storage-asset.msi.com/global/picture/image/feature/n...


You kind-of want this to be true, in the sense that if there isn't incremental improvement in a later product, something is wrong with VLSI design. That said, it's always interesting to look at why things are faster. More interesting than just being faster.

If (eg) Intel are doing something smart in L1 cache, or ensuring no stalls, it would be cool if it's applicable to ARM-derived hardware. And if not, it would be cool to see how it maxes out, and how a future ARM tick-tock response in time shows benefits in different spaces.

Also, this is at LEAST a 2D cost/benefit matrix now: speed + power. Sometimes there's no point in being fastest if it costs more and your battery runs out faster. Remember, Apple puts the M1 in portable units, where power/battery is a significant "feature" behind the purchase. Yes, I run my Apple mostly plugged in, but the time I get off-plug is huge. A faster Intel chip that drains me on a flight isn't a plus.


Even if it is faster than the M1, I'm done with Intel. The M1 is fast enough for my uses, and there needs to be more disruption within the consumer chip market. I've been using ThinkPads with Intel CPUs for at least 15 years. My next laptop will be something with an M1 or better.


Don't know first-hand about Alder Lake, but I have an Asus laptop running an AMD 5900HS CPU (with a 3070 GPU), and an M1 Max for work. I can buy 3 Asus laptops for the price of the MacBook, and despite that I still prefer working on the Asus.

Full disclosure: we're mostly a Windows shop, so I very rarely use the MacBook for actual development (mostly just doing builds and running tests on it; otherwise I just use it on a second monitor for browsing and Spotify :)).

Yes, it's subjective; I'm sure in some purely synthetic benchmark the MacBook wins, but in real life they're pretty close. And if I were actually lugging it around, I might prefer the lighter/quieter MacBook (but remoting into some Windows box or running Parallels would likely hurt performance enough to make me hate it).

I can also game on the Asus too :)


I would think that speed would come 5th on my list of desirable features in a laptop:

1. portability
2. battery lifetime
3. cost
4. connectivity

So Intel is bragging about speed, with much higher power consumption and a dedicated GPU (as opposed to an integrated one). Maybe I just have a niche use case.


Laptops are on a spectrum IMO. You can buy a 5 kg machine because you only expect to move it once every 2 months.

You can buy a MacBook Air because you want to watch a show while cooking.

These are the situations I have encountered.


At least the Alder Lake option (+RTX) does offer the option of being used as a slow-cooking pad.


Intel normally releases its lowest-end chips first (maybe because it's easier to get volume of them thanks to binning), but this year flipped that, so we're all talking about the highest-benchmarking chips first. Yes, that means chips meant to be paired with a dedicated GPU.

Gives one the impression these are the most favorable comparisons, with some benchmarks beating Apple's. Then, when we get to the lower-wattage chips, Intel will be able to claim they have just as good power draw too.

I'm not sure they'd do that inversion if they knew their chips were overall better.


Performance per watt matters to Intel except when it doesn't. But really, when you have to dissipate that much heat, the chip will end up throttled. Apple's ARM chips are only getting started.


Exactly. Apple is not even trying and Intel only just caught up.

This year, they will find out that the M2 will set them back another 2 years.


> This year, they will find out that the M2 will set them back another 2 years.

Luckily there will be the new Zen 4 APUs to wipe the floor with them.


It amuses me how the decades-long market leader is scrambling to recover from the blow Apple dealt them.

What could have been if the Itanium had worked out. If they had done something good with XScale. If, if...


From the blow AMD dealt them.


Yeah, the M1 is fast and does not get loud when doing heavy tasks, BUT the whole system needs polishing by Apple. Last night I brew-installed haskell-stack (HEAD), which compiles the whole thing. It takes quite some time, and while it ran I was watching a show in QuickTime. The sound would skip every now and then, like in the early '90s with Linux and sound. That was quite irritating and annoying. (14'' M1 Pro)


Also, on my MacBook tonight, the CPU core temperatures reported via Macs Fan Control were at ambient temperature, which seems kinda crazy.

The CPUs were idling while the machine performed the most important task of the hour: downloading another 2 GB system update.


The impressive feature of Apple Silicon is its energy efficiency. I have both a Linux laptop and an Apple laptop; the Intel Linux system eats battery. At times a full charge lasts only an hour; the Apple, on the other hand, lasts all day.


This is a 10 nm processor, right? So a 10 nm processor can be as fast as a 5 nm processor, but at more than double the power draw? Not so bad. What would the performance of this processor be with TSMC's 5 nm tech?


Those numbers are not comparable. In fact, these days they are meaningless.

Intel used to measure a physical feature of the transistor. That worked well until transistor geometry changed and there was no longer anything comparable to measure, so they chose something different that kinda looked right.

TSMC chose something else.

These numbers are meaningless.

Mostly, all the fabs are really talking about transistors per mm². But if you look closely, logic density is changing at a different rate than other structures, like those that make up SRAM (particularly at TSMC).

Caches are increasing in size because 1) new designs want more MB, and 2) SRAM density is not improving as fast as logic density, so caches take up relatively more space if all you do is a die shrink.
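
For a ballpark sense of scale (approximate peak logic densities as widely reported in public analyses, not figures from the article; treat them as rough):

    # Approximate peak logic densities in millions of transistors per mm^2.
    # Ballpark numbers only; shipping products rarely hit the peak figure.
    approx_mtr_per_mm2 = {
        "Intel 10nm (now 'Intel 7')": 100,
        "TSMC N7": 91,
        "TSMC N5 (M1 family)": 171,
    }

    for node, density in approx_mtr_per_mm2.items():
        print(f"{node}: ~{density} MTr/mm^2")

    # SRAM bitcells have shrunk much less than logic across these transitions,
    # which is the point above about caches eating a growing share of die area.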


Thank you for your answer. I got instantly buried below the "bad perf per watt" comments and thought nobody would see my question.


Whilst consuming over 100 watts versus ~40 watts for the M1 Max.


It is funny and sad at the same time to see how desperate Intel is over the fact that they're no longer building the most powerful processors.


Faster, yeah, slightly... but at what price?!


(*) This processor might require a nuclear power plant attached to your laptop.


Somehow I can almost guarantee the M-series will outperform this in real-world usage.


Are these benchmarks on the M1 compiled for ARM or for x86?


Nobody cares; x86_64 is a dead ISA walking.


[flagged]


We've banned this account for repeatedly breaking the site guidelines. Please see https://news.ycombinator.com/item?id=30106519.

(I'm just posting this here to increase the odds that you see this, since the comment I replied to at that link is already a few days old.)


Clickbait -- per watt FTW



