MacBook Pro M1X benchmarks have leaked (tomsguide.com)
108 points by rbanffy 12 days ago | 137 comments

Note that Tom's Guide is not the same thing as Tom's Hardware despite the same owner. It's where all the clickbait and low-quality content of Future plc goes.

How considerate of them to realize they shouldn't compromise the quality of the one good site?

Yes, it is one of those things Tom's Hardware and AnandTech had problems with in their early days. They don't report rumours, only Facts and in-depth reviews. But rumours get the clicks and ad revenue.

>only Facts and in-depth reviews.

and manufacturer propaganda. Tom's Hardware made the somewhat famous burning-Athlon video (later revealed to have used a board with a disabled/non-functioning thermal cutout) during the prime Intel Pentium 4 payola timeframe.

https://www.tomshardware.com/reviews/hot-spot,365-4.html "AMD did not bless the Thunderbird core with ANY thermal protection whatsoever."

In reality, AMD socket certification required a thermal cutout, same as Intel's for the Pentium 3. AMD processors do include a thermal diode, just like Intel's.

"Intel's older processor is also equipped with a thermal diode and a thermal monitoring unit"

is a lie. Pentium 3 thermal throttling is performed by the BIOS, just like in the Athlon's case. Serendipitously, Tom's Hardware picked a broken Siemens motherboard for the AMD system, as recommended by Intel. Imagine that.

They published a non-retraction catching Siemens in a lie and showing a proper AMD setup safely shutting down. Of course you can't read it, because it's buried, excluded from the Wayback Machine, and dead-linked: http://www.tomshardware.com/column/01q4/011029/index.html

AnandTech wasn't any better 20 years ago. They used to publish rave Pentium 3 SSE reviews https://images.anandtech.com/old/cpu/intel-pentium3/Image94.... using a piece of Intel-commissioned software as an independent test. https://www.vogons.org/viewtopic.php?f=46&t=65247&start=20#p...

Sounds like the system Buzzfeed has where the clickbait branch feeds the (legitimately good) news branch.

Well, and so they're simply piggybacking on the hard work done - creating a clickbait version on top of quality articles and important topics. Interesting business model - I'm certain they browse Reddit throughout the day for content as well.

Second this. The quality difference between the two is staggering.

Doesn't say a lot. It's mostly speculation; the chip is listed here [1].

Notable info:

Still at 5nm; double the GPU cores. Reports up to 3 displays, so it's safe to assume 2 external + 1 internal. 8 high-performance cores and 4 efficiency cores. PCIe gen 4, no idea about the number of lanes.

[1] https://www.cpu-monkey.com/en/cpu-apple_m1x-1898

Comparison: https://www.cpu-monkey.com/en/compare_cpu-apple_m1x-1898-vs-...

I am finding the numbers a little bit too perfect to put much faith into. Single core performance that is exactly identical between the two? Floating point performance exactly double? Seems a bit off.

The little "Buy this processor at Amazon!" icons next to each benchmark make the whole site seem a touch scammy.

The other thing is RAM. Apple still has delusions of gimping its buyers into 16/32GB for "Pro" level machines. I'd probably buy a 16" MBP with this chip if I could get it with socketed 64/128GB. But Apple's going to solder it to the board and charge a premium even if they did offer that.

>The other thing is RAM. Apple still has delusions of gimping its buyers into 16/32GB for "Pro" level machines.

I'm pretty sure Apple knows what machines it sells, including to video and audio pros, and knows how many of them buy the 32GB options where they're available (in Intel).

Besides, "Pro" is just branding, not a class of machines designed for specific tasks with any specific guarantees.

Up until now, "pros" did just fine with 16GB or up to 32GB max on their Intel Mac laptops, so I'm not sure how "twice as fast" machines, at lower price, with the same 32GB RAM (but much more efficient use of it, closer to the CPU) will suddenly be a problem.

Especially since hardly anyone buys PC laptops with 64GB of RAM anyway; and Apple doesn't, and has never, competed with custom tower PC rig makers, but sells to people looking at the sweet spot of portability, power, battery life, size, and weight.

> "Pro" is just a branding, not a class of machines designed for specific tasks with any specific guarantees.

I agree, and to be fair, there are plenty of professional use cases that are totally serviceable with one external monitor and 16GB of RAM. Not every professional is a YouTuber or big data engineer.

I can't speak for anyone else, but I do like it when the name of a product actually means something, because it gives me confidence in the honesty of the vendor.

Memory latency often has a bigger impact on performance than capacity. Your CPU spends a lot more time waiting around than you'd expect.

With NVMe, I'd rather have 16GB of LPDDR4X than 64GB of low-JEDEC, high-latency DDR3 memory. More RAM is still good in many cases, but it's not vital like it was 15 years ago, when you'd end up in swap hell or OOMing constantly over a GB or two.

When I first got a laptop with 8GB of RAM 10 years ago, I thought it was an absurd amount, and to be honest it was. Ever since, I have never wanted for RAM on my personal computers.

I'm not quite sure why people seem to be clamoring for more RAM; it seems like a non-issue these days. I don't really game these days, and when I do it's on older "steam sale" titles, so maybe gamers need it, but even for dev stuff, I just don't see the issue.

We have a lot of artists who ask for MBPs with 64GB of RAM, and when I ask for work samples, there's not even enough memory pressure to swap on their 16GB machines. When I ask why they need it, they always give some variation of "it renders too slowly".

It's not my job to help people pick out computers better suited to do the work they were hired for, but I really enjoy seeing the characteristics of and where the bottlenecks are in new generations of hardware.

Yeah, I'm calling it now - no chance of 128GB in a Macbook Pro. 64GB even is doubtful.

The M1X will almost certainly have 64GB support, possibly 128. Apple has left Intel Macs in the lineup for customers who need features that the M1 has not replicated, like large RAM capacity and 10Gb Ethernet. The M1 Macs are still better for 90% of customers, but the intention is to only fully phase out Intel Macs in a product line when Apple Silicon is better for 100% of customers, as they did with the Air. The M1X will fill out the remainder of the MacBook Pro and Mac Mini lineup, which requires 64GB support. It is likely (though not a guarantee) that the M1X will also fill out the entire iMac line (excluding the education M1 iMac they have not yet released). That would require 128GB support.

People seem to be under the mistaken impression that supporting higher memory capacities is hard. It's not. It's trivial. The only reason the M1 doesn't is because of how it's packaged, and they only chose to package it that way because the products they were intending to replace it with only need 2 memory chips. They did what made sense for the M1, and they will do what makes sense for the M1x.

It'd be fun if they got 32GB parts with double the memory bandwidth (which could make sense with the doubled Firestorm and GPU cores).

Do we have perf counters that show how memory-starved the M1 is under pressure?

"Memory starving" a CPU is almost always a latency issue except with very specific workloads. A double wide memory controller would be very cool if they wanted to make a high performance integrated GPU though.

64GB is available on the current gen, so not offering it would be a step backwards.

64GB was available on the previous Mac Mini, but they took a step backwards and capped the new one at 16GB, no reason they won't do the same with the MBP.

The Intel Mac Mini is still sold. If you need it, you can get it with 64GB of RAM and a 2TB PCIe SSD.

Granted you'll feel bad if this leak is wrong and Apple shows an M1x with more than 16GB of RAM.

I suspect they'll release a faster mac mini with the M1x chip. They've got tons of space and an oversized PSU, so it would be easy to do.

I literally have a previous Mac Mini with maxed-out RAM thanks to OWC and their RAM upgrades. Also one of the new Macbook Pros.

From my experience with both, I think you might be very unhappy in retrospect if you bet that more RAM is going to make the most recent Intel one come within shouting distance of even an 8GB M1 Mini. The old one was one of the best for single-core performance within its generation, but they're not even the same thing, not even close. Again, I own both machines with the Intel Mini maxed out as far as it can go with RAM.

My gut level reaction is, you could give the new one lots more RAM (in theory) and it wouldn't end up using it. So they're capping them, otherwise people will assume there's a benefit and that the 8GB and 16GB ones aren't workable for heavy processing.

I've also got a suspicion (possibly just a weird cynicism) that this is why the previous-gen Mac Pro runs so huge and expensive: they've overpriced and overdesigned it so hard that it's unattainable for nearly anybody, making it a sort of status object with Veblen pricing, specifically to avoid class action lawsuits when they release newer machines that outperform it at a tenth of the price.

You were never supposed to actually buy the monstro cheese grater Mac Pro or its screen with the thousand dollar stand, and if you did, (a) it's of symbolic value to you and (b) it's your own fault. When the real high performance computer comes out of Apple it'll be at Mac Mini prices, and meant to iPhone-ify desktop computing as a category.

You can still buy the Intel Mini for a reason. I don't think it's an unreasonable assumption to expect a mini that takes more than 16GB of RAM.

They just started the Apple Silicon transition. A bit early for the doom and gloom, no?

Class action lawsuits?

Because there’s a ceiling on RAM capacity? If that’s your requirement there are many people happy to sell you a device that will satisfy you.

I started computing in the days when you could add your own instructions to your computer. I’m not suing anyone even though that capability is no longer available.

When the user guide prominently includes a wiring diagram for adding the memory necessary to run the product:


Cover: Schematics for a 5V, 3A regulated power supply and a 1K x 8 read/write memory block. The power supply and three such memory blocks can be added to the basic KIM-1 microcomputer to provide the 4K RAM required by this assembler. Parts are available from Jameco Electronics.

You can't even get above 16GB with the M1. You can with the i9. It took me MONTHS to receive my 16GB M1.

Three weeks ago I ordered a 1 TB/16GB MBA and picked it up the same day (!)

It was a pleasant shock. I ended up returning it as it turns out I really do need at least 32GB of RAM, but it was VERY difficult to take it back.

Even if this rumor is way overhyped, I still can't wait for a MBP that can take at least 32GB of RAM. I'll be there with bells on!

Benchmarks have shown negligible performance difference between 8 and 16GB models for editing multiple 4K video streams and other pretty intensive tasks. What kind of software do you run?

Well, I got mine in 2 days, so...

Ordered mine end of December, got it last Thursday. Glad you had a better experience!

I'm not even in the US, and we usually have worse delivery dates for everything, Apple or not. And the 16GB Pro is considered a "modification", but it was readily available at several resellers.

Ordered mine at the end of November and received it the week of Christmas. M1 Air with 16gb RAM and 1TB drive.

Yup, same specs here. It was due on the 29th of Jan, but somehow got delayed. I've had a few other friends relay the same experience. I wonder how some people are having much better luck. I had the same experience in the Spring last year when I tried to order a MacBook Air for my daughter with 16 Gigs. I left it for a few weeks and then finally went back to Apple and cancelled as it looked to be an eternity before they'd be able to ship it.

Definitely glad other people are having a better experience.

Yeah it's really annoying that 8/256 is the base still. It's been like that for years - I think that's been the same since around 2016.

Would be much better if the base model were 16/512, but that's $500 of nearly pure profit Apple would lose.

256 GB is more than enough for many people. Consider work laptops that are mostly used in meetings, or personal laptops for folks who live in the cloud.

I've recently used a 100 GB partition on my personal laptop for client work and it was fine.

(I'm not trying to defend the price of Apple's SSD upgrades.)

It's a textbook example of price discrimination on Apple's part. They know that the 8/256 model is not enough for several classes of Pro users, so they can sell the 16/512 model at a premium (while increasingly locking down their devices to prevent after-market upgrades).

It's just annoying that the others are BTO, so you can't just pick up a replacement. I was expecting to fork over $3.5k for my new laptop. It was half that for the 16GB/1TB.

I believe the problem is that Apple has stuck with LPDDR memory for performance (33% more memory bandwidth keeps integrated GPUs happy) and to hit power/efficiency targets (variable frequency).

Instead of a DIMM, LPDDR basically puts an entire DIMM on one chip soldered to the board (or package on the M1). LPDDR topologies tend to be limited in how many ranks of memory you can have, which limits the total capacity.

I think if Apple really wanted to make a "HEDT SoC", they would switch to "quad channel" (256-bit wide) memory, which, along with the jump to LPDDR5, would let them sell 64GB/128GB machines. The only existing SoC with a wide memory controller is the Tegra Xavier, though.
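For scale, here's a back-of-envelope sketch of what such a bus could deliver. The 6400 MT/s speed grade is my own assumption (a standard LPDDR5 data rate), not anything from the leak:

```python
# Peak theoretical bandwidth of a hypothetical 256-bit ("quad channel")
# LPDDR5 interface. LPDDR5-6400 is an assumed speed grade, not a leaked spec.
bus_width_bits = 256
transfers_per_sec = 6400e6             # LPDDR5-6400: 6400 MT/s
bytes_per_transfer = bus_width_bits / 8
bandwidth_gb_s = transfers_per_sec * bytes_per_transfer / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")    # 204.8 GB/s
```

Roughly 3x the M1's quoted ~68 GB/s, which is the kind of headroom you'd want for a doubled GPU.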

I'd add that they also appear to lack ECC support.

I wonder if the shorter bus won't somehow reduce the error rates.

Errors can be at the cell level of the memory, given the very high density they are now.

I know. I just wanted a reality check on where erros originate from.

Best. Typo. Evar.

Sometimes my spell checker is confused between English and Portuguese.

Rules change. I've got one of the 16GB RAM Macbook Pros. They've got it acting like it's L2 cache instead of RAM as we know it, so the place where you need more than 128G becomes, well, the SSD, your mass storage.

These things already don't act as if it's RAM as we're used to considering it. They could probably have shipped the first ones with 4G RAM and had 'em act relatively normal… with heavy stuff like 4K video editing. Not great, but normal. So they did 8G RAM to have 'em act more impressive… like doing said editing smoother with fewer dropped frames than far more powerful Intel machines with a lot more RAM.

Do you have any evidence for this claim? The actual concrete performance numbers for the M1 and Zen3 don't agree with you at all:

M1: https://images.anandtech.com/doci/16252/latency-m1_575px.png

Zen3: https://images.anandtech.com/doci/16214/lat-5950.png

If anything the M1's ram has _higher_ latency than the Zen3.

(Can't reply to the sibling comment but Intel's numbers are even better: https://images.anandtech.com/doci/16214/lat-10900.png)

Well exactly how are you defining latency?

Is it random accesses you want to test? Or TLB thrashing?

The R per RV prange benchmark accesses memory randomly, but in a TLB-friendly pattern (real-world TLB-thrashing workloads aren't particularly common). The M1 gets 30ns; the 5950X is 55ns and still climbing at the max array size.

The 5950X does better on "full" random, although it's unclear whether that's a faster TLB or testing with a different page size. On Linux, for TLB-thrashing workloads you can always use HUGE pages; not sure if the M1 hardware supports that though.
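For anyone wondering what "full random" access means in these latency tests, here's a toy Python sketch of the dependent-load (pointer-chase) pattern they use. This is my own illustration, not AnandTech's harness; interpreter overhead swamps the actual memory latency, so only the access pattern is representative:

```python
import random
import time

def chase_ns_per_access(n_elements, iters=200_000):
    """Walk a chain of dependent loads through a shuffled index array.

    Each lookup's address depends on the previous result, which defeats
    hardware prefetching; C-based latency benchmarks use the same trick.
    Python interpreter overhead dominates here, so the absolute numbers
    are illustrative only; it's the access pattern that matters.
    """
    perm = list(range(n_elements))
    random.shuffle(perm)  # "full random" order; a page-local shuffle would be TLB-friendly
    idx = 0
    start = time.perf_counter()
    for _ in range(iters):
        idx = perm[idx]   # next index depends on this load completing
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1e9  # nanoseconds per dependent access

print(f"small array: {chase_ns_per_access(1_000):.0f} ns/access")
print(f"large array: {chase_ns_per_access(2_000_000):.0f} ns/access")
```

In a compiled language, the small array stays cache-resident while the large one forces DRAM (and eventually TLB) misses, which is exactly the curve those latency plots trace out.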

> like doing said editing smoother with fewer dropped frames than far more powerful Intel machines with a lot more RAM

Just noting Zen3 is not Intel.

>They've got it acting like it's L2 cache instead of RAM....... These things already don't act as if it's RAM as we're used to considering it.

Please stop this nonsense on HN. DRAM is DRAM. L2 cache is L2 cache. It doesn't matter how good its 8-channel LPDDR4X memory architecture sitting next to the die in the same package is; it is still just DRAM.

As someone that runs Kubernetes on my work laptop, all the "the memory architecture is different, so 8GB is enough" nonsense just doesn't hold up. I need a ton of memory FULL STOP. It sounds just like the nonsense Apple was spewing about MMS messaging on the original iPhone.

Apple is not selling MB Airs or 13" Pros for users who are running massive workloads like Kubernetes. It is the reason they left intel models in the current line up. The M1 models are for more pedestrian users and users with moderate video editing workloads.

It takes my M1 a little while longer to stitch together 4K videos than my 2017 16" MBP. My 2017 MBP screams and throttles the CPU down to 60%-75% as soon as editing starts.

>I need a ton of memory FULL STOP.

Then buy something else, full stop.

You might be confused. I wasn't asking for purchasing advice. I'm pointing out Apple's history of downplaying something that they have not yet implemented.

Yeah, I heard this too and nope - the M1 is not some magical RAM negating beast. If you actually need more RAM you still need more RAM even with the M1. I ended up returning mine (with much difficulty - me not really wanting to; Apple was more than willing to take it back) because if you need more than 16 you still need more than 16.

If I have 20GB of data and 16GB of RAM, it's simply never going to fit and will have to page to disk. That page will always be slower than RAM.

Even if the RAM were as fast as L1 cache, it still wouldn't do a thing for the SSD bottleneck.

>Reports up to 3 displays, so it's safe to assume 2+1 external/internal.

M1 Macs have been shown to drive 6 displays, so this "3 displays" means "3 4K/5K displays", I suppose.

(I don’t really know what I’m talking about)

I just hope it isn’t internal + Touch Bar + 1 external

Unlikely, the M1 page [0] says "2 displays" and I'm pretty sure the MBP supports internal display + Touchbar + external display. And even that is based on high-res displays, I think it could support more external displays at FHD [1].

[0]: https://www.cpu-monkey.com/en/cpu-apple_m1-1804

[1]: https://www.youtube.com/watch?v=lqINozYxDK8

I wonder if their new GPU somehow is limited on display support (and they basically have to "add cores" to get more displays).

My current 16" MBP has no problem driving a total of five displays and 16,041,600 pixels - and I'd be hard pressed to go back to just two external displays.

It’ll happily drive significant numbers of pixels, although the resources used by WindowServer do appear to scale roughly in proportion to what you hook up:


I’ve found a Pro Display (6K) and two other Dell 4K works fine, additional to the internal display (it’s a MBP 16” and not running in clamshell mode).

MacBook 16”: 3072x1920

Pro Display: 6016x3384

Dell U2718Q: 3840x2160 (x2)

Total Pixels: 42,845,184
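A quick sanity check of that total, using the resolutions listed above:

```python
# Sum the pixel counts of the four displays quoted in the comment.
displays = [
    ('MacBook 16"', 3072, 1920),
    ("Pro Display XDR", 6016, 3384),
    ("Dell U2718Q", 3840, 2160),
    ("Dell U2718Q", 3840, 2160),
]
total = sum(w * h for _, w, h in displays)
print(f"{total:,} pixels")  # 42,845,184 pixels
```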

Modern hardware is frankly amazing. I remember the 22” Trinitron CRTs of years ago and owning and driving one of those was a feat, now we have multiple displays with PPI such that with average visual acuity everything is so incredibly crisp.

That would be a downgrade from the current chip, so it's very unlikely.

I was surprised how well the 5800X holds up to the new M1 and M1X. Given the boost I felt with my MBA M1, I thought my NUC8i7HVK felt sluggish in comparison and was considering a new PC build - or moving off a "Linux Desktop" to M1X if it was dramatically better than incumbent desktop processors.

Seems like I can keep Linux and build a 5800X, and have > 32GB RAM to boot.

For those happy with MacOS as a daily drive, the Mac Mini is incredible value vs a full PC build.

To me the core "innovation" in the M series is the reasonable pricing compared to Intel Macs.

I replaced my dev machine (a fully maxed out 6 core i9 2018 MBP with 32 gigs of memory) with a Mac mini with two upgrades (16G memory, 1TB drive) and I simply can't tell the difference in performance. But I can tell the difference in heat and stability.

And the Mac mini was about 1/3 the price. (I know it's not an equal comparison, but I was always using my laptop with the lid closed, hooked up to 2 4k monitors anyway.)

I will happily upgrade to more power. I don't miss not being able to run virtual Windows because I offloaded that to my home server anyway and RDP into it when needed.

Given the TDP difference between the 5800X and this supposed M1X, it's not really fair though, is it?

5800X is a 105W TDP, the M1X claims to be a 45W TDP.

If you scale by TDP, the M1X is over 2x faster in multicore bench.

M1X: 321 score/watt
5800X: 146 score/watt
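Working backwards from those figures (keeping in mind the M1X numbers are still just a rumor):

```python
# Back-of-envelope: implied benchmark scores from the quoted
# points-per-watt and TDPs above. Rumored figures, not measurements.
m1x_tdp, r5800x_tdp = 45, 105      # watts, as quoted
m1x_ppw, r5800x_ppw = 321, 146     # benchmark points per watt, as quoted
print("implied M1X score:  ", m1x_ppw * m1x_tdp)        # 14445
print("implied 5800X score:", r5800x_ppw * r5800x_tdp)  # 15330
print(f"perf/watt ratio: {m1x_ppw / r5800x_ppw:.2f}x")  # 2.20x
```

So the raw multicore scores would be nearly tied, with the efficiency gap doing all the work in the "2x" claim.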

From the clickbait title alone, this looks a lot like disguised advertising. Title is only missing "and leaked screenshot #3 will shock you"

Intel should be scared of its own failures. Apple is merely a competitor that didn't fall asleep.

It's kind of amazing how often the folks that pull ahead in a market are the folks that didn't fall asleep.

For example: DOS wasn't technically better than anything else; it was just the one that was being aggressive about adoption.

Or the fact that Sony was really in a position to OWN portable music in perpetuity, but went to sleep. Apple wasn't asleep, and didn't have outsized anxiety about digital music, so the iPod entered a market without a dominant "Digital Walkman".

Harsh. Maybe more like “a competitor who made enough money elsewhere to get into the CPU fight”. Barriers to entry in that market are gargantuan; only a select few companies on the planet can afford to make an enemy of Intel.

Apple has been in the CPU fight for years now. It's just that Intel, like IBM before them, couldn't provide a CPU with the performance/watt Apple wanted for its macOS lineup.

The developer preview Minis they sold with an iPad Pro CPU were already well past midrange Intel desktop chips.

Also Apple didn't have to do ~80% of what Intel has to do - you know, the actual building of chip fabs as well as chip design.

"merely" LOL.

Intel doesn’t have to build its own fabs. AMD sold its off and is doing fine. Fabless + foundry is an excellent model that lets each side concentrate on its strengths, that works for most companies.

Intel started in an era when you did have to build your own fab. More importantly, even as others were going fabless, Intel continued to use its superiority in manufacturing as an additional advantage over their competition, who had to drop out of the fab business, and against the merchant foundries too.

But now Intel has fallen behind on both sides of the divide. I’m not sure how they will dig out.

Apple doesn't have to build its own CPUs, but vertical integration often gives one a significant advantage.

TSMC is beholden to the Dutch company ASML, which is the only one on the planet that can make the EUV lithography machines TSMC uses to make the latest chips.

Who manufactures them for Apple?


Wouldn't that make Apple even smarter and more efficient?

Release Date?

Apple can take a few different paths to this “M1X” release. They can

A. Release in Spring. Use the current cpu & node generation tech, and add cores.

B. Release in Fall. Use the now old (2020) cpu & old node, and add cores.

C. Release in Fall. Use the now new (2021) cpu, new node generation - and add cores.

Will be curious to see which path they take.

I’d bet Option B. This would be similar to what’s currently going on with the iPad Pro vs iPad Air

I'd guess option A.

They aren't doing radical redesigns, so ramping up is easy and fast.

They've pulled an Osborne Effect on their Intel devices by announcing they are transitioning in just a couple years. Nobody wants to buy a brand-new second-class product.

Finally, launching their new laptops this fall means they are half-way through their transition time, but just getting started with their most powerful segment.

There are still a lot of people who need Intel, primarily people doing cloud development and in particular those who aren’t using an interpreted language.

I don’t know how big this group is as a percentage of Apple customers, but in an absolute sense it’s significant; several $B worth, anyway.

Also there are people who won’t buy M1 because they are (probably correctly, though I haven’t had any problems) hearing that not everything works right yet. If my mother needed to replace her MacBook Air today, I’d get her an Intel one. I expect her next machine will be an Mx though.

I suspect Apple will continue selling Intel-based machines into 2022, and perhaps longer. I doubt they will update them though.

They said the transition would be completed within two years. I imagine they have a pretty good idea how many people are using Boot Camp or Windows VMs and have already decided it isn't significant enough to stop them transitioning.

I assumed they meant every Mac in the product line would have an ARM version. I presume a few of the Intel-based SKUs will remain until demand ebbs, but probably not every one (e.g. perhaps only the 16” MBP and the Pro)

I suspect they won't want Intel hanging around any longer than necessary because it will delay the point where they can stop supporting both architectures in future versions of macOS.

Probably in June with WWDC, if they follow historic trends.

Wouldn't this follow the iPad Pro release-date trend of spring releases?

Will also be curious to see if the chip in these new MacBook Pros (M1X) will be exactly the same as what's used in the new iPad Pro.

Pretty small n for most such trends.

Obviously fake. The CPU Monkey "benchmarks" are just the M1 values multiplied.

And the single core performance benchmarks are exactly identical which would mean there would have not been any optimization compared to last gen. Not very realistic.

To be fair this would make a lot of sense. This isn't a new generation of CPU, it's a bigger variant of the existing M1.

i love my mac, can't wait for this upgrade

people like to hate on the OS, but i find it super beautiful and easy to use

i don't know how to describe it, but i feel good when i use my mac

in contrast, every time i have to boot windows, i feel like i'm oppressed by some corporation, i feel closed in, the whole experience is boring. windows needs a serious REBOOT, not a reskin, a whole reboot, including deleting NTFS

Windows is made for corporations, Macs are made for users.

As a Linux and Windows user I feel like a slave whenever I use a Mac.

Apple just doesn't let you do what you want, the way you want to do it. Any limitations that Linux has are purely technical, while Apple clearly limits users for profit. If you happen to be able to accept the limitations, great for you - I'd still feel dirty supporting a company that's clearly working towards a future where you have to pay them to put your own programs on your own devices.

Windows I just use for entertainment now but even for that set of functionality - I couldn't stand having to deal with a finicky Mac. My Mac Pro 2012 (which I bought used) won't even work with certain mice, keyboards or displays.

I agree with most of your concerns, but i guess i'm up to a point where there isn't very much choice for quality ecosystems, and mac provides what i need, so here i am..

On my desktop i dualboot Linux/Windows, windows for games only, and linux for when i want 0 distractions at all and focus on programming

I wish i could nuke my windows partition and have linux/mac side by side, but gaming on linux is still a pain today..

>including deleting NTFS

Imagine drinking so much kool-aid that you literally don't like a filesystem for no reason other than Apple good Microsoft bad.

NTFS is slow compared to EXT4 for my use cases, even slower than APFS

I haven't heard that; my PC runs NTFS on an M.2 SSD and I have no complaints about the filesystem. The fastest SSDs get over 3600MB/s using NTFS.

Is your use case something odd? I wouldn't expect to hit filesystem limits on a laptop anyway.

[1] https://ssd.userbenchmark.com/

unrealistic benchmarks are not my use case

compiling projects is, and it is slow with NTFS + slow windows process creation

People should stop comparing Apple Silicon to Intel and start comparing it to AMD.

There is no MacBook with an AMD processor; I think that's why it is compared to Intel. That, plus Intel being the historic leader.

There is no MacBook with an AMD processor because Apple didn't want to make one.

Intel hasn't been the leader for 5+ years now.

Let's not get ahead of ourselves. In the notebook space they've been the industry leader for less than 12 months.

This is not the case. The very first Zen laptop parts were far ahead of Intel.

There are plenty of non-MacBooks with AMD processors that are competitive with it.

Comparing to Intel is like comparing to PPC MacBooks - meaningless.

At this point, isn't comparing it to any non-Mac meaningless?

The only way to get an M1 is to buy a Mac anyways.

Unless/until you see a mass exodus of people from x86 based OSes the comparisons don't matter.

Sure, new Macs will be super fast, and that in itself will result in some conversions, but there are plenty of other factors that would keep people from switching to Mac.

At the moment you can buy Intel and Apple MacBooks. They run the same system, etc., so that's how you can compare the two machines.

Considering that Apple ditched Intel for their own Apple Silicon chips it makes more sense comparing them (for now).

Why? The MacBooks aren't competing against their previous iterations (since you can't buy them anymore), but against their non-Apple competition.

When you'll be buying a new laptop, you won't be choosing between a 2018 MacBook and a 2021 MacBook, but between 2021 MacBook and offerings from Lenovo, Dell, Microsoft and others.

Well, I think the decision is a little more complicated.

If you are upgrading, OS inertia may prevent you from choosing a machine that doesn't run the OS you're currently on. So a lot of Windows users may not even consider a Mac, and vice versa.

The latest 13" x86 MacBook Pro released less than 1 year ago and is still for sale.

There's a MacBook Pro 16" that you cannot get with an M1 processor. That's the one I have.

So yes, you can compare to the current gen and would be choosing between that.

You sure can still buy the old ones.

I suspect you will be able to for another year or so. See comment from me elsewhere on this thread for my reasoning.

Well, yes, looking back, so to speak, you can put an older Intel MBP and a new one with an M1 side by side and get a comparison showing the latter fares better. But looking into the future, a customer considering the purchase of a new laptop with Apple Silicon would benefit from a comparison with the top Zen 3 offerings, even though such a comparison might be much trickier than the one with Intel.

The new Zen 3 offerings are indeed tantalizing. Even if you never buy an Apple Silicon Mac, the kick in the pants it has no doubt delivered to both Intel and AMD benefits all of us.

And as a friend keeps reminding me, this is their cheapest entry level processor :)

It'd look less bad for AMD, but still wouldn't look brilliant.

aarch64 allows some tricks that are much harder to pull off on amd64.

AMD still generally outperforms Apple Silicon despite larger lithographies, so I don't really see it.

AMD Zen 3 outperforms on workloads that can take advantage of having more cores, but for single threaded workloads, it leads in some SPEC subtests and falls behind in others.


> AMD still generally outperforms Apple Silicon despite larger lithographies

The single-core benchmarks I saw put the M1 against CPUs with amd64 cores running SMT (which is fair - SMT extracts about 20% extra parallelism from an amd64 workload, something the M1 does using a humongous reorder buffer on a single thread) with the AMD (and latest Intel) parts showing a slight advantage.

They do so, however, at a much higher TDP which, if my own computers (all but the server under the desk) are any indication, will cause the x86s to throttle down within a couple of seconds while the M1 will continue at full steam.

They have a much higher TDP because you are comparing PC parts that are not optimized for power consumption. Their uncore is made using power-inefficient lithographies to save on cost, and it contributes a ton to power usage.

They also have a lot more cores.

Per high-performance core, M1 macs are tied or less efficient depending on the workload.

> Per high-performance core, M1 macs are tied or less efficient depending on the workload.

That's not what the benchmarks I see say. AMD's chips with 8 amd64 cores beat the M1, which has 4 big cores and 4 low-power ones. In some benchmarks, the x86 laptop CPUs record higher performance, but at a higher clock and with a higher TDP. I don't have the IPC numbers for either, but with a lot of execution ports, its huge reorder buffer and the less strict memory consistency model, I suspect the IPC to be higher on the M1.

The low-performance cores of the M1 go anywhere from useless to actively harmful in critical multithreaded workloads.

When you are doing serious work, the M1 is really a four core CPU and nothing else.

M1 cores have higher IPC, yes. They are also bigger and slower. Larger reorder buffers and more flexible memory models mean bigger cores which mean slower clock speeds at the same power consumption.

The M1 also doesn't have SMT, which is an easy +20% IPC.

If you measure the performance per core and the power consumption per core, and of course I'm talking only high performance cores, then last gen x86 laptop CPUs are right there despite higher performance.

Even at double the GPU cores it’ll still get beat by an RDNA2 based Ryzen GPU. The current M1 is about equal to a GTX 1060 performance wise according to benches. Although the Zen3 based RDNA2 APU “Rembrandt” likely won’t be out until 2022 and we’ll have to settle for the Zen2 based Van Gogh in 2021.

A lot of things will probably beat it at performance, but not a lot of them will beat GTX 1060 plus Core i5 performance while using only 26.5W of power to do it.

The performance of the M1 is remarkable, but the real killer feature is they managed to push that much performance while reducing power consumption.

That’s great and all but for people who are gamers or doing heavy work with the GPU they’re going to be plugged into power most of the time.

I’d rather have a chip that scales to high performance when plugged in and lower GPU performance when I’m on, say, an airplane or in a cafe, than one whose GPU performance is a half decade behind.

You're bringing up a limitation of the current tech, that it tethers you to a wall, to speak against this new tech. Does a student who has a gap between classes always want to find an outlet somewhere and deal with their computer sounding like it's about to lift off? Does the video editor offloading some of their workload to the GPU always want to find an outlet?

Apple is selling super quiet hypermobile performance here, I see them as occupying a niche with no real competition.

A lot of those students are using Chromebooks, which will get nothing but cheaper. Video editors almost always want large monitors and top performance. Show me someone who works as a professional video editor who does it on a laptop screen.

The best argument you could make for the MacBook in this regard is for developers who might be working as nomads, and musicians who often lug their machines to their gigs.

But for someone doing heavy 3D, video, or gaming? I just don't see it. The M1 can't even run Tomb Raider above 30 fps. It's a 2 year old game, and isn't even that impressive. A GTX 1080 will yield 2x-3x that performance: https://www.anandtech.com/show/13400/intel-9th-gen-core-i9-9...

I'm just saying, they've got a long way to go before they capture the hardcore gamer or 3D rendering market.

>Show me someone who works as a professional video editor who does it on a laptop screen.

I know a videographer who worked for news wire services who did just this and did his job on a MacBook sometimes on site. Helped him deliver news content faster. I don't think video editing on a MBP directly is as rare as you might assume.

No doubt Macs haven't turned into ideal gaming machines, but I've gamed on ultraportables with a lot less GPU than this; I just never played a game like Tomb Raider.

Is it able to use 26.5W while using both GPU and CPU without degrading performance?

The answer is no. 26.5W is only for the CPU. If you also need to use GPU then it throttles or consumes more.

It has a maximum power consumption of 39W, with an idle consumption of 6.8W. Both measured “on wall”, so including ram, storage and everything else.

Compare it to the previous (Intel) generation that uses 20W idle and 122W under load, while matching the M1 in neither CPU nor GPU performance.
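Back-of-the-envelope on those wall numbers (just the figures quoted above; real efficiency obviously depends on the workload):

```python
# Rough power ratios from the "on wall" figures quoted above,
# which include RAM, storage and everything else.
m1_idle, m1_load = 6.8, 39.0          # watts, M1 MacBook
intel_idle, intel_load = 20.0, 122.0  # watts, previous-gen Intel MacBook

print(f"idle:  Intel draws {intel_idle / m1_idle:.1f}x the M1")  # ~2.9x
print(f"load:  Intel draws {intel_load / m1_load:.1f}x the M1")  # ~3.1x
```

So at the wall the Intel machine pulls roughly 3x the power at both ends of the range, while (per the comment above) delivering less CPU and GPU performance.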


This does not address whether the GPU throttles in high CPU usage scenarios.

That being said, RAM and storage are pretty negligible: LPDDR4/X uses around 0.2-0.5W, and storage around 0.05-0.1W.

There is also no reason to compare to Intel. The competition for Macs aren't older macs - newer macs are always going to be better than older macs. The competition is to what they could have been if they went for AMD, and to other computers of similar form-factors.

In both cases, you're comparing to AMD, not Intel.

With regards to performance and power consumption, AMD is not far ahead of Intel. Yes, they have better performance when you factor in all the Intel "bugs" like branch prediction, but it's only about 10%-20%, and TDP is more or less the same.

Comparing it to an M1 will yield the same results. The lowest TDP I could find was 65W for a Ryzen 3, which is still a long way from the M1 in both power consumption and performance.


Why are you comparing AMD desktop processors with laptop processors on power efficiency? That's incredibly contrived.

Compare the 5900HS with the M1, which is a 35W SoC, just like the M1, and much, much faster than any Ryzen 3.

It’s an SoC. Unless you consider the CPU basically just the ALU and register file (a quite reasonable but long outdated position), it’s all the same power budget for the M series.

It's a simple result. SoC or not, if total power consumption is at X with only the CPU while the GPU is inactive, then it will either be at >X when using both CPU and GPU at the maximum, or be slower in CPU performance.

This holds for SoCs or for systems that manage total power equally.
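The shared-budget argument in a few lines (a sketch; the 26.5W figure is the CPU-only number quoted upthread, and the allocation logic is invented for illustration, not how any real power controller works):

```python
# Hypothetical fixed power budget shared between CPU and GPU on an SoC.
BUDGET = 26.5  # watts, CPU-only figure quoted upthread

def cpu_power_available(gpu_draw: float, budget: float = BUDGET) -> float:
    """Whatever the GPU draws comes out of the CPU's share of a fixed budget."""
    return max(budget - gpu_draw, 0.0)

print(cpu_power_available(0.0))   # GPU idle: CPU can use the whole budget
print(cpu_power_available(10.0))  # GPU active: CPU share shrinks, so either
                                  # the CPU slows down or total draw exceeds X
```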

Side question: how do you object to all the cookie related activities on that website and not just the "legitimate interest" one?
