GFXBench: Apple M1 (gfxbench.com)
268 points by admiralspoo on Nov 13, 2020 | 474 comments



Why are all performance measurements of the M1 done against stuff that is far below state of the art?

So it's faster than a three-generations-old budget card that doesn't run Nvidia-optimized drivers, over (I'm assuming) Thunderbolt. So?

So it's faster than the last MacBook Air, which was old, thermally constrained, and had a chip from Intel that has been overtaken by AMD.

Every test is single-core, but guess what: modern computers have multiple cores and hyperthreading, and that matters.

Apple's presentation was full of weasel words like "in its class" and "compared to previous models". Fine, that's marketing, but can we please get some real, fair benchmarks against the best the competition has to offer before we conclude that Apple's new silicon is a gift from god to computing?

If you are going to convince me, show me how the CPU stacks up to a top-of-the-line Ryzen/Threadripper and run Cinebench. If you want to convince me about the graphics/ML capabilities, compare it to an RTX 3090 running Vulkan/CUDA.


I'm sure you understand the performance differences between a 10W part with integrated graphics designed for a fanless laptop and a desktop part with active cooling and discrete graphics.

This article from Anandtech on the M1 is helpful in understanding why the M1 is so impressive.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.

You say something like that about a low power fanless design and every tech nerd's first reaction is "bullshit". And now they want to call you on your bullshit.

[1] https://www.apple.com/newsroom/2020/11/introducing-the-next-...


You're misinterpreting what they say. Quote:

"And in MacBook Air, M1 is faster than the chips in 98 percent of PC laptops sold in the past year.1"

There is a subtle difference between "98 percent of laptops sold" and your rephrasing as "2% of laptops you can buy today".

If you doubt the meaning, check out the footnote which refers to "publicly available sales data". You only need sales data if sales volume is a factor in the calculation.

I also don't doubt that "every tech nerd's reaction is 'bullshit'", but only because making supremely confident proclamations of universal truth that are soon proven wrong is pretty much the defining trait of that community (cf. various proclamations that solar power is useless, that CSI-style super-resolution image enhancement is "impossible" because "the information was lost", "512k should be enough...", "less space than a Nomad...", and everything Paul Graham has said, ever).

Here, these nerds should have been pointed at, for example, last year's iPad Pro, which has no fans (only -boys) and was already at that performance level (https://www.tomsguide.com/us/new-ipad-pro-benchmarks,news-28...).


> I also don't doubt that "every tech nerd's reaction is 'bullshit'", but only because making supremely confident proclamations of universal truth that are soon proven wrong is pretty much the defining trait of that community (cf. various proclamations that solar power is useless, that CSI-style super-resolution image enhancement is "impossible" because "the information was lost", "512k should be enough...", "less space than a Nomad...", and everything Paul Graham has said, ever).

Notwithstanding the fact that you do have a point here, polemically phrased as it may somewhat ironically be, I just want to point out that the Paul Graham reference is probably not the best example of the "tech nerd community" trait you're describing. At least this particular community doesn't quite believe that everything Paul Graham says is true; a couple of examples:

* "Let the Other 95% of Great Programmers In": https://news.ycombinator.com/item?id=8799572

* "The Refragmentation": https://news.ycombinator.com/item?id=10826836

I could share a lot more HN discussions, and some of his essays where he pretty much describes the trait you're taking issue with here -- but I'm already dangerously close to inadvertently becoming an example of a tech nerd who believes "everything Paul Graham has said, ever" is absolutely true ;) I don't, and I know for a fact that he doesn't think so either (there's an essay about that too).


Grandparent doesn't understand information theory. True superresolution is impossible. ML hallucination is a guess, not actual information recovery. Recovering the information from nowhere breaks the First Law of Thermodynamics. If grandparent can do it, he/she will be immediately awarded the Shannon Award, the Turing Award, and the Nobel Prize for Physics.


True superresolution is impossible, but a heck of a lot of resolution is hidden in video, without resorting to guesses and hallucination.

Tiny camera shakes on a static scene give away more information about that scene; it's effectively multisampling of the static scene. (If I had to hazard a guess, any regular video could "easily" be upscaled 50% without resorting to interpolation or hallucination.)
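
A minimal NumPy sketch of that shift-and-add idea (illustrative only, assuming the sub-pixel offsets between frames are already known; in practice estimating them is the hard registration step, and the function and frame names here are hypothetical):

    import numpy as np

    def shift_and_add(frames, offsets, scale=2):
        # Place each low-res sample onto a finer grid at its known
        # sub-pixel offset, then average wherever samples overlap.
        h, w = frames[0].shape
        acc = np.zeros((h * scale, w * scale))
        count = np.zeros_like(acc)
        for frame, (dy, dx) in zip(frames, offsets):
            ys = np.clip(np.arange(h)[:, None] * scale + int(round(dy * scale)), 0, h * scale - 1)
            xs = np.clip(np.arange(w)[None, :] * scale + int(round(dx * scale)), 0, w * scale - 1)
            np.add.at(acc, (ys, xs), frame)
            np.add.at(count, (ys, xs), 1)
        # Cells that no frame landed on simply stay zero in this naive version.
        return acc / np.maximum(count, 1)

    # e.g. four frames of a static scene, each shifted by half a low-res pixel:
    # hi = shift_and_add([f0, f1, f2, f3], [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)], scale=2)

With four half-pixel-shifted frames, every high-res cell gets one genuine sample, which is the multisampling argument in toy form.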

Our wetware image processing does the same - look at a movie shot at the regular 24fps where people walk around. Their faces look normal. But pause any given frame, and it's likely a blur. (But our wetware image processing likely does hallucination too, so it's maybe not a fair comparison.)


Temporal interpolation is still interpolation.


It's not temporal interpolation. It's using data from different frame(s) to fill in the current frame. It's not interpolation at all. It's using a different accurate data source to augment another.


Are you objecting to the premise that the scene is static?


Wait, why thermodynamics? The first law of thermodynamics is about conservation of energy not of information.


Same thing.


Super resolution can and does work in some circumstances.

By introducing a new source of information (the memory of the NN) it can reconstruct things it has seen before, and generalise this to new data.

In some cases this means hallucinations, true. But at other times (e.g. text where the NN has seen the font) it is reconstructing what that font is from memory.


But the thing is, in that case the information contained in the images was actually much less than what we are meant to believe.

So if we are reconstructing letters from a known font, we are essentially extracting 8 bits of information from the image. I'm pretty certain that if you distort the image to an SNR equivalent of below 8 bits, you will not be able to extract the information.
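
For reference, a quick back-of-the-envelope (assuming 26 equally likely letters; the 8-bit figure corresponds to a full ASCII byte rather than just the alphabet):

    import math

    print(math.log2(26))   # ~4.70 bits to pick one of 26 equally likely letters
    print(math.log2(256))  # 8 bits for a full ASCII byte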


Recalling memory of something different and using it to interpolate is a form of hallucination.


Wait until panta discovers image compression…


Lossy image compression creates artifacts, which are in a way a form of falsely reconstructed information - information which wasn't there in the original image. Lossless compression algorithms work by reducing redundancy, but don't create information where there was none (thus being very different from super-resolution algorithms).


Not if it’s written text and you are selecting between 26 different letters. It’s a probabilistic reconstruction, but that’s very different to a hallucination.


> CSI-style super-resolution image enhancement is "impossible"

If it ever started to be used in court, Ryan Gosling would have a lot of explaining to do. I hope that day never comes.


You're both right, but they're more right because the subtle difference you mention is the problem they're highlighting: Apple went out of their way to be unclear and create subtle differences in interpretation that would be favorable to Apple, as a company should.

After the odd graphs/numbers from the event, I was worried it was going to be an awful ~2 year period of jumping to conclusions based on:

- Geekbench scores

- "It's slow because Rosetta"

- HW comparisons that compare against ancient hardware because "[the more powerful PC equivalent] uses _too_ much power" implying that "[M1] against 4 year old HW is just the right amount of power", erasing the tradeoff between powerfulness and power consumption

The people claiming this blows Intel/AMD out of the water need to have stronger evidence than comparing against parts launched years ago for budget consumers, then waving away any other alternative based on power consumption.[1]

Trading off powerfulness for power consumption is an inherent property of chip design; refusing to consider other chipsets because they have a different set of tradeoffs means you're talking about power consumption alone, not the chip design.

[1] n.b. this is irrational because the 4 year old part is likely to use more power than the current part. So, why is it more valid to compare against the 4 year old part? Following this logic, we need to find a low power GPU, not throw out the current part and bring in a 4 year old budget part.


I really would suggest reading the Anandtech article linked above - I think it will help to clarify where the M1 stands against the competition.


I hate to draw a sweeping conclusion like this without any facts and without elaborating, but I'm late for dinner :( It's an absolute _nightmare_ of an article, leaving me little hope we'll avoid a year or two of bickering on this.

IMHO it's much more likely UI people will remember that it kinda sucks to have 20-hour battery life with a bunch of touch apps than that we'll clear up the gish gallop of Apple's charts and the publications rushing to provide an intellectual basis for them without having _any_ access to the chip under discussion. They substitute iPhone/iPad chips, which can completely trade off thermal concerns for long enough for a subset of benchmarks to run, making it look like the powerfulness/power consumption tradeoff doesn't exist, though it was formerly "basic" to me.


That is not what Apple said. Quote:

"Faster than 98% of PC laptops"

[1]: https://d1abomko0vm8t1.cloudfront.net/article/images/800x800...


My quote was from Apple's MacBook Air page[1], including the footnote.

Your quote is just not very specific. On its face, it could mean every PC Laptop ever produced. I'm somewhat certain that every Apple computing device ever has beaten that standard. Even Airpods and the Macbook chargers might be getting close these days.

[1]: https://www.apple.com/macbook-air/


> Even Airpods and the Macbook chargers might be getting close these days.

I simply can't agree with these logical gymnastics.


2019 laptop sales are 9.3% of laptop sales for the years 2010-2019 (https://www.statista.com/statistics/272595/global-shipments-...). The phrase "Faster than 98% of PC laptops" in itself is very general, and it's fair to assume that it means e.g. all laptops currently in use, or ever made - in part since this is a sales pitch rather than a technical specification item. If we add 1990-2009 laptops to the statistic above, the share of purportedly modern laptops will just shrink, and substantially at that.

Then you have to add on top of that the consideration of which laptops people are actually buying and using. I can assure you that neither enterprise clients nor anyone outside of a select, relatively spoiled group of people concentrated in just a few areas is regularly buying top-of-the-line workstation or gaming laptops.

The parent is very much right; amongst techies there's an unhealthy tendency toward self-righteous adjudication of the narrative or context.


>I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.

Exactly. It's a meaningless number.

They also conspicuously avoid posting any GHz information of any kind. My assumption is that it's a fine laptop, but a bullshit performance claim.


The clock speed of ARM chips that have big.LITTLE cores is not very meaningful. The LITTLE cores can run at lower frequencies than the big cores. The Apple A series (and, I will say with some confidence, the M series) support heterogeneous operation, so both sets of cores can be active at once. The big cores can also be powered off if there are no high-priority/high-power tasks.

The cores and frequency can scale up and down quickly so there's not a meaningful clock speed measure. The best number you'll get is the maximum frequency and that's still not useful for comparing ARM chips to x86.

Even Intel's chips don't have frequency measures that are all that useful. Some workloads support TurboBoost but only within a certain thermal envelope. Even the max conventional clock for a chip is variable depending on thermals.

I don't think it's worthwhile faulting Apple for not harping on the M series clock speeds since they're a vector instead of a scalar value.


Nobody is “faulting” Apple. They tend to avoid saying things for weird PR reasons.

In this case, they make impressive sounding, but almost completely meaningless assertions about performance. If anything, they are underselling an impressive achievement.

Intel does provide performance ranges in thermally constrained packages that are meaningful.


> They also conspicuously avoid posting any GHz information of any kind.

Comparing CPUs by GHz is like comparing cars by seeing which car can rev the hardest.


The frequency matters because it would give a far better insight into expected boost behavior & power consumption.

For example, when you look up the 15W i5-1035G1 and see a 1 GHz base and a 3.6 GHz boost, you can figure out that no, those Geekbench results are not representative of the chip running at 15W. It's pulling way, way more because it's in the turbo window, and the gap between base & turbo is huuuuuge.

So right now when Apple claims the M1 is 10W, we really have no context for that. Is it actually capped at 10W? Or is it 10W sustained like Intel's TDP measurements? How much faster is the M1 over the A14? How different is the M1's performance in the fanless Air vs. the fan MBP 13"?

Frequency would give insights into most all of that. It's not useless.


Here are the Geekbench scores for various iPhones [0] so you can answer for yourself. Here's the MacBook Air for comparison [1].

[0] https://browser.geekbench.com/ios_devices/iphone-12

[1] https://browser.geekbench.com/v5/cpu/4651056


The GeekBench tests have revealed that the clock frequency of the M1 cores is 3.2 GHz, both in MB Air and in MB Pro.

Therefore the M1 manages slightly higher single-thread performance (between 3% and 8%) than Intel Tiger Lake and AMD Zen 3 at only 2/3 of their clock frequency.

The M1's single-thread speed advantage is not large enough to be noticeable in practice, but what counts is that reaching that performance through high IPC at low frequency, instead of low IPC at high frequency, means matching the competitors at much lower power consumption.

This is the real achievement of Apple and not the ridiculous claims about M1 being 3 times faster than obsolete and slow older products.


Geekbench doesn't monitor the frequency over time. We don't know what the M1 actually ran at during the test, nor what it can boost to. Or if it has a boost behavior at all even.


It is good to be skeptical of that "faster" and that 2% measurement, because there are lots of opportunities to make them ambiguous. But clock frequencies are hardly useful for comparisons between AMD and Intel. They'd be even more useless across architectures.

Benchmarks are as good as it gets. Aggregate benchmarks used for marketing slides are less than ideal, but the problem with marketing is that it has to apply to everyone. Better workload focused benchmarks might come out later... In their defense most people won't look at these, because most people don't really stress their CPUs anyway.


> most people don't really stress their CPUs anyway.

I find that Teams, Slack and VS Code running together stresses my 2019 Macbook Pro 16.


I agree, but it’s a weird measure to leave out when they advertise 16 “neural engine” cores.

Understanding the clock difference between the low energy and performance cores might be an interesting comparator, for example.


> They also conspicuously avoid posting any GHz information of any kind

Is that information actually of any real use when dealing with a machine with asymmetric cores plus various other bits on the chip dedicated to specific tasks [1]?

[1]: https://images.anandtech.com/doci/16226/M1.png


What does GHz have to do with performance across chips? I’m reminded of the old days when AMD chips ran at a lower GHz and outperformed Intel chips running much faster. Intel had marketed that GHz === faster, so AMD had to work to get around existing biases.


Even Intel had to fight against it when they transitioned from the P4 designs to Core. They began releasing much lower clocked chips in the same product matrix and it took a while for people to finally accept that frequency is a poor metric for comparing across even the same company’s designs. And I think Apple also significantly contributed to this problem in the mid-late PPC days with “megahertz myth” marketing coupled with increasingly obvious misleading performance claims.


GHz is also a fancy, meaningless number that was spawned by Intel's marketing machine. The Ryzen 5000 chips cannot hit the 5 GHz mark that Intel ones can, and yet the Ryzen 5000 chips thrashed the Intel ones in every benchmark done by everyone on YouTube.

And that's when both chips are on the same x86 platform. Differences across architectures can't and shouldn't be compared with GHz as any kind of basis.

The only place where GHz is useful for comparison is between products running the same chip.


Why is the clock speed relevant? Yes, it's lower than the highest-end x86 chips, but the PPC (performance per clock) is drastically higher. They don’t want clueless consumers thinking clock speed matters to performance.


I don't know if you've actually done the numbers, but most laptops on the market have low to mediocre specs. It would surprise me if more than 2% are pro/enthusiast.


Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.

https://www.newegg.com/Gaming-Laptops/SubCategory/ID-3365

As a Joe Schmoe it's hard to get good figures, but it appears the total laptop market is about $161.952B[1] with the "gaming" laptop segment selling about $10.96B[2]. Since gaming laptops are more expensive this undercounts cheap laptops, but there are other classes of laptop that are going to outperform this mac, like business workstations.

There might be one way to massage the numbers to pull out that statistic somehow, but it is at best misleading.

[1] https://www.statista.com/outlook/15030100/100/laptops-tablet...

[2] https://www.statista.com/statistics/1027216/global-gaming-la...


>Apple didn't specify if they're counting by model or total sales,

If it were by model, they would be spinning it. But they said "sold in the past year". I don't know how anyone else would interpret it, but in finance and analytics that very clearly implies units sold.

Your [1] is laptops plus tablets; the total laptop market is about $100B, although this year we might see a sharp increase because of the pandemic.

Let's say there's a $10B gaming laptop market. So 10% of the market value goes to gaming laptops. The total laptop market includes Chromebooks, so if you do ASP averaging I would expect at least a 3x (if not 4x or higher) difference between gaming laptops and the rest of the laptop market. So your hypothesis that "all gaming laptops" are faster than the M1 gives you roughly 3.3% of the market share. Not too far off 2%.

And all of that is before we put any performance number into comparison.
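
A rough version of that arithmetic (the 3x ASP ratio is the assumption above; dividing the revenue share by the ASP ratio slightly understates the exact unit share, which would be (10/3)/(10/3 + 90) ≈ 3.6%):

    revenue_share_gaming = 0.10  # ~$10B of a ~$100B laptop market
    asp_ratio = 3.0              # assumed gaming-laptop ASP vs. the rest of the market
    unit_share_gaming = revenue_share_gaming / asp_ratio
    print(unit_share_gaming)     # ~0.033, i.e. roughly 3.3% of units sold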


"Virtually everything in the Gamer Laptop Category is going to be faster in virtually every measure"

I'd really suggest you look at the CPU benchmarks (single and multi core) for M1 vs Intel / AMD laptops.


The only ones I've seen are from Apple, and I'm always skeptical of benchmarks from manufacturers.


"Virtually everything in the Gamer Laptop Category is going to be faster in virtually every measure"

So this statement is made on the basis of what data?


On the fact that discrete gaming laptops have higher power requirements and better cooling solutions, in turn allowing much faster CPUs to run in them.

That's the most meaningful constraint for mobile CPUs today, after all.


If you're not going to compare Apples to Apples, i.e. if power, cooling, and size are constraints you're not going to care about at all, you might as well count desktop PCs as well.

Apple's measurement comes pretty close to comparing "laptops that most people would actually buy". Not sure why it's meaningful that a laptop maker can put out a model that's as thick as a college book, has the very top bins of all parts, sounds like a jet engine when running at max speed, and is purchased by 1% of the most avid gamers.

Oh, and if someone puts out a second model that adds an RGB backlit keyboard but is otherwise equivalent, that should somehow count against Apple's achievements, because for some reason counting by number of models is meaningful regardless of how many copies that model sold. o_O


So no data, except a view that more power and more cooling automatically lead to better performance independent of process, architecture, and any other factors?


> Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.

This is what I don't get.. why would you ever assume they meant counting by model? That's a nearly meaningless measurement. How do you even distinguish between models in that measurement? Where do you set the thresholds? The supercharged gaming laptops are absolutely going to dominate that statistics no matter what, because there's a huge number of models, lots of which only differ mostly by cosmetic changes. The margins are likely higher, so they don't need to sell as many of a given model to make it worthwhile to put one out. Does a laptop maker even have to actually sell a single model for it to count? How many do they have to sell for it to count? Does it make sense to count models where every part is picked from the best performing bins, so that you're guaranteed that the model couldn't count for more than a fraction of sales?

Counting by number of laptops actually sold is the only meaningful measurement, at least you have a decent chance of finding an objective way to measure that.

And I thought it was 100% obvious from Apple's marketing material what they meant, so I really don't get why anyone is confused about this.


I assume Apple means unit sales. It makes sense that 98% of all laptop units sold are not the fastest.


> Apple didn't specify if they're counting by model or total sales, but virtually everything in the Gamer Laptop category is going to be faster in virtually every measure.

Not a single one will beat it in single core performance.


Just a question... are they talking about their own laptops too?


I suspect they're not technically saying that, since they're saying "PC laptops." But as Coldtea notes, it's pretty clear the M1-based laptops embarrass all current Intel-based Mac laptops. I'm just not going to fault Apple too much for failing to explicitly say "so this $999 MacBook Air just smokes our $2800 MacBook Pro."


Yes. The M1 scores faster than the last Intel-based MBP 16".


'98% of laptops sold over the last year', not 'that you can buy', i.e. not 98% of all models on the market (whatever that means).

And their statement will have been through all sorts of validation before they used it, so it's almost certainly not 'bullshit'.


Faster than 98% of the cheapest OEM laptop available!


I just had the thought that the figures could be skewed by education departments all over the country making bulk orders for cheap laptops for students doing remote learning.


Very good point.


Nope, "Faster than 98% of the laptops sold, which includes many of the cheapest OEM laptop available!".

Not the same as "Faster than 98% of the cheapest OEM laptop available!".


Hey if they’re allowed to miss large bits of information for marketing purposes, so am I :)


And there are some truly, incredibly crap ones, possibly costing even less than an additional power supply for the MacBook.

That statistic from Apple is just as bad as those.


It beats Apple’s own top-of-the-line Intel laptop at a fraction of the cost, according to the tests which have surfaced. That ought to count for something.

Apple has not made pure fantasy claims in the past so why should they now? The trend has been clear. Their ARM chips have made really rapid performance gains.


We don’t even have a significant quantity of people with hardware in hand yet so I’d like to reserve judgement.

At best we have some trite stats about single core performance but I’m interested to see whether or not this maps to reality on some much harder workloads that run a long time. Cinebench is an important one for me...


It's bullshit especially if it's true! That's how advertising generally works, to mislead without explicitly lying.


The old quad-core 2020 MacBook Air was probably faster than 98% of the existing laptops on the market, given which specs sell in the highest volume (<$500).


> every tech nerd's first reaction

Every true tech nerd has known this is fucking impressive ever since iPhones started being benchmarked against Intel, and every true tech nerd can tell that this is only going to get better. :)


Yep, people who actually care about tech and hardware are applauding this. Anti-Apple and x86 fanboys are the ones doing everything they can to discount it.


What % of laptops come with an Nvidia 3090 in them? Really, please, do tell me a number. I’ll wait.


>I think Apple brought this on themselves when they announced it would be faster than 98%[1] of the existing laptops on the market. They didn't caveat it with "fanless laptops" or "laptops with 20hr of battery life", it's just supposedly faster than all but 2% of laptops you can buy today.

No, it's faster than all those laptops, not just "fanless" ones. It's actually faster than several fan-using i9 models. And it has already been compared to those.

The grandparent talks about comparisons for GPUs...


> I'm sure you understand the performance differences between a 10W part with integrated graphics designed for a fanless laptop and a desktop part with active cooling and discrete graphics.

The problem here is that the obvious competition is Zen 3, but AMD has released the desktop part and not the laptop part while Apple has released the laptop part and not the desktop part. (Technically the Mini is a desktop, but you know what I mean.)

However, the extra TDP has minimal effect on single thread performance because a single thread won't use the whole desktop TDP. Compare the laptop vs. desktop Zen 2 APUs:

https://www.anandtech.com/bench/product/2633?vs=2635

Around 5% apart on single thread due to slightly higher boost clock, sometimes not even that. Much larger differences in the multi-threaded tests because that's where the higher TDP gets you a much higher base clock.

So comparing the single thread performance to desktop parts isn't that unreasonable. The real problem is that we don't have any real-world benchmarks yet, just synthetic dreck like geekbench.


I'm not so sure the fellow does understand that difference.

Their take reminds me of the Far Side cartoon, where the dog is mowing the lawn, a little irregularly, and a guy is yelling at him, "You call that mowing the lawn?"[1]

[1] https://www.pinterest.com/pin/252623860321491593/


This is true, but I think a lot of folk are assuming that a future M2 or M3 will be able to scale up to higher wattage and match state-of-the-art enthusiast-class chips. That assumption is very much yet to be proven.


This is true, but I think a lot of folk are assuming that a future M2 or M3 will be able to scale up to higher wattage and match state-of-the-art enthusiast-class chips.

Apple wouldn't go down this path if they weren't confident that their designs would scale and keep them in the performance lead for a long time.

Look, the Mac grossed $9 billion last quarter, more than the iPad ($6.7 billion) and more than Apple Watch ($7.8 billion). They've no doubt invested a lot of time and money into this; there's no way, now that they've jettisoned Intel, they haven't gamed this entire thing out. There's too much riding on this.

Yes, Apple's entry level laptops smoke much more expensive Intel-based laptops. But wait until the replacements for the 16-inch MacBook Pro and the iMac and iMac Pros are released.

By then, the geek world will have gone through all the phases of grief; we seem deep into denial right now, with some anger creeping in.


> Apple wouldn't go down this path if they weren't confident that their designs would scale and keep them in the performance lead for a long time.

There are two reasons to think this might not be the case. The first is that they could justify continuing to do this to their shareholders based solely on the cost savings from not paying margins to Intel, even if the performance is only the same and not better. Their customers might not appreciate having the transition dumped on them in that case, but Apple has a specific relationship with their customers.

And the second is that these things have a long lead time. They made the call to do this at a time when Intel was at once stagnant and the holder of the performance crown. Intel is still stagnant but now they have to contend with AMD. And with whatever Intel's response to AMD is going to be now that they've finally got an existential fire lit under them again.

So it was reasonable for them to expect to beat Intel's 14nm++++++ with TSMC's 5nm, but what happens now that AMD is no longer dead and is using TSMC too?


The first is that they could justify continuing to do this to their shareholders based solely on the cost savings from not paying margins to Intel, even if the performance is only the same and not better.

You know Apple’s market capitalization is a little over $2 trillion dollars, right? And Apple's gross margins have been in the 30-35% range for many years. This isn't a shareholder issue. They are by far the most profitable computer/gadget manufacturer around.

So it was reasonable for them to expect to beat Intel's 14nm++++++ with TSMC's 5nm, but what happens now that AMD is no longer dead and is using TSMC too?

No matter what AMD does in the short term, they're not going to beat the performance per watt of the M1, let alone the graphics, the Neural Engine and the rest of the components of Apple's SoC. It's not just 14nm vs. 5nm; it's also ARM’s architecture vs. x86-64.

Apple has scaled A series production for more than a decade and nobody has caught them yet in iPhone/iPad performance. There were 64-bit iPhones for at least a year before Qualcomm and other ARM licensees could catch up.

There's no evidence or reason to believe it'll be any different with the M series in laptops and desktops.


> You know Apple’s market capitalization is a little over $2 trillion dollars, right?

That's the issue. Shareholders always want to see growth, but when you're that big, how do you do that? There isn't much uncaptured customer base left while they're charging premium prices, but offering lower-priced macOS/iOS devices would cannibalize the margins on existing sales. Solution: Increase the margins on existing sales without changing prices so that profitability increases at the same sales volume.

> No matter what AMD does in the short term, they're not going to beat the performance per watt of the M1

Zen 3 mobile APUs aren't out yet, but multiply the performance of the Zen 2-based Ryzen 7 4800U by the 20% gain from Zen 3 and the multi-threaded performance (i.e. the thing power efficiency is relevant to) is already there, and that's with Zen 3 on 7nm while Apple is using 5nm.
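
As a rough yardstick, borrowing the leaked Geekbench 5 multi-core figures quoted further down the thread (treated here as assumptions: ~6000 for the Zen 2 4750U/4800U class, ~7500 for the M1):

    zen2_multicore = 6000  # leaked Geekbench 5 multi-core figure, Zen 2 U-series class
    zen3_uplift = 1.20     # the claimed ~20% generational gain
    m1_multicore = 7500    # leaked Geekbench 5 multi-core figure for the M1

    projected = zen2_multicore * zen3_uplift
    print(projected, projected / m1_multicore)  # 7200.0, ~0.96 of the M1 score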

> it's also ARM’s architecture vs. x86-64.

The architecture is basically irrelevant. ARM architecture devices were traditionally designed to prioritize low power consumption over performance whereas x86-64 devices the opposite, but that isn't a characteristic of the ISA, it's just the design considerations of the target market.

And that distinction is disappearing now that everything is moving toward high core counts where the name of the game is performance per watt, because that's how you get more performance into the same power envelope. Epyc 7702 has a 200W TDP but that's what allows it to have 64 cores; it's only ~3W/core.

> Apple has scaled A series production for more than a decade and nobody has caught them yet in iPhone/iPad performance.

Qualcomm never caught Intel/AMD either.


Hmmm,

The Ryzen 7 4800U has eight large cores vs. 4 large plus 4 small in the M1, and even with your (hypothetical) 20% uplift, multicore is just about matching the M1. Single core is nowhere near as good.

'Architecture is basically irrelevant': it's not the biggest factor, but it's not irrelevant either - x64 still has to support all those legacy modes and has a more complex front end.

You're working very hard to try to deny that Apple has passed AMD and Intel in this bit of the market. We'll have to see what happens at higher TDPs but they clearly have the architecture and process access to do very well.

No idea what Qualcomm has to do with this.


> The Ryzen 7 4800U has eight large cores vs. 4 large plus 4 small in the M1, and even with your (hypothetical) 20% uplift, multicore is just about matching the M1.

We're talking about performance per watt. The little cores aren't a disadvantage there -- that's what they're designed for. They use less power than the big cores, allowing the big cores to consume more than half of the power budget and run at higher clocks, but the little cores still exist and do work at high power efficiency. It would actually be a credit to AMD to reach similar efficiency with entirely big cores and on an older process.

> Single core is nowhere near as good.

Geekbench shows Zen 3 as >25% faster than Zen 2 for single thread. Basically everything else shows it as ~20% faster. Geekbench is ridiculous.

> 'Architecture is basically irrelevant': it's not the biggest factor, but it's not irrelevant either - x64 still has to support all those legacy modes and has a more complex front end.

This is the same argument people were making twenty years ago about why RISC architectures would overtake x86. They didn't. The transistors dedicated to those aspects of instruction decoding are a smaller percentage of the die today than they were in those days.

> No idea what Qualcomm has to do with this.

The claim was made that Apple has kept ahead of Qualcomm. But Intel and AMD have kept ahead of Qualcomm too, so that isn't saying much.

> You're working very hard to try to deny that Apple has passed AMD and Intel in this bit of the market.

People are working very hard to try to assert that Apple has passed AMD and Intel in this bit of the market. We still don't have any decent benchmarks to know one way or the other.

Half the reason I'm expecting this to be over-hyped is that we keep getting synthetic Geekbench results and not real results from real benchmarks of applications people actually use, which you would think Apple would be touting left and right if they were favorable.


We'll find out soon enough how things stand but just to point out that your first comment on small vs large cores really doesn't work - the benchmarks being quoted are absolute performance not performance per watt benchmarks. Small cores are more power efficient but they do less in a given period of time and hence benchmark lower.


AMD should easily beat Apple in graphics; all they have to do is switch to the latest Navi/RDNA2 microarchitecture. They are collaborating with Samsung on bringing Radeon into mobile devices; surely that will translate into efficiency improvements for laptops too.

> ARM’s architecture vs. x86-64

x86 will always need more power spent on instruction decode, sure, but it's not a huge amount.


AMD should easily beat Apple in graphics…

Perhaps you haven't read the Anandtech article? [1]

    Intel has stagnated itself out of the market, and has lost
    a major customer today. AMD has shown lots of progress
    lately, however it’ll be incredibly hard to catch up to
    Apple’s power efficiency. If Apple’s performance trajectory
    continues at this pace, the x86 performance crown might
    never be regained.
[1]: https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


That's just someone's opinion/prediction. I don't have to agree with it :)


You probably won't agree with this one either: "Intel's disruption is now complete": https://news.ycombinator.com/item?id=25092721


This decision was probably made at least 3 years ago, before the full extent of Intel's problems was clear.

So why make this move? It's a long list.

- Greater confidence in TSMC vs Intel to deliver process improvements

- Leverage in house silicon design expertise

- Leverage Apple's collaboration with TSMC

- More modern CPU architecture

- Single architecture across all products (so can run iPhone apps on Mac)

- Ability to incorporate Apple's own IP (e.g. neural engine) in SoC

- Cost savings

- Full control of the hardware design

I've not mentioned performance as this follows from the other factors.

Only one of these benefits would be delivered by moving to AMD.

So there's some short-term transition pain, but I think this would have been pretty compelling from Apple's perspective when the decision was made.


I suspect the opposite. It is possible that Apple has decided that 640 KB is enough for everyone: that the performance level of a tablet is what most people need from a computer. Which is not entirely untrue, as a decent PC from 10 years ago is still suitable for most common tasks today if you are not into gaming, or virtual machines building other virtual machines. Most people don't really use all their cores and gigabytes. Also, consumers got used to smartphone limitations, so a computer can now be presented as an overall “bigger and better” mobile device with a better keyboard, a bigger drive, infinite battery, etc.

If they wanted raw performance in general code, they would have stayed with what they already had. The switch means that their goal was different.

I guess we'll see hordes of fans defending that decision quite soon.


I suspect the opposite. It is possible that Apple has decided that 640 KB is enough for everyone: that the performance level of a tablet is what most people need from a computer.

Perhaps you haven't been paying attention but this is Apple's third processor transition: 68K to PowerPC to Intel to ARM. Each time was to push the envelope of performance and to have a roadmap that wouldn't limit them in the future.

When the first PowerPC-based Macs shipped, they were so fast compared to what was available at the time, they couldn't be exported to North Korea, China or Iran; they were classified as a type of weapon [1].

The fact the PowerMac G4 was too fast to export at the time was even part of Apple's advertising in the 90s [2].

It's always been part of Apple's DNA to stay on the leading edge, especially with performance.

Apple's strategy has never been to settle for good enough. If that were the case, they wouldn't have spent the last 10+ years designing their own processors and ASICs. Dell, HP, Acer, etc. just put commodity parts in a case and ship lowest-commodity hardware. It shouldn't be a surprise that the M1 MacBook Air blows these guys out of the water.

Anyone paying attention saw this coming a mile away.

I have a quad-core 3.4 GHz Intel iMac and it's pretty clear the MacBook Pro with the M1 is going to be noticeably faster for some, if not all, of the common things I do as a web developer.

We know the M2 and the M3 are in development; I suspect 2021 will really be the year of shock and awe when the desktops start shipping.

[1]: https://www.wired.com/2002/01/thats-a-whole-lot-of-power-mac...

[2]: https://www.youtube.com/watch?v=lb7EhYy-2RE


There seems to be no evidence that Intel will be able to keep up with Apple. The early Geekbench results show the M1 laptops beating even the high-end Intel Mac ones. And that's with their most thermally constrained chip.

Apple will be releasing something like an M1X next, which will probably have way more cores and some other differences. But this M1 is incredibly impressive for this class of device. Intel has nothing near it to compete in this space.

The bigger question is how well does Apple keep up with AMD and Nvidia for GPUs and will they allow discrete GPUs.


Indeed, but given they are on TSMC 5nm and the apparent strength of the architecture and their team I think most will be inclined to give them the benefit of the doubt for the moment.

Actually, the biggest worry might be the economics: given their low volumes at the highest end (Mac Pro etc.), how do they have the volumes to justify investing in building these CPUs?


I suspect the plan is to redefine computing with applications that integrate GPU (aka massively parallel vector math), plain old Intel-style integer and floating point, and some form of ML acceleration.

So multiple superfast cores are less important for - say - audio/video if much of the processing is being handled by the GPU, or even by the ML system.

This is a difference in kind not a difference in speed, because plain old x86/64 etc isn't optimal for this.

It's a little like the next level of the multimedia PC of the mid-90s. Instead of playing video and sound the goal is to create new kinds of smart immersive experiences.

Nvidia and AMD are kinda sorta playing around the edges of the same space, but I think Apple is going to try to own it. And it's a conscious long-term goal, while the competition is still thinking of specific hardware steppings and isn't quite putting the pieces together.


Good point. Apple dominates a unique workload mix brought on by the convergence of mobile and portable computing. They can benchmark this workload mix through very different system designs.


They might decide they want to fill their datacenters with Apple Silicon CPUs; then the Mac Pro can take the binned ones.


Doubtful. What OS would that run? They removed most server features from macOS in the last few years.


Probably nothing to stop them running Linux on M-series chips. I'd be a bit surprised, actually - I suspect we'll see something like a 32-core CPU which will go into the higher-end machines (maybe 2 in the Mac Pros).


Look at this chart https://images.anandtech.com/doci/16226/perf-trajectory_575p... from page 4 of the article above.

It shows the A9-A14 chips compared vs the K-series processors by Intel.


The point of a computer as a workstation is it goes vroom. Computer that does not go vroom will not be effective for use cases where computer has to go vroom. It doesn't matter if battery life is longer or case is thinner. That won't decrease compile times or improve render performance.


The point of a computer as a workstation is it goes vroom.

The M1 is not currently in any workstation class computer.

It is in a budget desktop computer, a throw-it-in-your-bag travel computer, and a low-end laptop.

When an M series chip can't perform in a workstation class computer, then your argument will be valid. But you're trying to compare a VW bug with a Porsche because they look similar.


The "low-end laptop" starts at $1300, is labeled a Macbook Pro, and their marketing material states:

"The 8-core CPU, when paired with the MacBook Pro’s active cooling system, is up to 2.8x faster than the previous generation, delivering game-changing performance when compiling code, transcoding video, editing high-resolution photos, and more"


> It is in a budget desktop computer, a throw-it-in-your-bag travel computer, and a low-end laptop.

I took "budget desktop computer" to be the Mac Mini, "throw-it-in-your-bag travel computer" to be the Macbook Pro, and "a low-end laptop" to be the Macbook Air.

But I agree - the 13" is billed as a workstation and used as such by a huge portion of the tech industry, to say nothing of any others.


None of those are traditional Mac workstation workloads. No mention of rendering audio/video projects, for example. These are not the workloads Apple mentions when it wants to emphasize industry-leading power. (I mean, really, color grading?)

This MBP13 is a replacement for the previous MBP13; but the previous MBP13 was not a workstation either. It was a slightly-less-thermally-constrained thin-and-light. It existed almost entirely to be “the Air, but with cooling bolted on until it achieves the performance Intel originally promised us we could achieve in the Air’s thermal envelope.”

Note that, now that Apple are mostly free of that thermal constraint, the MBA and MBP13 are near-identical. Very likely the MBP13 is going away, and this release was just to satisfy corporate-leasing upgrade paths.


"workstation class" is a made up marketing word. Previous generation macbooks were all used for various workloads and absolutely used as portable workstations. Youre moving the goalposts.


It’s not, if you’ve ever worked with actual workstations.

I’m talking Silicon Graphics, not Dell.


Do not use MacBook Air when you need vroom! Buy computer designed to vroom.


A laptop is not a workstation.


I work with a Lenovo P70 with 64GB of RAM, a Quadro M3000 GPU, and a 4K monitor plus 2 external monitor ports (HDMI and DisplayPort).

I did not choose it (they gave it to me at work), but it's definitely a workstation.

It's not a lightweight laptop, but it's a laptop nonetheless.


"Mobile workstation" needs to be an official category.

You know. The "laptops" that work about 1 hour, maybe 2 on battery power and require a power brick the size of a VHS cassette to operate.


Ah but according to the Official Category Consortium you’ve just eliminated several products[1] which would presumably be included if the “mobile workstation” moniker was designated based on workload capabilities.

[1]: including the 16” MBP, but certainly not limited to it


"Laptop" used to be a form factor (it fits on your lap), while very light, very small laptops were in the notebook and subnotebook (or ultraportable) category.

https://en.m.wikipedia.org/wiki/Subnotebook

P.S. In Italian they are usually referred to as "portatili" (portables), after the 80s portable computers, even though those were not laptops.


I usually think of "subnotebook" as implying the keyboard is undersized; a thin and light machine that is still wide enough for a standard keyboard layout is something else.

I think we should bring back the term "luggable" for those mobile workstations and gaming notebooks that are hot, heavy, and have 2 hours or less of battery life.


Some laptops are workstations. This is not one.


A docked laptop is, with the benefit that if you want to work on the road you can take it with you without having to think about replicating your setup and copying data over.


Then what is a MacBook for? Expensive web browsing? I've been told for a long time that MacBooks are for work. Programmers all over use them, surely. Suddenly now none of that applies? To get proper performance you have to buy the Mac desktop for triple the price?


Probably because the announced hardware is clearly entry level. The only model line that gets replaced is the MacBook Air, which has been, frankly, cheap-ish and underpowered for a long time.

So you have a platform that is (currently) embodied by entry level systems that appear to be noticeably faster than their predecessors. Apple has said publicly that they plan to finish this transition in 2 years. So more models are coming - and they'll have to be more performant again.

It seems pretty clear that the play here runs "Look here's our entry level, it's better than anyone else's entry level and could be comparable to midlevel from anyone else. But after taking crap for being underpowered while waiting for Intel to deliver we can now say that this is the new bar for performance at the entry level in these price brackets."


It would be interesting to see the comparison to a Ryzen 7 PRO 4750U; you can find that in a ThinkPad P14s for $60 less than the cheapest MacBook Air (same amount of RAM and SSD size), so that seems like a fair comparison.



Assuming that Geekbench is reflective of actual perf (I'm not yet convinced), there is also the GPU, and the fact that AMD is sitting on a 20% IPC uplift and is still on 7nm.

So if they release a newer U part in the next few months, it will likely best this device even on 7nm. An AMD part with eDRAM probably wouldn't hurt either.

It seems to me that apple hasn't proven anything yet, only that they are in the game. Lets revisit this conversation in a few years to see if they made the right decision from a technical rather than business perspective.

The business perspective seems clear: they have likely saved considerably on the processor vs. paying a ransom to Intel.

Edit: for the downvoters, google Cezanne, because it's likely due in very early 2021 and some of the parts are Zen 3. So Apple has maybe 3 months before another set of 10-15W AMD parts drops.


That'll mean an 8c/16t part will catch up to a 4+4 core.

Apple will have an 8+4 core out soon, and likely much larger after that. Since they're so much more power efficient, they can utilize cores better at any TDP.


Sad to see downvotes on this: it's like there's a set of people hellbent on echoing marketing claims, in ignorance of (what I formerly perceived as basic) chip physics - first one to the next process gets to declare a 20-30% bump, and in the age of TSMC and contract fabs, that's no longer an _actual_ differentiation, the way it was in the 90s.


I'm as sceptical of Apple's marketing claims as anyone, but if you're comparing the actual performance of laptops that you will be able to buy next week against the hypothetical performance of a CPU that may or may not be available next year (or the year after), then the first has a lot more credibility.

PS: last I checked, AMD was not moving Zen to 5nm until 2022 - so maybe a year-plus wait is a differentiation.


Regardless, this competition is great for us consumers! I’m excited to see ARM finally hit its stride and take on the x64 monopoly in general purpose computing.


Apple hardware is far removed from general purpose and is about as proprietary as computing comes.


General purpose != free. Totally separate categories.


Is a higher score worse? If not, it's a sweep in favor of Ryzen.

https://i.imgur.com/cwpebHk.png


You're making completely the wrong comparison. On the left you have Geekbench 5 scores for the A12Z in the Apple DTK, and on the right you have Geekbench 4 scores for the Ryzen.

The M1 has leaked scores of ~1700 single core and ~7500 multicore on Geekbench 5, versus 1200 and 6000 for the Ryzen 4750U.


How can you tell it's only 6k for the Ryzen 4750U on the GB5 tests? There are so many pages and pages of tests, I can't sift through all of that to confirm.



swznd: this is from the page you linked (https://i.imgur.com/JgN8o3m.png). Am I reading this wrong?


Not sure why I'm getting downvoted; I clicked around and found a Ryzen that benchmarked at 7.5k... I didn't go through even half the benchmarks though.



It’s just a shame that the screens in the Lenovo AMD devices don't hold a candle to the MacBooks'.


In what way? I've got a Lenovo IdeaPad with a Ryzen 7 4800U that outperforms my 2019 16" MacBook Pro by a long shot.


What screen performance are you referencing? Refresh rate? Pixel density? Color space? Color accuracy? Backlight bleed? Dead/stuck pixels? Nits? Size?


I misread OP, somehow glossed over the "screen" qualifier - thanks for getting me to reread


> It seems pretty clear that the play here runs "Look here's our entry level

Not quite. The play here is, "Look, here is our 98% of sales laptop". That it's entry level is only an incidental factor. 98% of sales volume is at this price point, and so they get the maximum bang for their buck, the maximum impact, by attacking that one first. Not just because it's the slowest or entriest.

Had they started at the fastest possible one, sure it would have grabbed some great headlines. But wouldn't have had the same sales impact. (And it's icing that the slowest part is probably easiest to engineer.)


> Probably because the announced hardware is clearly entry level.

Yes, but why compare it to an entry card that was released 4 years ago instead of an entry card that's been released in the past 12 months? When the 1050 Ti was released, Donald Trump was trailing Hillary Clinton in the polls. Meanwhile, the 1650 (released April 2020, retails ~$150) is significantly faster than the 1050 Ti. (released October 2016, retailed $140 but can't be purchased new anymore)


The 1050 is still a desktop class card. The M1 is in tiny notebooks and the Mac Mini, none of which even have the space or thermals to house such a card.


The main point was that it's a three-generation-old desktop card, which is obviously not as efficient as modern mobile devices.

Let's see what a 3000-series Nvidia mobile design does on a more recent process before declaring victory.


NVIDIA doesn't make small GPUs very often. The 1050 uses a die that's 135mm^2, and the smallest GeForce die they've introduced since then seems to be 200mm^2. That 135mm^2 may be old, but it's still current within NVIDIA's product lineup, having been rebranded/re-released earlier this year.


The 1050 isn't just a desktop class card. Plenty of laptops have a 1050 inside.


Apple doesn't make anything entry-level. The entry level is a $150 Chromebook; it's not "the cheapest that Apple sells".


The Air IS entry level; it's slow, low resolution, has meager I/O, etc. It just happens to be at a price point that is not entry level.


Depends on how you define “entry level”. The Porsche Cayman is an entry level Porsche, but starts at $60,000.

I don’t know anyone who would call that an entry level car, but any Porsche-phile would.

The new Air is fast and reasonably high resolution with a ~4K resolution screen. But it is Apple’s entry level laptop.


I concede the new Air is likely faster than the ancient i3 they used to put in them.


Here are all the differences between the M1 Air and M1 Pro, from [1]:

    Brighter screen (500 nits vs. 400)
    Bigger battery (58 watt-hours vs. 50)
    Fan (increasing thermal headroom)
    Better speakers and microphones
    Touch Bar
    0.2 pounds of weight (3.0 pounds vs. 2.8 — not much)
The SoC is the same (although the entry-level Air gets 7-core GPUs, that’s probably chip binning). The screen is the same (Retina, P3 color gamut), both have 2 USB-C ports, both have a fingerprint reader.

[1]: https://daringfireball.net/2020/11/one_more_thing_the_m1_mac...


> low resolution

2560 x 1600

Damn, you've got some high standards.


The competition has 4K displays on their thin-and-lights. They also don't use macOS, which has problems running at resolutions other than native or pixel-doubled. The suggested scalings are 1680 by 1050, 1440 by 900, and 1024 by 640. None of those are even fractions of the native resolution, so the interface looks blurry and shimmers. Also, all the suggested scalings are very small, so there isn't much screen real estate.
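
A quick illustrative check of those ratios against the 2560-wide panel quoted above (just the arithmetic, nothing official):

    native_width = 2560
    for scaled_width in (1680, 1440, 1024):
        # ratio of the native panel width to the "looks like" width
        print(scaled_width, native_width / scaled_width)
    # 1680 -> ~1.52, 1440 -> ~1.78, 1024 -> 2.5: none is an integer scale factor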


No it's not. It was designed to be extremely light with many compromises to make that happen. Yeah it got outdated, but that doesn't mean it was entry level.


My Xiaomi laptop is entry level and cost 300 euros including shipping from China and taxes.

It still does 8 hours on battery after 3 years

Being entry level, its performance doesn't matter much, but it's still a good backup.

Bonus point: Linux works great on it

For 1,300 dollars (30% more in euros) I can buy a pro machine, one that at least gives me the option of using more than 16GB of RAM.

The new Apple Silicon looks good and I love that they are finally shipping a decent GPU, but price-wise they're still not that cheap.

------

Are the downvotes because I said Xiaomi, or for something else?

LG sells a 17-inch, 2.9 lb laptop (the same weight as the Air) with 16GB of RAM (up to 40) and a discrete GPU (Nvidia GTX 1650) for 1,699.


Such a closed-minded view.

You think a fair comparison is a passively cooled MacBook Air against a top-spec PC CPU?

It absolutely destroys its competition (low-end light notebooks) in performance, so why would you think comparing it to an RTX 3090 is anything like fair?

It's like calling a Dell XPS 13 a piece of shit because it can't keep up with a Threadripper/Titan desktop PC.

The fact you're even in a place (where the benchmarks destroy the in-class competition) to complain about how it's not being put up against top-spec PC hardware is testament to how powerful it is.


Lmao, well said. The bias against Apple on HN is strong.


> low end light notebooks

The MacBook Air isn't low end in price.


It's a premium device, though. It just comes down to whether it's good value for the purchaser.

If you're looking for a passive, thin laptop that's powerful enough to run pro apps, it's a good option if you have the money to spend.


> If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX using running Vulkan/Cuda.

Woot? So you are buying a new Fiat Punto and comparing it to the latest spec of a Koenigsegg? What are you even doing?

What we need to know is how these perform against previous MacBooks and potentially the Microsoft Surface and Dell XPS. Those are the competitors.


> Woot? So you are buying a new Fiat Punto and compare it to the latest spec of a Koenigsegg? What are you even doing?

Under any other circumstance I'd agree with you. I think we can agree this is not the assertion Apple is trying to push in their marketing of the M1.

If Fiat is going to claim their entry-level Punto is, in real-world terms, faster than 98% of all cars sold in the last year, they're inviting a lot of (fair) comparison.

The 1050 is a budget chip from three generations ago. Even in the GPU space, Nvidia is claiming that their mid-range GPU (RTX 3080) outpaces their previous-generation top-end GPU (RTX 2080 Ti).

the parent comment has a fair point.


But the 1050 is a specialized GPU versus the general-purpose M1, and besides, the 1050 is only four years old. So what do you think the relationship between an XTX6080 and an M4 will look like three years in the future?


I’d like to see comparisons of Tensorflow-gpu operations. Kind of like how Apple used to compare Photoshop filter or Final Cut performance across computers.


When both of them cost nearly the same... I'd take a Tesla, not the Punto.


Is there a Tesla that lasted 20 years and, after 500 thousand km, is still functioning with little or no maintenance?

Switching to Apple has a cost; switching to Apple with Apple silicon has an even higher cost.

It all depends on what you use your computer for. If you buy a Tesla, you probably don't depend on your car. People buying entry-level hardware don't need something fancy; they need a tool, and good enough is enough.

To reverse your analogy, if they have the same price, I'd take a computer that I can upgrade and actually own over an Apple.


>Is there a Tesla that lasted 20 years and after 500 thousand Kms is still functioning with little or no maintenance?

We don't know yet.

>To reverse your analogy, if they have the same price,

I was thinking that Apple is the Punto; Apple never had a Koenigsberg.


> Whe dont know yet

So no

> I was thinking that Apple is the Punto, Apple never had a Koenigsberg.

The Punto is reliable, durable, and easy to modify

And Koenigsberg is a burg in Germany, maybe you meant Koenigsegg?


>So no

You have a hell of a Fiat Punto

>The Punto is reliable, durable, and easy to modify

Sure, if you never drive on salted streets... otherwise the frame oxidizes under your butt.

>And Koenigsberg is a burg in Germany, maybe you meant Koenigsegg?

Yeah, but honestly I don't give a damn.


Well, you can tweak a Fiat Punto to go as fast as 300 km/h; you can't mess with Apple hardware in any way, though.

You're stuck with what you bought :)

(in the article: a Fiat Uno - a 30-year-old Fiat model - with a Lamborghini engine runs at 320 km/h)

https://www.automotorinews.it/2020/05/27/fiat-uno-motore-lam...


Do you see a lot of laptops with threadripper CPUs and 3090 RTX graphics cards?

I sure don't.

"Best-in-class" isn't a weasel word – it's recognition that no, this $1000 laptop is not going to be the fastest computer of all time. Just faster than other products similar to it.


> If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX using running Vulkan/Cuda.

You can't really compare a laptop SoC with a dedicated graphics card like the RTX 3090. One is running off a battery in a laptop and the other is plugged into a power source with dedicated cooling.

Yes, Apple is claiming this is a better solution, but that's mostly for laptops. While they did release a Mac mini, they still haven't released an Apple SoC Mac Pro or iMac. Those would be fair game for such a comparison.


> If you are going to convince me, show me how the CPU stacks up to a top of the line Ryzen/threadripper and run Cinebench. If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX using running Vulkan/Cuda.

These are chips for their low-end, entry-level products. Why on earth would you think that would be an appropriate comparison? That's absolutely absurd. They aren't putting these in iMacs or Mac Pros.

They aren't putting these in their high-end MacBook Pros (they put it in the low-end 13" Pro, which has always been only a minor step up from the Air; the higher-end 13" Pro was more powerful, and they haven't replaced that yet).

Apple's "faster than 98% of laptops sold" line is obviously nonsense as a meaningful claim - I'm sure it's technically correct, since most laptops sold are cheap, low-end things - but that's not a particularly huge achievement. I'm not sure why this line in particular is inviting people to make such ridiculous statements in response, though.


> If you want to convince me about the graphics/ML capabilities, compare it to a 3090 RTX using running Vulkan/Cuda.

You're seriously suggesting we should compare the integrated graphics in a fanless, bottom-of-their-line laptop that starts at $999 to a $1,500 graphics card?

(Let's be brutally clear - that graphics card can do precisely squat on its own. No CPU, no RAM, no PSU, no display, no keyboard, etc. etc. etc.)

Surely that makes no sense in any universe.


Sure, let's compare. Let's see how fast a 3090 is on 10W.


Brilliant response.

It's a passively cooled laptop that's super thin and can crunch through 4k video editing and serious audio workflows.

We'd have shit our pants about this kind of leap 10 years ago. It really is impressive stuff.


Right answer here. I am pretty excited about a passively cooled, power-efficient device that performs on par with, or better than, a 3-year-old, super-heavy gamer laptop like an Alienware R2/R3. Those machines are still capable of running relatively impressive PC VR, way ahead of what we see today in the standalone Oculus Quest 2. Stick one (or preferably two) of these things in a VR headset, please, Apple.


Hey, take a breather if it helps. No one here is responsible for convincing you of anything. But you sound a little upset.

If it helps, take into consideration performance:power ratios. None of your scenarios are fair otherwise, and I personally haven't seen anyone here claim the M1 will outperform everything. Hence, "in its class."

Maybe you saw some errant comments on PCMag.com claiming the M1 was the be all end all of computing?

Good luck.


> So its faster then a 3 generations old budget card, that doesn't run nVidia optimized drivers

This is a key point. Nvidia GPUs haven't been supported in macOS since Mojave, so this seems like an apples-to-oranges comparison. Unless the benchmark for the M1 was also run on an older macOS (unlikely), there are three years' worth of software optimizations potentially unaccounted for.

Possibly a more realistic comparison is between the M1 and the AMD Radeon 5300M. It shows a 10-40% deficit in performance: https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=798775...

That said, it's still an impressive showing given the TDP and the fact that it's an integrated GPU vs. a dedicated GPU. It seems to hint that, with enough GPU cores and a better cooling solution, it's not unreasonable to see these replacing the AMD Radeon 5500M/5600M next year in the MBP 16 and iMac lineups.

EDIT: pasted wrong compare link


Because M1 is their entry-level, laptop class offering.

It makes zero sense to compare them to high-end desktop CPUs and GPUs.


1050 Ti is below entry level at this point though.


The 1050 Ti was the premium discrete GPU option in the XPS 15 in 2018. That only got upgraded to the 1650 last year. Maybe that's as much a ding on Dell as anything else, but either way, lots of us are still rocking those laptops and they're hardly "below entry level".


Nah. I have this laptop too and at the time, the 1050TI was considered underwhelming but "well this laptop isn't for gaming, it's a business laptop". The contemporary Surface Book 2 had a 1060 with almost double the performance and people were kind of pissed.


The Surface Book 2 with 16GB RAM and 512GB SSD: $2499.

The new MacBook Air with 16GB RAM and 512GB SSD: $1399.

Geekbench: https://browser.geekbench.com/v5/cpu/compare/4679429?baselin...


Well thanks for posting CPU benchmarks in a discussion about GPUs.


Wasn't the point. I also posted RAM and SSD numbers.


Comparison could be against either an MX250/MX350 if they wanted to compare to Nvidia (however, that's not integrated in a SoC-like manner), or the AMD Vega on Renoir, or Intel graphics on Ice/Tiger Lake (I honestly lost track of what Intel calls their iGPUs these days; they went back and forth between confusing naming conventions, but it's the CPU gen/model that's important anyway).


> Comparisson could be against either an MX250/MX350

MX350 is the same chip as the 1050.


The memory bandwidth, number of shader units, TMUs, fill rate, etc., are different (slower) on the MX350. While they are surely the same architecture, the MX350 is a lower tier than the 1050.

And in this case, it'd make for a larger difference between Apple's GPU and the Nvidia. But then again, the M1 is a mobile SoC and the 1050 is a desktop GPU, so we shouldn't even be comparing them to begin with.


Again, Nvidia had how many years of GPU evolution before they reached the 1050 Ti? Everyone starts somewhere.


Sure, but if you read the comments here you'd think the M1 GPU was state of the art or something.


>Because M1 is their entry-level, laptop class offering. It makes zero sense to compare them to high-end desktop CPUs and GPUs.

Sure, it makes zero sense to compare them to high-end desktops or laptops, since it's used in devices lacking the main attribute of a personal computer: the ability to control it and install an OS of your choice.

Therefore it falls into another category, like phones, iPads, and other toys, just with an attached keyboard.


I’m sure I’ll get downvoted too, probably more than you, probably even attracting sympathetic upvotes for your own comment but... I really cannot express how much I don’t care. I’ve booted multiple OSes for learning, for fun, and for software support when the software wasn’t available for my preferred OS. But I use my computer for work (and some web browsing). Other than games, macOS has all the software I could think to need. I don’t like rebooting anyway. I can’t think of a scenario where I would ever need a device with another OS that wouldn’t be supplied by an employer. Some freedoms are sacrosanct (and if it ever comes to pass that macOS absolutely prevents installing unblessed end-user apps which deal only with public and supported APIs, that would be my deal breaker), but some freedoms seem so abstract and theoretical in their impact that I just can’t give them weight beyond a thought exercise. “You can’t install any other OS besides this one we offer that meets all your needs” is just... hardly even a thing I would even give much thought.


You'll likely be able to install an OS of your choice on this one (as long as your choice is macOS or Linux).


Fine, compare it against an Asus Zephyrus G14.


It’s marketed as a gaming laptop and has half the battery life.

Not sure it’s necessarily the best comparison.


Half the battery life on a higher-powered CPU (the 4800H is 45W and the 4900HS is 35W, if I recall correctly), plus a discrete GPU (on top of the iGPU), active cooling vs. passive cooling, and a 120Hz display on the G14 (while Apple's is probably 60Hz).

So a more power-hungry notebook's battery lasts half as long, but is still in the double digits. Not quite a surprise there.


Sure. Can we use performance per watt or did you have another measure in mind?


The M1 is a mobile chip. Why compare it against the high-end?

It seems pretty ridiculous to put the 10 watt M1 CPU up against big time GPUs that require several hundred watts.

Regardless, all the benches anyone desires will be out next week.


Also missing the fact that it's manufactured by the same company that manufactures Ryzen. It's good stock.


If it will only be available in phones, then it should clearly be compared to phone SoCs. If it will be in laptops too, it should be compared to laptops/PCs too, of course.


I don’t know of any laptops with an RTX 3090 GPU or a Threadripper, let alone in a MacBook Air thin form factor running on 10 watts with 15 hour battery life.


Even in phones, even those phones offered by a single vendor, there’s a pretty wide performance range targeting a generally accepted (if evolving) set of market/form factor/use case categories. “Phone” is so broad a category as to be meaningless for comparison. “Laptop” as well. It’s just as silly to compare an ultraportable to a 17” 7lb gaming laptop as to compare an upscale ultraportable to a budget Chromebook.


What laptop runs a 3090 with a Threadripper at 10W?


The M1 is clearly the low-end Apple Silicon chip, given the computers that Apple is putting it in: the MacBook Air, the Mac mini, and the two-port MBP. Why in the world should we demand that this chip be able to blow the doors off a "top of the line Ryzen"? This is a mobile CPU that's holding its own against CPUs like that in single-core performance. "Yeah, but I bet a 64-core Ryzen would just blow it away in multi-core, so who is Apple kidding? Pshaw." Really?


How is it relevant how it stacks up to a Threadripper and RTX 3090? You're comparing an ultrabook with a large, high-performance workstation PC. That makes absolutely no sense.


I don’t know why this particular comparison is noteworthy, but this is not a top-of-the-line CPU or GPU, is not intended to be, and those comparisons would be meaningless. It’s a 10W part for lower-end devices.


Apple decided to call it "Pro", so I think it's fair to treat it as such.


It’s far faster than the crap ol’ Intel GPU found in current 13” pros.


Ah yes, the famous Apple MacBook Air Pro, available never in history



Not all pro users need to run PUBG at 144Hz on their ultraportable machine; in fact, I'd even go as far as to say that it's not a requirement for "Pro".


How is comparing an integrated GPU with a dedicated external RTX 3090 fair?


It seems pretty obviously meant to show which mainstream GPU it is closest to in performance. Especially for the first benchmark, the figures are suspiciously close.

It seems amazing that Apple can make its own desktop chips now - imagine if the original Apple had an Apple chip instead of a 6502! OTOH, everyone used to design their own CPUs, like Sun and Acorn. Wozniak too.


Come on, man, the 1050 Ti was a reasonably powerful graphics card. It's amazing that we now have an integrated graphics processor in a low-power chip that can match its performance.


I'd like to see someone run our framework benchmarks project [1] on M1 versus something like an Asus G14 (Ryzen 4900HS). It's not graphics, but the ability to run web frameworks is of interest to me.

[1] https://github.com/TechEmpower/FrameworkBenchmarks/wiki/Tech...


So when the Apple M1 goes toe to toe with top CPUs you all scream "apples to oranges", but when it comes to an INTEGRATED GPU ON 10 WATTS OF POWER it suddenly makes total sense to compare it to 350-watt dedicated products?

Like.. what the hell were you thinking typing this comment?


This is how you know that Apple haters are running scared.


I'm not an Apple hater. I used to run a Mac.

And I'm not scared, I'm sad. I would love to have 3 or more open competitive desktop/laptop platforms (Windows, Linux and MacOS) but my view is that the release of the M1 makes that not very likely.

The M1 is almost identical to the A14, a great chip that was designed for mobile devices. It is designed to run one application at a time, not do much compute or graphics, have a fixed amount of memory, and have no hardware attachments. It does that brilliantly.

The problem is that a chip designed for a computer has different priorities. The need for low power consumption is less important. Even in a laptop, after 10 hours, I'd rather take performance than more battery time. In a computer you want more threads, expandable memory, PCIe Gen 4, discrete graphics, GPU compute, ray tracing, support for fast networking (the Mac mini used to have 10Gig LAN), multiple-display support, expandable storage, and in a stationary computer you preferably want to be able to upgrade parts.

If we look at the M1 and compare it to the A14, we can see that Apple has made almost no modifications to the architecture to make it fit in a computer rather than a mobile device. They didn't add more cores, and they didn't add support for PCI, GPUs, Ethernet, or anything else that makes it more like a computer CPU.

Maybe this is beside the point, but to me it was very telling that they didn't update the physical design of any of the three computers. The thermal envelope of the M1 should make it possible to create some much sexier designs than the old ones we were given. It tells me a lot about how Apple allocates its resources.

This, paired with Apple's messaging, tells me that Apple is not interested in making Macs that are competitive with PCs on the things that make you choose a computer over a mobile device. It feels to me like Apple has moved the Mac to Apple Silicon because it's convenient for them to have a single platform, rather than it being the right design for the products.

Apple has neglected the Mac, and especially the pro segment, for a long time. There was always a chance they would get their act together and build reasonably priced, top-specced machines that could compete with Linux and Windows boxes, but with this move I can't see them putting the resources behind their silicon to compete with AMD, Nvidia, and Intel on performance and features.

So, no I'm not scared, I'm sad that Apple has taken themselves out of the running.


If you used to run a Mac, then you'd know that they didn't redesign the outside of any Macs when they switched to Intel. It signals continuity of the platform. And besides, they just redesigned the Air two years ago. What were you expecting?

And yes, they did add more cores. There's PCIe 4. There's support for Ethernet. There's also Thunderbolt and HDMI.

You're extrapolating the three lowest-end Macs up through the entire product line. They aren't going to keep the higher-end Intel Mac mini available if they don't think it (and its specs) serves a purpose. They aren't going to invest everything they did in the new Mac Pro just to drop it after a year.

And if the performance of Apple's chips over the last decade makes you think they won't put the resources behind their silicon to compete, I don't know what to tell you.


> I'm not a Apple hater. I used to run a Mac.

This doesn't exclude you from being a mac hater. In fact judging from the comments you read on every post on HN that is remotely related to Apple, that seems more likely than the typical blind hatred you see on the internet broadly.

> And I'm not scared, I'm sad. I would love to have 3 or more open competitive desktop/laptop platforms (Windows, Linux and MacOS) but my view is that the release of the M1 makes that not very likely.

Based on what, exactly? Very few people have their hands on an M1 Mac, and there haven't been any reviews of the systems from third parties yet.

> If we look at the M1 compare it to the A14, we can see that Apple has made almost no modifications to the architecture to make it fit in a computer rather then a mobile device. They didn't add more cores, they didn't add support for PCI, GPUs, Ethernet, or anything else that makes it more like a Computer CPU.

You don't know what they did or didn't do to adapt the M1 for computers. There has to be some degree of PCIe support, since they are using Thunderbolt ports. There is Ethernet on the Mac mini with the M1 chip. It remains to be seen whether or not supporting discrete GPUs will be a thing, and that will largely depend on how their GPU scales. The Macs that currently use discrete GPUs don't have Apple silicon variants yet.

> Maybe this is besides the point, but to me it was very telling that they didn't update the physical design of any of the 3 computers. The thermal envelope of the M1 Should make it possible to make some much sexier designs, then the old designs we where given. It tells me a lot about how Apple allocates its resources.

I think it is beside the point. There's no reason to delay new chips just for the sake of being in sync with a new redesign of the chassis. It may or may not happen next year with the iMac or Macbook Pro 16, we don't know that yet.

> Apple have neglected the mac and especially the pro segment for a long time. There was always a chance they would get their act together and build reasonably priced, top speced machines, that could compete with Linux and Windows boxes, but with this move, I cant see them put the resources behind their silicon to compete with AMD, nVidia, and Intel when it comes to Performance and features.

They only released their entry-level computers. You have nothing to base this argument on. There have only been leaked benchmarks, all of which seem favorable toward Apple. We won't know what it means until people start getting them and testing them.


> You don't know what they did or didn't do to adapt the M1 for computers. There has to be some degree of PCIe support, since they are using Thunderbolt ports. There is Ethernet on the Mac mini with the M1 chip.

Also, from the Docker issue that was posted here yesterday, we know that the A14 does not have support for virtualization and M1 does (I presume the ARM equivalent of VT-X, etc.).


You completely sidestep the fact that benchmarks show the Air outperforming all the other laptops Apple sells today with Intel chips. How can you not see that as an achievement?

None of the computers getting the M1 today previously let you put in PCI cards or do any of the stuff you claim is needed other than expand the memory.

The claim that the A14 is only made to do one task at a time is just not true. iOS itself runs multiple tasks, and writing software that utilizes multiple threads is something you do all the time on iOS. I have written more multithreaded code on iOS than on a desktop. Why? Because using Grand Central Dispatch is a must when dealing with all sorts of low-latency web APIs. I don't have that issue when building desktop apps.
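For anyone who hasn't written iOS code, a minimal sketch of the pattern I mean (plain GCD; the fetch/parse functions here are made-up stand-ins, and real code would likely lean on URLSession's own callbacks or async/await):

    import Foundation

    // Made-up stand-ins for whatever fetch/decode work you actually do.
    func fetchRawData() -> Data { Data("a,b,c".utf8) }
    func parseItems(_ data: Data) -> [String] {
        String(decoding: data, as: UTF8.self).split(separator: ",").map(String.init)
    }

    // The usual GCD dance: do the slow work on a background queue,
    // then hop back to the main queue before touching UI state.
    func loadItems(completion: @escaping ([String]) -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            let items = parseItems(fetchRawData())
            DispatchQueue.main.async {
                completion(items)
            }
        }
    }

Every nontrivial iOS app ends up full of this kind of thing, so "the A14 only runs one task at a time" doesn't hold up.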

These Apple laptops have always had few ports. Don’t blame M1 for that.

I do however wish the Mac Mini had more ports.


I'm genuinely puzzled as to what Apple would have had to do to convince you with the M1, given the machines it was designed for. It's an 8-core chip that runs in a fanless laptop and outperforms all of the competition in that form factor.

We haven't seen what the desktop versions with much higher TDPs and discrete graphics will look like yet, but you've already written them off?


Please don't do that on HN.


AnandTech ran SPEC2006 on A14, and M1 should be faster.


What was "old" about the previous MacBook Air? It was Ice Lake. Also, the new one is even more "thermally constrained" because it doesn't have a fan. What's your point?

It seems odd to be so cynical about a drastic year-over-year performance increase and a breakthrough in performance per watt. I bet if it weren't Apple you would be a little more excited.


> that doesn't run nVidia optimized drivers

I think that's the important point to consider. Nvidia is persona non grata with Apple, and that rift happened way, way before Metal. These drivers have no chance of performing representatively.


Try changing it to compare the M1 against a 2080. That's only one generation behind, yes? If I could just paste a pic in I would, but I'll let you do the work yourself. It is similar. Amazing, IMO, and I am no longer an Apple fan.


> Amazing IMO

Apple could nearly buy Nvidia with their cash on hand alone; is it that surprising? I'd be more surprised if they fucked it up completely.

That they have actually chosen to do it is impressive, although highly worrying to me from a software freedom perspective. What are the odds of Apple releasing drivers for any of their hardware?


I guess then you'll be demanding that we pit Japan's Fugaku against the best of Threadripper too huh?


Exactly, but I am not surprised. There has always been this deep-seated suspicion and denial of Apple achievements by Apple haters. I remember debating with someone claiming the iPhone was not an innovation by Apple because they had not made the touch sensor. It was like "they just got lucky".


I need an HDMI port on my MBP more than I need more raw CPU performance. Maybe it's just me.


Also, Ryzen only got better with its 3rd-gen release last year. Back in 2017, the R7 1700X couldn't compete with the 8700K on single-thread performance.


Don't know why this is getting downvoted but it's actually true https://www.cpubenchmark.net/singleThread.html (Ctrl+F 1700x then 8700k)


It's a gift from God alright. It's the start of a full lock down. You no longer own your device. God does.


1050 Ti is a high end graphics GPU...


Compared to the AMD Radeon Pro 5500M, the base GPU in the 16" MacBook Pro: https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...

Compared to the AMD Radeon Pro 5600M, the +$700 upgrade in the top of the line MacBook Pro 16": https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...

Note that the Onscreen numbers are capped at 60fps on OS X, so ignore any Onscreen results at 59-60fps.


Considering that there's no difference between the 1050 Ti in the OP and the 5500M that PragmaticPulp posted, I'm inclined to say this test sucks. UserBenchmark shows there should be a substantial (38%) improvement between those two. Take these early results with a HUGE grain of salt, because they smell fishy.

https://gpu.userbenchmark.com/Compare/Nvidia-GTX-1050-Ti-vs-...


Well, two things there.

1: Userbenchmark.com is terrible and nobody should use it for anything. At the very least, their CPU side of things is hopelessly bought and paid for by Intel (and even within the Intel lineup they give terrible, wrong advice); maybe the GPU side is better, but I wouldn't count on it.

2: The real question there isn't "why is the 1050 Ti not faster?"; it's "how did you run a 1050 Ti on macOS in the first place, since Nvidia doesn't make macOS drivers anymore and hasn't for a long time?"


> Userbenchmark.com is terrible and nobody should use it for anything. At least their CPU side of things is hopelessly bought & paid for by Intel (and even within the Intel lineup they give terrible & wrong advice), maybe the GPU side is better but I wouldn't count on it.

To provide some elaboration on this: their overall CPU score used to be 30% single-core, 60% quad-core, 10% multicore. Last year, around the launch of Zen 2, they gave it an update, which would have made sense; the increasing ability of programs to actually scale beyond four cores means that multicore should get more weight. And so they changed the influence of the multicore numbers from 10% to... 2%. Not only was it a blatant and ridiculous move to hurt the scores of AMD chips, you got results like this, an i3 beating an i9: https://cdn.mos.cms.futurecdn.net/jDJP8prZywSyLPesLtrak4-970...
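To spell out how a tiny multicore weight can flip a ranking, here's a toy example with made-up normalized scores (and assuming the remaining 98% is split roughly 40% single / 58% quad, which is what I recall the new formula being):

    \mathrm{old\ (30/60/10)}:\quad \mathrm{i3} = 0.3(110) + 0.6(108) + 0.1(100) = 107.8, \qquad \mathrm{i9} = 0.3(92) + 0.6(95) + 0.1(550) = 139.6
    \mathrm{new\ (40/58/2)}:\quad \mathrm{i3} = 0.4(110) + 0.58(108) + 0.02(100) \approx 108.6, \qquad \mathrm{i9} = 0.4(92) + 0.58(95) + 0.02(550) \approx 102.9

Same hypothetical chips, same sub-scores; only the weighting changed, and the many-core part goes from a ~32-point lead to a loss.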

And there was some suspicious dropping of zen 3 scores a week ago, too, it looks like.


The one that really makes it screamingly obvious is that the description of the Ryzen 5 5600X still somehow recommends the slower 10600K: https://cpu.userbenchmark.com/AMD-Ryzen-5-5600X/Rating/4084

And they added some "subjective" metric so even when an AMD CPU wins at every single test, the Intel one can still be ranked higher.

There's a reason they've been banned from most major subreddits. Including /r/Intel.


Why should I care about a subreddit? They are all probably moderated by the same people. It could be that one person got offended and happens to be a mod.


I don’t see that as evidence of blatant bias for Intel. The site is just aimed at helping the average consumer pick out a part, and I think the weighting makes sense.

Most applications can only make use of a few CPU-heavy threads at a time, and these systems with 18 cores will not make any difference for the average user. In fact, the 18-core behemoth might actually feel slower for regular desktop usage since it's clocked lower.

If you are a pro with a CPU-heavy workflow that scales well with more threads, then you probably don’t need some consumer benchmark website to tell you that you need a CPU with more cores.


But lots of things do use more than 4 cores, with games especially growing in core use over time. Even more so if you want to stream to your friends or have browsers and such open in the background. To suddenly set that to almost zero weight, when it was already a pretty low fraction, right when zen 2 came out, is clear bias.

> In fact, the 18 core behemoth might actually feel slower for regular desktop usage since it’s clocked lower.

It has a similar turbo, it won't.


The number of processes running on a Windows OS reached 'ludicrous speed' many years ago. Most of these are invisible to the user, doing things like telemetry, hardware interaction, and low- and mid-level OS services.

A quick inspection of the details tab in my task manager shows around 200 processes, only half of which are browser.

And starting a web browser with one page results in around half a dozen processes

Every user is now a multi-core user.


Re #2, the Nvidia web drivers work great if you're on High Sierra


Regarding 2: I think none of those benchmarks were run on macOS. Their benchmark tool seems to be Windows-only: https://www.userbenchmark.com/ (click on "free download" and the macOS save dialog will inform you that the file you are about to download is a Windows executable).


The GFXBench link in the OP comparing the M1 vs. the GTX 1050 Ti says that the 1050 was tested on macOS. That's what I was referring to.


1. Today I learned something new. Still, can’t let great be the enemy of good. It may be imperfect but it’s the source I used. Do you have a better source I can replace it with?

2. That’s a good question and I don’t have an answer for that.


Sure, great is the enemy of good, etc., but the allegation here and in other threads is that these benchmarks are bad. Or, worse, inherently and deliberately biased.

As for a better source, I don't know with the M1 being so new, but that's no reason to accept bad data, if this benchmark actually is as bad as others here are saying.


Please don't use userbenchmark for anything. Site is so misleading that it's banned from both r/amd and r/intel.


Looks pretty good against Intel's new Xe Max:

https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...

It's not the 4-6x raw graphics improvement advertised, but at 10W vs. the Xe Max's 25W for just the GPU, with the M1 getting 50% more fps, that's still 3.75x in perf/watt.
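Spelling out that arithmetic (using the rough 1.5x fps lead and the 10 W vs. 25 W GPU-only power figures above, where f is the Xe Max's frame rate; the exact fps ratio will vary by subtest):

    \frac{1.5f / 10\,\mathrm{W}}{1.0f / 25\,\mathrm{W}} = 1.5 \times \frac{25}{10} = 3.75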


I wouldn't brag when comparing Metal to OpenGL.

And let's really not forget that this is an HBM-type architecture. As a complete package it seems awesome, but we can argue for ages about the performance of the GPU cores with no end result.


> this is an HBM type architecture

You'd think so, but it seems most people[1] think it's just LPDDR wired to the SoC using the standard protocol, inside the same package. (Though it might use more than one channel, I guess?)

[1] e.g., in the spec table here: https://www.anandtech.com/show/16235/apple-intros-first-thre... - an interesting thing in the Mini spec table is also the 10x downgrade of the Ethernet speed.


LPDDR channels are weird. https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de... shows "8x 16b LPDDR4X channels"

Which would be the same width as dual channel DDR4 – 8x16 == 2x64 :)
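Written out, with peak bandwidth assuming LPDDR4X-4266 (my guess at the transfer rate, so treat it as an estimate):

    8 \times 16\,\mathrm{bit} = 128\,\mathrm{bit} = 2 \times 64\,\mathrm{bit}
    128\,\mathrm{bit} \times 4266\,\mathrm{MT/s} / 8 \approx 68\,\mathrm{GB/s}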

Also, it's completely ridiculous that some people think it might be HBM. The presentation slide (shown in both AnandTech articles) very obviously shows DDR-style chips, with their own plastic packaging. That is not how HBM looks :) Also, HBM would noticeably raise the price.


Damn, I would happily take the 10-20% performance hit to avoid having the laptop turn into a jet engine as soon as I connect it to a monitor.


You can have that trade off already by disabling turbo boost.


Unfortunately it’s the GPU that causes the issue, not the CPU.

Whatever bus video runs over is wired through the dedicated GPU, so integrated is not an option with external monitors connected. That by itself would be fine, except for whatever reason, driving monitors with mismatched resolutions maxes the memory clock on the 5300M and 5500M. This causes a persistent 20W power draw from the GPU, which results in a lot of heat and near-constant fan noise, even while idle. As there isn’t a monitor in the world that matches the resolution of the built-in screen, this means the clocks are always maxed (unless you close the laptop and run in clamshell mode).

The 5600M uses HBM2 memory and doesn't suffer from the issue, but a £600 upgrade to work around it is lame, especially when I don't actually need the GPU; you just can't buy a 16" without one.

Disabling turbo boost does help a little, but it doesn’t come close to solving it.


My memory is hazy on this, but I did come across an explanation for this behaviour. At mismatched resolutions or fractional scaling (and mismatched resolutions are effectively fractional scaling), macOS renders the entire display to a virtual canvas first. This effectively requires the dGPU.

Your best bet is to run external displays at the MBP's resolution, and because that is not possible/ideal, you are left with the choice of running at 1080p/4K/5K. macOS no longer renders crisply at 1080p, so 3840x2160 is the last remaining widely available and affordable choice, while 5K is still very expensive.
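For a rough sense of the extra work in a scaled mode (assuming the usual 2x virtual canvas; the exact buffer sizes are an implementation detail I'm not certain of), a 4K panel set to "looks like" 2560x1440 works out to:

    5120 \times 2880 \approx 14.7\,\mathrm{MP\ rendered} \rightarrow 3840 \times 2160 \approx 8.3\,\mathrm{MP\ output}

That's roughly 15 megapixels composited and downscaled every frame for one external display, on top of the internal panel, which is plausibly why the memory clock stays pinned.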


Hardly anyone makes 5K displays - I have a pair of HP Z27q monitors made in late 2015 that are fantastic, but I had to get them used off eBay because HP and Dell both discontinued their 5K line (Dell did replace it with an 8K, but that was way out of my budget).

Part of the reason for 5K’s low popularity was its limited compatibility: they required 2xDP1.2 connections in MST mode. Apple’s single-cable 5K and 6K monitors both depend on Thunderbolt’s support for 2x DP streams to work. I’m not aware of them being PC-compatible monitors at native resolution yet.

I love 5K - but given a bandwidth boost I’d prefer 5K @ 120Hz instead of 8K @ 60Hz.


I am a bit curious to know why this specific problem has been appearing in various Nvidia lineups in the beginning of the decade, and is reappearing now.


You should be able to easily downclock and undervolt the GPU persistently.


I thought the 5300M was the base GPU. That's what I have anyway.

https://gfxbench.com/compare.jsp?benchmark=gfx50&did1=907542...


We need this compared with the RX 580 running in the Blackmagic eGPU.

That's the most relevant GPU comparison, given that it is the entry-level, Apple-endorsed way to boost Mac GPU performance.

It also helps in understanding the value of the 580 and Vega on pre-M1 Macs.


Also the M1 is built on TSMC 5nm.

The AMD Radeon Pro 5600M is built on TSMC 7nm.


Looks like there are interesting "offscreen" optimizations that might need to be re-implemented for the M1, IIUC.


Offscreen benchmarks just mean that they are run at a fixed resolution and not limited by vsync. These benchmarks are better for comparing one GPU to another. Onscreen can be artificially limited to 60fps and/or run at the native resolution of the device which can hugely skew the results (A laptop might show double the benchmark speed just because it has a garbage 1366x768 display).
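As a rough illustration of how much the display alone can skew things (assuming a 1080p offscreen target; the exact offscreen resolution varies by test):

    1920 \times 1080 \approx 2.07\,\mathrm{MP} \quad \mathrm{vs.} \quad 1366 \times 768 \approx 1.05\,\mathrm{MP}

So the low-res laptop pushes about half as many pixels per frame onscreen, which can roughly double its apparent frame rate relative to its offscreen number.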


I think those are the same link

