1. M1 is a super fast laptop chip. It provides mid-range desktop performance in a laptop form factor with mostly fanless operation. No matter how you look at it, that's impressive.
2. Apple really dragged their feet on updating the old Intel Macs before the transition. People in the Mac world (excluding hackintosh) were stuck on relatively outdated x86-64 CPUs. Compared to those older CPUs, the M1 Max is a huge leap forward. Compared to modern AMD mobile parts, it's still faster but not by leaps and bounds.
But I agree that the M1 hype may be getting a little out of hand. It's fast and super power efficient, but it's mostly on par with mid-range 8-core AMD desktop CPUs from 2020. Even AMD's top mobile CPU isn't that far behind the M1 Max in Geekbench scores.
I'm very excited to get my M1 Max in a few weeks. But if these early Geekbench results are accurate, it's going to be about half as fast as my AMD desktop in code compilation (see Clang results in the detailed score breakdown). That's still mightily impressive from a low-power laptop! But I think some of the rhetoric about the M1 Max blowing away desktop CPUs is getting a little ahead of the reality.
Hell, this Geekbench result is faster than a desktop 125-watt 11900K. It's faster than a desktop 105-watt 5800X.
Apple intentionally paced themselves against the competition here. They know AMD/Intel reach some performance level X, so they released CPUs that perform no greater than X * 1.2. They know they are in the lead since they are paying TSMC for first dibs on 5nm, but they didn't spend their whole advantage on their first-generation products.
Intel will release Alder Lake and catch up, AMD will reach Zen 4 and catch up, and Apple will just reach into their pocket and pull out an "oh, here's a 45-watt 4nm CPU with two years of microarch upgrades", and the 2022 MBP 16 will have Geekbench scores of ~2200 and ~17000.
There's a de facto industry leader in process technology today -- TSMC. Apple is the only one willing to pay the premium. They also have a much newer microarch design (circa 2006ish) vs AMD and Intel's early-90s designs. That's a 10-20% advantage (very rough ballpark estimate). They also are on ARM, which is another 10-20% advantage for the frontend.
The big deal here is that this isn't going to change until Intel's process technology catches up. And, hell, I bet at that point Apple will be willing to pay enough to take first dibs there as well.
AMD will never catch up since we know they don't care to compete against Apple laptops and thus won't pay the premium for TSMC's latest node. Intel might not even care enough, and might let Apple have first dibs on their latest node for the mobile/laptop market if Apple is willing not to touch the server market. Whether or not they'd agree on the workstation Mac Pro vs. 2-socket Xeon workstation market would be interesting.
It might be a long time before it makes sense to buy a non-Apple laptop.
...if you only care about the things that Apple laptops are good at. Almost nobody needs a top-of-the-line laptop to do their tasks. Most things that people want to do with computers can be done on a machine that is five to ten years old without any trouble at all. For example I use a ThinkPad T460p, and while the geekbench scores for its processor are maybe half of what Apple achieves here (even worse for multicore!), it does everything I need faster than I need it.
Battery life, screen quality, and ergonomics are the only things most consumers need to care about. And while Macs are certainly doing well in those categories, there are much cheaper options available that are also good enough.
The T460 has knock-off battery replacements floating around, but that's not exactly reassuring.
Granted: it works for you (and me, actually; I'm one of those people who likes to use an old ThinkPad, an X201s in my case, though I mostly use a Dell Precision these days), but people will buy new laptops -- that's a thing. The ergonomics of a Mac are pretty decent and the support side of it is excellent.
If you don’t need all that power: that’s what the MacBook Air is for, which is basically uncontested at its performance/battery life/weight.
If you need the grunt, most of what the M1 Pro and Max offer is GPU.
You’re going to think it’s Apple shills downvoting you: it’s not likely to be that. The argument against good low wattage cpus is just an inane and boring one.
This is weird. I feel like I’m talking to someone who has a fixed opinion against something. It’s good for _everyone_ that these chips are as fast as the best chips on the market, have crazy low power consumption and the cost for new is comparable.
Intel have been overcharging for more than a decade when innovation stagnated.
Honestly, I'm not so hot on Apple (FD: I am sending this from an iPhone). I prefer to run Linux on my machines, but I would not advise everyone to do that. Just like I wouldn't advise people to buy old shoes because it's cheaper. These machines are compelling even for me, a person who relishes the flexibility of a more open platform -- I cannot imagine not recommending them to someone who just uses office suites or communication software. The M1 is basically the best thing you can buy right now for the consumer, and the cost is comparable to other business machines such as HP's EliteBooks or Dell's Latitude or XPS lineups.
And for power users: the only good argument you can make is that your tools don't work on it or you don't like macOS.
If you’re arguing a system to be worse: you’ve lost.
A second-hand laptop has much less of an advantage there.
I think this is a false economy.
“The poor man pays twice”
But regardless: the cost isn’t outrageous when compared to the Dell XPS/latitude or HP Elitebook lines (which are the only laptops I know of designed to last a 5y support cycle).
If you’re buying a new laptop, I don’t think I could recommend anything other than an M1 unless you don’t like Apple or MacOS. Which is fair.
> “The poor man pays twice”
I'm still using an X-series Thinkpad I bought used in 2011. I had another laptop in-between but it was one of these fancy modern machines with no replaceable parts and it turned out 4 GB RAM is not enough for basic tasks.
6 years is also beyond the service life of a(ny) machine.
If I look at 3 year old MacBook airs they’re selling for £600 on eBay, which is, what, half of the full cost. Not great for an already old machine with only a few good years left.
I guess you might save a bit of money using extremely old hardware and keeping it for a while. But this is a really poor argument against an objectively good, evolutionarily improved CPU, in my opinion.
That was the case for many decades. I think it’s no longer nearly the case. I’ve got a USB/DP KVM switch on my desk and regularly switch between my work laptop (2019 i9 MBPro) and my personal computer (2014 Dell i7-4790, added SSD and 32GB).
Same 4K screens, peripherals, everything else. I find the Dell every bit as usable and expect to be using it 3 years from now. I wouldn’t be surprised if I retire the MacBook before the Dell.
https://www.cpubenchmark.net/compare/Intel-i9-9980HK-vs-Inte... shows the Mac to have only a slight edge and that’s a 2 year old literal top of line Mac laptop vs a mid-range commodity office desktop from 6 years ago bought with < $200 in added parts. (Much of what users do is waiting on the network or the human; when waiting on the CPU, you’re often waiting on something single-threaded. Mac is < 20% faster single-threaded.)
20W parts vs 84W parts.
Honestly, I'm not sure what we're discussing anymore. If you don't need (or want) an all round better experience then that's on you.
But don't go saying that these things are too expensive or that the performance isn't there. Because it is.
If Apple had released something mediocre I'd understand this thread, but this is a legitimately large improvement in laptop performance, from GPU, to memory, to storage, to IO, to single threaded CPU performance.
Everyone kept bashing AMD for not beating Intel in single thread.
Everyone bashed Apple for using previous-gen low-TDP Intel chips.
Now Apple has beaten both AMD and Intel in a very good form factor, and people still have a bone to pick.
Please understand that your preference is yours, these are legitimately good machines, every complaint that anyone had about macbooks has been addressed. Some people will just never be happy.
I’ve got no bone to pick with Apple and am not making any broad anti-Apple or anti-M1 case. (I decided to [have work] buy the very first MBPro after they un-broke the keyboard and am happy with it.)
Of the five to eight topics you raise after your first two sentences, I said exactly zero of them.
That's why companies aren't giving out 5+year old laptops/desktops.
(well, I suppose some do, but big companies simply wouldn't)
I assumed the whole context of the thread and took you to be defending the parent.
(1) older macbooks are identical to mediocre new laptops in performance & price
(2) mediocre laptops are very cheap for what they do
(3) desktops are far more economical when you need power.
If you spec out a laptop to be powerful, lightweight, and as beautiful as a MBP, then you're going to pay a real premium. Paying for premium things is not the default.
I'm still on a 2013 MBP which doesn't show any signs of deterioration (except battery life). It's got a Retina display, a fast SSD, and an ok-ish GPU (it can play Civ V just fine).
I'd gladly pay for a guarantee that the machine will not break for the next 10 years - I think it will still be a perfectly usable laptop 10 years from now.
If you get the best, and keep it for a while then even though it won't be bleeding edge anymore it'll still be in the middle of mediocre.
When it comes to computers, mediocre is actually pretty usable. A $600 computer can do pretty much everything, including handling normal scale non-enterprise software development. I didn't really realize it until I went back to school for science, but many projects are bound by the capacity of your mind and not the speed of your CPU.
If I do need computing power, I use a desktop.
The thing you are denying is that people have both needs and wants. Wants are not objective, no matter how much you try to protest their existence. There is no rational consumer here.
There are inputs beyond budget which sometimes even override budget (and specific needs!) Apple has created desirable products that even include some slightly cheaper options. The result is that people will keep buying things that they don't really need, but they'll likely still get some satisfaction. I don't suggest that this is great for society, the environment, or many other factors - but, it's the reality we live in.
People buy it brand new because it's small, lightweight, attractive, reliable, long-lasting hardware with very low depreciation, great support, and part of Apple's ecosystem. Cost is not the same as value, and the value of your dollar is much greater with these.
I think you're correct. But also the majority of people will buy brand new ones either way. And a lot of them will spend much more than they should too.
I am not mocking you, but the case here is that people do not know what true mobility for laptops is: being able to literally leave the power brick behind, not think about whether the battery will last, and use the laptop freely during the day, everywhere. This has been impossible until now; there has always been the constraint of "do I really need to open my laptop, what if it dies, where is the power plug". As soon as the masses realize this is no longer a worry, everyone will want and need 15+ hours of battery life.
Same laptop could do 9+ consistently for me in Windows and remained quiet unless I was actually putting load on it.
The reading I've done on the Framework laptops makes it sound like this situation has not improved, or at least not anywhere near enough to compete with Windows. This has effectively ruled them out of the running as a replacement laptop for me.
An M1 based Macbook sure is looking appealing these days. I can live with macOS.
Not everyone needs decent battery life, but some of us do.
And yet I hear otherwise. Maybe you're referencing the original review units that didn't run on 5.14.
> Not everyone needs 15+ hours of battery life.
Low wattage is not only about battery life. It is mostly about requiring less power for the same work. However you look at it, it is good for everyone. Now that Apple has shown that this can be done, everyone else will do the same.
Also, it is "if you need that power and need it in a laptop form factor". Again, impressive, but desktops/servers work just as well for most people.
I would say that for a light laptop user, the main reasons to upgrade are:
- displays: make a big difference for watching youtube, reading, etc. You can't really compare a 120 Hz XDR retina display with a 10 year old display.
- webcam: makes a big difference when video conferencing with family, etc.
- battery life: makes a big difference if you are on the go a lot. My 10 year old laptop had something like 4 hours of battery life when new. Any new laptop has more than 15h, some over 20h.
- fanless: silent laptops that don't overheat are nice, nicer to have on your lap, etc.
- accelerators: some zoom and teams backgrounds use AI a lot, and perform very poorly on old laptops without AI accelerators. Same for webcams.
If you talk about perf, that's obviously workload dependent, but having 4x more cores, that are 2-4x faster each, can make a big difference. I/O, encryption, etc. has improved quite a bit, which can make a difference if you deal with big files.
Still, you can get most of this new stuff for $1,000 in a MacBook Air with an M1. Seems like a no-brainer for light users that _need_ or _want_ to upgrade. If you don't want to upgrade, that's ok, but saying that you are only missing 2x better performance is misleading. You are missing a lot more stuff.
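To put rough bounds on the "4x more cores, that are 2-4x faster each" point: how much of that you actually see depends on how parallel your workload is, which Amdahl's law captures. A quick sketch with illustrative numbers (not measurements of any specific machine):

```python
def amdahl_speedup(parallel_fraction, cores, per_core_speedup):
    """Overall speedup vs. the old machine: everything runs
    per_core_speedup times faster, and the parallel part is
    additionally spread across `cores` times more cores."""
    serial = 1.0 - parallel_fraction
    return per_core_speedup / (serial + parallel_fraction / cores)

# Hypothetical upgrade: 4x the cores, each core 3x faster.
for p in (0.0, 0.5, 0.9):
    print(f"parallel fraction {p:.0%}: {amdahl_speedup(p, 4, 3):.1f}x")
# -> 3.0x, 4.8x, 9.2x respectively
```

So a purely serial task only gets the per-core improvement, while a mostly parallel one (video export, big builds) gets close to the headline multiplier.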
I’ve got a T470 with a brand new 400nits 100% sRGB and like 80% AdobeRGB screen. You can even get 4K screens with awesome quality for the T4xx laptops with 40-pin eDP.
With 17h battery life even on performance mode.
With a new, 1080p webcam.
With 32GB of DDR4-2400 RAM
With 2TB NVMe Storage.
With 95Wh replaceable batteries, of which I can still get brand new original parts and which I can replace while using the laptop.
for a total below $500.
If I upgraded the top-of-the-line T480 accordingly, I'd still be below $800, with performance that's not that far off anymore.
I don’t even think Sandybridge (Intel 2011) CPUs support h264 decode- a pretty common requirement these days for zoom, slack, teams and video streaming sites such as YouTube.
Maybe, but the fat client-thin client pendulum has swung back in favor of thin clients to the point that CPU performance is generally irrelevant (it kind of has to be, since most people browse the Web with their phones). As for games, provided you throw enough GPU at the problem acceptable performance is still absolutely achievable, but that's not new either.
>a pretty common requirement these days for zoom, slack, teams and video streaming sites such as YouTube
It really isn't: from experience the hard line for acceptable performance is "anything first-gen Core iX (Nehalem) or later"- Core 2 systems are legitimately too old for things like 4K video playback, however. The limiting factor on older machines like that is ultimately RAM (because Electron), but provided you've maxed out a system with 16GB+ and an SSD there's no real performance difference between a first-gen Core system and an 11th-gen Core system for all thin client (read: web) applications.
That said, it's also worth noting that the average laptop really took a dive in speed with Haswell and didn't start getting the 10% year-over-year improvements again until Skylake because ultra-low-voltage processors became the norm after that time and set the average laptop about 4 years back in speed compared to their standard-voltage counterparts: those laptops genuinely might not be fast enough now, but the standard-voltage ones absolutely are.
That was a real difference.
But in 2021, people still buy laptops half as fast as other models to do the same work. Heck, people go out of their way to buy iPad Pros which are half as fast as comparable laptops.
Considering that, I think a ten year old machine is pretty competitive as an existing choice.
I think you’re right that people buy slow laptops. But I think that often comes from a place of technical illiteracy and willingness to spend.
Put simply: they can’t often comprehend the true value of a faster system and opt to be more financially conservative.
Which I fully understand.
MacBook Air 2020: 1733 on Geekbench. Priced at about $1,849 fully specced for the 13-inch model.
That's what I mean by comparable tablets are more expensive than laptops. You have to pay a lot more because it has a dual form factor (like the Microsoft Surfacebooks).
Exactly, no 100x, no 10x, just half. That is very noticeable but "extreme" sounds like much more.
> I don’t even think Sandybridge (Intel 2011) CPUs support h264 decode-
Correct, but that's unrelated to CPU speed; it's an additional hardware component. That is a fair argument, just like missing suitable HDMI standards, USB-C, etc. However, again, that is not about speed but features.
The one thing that really isn't usable anymore is Aperture/Lightroom. And missing Docker because my CPU is too old (though Docker still works in VMs ...) is a pain.
I'm not a heavy user, but that machine can handle Xcode ObjC/C++ programming quite handily.
What I don't like about Apple's devices is the keyboard: they don't provide an equally good actuation point (resistance and feedback), and the keycaps aren't concave (they don't guide the fingers). The quality problems and the questionable Touch Bar are problems, too. Lenovo did that before Apple and immediately stopped it; they accept mistakes far quicker. I still suspect both Apple and Lenovo tried to save money with a cheap touchbar instead of more expensive keys.
But what about the performance? First, Apple only claims to be fast. What matters are comparisons of portable applications, not synthetic benchmarks; benchmarks never mattered. Secondly, Apple uses a lot of money (from the customers) to get the best chip quality in the industry from TSMC.
What we have is a choice between all-purpose computers from vendors like Lenovo, Dell or System76, and computing appliances from Apple. I say computing appliance and not all-purpose computer because I'm not aware of official porting documentation for BSD, Linux or Windows. More importantly, macOS hinders free application shipment -- not as badly as iOS, but it is already a pain for developers; you need to use Homebrew for serious computing tasks.
Finally the money?
Lenovo wants 1,000-1,700 Euro for a premium laptop like the ThinkPad X13/T14, with replacement parts for five years, public maintenance manuals, and official support for Linux and Windows.
Apple wants 2,400-3,400 for no maintenance manuals, no publicly available replacement parts, and you must use macOS. Okay, they claim it is faster. Likely it is.
You buy performance with an exorbitant amount of money, but with multiple drawbacks? Currently I'm still using an eight-year-old ThinkPad X220 with Linux. The operating system I want and need, an excellent keyboard, a comfortable TrackPoint, and a good IPS screen. I think the money was well spent - for me :)
Please, browsing the web has always been a pain on old hardware.
Also, people argue in bad faith all the time and a lot of people for whom it isn’t an acceptable way of browsing the web would pretend it is anyway to win an argument.
You, Sir, are a very utilitarian consumer. Most consumers care about being in the "In" crowd, i.e., owning the brand that others think is cool. Ideally, it comes in a shiny color. That's it. The exact details are just camouflage.
edit: oh the mba battery is still a miracle compared to the dell's, btw
For the price, there isn't something comparable in all respects.
They already don't make sense, as the M1 isn't a "general purpose" CPU like Intel's or AMD's that supports multiple operating systems, and even the development of new ones. Instead, the M1 is a black box that only fully supports macOS - that's a crippling limitation for many of us.
Some people need the ability to repair or upgrade, or the freedom to install any software they need. Not to mention that so far we have seen comparisons of the M1 "professional line" to a gaming laptop; professionals deal with Quadro cards since the RTXs are driver-limited for workstation duties, and speaking of gaming on OSX makes no sense.
For me it might be a long time since I can even consider buying another Apple product.
Source: Apple CPU Gains Grind To A Halt And The Future Looks Dim As The Impact From The CPU Engineer Exodus To Nuvia And Rivos Starts To Bleed In - https://semianalysis.com/apple-cpu-gains-grind-to-a-halt-and...
If they can't innovate, all they can do is keep increasing the core count ... I don't think that'll help them compete with future AMD / Intel / ARM or RISC-V CPUs in the long term.
1. Biggest long term partner.
2. Someone who competes with you as a manufacturer.
Not a hard decision!
Can't tell if sarcastic but until you can run Linux on the M1 I don't see any reason to buy an Apple laptop.
It could be 10x faster than the competition but with OSX it would still feel like a net productivity loss, from having to deal with bugs and jank in the software.
A laptop that can do some light gaming is not a niche requirement, and ultimately Apple decided to completely turn its back on that market with the ARM transition.
I'll never buy an Apple computer, but I can't help but be impressed with what they've achieved here.
However, it would be surprising if Apple's new 5nm chip didn't beat AMD's older 7nm chip at this point. Apple specifically bought out all of TSMC's 5nm capacity for themselves while AMD was stuck on 7nm (for now).
It will be interesting to see how AMD's new 6000 series mobile chips perform. According to rumors they might be launched in the next few months.
Make no mistake, the M1 is a truly solid processor. It has seriously stiff competition though, and I get the feeling x86 won't be dead for another half decade or so. By then, Apple will be competing with RISC-V desktop processors with 10x the performance-per-watt, and once again they'll inevitably shift their success metrics to some other arbitrary number ("The 2031 Macbook Pro Max XS has the highest dollars-per-keycap ratio out of any of the competing Windows machines we could find!")
It's a bit unfair to compare multicore performance of a chip with 8 cores firing full blast against another 8 core chip with half of them being efficiency cores.
The M1 Max (with 8 performance cores) multicore performance score posted on Geekbench is nearly double the top posted multicore performance scores of the 5800U and 4800U (let alone single core, which the original M1 already managed to dominate).
It'll be interesting to see how it goes in terms of performance-per-watt, which is what really matters in this product segment. The graphs Apple presented indicated that this new lineup will be less efficient than the original M1 at lower power levels, but they'll be able to hit it out of the park at higher power levels. We'll have to wait to see the results from the likes of AnandTech to get the full story though.
Personally, I'd love to see a MacBook 14 SE with an O.G. M1, 32 GB memory, a crappy 720p webcam, and no notch. I'd buy as many of those as they'd sell me.
I'm curious to see how the M1 Pro compares to the M1 Max. They are both very similar processors with the main differences being the size of the integrated GPU and the memory bandwidth available.
The gap between the Ideapad 4800U and the base model 14 inch MacBook Pro is a bit wider, but you'll also get a better display panel and LPDDR5-6400 memory.
We'll have to see how the lower specced M1 Pros perform, but it's hardly clear cut.
Edit: I just looked up the price of the cheapest MacBook Pro with an M1 Max processor and it's about 70% more expensive than the Ideapad 4800U. However, it has double the memory (and much better quality memory), a better display, and roughly 70% better multithreaded performance in Geekbench workloads. Furthermore, you may get very similar performance on CPU-bound workloads with the 10-core M1 Pro, the cheapest of which is only 52% more expensive than the Ideapad 4800U.
Intel, OTOH, depends on whether they can gut all the MBAs infesting them.
For general-purpose compute, no such luck unfortunately. Performance and efficiency follow process technology to a first-order approximation.
But 5x average performance gain at the same TDP doesn't mean you do 32x as much computation for the same amount of power. Except in AMD marketing world. But it sounds good!
Like, even bearing in mind that that's coming from a Bulldozer derivative on GF 32nm (which is probably more like Intel 40nm), a 5x gain in actual computation efficiency is still a lot, and it's actually even more in CPU-based workloads, but AMD marketing can't help but stretch the truth with these "challenges".
In a compute focused cloud environment you might be able to have most of your hardware pegged by compute most of the time, but outside of that CPUs spend most of their time either very far under 100% capacity, or totally idle.
In order to actually calculate real efficiency gains you'd probably have to measure power usage under various scenarios though, not just whatever weird math they did here.
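To make the arithmetic concrete (illustrative numbers only, not AMD's actual measurement methodology): at a fixed TDP, performance-per-watt scales exactly with performance, so a 5x perf gain buys 5x the computation per joule when the chip is pegged. Bigger headline multipliers only appear once you fold in time spent idle at lower power:

```python
def work_per_joule(perf, power_watts):
    # computation completed per joule while fully loaded
    return perf / power_watts

old = work_per_joule(perf=1.0, power_watts=15)
new = work_per_joule(perf=5.0, power_watts=15)  # 5x perf at the same TDP
print(f"fully-loaded efficiency gain: {new / old:.0f}x")  # 5x, not 32x

# A "typical use" energy metric can inflate the multiplier: if the faster
# chip finishes the same fixed task sooner and then idles at lower power,
# total energy over a fixed wall-clock window drops by more than 5x.
window_s = 100.0               # hypothetical measurement window
t_old, t_new = 1.0, 1.0 / 5.0  # seconds under load for the same task
idle_old, idle_new = 5.0, 0.5  # hypothetical idle draws in watts
e_old = 15 * t_old + idle_old * (window_s - t_old)
e_new = 15 * t_new + idle_new * (window_s - t_new)
print(f"'typical use' energy ratio: {e_old / e_new:.1f}x")  # ~9.6x here
```

Which is exactly why the headline number is so sensitive to the usage scenario the marketing team picks.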
I can't see any possible charitable explanation for this stupidity. MBAs and marketing department run amok.
It's possible that the implication here is similar: that AMD does a tensor accelerator or something and hits "30x", but you end up with speedups similar to NVIDIA's tensor accelerator implementation.
Have to see if they can not only catch up, but keep up.
I had a Lenovo ThinkPad with a 4750U (which is very close to the 4800U) and the M1 is definitely quite a bit faster in parallel builds. This is supported by GeekBench scores, the 4800U scores 1028/5894, while the M1 scores 1744/7600.
If AMD had access to 5nm, the CPUs would probably be more or less on par. Well, unless you look at things like matrix multiplication, where even a 3700X has trouble keeping up with the Apple AMX co-processor with all of the 3700X's 8 cores fully loaded.
But tbh, it doesn't seem the new hotness in chips is single core CPU, it's about how fancy you spend the die space in custom processors, in which case the M1 will always be tailored to Apple (and presumably Mac users') specific use-cases...
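For reference, the GeekBench scores quoted above work out to roughly a 1.7x single-core and 1.3x multi-core advantage:

```python
# Scores quoted above: M1 vs Ryzen 7 4800U (single-core, multi-core)
m1_single, m1_multi = 1744, 7600
r4800u_single, r4800u_multi = 1028, 5894

print(f"single-core advantage: {m1_single / r4800u_single:.2f}x")  # 1.70x
print(f"multi-core advantage:  {m1_multi / r4800u_multi:.2f}x")    # 1.29x
```

Worth keeping in mind that the multi-core gap is narrower because the M1 pairs 4 performance cores with 4 efficiency cores, while the 4800U has 8 full cores.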
The chances of actually getting only a single core working on something are slim with multitasking; I had to purpose-build stuff -- hardware and kernel/etc -- for CPU mining half a decade ago to eliminate thermal throttling and pre-emption on single-threaded miners.
Single-thread performance has been stagnant forever because, with Firefox/Chrome and whatever the new "browser app" hotness is this month, you're going to be using more than 1 core virtually 100% of the time, so why target that. Smaller die features mean less TDP, which means less throttling, which means a faster overall user experience.
I'm glad someone is calling out the M1 performance finally.
Also, I'm not sure what's up with Geekbench results on MacOS, but here's a 5950x in an iMac Pro that trounces all the results we've mentioned here somehow.
macOS, being Unix-based, has a decent thread scheduler -- unlike Windows 10/11, which is based on Windows NT's 32-bit heritage and never cared about anything other than single-core performance until very, very recently.
They look to be on par to me. Things will be less murky if and when Apple finally scale this thing up to 100+ watt desktop class machines (Mac Pro) and AMD move to the next gen / latest TSMC process.
In my view, Intel and AMD have more incentive to regain the performance crown than Apple does to maintain it. At some point Apple will go back to focusing on other areas in their marketing.
But here I am, with a pretty thin and very durable laptop that has a 6 core Xeon in it. It gets hot, it has huge fans, and it completely obliterates any M1 laptop. I don't mean it's twice as fast. I mean things run at 5x or faster vs an M1.
Now, this is a new version of the M1, but it's an incremental 1-year improvement. It'll be very slightly faster than the old gen. By ditching Intel, what apple did is making sure their pro line - which is about power, not mobility, is no longer a competitor, and never will be. Because when you want a super fast chip, you don't design up from a freaking cell phone CPU. You design down from a server CPU. You know, to get actual work done, professionally. But yeah, I do see their pro battery is 11 hours while mine usually dies at 9. Interesting how I got my computer plugged in most of the time though...
Is that really true? I don't have any intricate chip knowledge, but it rings false. Whether ARM is coming from a phone background or the Xeon from a server background, what matters in the end is the actual chip used. Maybe phone-derived chips even have an advantage because they are designed to conserve power whereas server chips are designed to harvest every little ounce of performance. IDK a lot about power states in server chips, but it would make sense if they aren't as adapted to rapidly step down power use as a phone chip.
Now, you might be happy with a hot leaf-blower and that's fine. But I would say the market is elsewhere: silent, long-running, light notebooks that can throw around performance if need be. You strike me as an outlier.
Pro laptops should have a beefy CPU, a great screen, a really fast SSD, long battery life, and lots of RAM, which (presumably) your notebook features, but the new M-somethings seemingly do as well. But in the end, people buy laptops so they can use them on their lap occasionally. And I know my HP gets uncomfortably hot; the same was said about the Intel laptops from Apple, I think.
Apple doesn't need to have the one fastest laptop out there, they need a credible claim to punching in the upper performance echelon - and I think with their M* family, they are there.
>Apple doesn't need to have the one fastest laptop out there
correct. My complaint, which I have reiterated about 50 times to shiny iPhone idiots on here who don't do any real number crunching for work, is that Apple calls "pro" what the rest of the industry calls "mid-tier" - Apple is deceiving the consumer with marketing. The new laptops are competition for Dell's Latitude and XPS lines, not their pro lines. Those pro laptops weigh 7 lb and have a huge, loud fan exhaust on the back so they can clock at 5GHz for an hour. They have 128GB of RAM - ECC RAM, because if you have that much RAM without ECC, you have a high chance of bit errors.
There are many things you can do to speed up your stuff if you're willing to waste electricity. The issue is not that Apple doesn't make a good laptop. It's that they're lying to the consumer. As always. Do you remember when they marketed their acrylic little cube mini-desktop? It was "a supercomputer." They do this as a permanent tactic - sell overpriced, underperforming things, and lie with marketing. Like using industry-standard terms to describe things not up to that standard.
Relax, no one is forcing you to use Apple products.
Also, there are a TON of pro Mac users. If we define 'pro' as getting paid for work done on Macs..
Not to mention M1 emulates x86 pretty darn well..
Probably not faster than an M1 Pro and definitely not faster than the M1 Max.
Your machine doesn't have a 512-bit wide memory interface running at over 400GB/s.
Does the Xeon processor in your laptop have 192KB of instruction cache and 24MB of L2 cache?
Every ARM instruction is the same size, enabling many instructions to be decoded and in flight at once, unlike the x86-64 architecture, where instructions vary in size and you can't keep nearly as many in flight.
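The fixed- vs. variable-length point can be sketched with a toy decode model (purely illustrative; real x86 decoders use predecode caches and other tricks to mitigate this, at a hardware cost):

```python
def decode_fixed(fetch_block: bytes, width: int = 4):
    # AArch64-style: every instruction boundary is known up front,
    # so all slots in the fetch block can be decoded in parallel.
    return [fetch_block[i:i + width] for i in range(0, len(fetch_block), width)]

def decode_variable(fetch_block: bytes, length_of):
    # x86-style: the start of instruction N+1 depends on the decoded
    # length of instruction N, making the boundary scan inherently serial.
    insns, i = [], 0
    while i < len(fetch_block):
        n = length_of(fetch_block[i:])
        insns.append(fetch_block[i:i + n])
        i += n
    return insns

# Toy encoding where the first byte states the instruction's length.
toy_len = lambda rest: rest[0]
print(len(decode_fixed(bytes(16))))                              # 4 slots, all parallel
print(len(decode_variable(bytes([2, 0, 3, 0, 0, 1]), toy_len)))  # 3 insns, found serially
```

This is one reason a wide decoder is cheaper to build for a fixed-width ISA; it is not the whole story of M1 vs. x86 performance.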
Apples-to-apples: at the same chip frequency, an M1 has higher throughput than a Xeon and most any other x86 chip. This is basic RISC vs. CISC stuff that's been true forever. It's especially true now that increases in clock speed have dramatically slowed and the only way to get significantly more performance is by adding more cores.
On just raw performance, I'd take the 8 high-performance cores in an M1 Pro vs. the 6 cores in your Xeon any day of the week and twice on Sunday.
And of course, when it comes to performance per watt, there's no comparison and that's really the story here.
Now, this is a new version of the M1, but it's an incremental 1-year improvement.
If you read AnandTech on this, you'll see this is not the case: there have been huge jumps in several areas.
Incremental would have resulted in the same memory bandwidth with faster cores. And 6 high-performance cores vs. the 4 in the original M1.
Except Apple didn't do that—they doubled the number to 8 high-performance cores and doubled the memory width, etc. There were 8 GPU cores on the original M1, and now you can get up to 32!
Apple stated the Pro and the Max have 1.7x of the CPU performance of Intel's 8-core Core i7-11800H with 70% lower power consumption. There's nothing incremental about that.
By ditching Intel, what Apple did is make sure their pro line - which is about power, not mobility - is no longer a competitor, and never will be.
Pro can mean different things to different people. For professional content creators, these new laptops are super professional. Someone could take off from NYC and fly all the way to LA while working on a 16-inch MacBook Pro with a 120 Hz mini-LED, 7.7-million-pixel screen that can display a billion colors, editing 4K or 8K video on battery alone.
If you were on the same flight working on the same content, you'd be out of power long before you crossed the Mississippi, while the Mac user is still working, on a machine half the weight of yours with a dramatically better display and better performance for editing and rendering multiple streams of HDR video.
The 16-inch model has 21 hours of video playback which probably comes in handy in many use cases.
Here's a video of a person using the first generation, 8 GB RAM M1 Mac to edit 8K video; the new machines are much more capable: https://youtu.be/HxH3RabNWfE.
Compared to previous Macs and iGPUs it's a big step up, but an Nvidia GPU will still run circles around this thing.
Nvidia very likely still has leading top-end performance, but "running circles around this thing" is probably not a fair description. Apple certainly has a credible claim to destroy Ampere in terms of performance per watt - they're just still limiting themselves in the power envelope. (It's worth noting that AMD's RDNA2 already edges out Ampere in performance per watt - that's not really Nvidia's strong suit in their current lineup.)
In the footnote, that claim is shown to compare the M1 Max to this laptop with a mobile RTX 3080: https://us-store.msi.com/index.php?route=product/product&pro...
There's a lot wrong in how vague Apple tends to be about performance, but their unmarked graphs have at least been okay for general ballpark estimates.
Not for loading up models larger than 32GB it wouldn't. (They exist! That's what the "full-detail model of the starship Enterprise" thing in the keynote was about.)
Remember that on any computer without unified memory, you can only load a scene the size of the GPU's VRAM. No matter how much main memory you have to swap against, no matter how many GPUs you throw at the problem, no magic wand will let you render a single tile of a single frame if its texture inputs exceed the VRAM of a single GPU.
Right now, consumer GPUs top out at 32GB of VRAM. The M1 Max has, in a sense, 64GB (minus OS baseline overhead) of VRAM for its GPU to use.
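The "does it fit in VRAM" question is simple arithmetic. Here's a rough sketch — the texture sizes and counts are made up for illustration, not taken from any real scene:

```python
# Back-of-the-envelope check: do a scene's textures fit in VRAM?
# All numbers below are illustrative assumptions, not real scene data.

def texture_bytes(width, height, bytes_per_texel=4, mip_overhead=4/3):
    """Approximate footprint of a mipmapped RGBA8 texture.

    A full mip chain adds roughly 1/3 on top of the base level.
    """
    return int(width * height * bytes_per_texel * mip_overhead)

GIB = 1024 ** 3

# Hypothetical heavy scene: 150 distinct 8K RGBA textures (~341 MiB each).
scene_textures = [texture_bytes(8192, 8192)] * 150

total = sum(scene_textures)
print(f"Scene textures: {total / GIB:.1f} GiB")
print("Fits in 32 GiB VRAM:", total <= 32 * GIB)   # too big for a 32 GiB card
print("Fits in 64 GiB unified memory:", total <= 64 * GIB)
```

With these made-up numbers the scene needs about 50 GiB of texture memory, so it overflows a 32 GiB card but fits in 64 GiB of unified memory, which is exactly the kind of workload the comment is describing.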
Of course, there is "an nvidia gpu" that can bench more than the M1 Max: the Nvidia A100 Tensor Core GPU, with 80GB of VRAM... which costs $149,000.
(And even then, I should point out that the leaked Mac Pro M1 variant is apparently 4x larger again — i.e. it's probably available in a configuration with 256GB of unified memory. That's getting close to "doing the training for GPT-3 — a 350GB model before optimization — on a single computer" territory.)
You could throw a TB of memory in something and it won't get any faster or be of any use for 99.99% of use cases.
Large ML architectures don't need more memory, they need distributed processing. Ignoring memory requirements, GPT-3 would take hundreds of years to train on a single high end GPU (on say a desktop 3090 which is >10x faster than m1) which is why they aren't trained that way (and why NVidia has the offerings set up the way they do).
>That's getting close to "doing the training for GPT-3 — a 350GB model before optimization — on a single computer" territory.
Not even close... not by a mile. That isn't how it works. The unified memory is cool but its utility is massively bottlenecked by the single cpu/gpu it is attached to.
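The "hundreds of years" figure holds up on a napkin. Assuming GPT-3's commonly cited training compute of roughly 3.14e23 FLOPs and an optimistic ~35 TFLOPS of sustained throughput for a desktop RTX 3090 (both round assumptions, not measured benchmarks):

```python
# Napkin math: wall-clock time to train GPT-3 on one desktop GPU.
TRAINING_FLOPS = 3.14e23      # commonly cited GPT-3 training compute (assumed)
GPU_FLOPS_PER_SEC = 35e12     # optimistic sustained throughput for an RTX 3090 (assumed)

seconds = TRAINING_FLOPS / GPU_FLOPS_PER_SEC
years = seconds / (3600 * 24 * 365)
print(f"~{years:.0f} years on a single GPU")
```

That works out to a few hundred years, which is why these models are trained on large distributed clusters rather than on any single machine, however much memory it has.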
It's just that we mostly use GPUs for embarrassingly-parallel problems, because that's mostly what they're good at, and humans aren't clever enough by half to come up with every possible way to map MIMD problems (e.g. graph search) into their SIMD equivalents (e.g. matrix multiplication, ala PageRank's eigenvector calculation.)
The M1 Max isn't the absolute best GPU for doing the things GPUs already do well. But its GPU is a much better "connection machine" than e.g. the Xeon Phi ever was. It's a (weak) TPU in a laptop. (And likely the Mac Pro variant will be a true TPU.)
Having a cheap, fast-ish GPU with that much memory, opens up use-cases for which current GPUs aren't suited. In those use-cases, this chip will "run circles around" current GPUs. (Mostly because current GPUs wouldn't be able to run those workloads at any speed.)
Just one fun example of a use-case that has been obvious for years, yet has been mostly moot until now: there are database engines that run on GPUs. For parallelizable table-scan queries, they're ~100x faster still than even in-memory databases like memSQL. But guess where all the data needs to be loaded, for those GPU DB engines to do their work?
You'd never waste $150k on an A100 just to host an 80GB database. For that price, you could rent 100 regular servers and set them up as memSQL shards. But if you could get a GPU-parallel-scannable 64GB DB [without a memory-bandwidth bottleneck] for $4000? Now we're talking. For the cost of one A100, you get a cluster of ~37 64GB M1 Max MBPs — that's 2.3TB of addressable VRAM. That's enough to start doing real-time OLAP aggregations on some Big-Ish Data. (And that's with the ridiculous price overhead of paying for a whole laptop just to use its SoC. If integrators could buy these chips standalone, that'd probably knock the pricing down by another order of magnitude.)
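Spelling out that cluster arithmetic, using the prices assumed in the comment (~$150k for the A100 and ~$4,000 per 64GB M1 Max machine):

```python
# Cost/capacity comparison using the comment's assumed prices.
A100_PRICE = 150_000   # assumed price of the 80 GB A100 (from the comment)
MBP_PRICE = 4_000      # assumed price of a 64 GB M1 Max MacBook Pro
MBP_VRAM_GB = 64

n_machines = A100_PRICE // MBP_PRICE              # machines for the same money
cluster_vram_tb = n_machines * MBP_VRAM_GB / 1024  # total addressable "VRAM"

print(f"{n_machines} machines, ~{cluster_vram_tb:.1f} TB of addressable VRAM")
```

That's where the "~37 machines, 2.3TB" figure comes from: 150,000 / 4,000 = 37 (rounding down), and 37 × 64GB ≈ 2.3TB.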
Mindlessly throwing in more memory yields diminishing returns in 99.99% of use cases, because the extra memory will inflict a very large number of TLB misses during page-fault processing or context switching, which slows memory access down substantially, unless:
1) the TLB size in each of the L1/L2/… caches is increased; AND
2) the page size is increased, or the page size can be configured in the CPU.
Earlier versions of MIPS CPUs had a software-controlled, very small TLB and were notorious for slow memory access. Starting with the A14, Apple has increased an already massive TLB, on top of having increased the page size from 4kB to 16kB:
«The L1 TLB has been doubled from 128 pages to 256 pages, and the L2 TLB goes up from 2048 pages to 3072 pages. On today’s iPhones this is an absolutely overkill change as the page size is 16KB, which means that the L2 TLB covers 48MB which is well beyond the cache capacity of even the A14» .
It would be interesting to find out whether the TLB size is even larger in the M1 Pro/Max CPUs.
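The coverage number in that quote is easy to verify: TLB reach is just entries × page size.

```python
# TLB reach = number of entries * page size.
PAGE_SIZE_KB = 16  # A14/M1 use 16 KB pages

def tlb_reach_mb(entries, page_kb=PAGE_SIZE_KB):
    """Memory covered by a TLB with the given number of entries."""
    return entries * page_kb / 1024

print(f"L1 TLB (256 entries):  {tlb_reach_mb(256):.0f} MB reach")
print(f"L2 TLB (3072 entries): {tlb_reach_mb(3072):.0f} MB reach")
```

3072 entries × 16 KB = 48 MB, matching the quoted claim that the A14's L2 TLB covers well beyond its cache capacity.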
We've had AMD APUs for years; you can shove 256GB of RAM in there. But no one cares, because a huge chunk of memory attached to a slow GPU is useless.
The ML folks are finding ways to consume everything the HW folks can make and then some.
Does Apple have any ISV-certified offerings? I can't find one. I suspect Apple will never win the engineering crowd with the M1 switch... so many variables go into these system builds, and Apple just doesn't have that business model.
Even with these crazy M1's, I still have doubts about Apple winning the movie/creative market. LED walls, Unreal Engine, and Unity are being used for SOOOO much more than just games now. The hegemony of US-centric content creation is also dwindling... budget rigs are a heck of a lot easier to source and pay for than M1's in most parts of the world.
True, but the point here is that M1 is able to achieve outstanding performance per watt numbers compared to Nvidia or Intel.
There are absolutely use-cases where this is going to enable new ways of looking at content and give more control and ability to review stuff in the field.
Higher performance per watt also implies less heat from the M1's perspective. This means that even when someone isn't doing CPU/GPU-heavy tasks, they still get better battery life, since power isn't being wasted on spinning up fans for cooling.
For some perspective: my current 2019 16-inch i7 MBP gets warm even if I leave it idling for 20-30 minutes, and I can barely get ~4 hours of battery life. My wife's M1 MacBook Air stays cool despite being fanless and lasts the whole day with similar usage.
The point is performance per watt matters a lot in a portable device, regardless of its capabilities.
I am not associated with that guy. In fact, I bought an M1 Mac mini myself to avoid all this hot air.
If you run Boot Camp, Windows has a registry setting to disable that, plus a system setting to limit the CPU to 99% (though it still seems to run hot), which I use for playing VR and FS2020 with an external eGPU.
I worked in the content-creation business making videos, photos, and music, and frankly the need for 15 hours of battery is a (very cool indeed) glamorous IG fantasy.
In reality, even when we were really on the move (I used to follow surfers on several isolated beaches in southern Europe), the main problem was the phones' batteries - using them in hotspot mode seriously reduces their battery life - and we could always turn on the van's engine and use the generator to recharge electronic devices.
Post-processing was done plugged into the generator.
Because it's better to sit comfortably to watch hours of footage or hundreds of pictures.
I can't imagine many other activities that are equally challenging for a mobile setup.
It’s not just about rendering any more.
Now you can get performance off a battery for your entire work day for less money than the competition (if reports are to be believed).
In this scenario, would you render things in a cafe? Why not?
honestly, as a traveler and sometimes digital nomad, the real question is "why yes?"
There is no real reason to work in a cafe, except because it looks cool to some social audience.
Cafes are usually very uncomfortable work places, especially if you have to sit for hours looking at the tiniest of the details as one often does when rendering.
It’s like when the iPad came out and had a camera. “Who is going to lug around an iPad to take pictures!?!?”
But that’s exactly what I started seeing people do. Pull out iPads and take snaps.
Perhaps ML, but that's all locked into proprietary CUDA, so it's unlikely.
Perhaps Apple could revive OpenCL from the ashes?
I was offered a large external monitor by my employer, but I turned it down because I didn't want to get used to it, and working in different locations is too critical to my workflow. But I'd love to see how people with more than 2 external displays are actually using them enough to justify the space and cost (not being facetious, I really would).
First - dedicated to personal Chrome profile, Discord, etc.
Second - dedicated to screen share - VS Code, terminal, JIRA, etc.
Third - Work Chrome Profile for email/JIRA/web browsing, note-taking (shoutout to https://obsidian.md), Slack.
I could certainly get by with fewer monitors, and do so when I am mobile, but I really enjoy the screen real estate when at my desk at home.
1. Web Conferencing Content
2. Web Conferencing participants video
3. Screen where I multitask in parallel
4. VDI session to a customer's environment for tests
My friend uses 5 monitors, and I would too if I wasn't mandated to use an iJoke computer at work.
Teams, browser and IDE mandate a minimum of 3 displays.
Where the M1 architecture really shines is the collaboration between CPU, GPU, memory, SSD, and other components on the SoC. The components all work together on the same package, without ever having to go out over electrical interconnects on a motherboard, thereby saving power, heat, etc.
What you lose in repairability/upgradability, you gain in performance on every front. That tradeoff is no different than what we chose in our mobile devices. If repairability and upgradability are more important to you, then definitely don't buy a device with an Apple M1; absolutely buy a Framework laptop (https://frame.work).
There was a Twitter post doing the rounds which I cannot locate now as my Twitter-search-foo is not strong enough. :-(
To summarise the gist of it: The post was made by someone on the product development team for the newly released MacBook Pro models, they referred to it as multiple years in the making.
So it may well be Apple were dragging their feet for good reason. They knew what was coming and did not want to invest further in Intel related R&D and did not want to end up with warehouses full of Intel-based devices and associated service parts.
Maybe my expectations are different; but my 16" MacBook Pro has a Core i9-9880H, which is a 19Q2 released part - it's not exactly ancient.
Get an x86-64 laptop with a recent fastest Ryzen, install Linux on it. You're gonna see better performance for most practical things than your Mac M1, practically fanless. For half the price.
Performance-per-watt remains Apple's competitive advantage, and therefore battery life is 1.5-2x better there.
I think the question remains, like it had before these new chips: do you want MacOS and the Apple ecosystem? If you do, they're obviously a good choice (even a great choice with these new chips). The less value you get from that ecosystem, the less you will get from these laptops. For nearly everything else, Linux will be the better choice.
And for a laptop chip those are pretty much the two things that matter (not melting your lap and not being crazy loud).
And yes, it's that portability through incredible battery life that is the other advantage. I've owned many Windows laptops and the Dell M3800 that came pre-loaded with Ubuntu, and none of them came close in this regard. All other laptops I have used needed to be plugged in for most use which does severely limit portability. This does not. It also doesn't get hot. The fans never turn on. It's a game changer.