
> I just can't figure out what I'm missing on the "M1 is so fast" side of things.

Two reasons:

1. M1 is a super fast laptop chip. It provides mid-range desktop performance in a laptop form factor with mostly fanless operation. No matter how you look at it, that's impressive.

2. Apple really dragged their feet on updating the old Intel Macs before the transition. People in the Mac world (excluding hackintosh) were stuck on relatively outdated x86-64 CPUs. Compared to those older CPUs, the M1 Max is a huge leap forward. Compared to modern AMD mobile parts, it's still faster but not by leaps and bounds.

But I agree that the M1 hype may be getting a little out of hand. It's fast and super power efficient, but it's mostly on par with mid-range 8-core AMD desktop CPUs from 2020. Even AMD's top mobile CPU isn't that far behind the M1 Max in Geekbench scores.

I'm very excited to get my M1 Max in a few weeks. But if these early Geekbench results are accurate, it's going to be about half as fast as my AMD desktop in code compilation (see Clang results in the detailed score breakdown). That's still mightily impressive from a low-power laptop! But I think some of the rhetoric about the M1 Max blowing away desktop CPUs is getting a little ahead of the reality.




You're missing the fact that Apple didn't release an 11980HK or 5980HX competitor. These are ~30 watt chips that trounce the competition's 65 watt and beyond chips (e.g. the 11980HK and 5980HX).

Hell, this Geekbench result beats a desktop 125 watt 11900K. It beats a desktop 105 watt 5800X.

Apple intentionally played it close to the competition here. They know AMD/Intel reach some performance level X, and they released CPUs that perform no better than about 1.2X. They know they're in the lead, since they're paying TSMC for first dibs on 5nm, but they didn't spend all of that advantage on their first-generation products.

Intel will release Alder Lake and catch up, AMD will reach Zen 4 and catch up, and Apple will just reach into their pocket and pull out an "oh, here's a 45 watt 4nm CPU with two years of microarch upgrades", and the 2022 MBP 16 will have Geekbench scores of ~2200 and ~17000.

There's a de facto industry leader in process technology today -- TSMC. Apple is the only one willing to pay the premium. They also have a much newer microarch design (circa 2006ish) vs AMD and Intel's early-90s designs. That's a 10-20% advantage (very rough ballpark estimate). They're also on ARM, which is another 10-20% advantage for the frontend.

The big deal here is that this isn't going to change until Intel's process technology catches up. And, hell, I bet at that point Apple will be willing to pay enough to take first dibs there as well.

AMD will never catch up, since we know they don't care to compete against Apple laptops and thus won't pay the premium for TSMC's latest node. Intel might not even care enough, and might let Apple have first dibs on their latest node for the mobile/laptop market if Apple is willing not to touch the server market. Whether or not they'd agree on the workstation market - Mac Pro vs 2-socket Xeon workstations - would be interesting.

It might be a long time before it makes sense to buy a non-Apple laptop.


> It might be a long time before it makes sense to buy a non-Apple laptop.

...if you only care about the things that Apple laptops are good at. Almost nobody needs a top-of-the-line laptop to do their tasks. Most things that people want to do with computers can be done on a machine that is five to ten years old without any trouble at all. For example I use a ThinkPad T460p, and while the geekbench scores for its processor are maybe half of what Apple achieves here (even worse for multicore!), it does everything I need faster than I need it.

Battery life, screen quality, and ergonomics are the only things most consumers need to care about. And while Macs are certainly doing well in those categories, there are much cheaper options available that are also good enough.


This is a really useless comparison. A 10 year old laptop will be extremely slow compared to any modern laptop and the battery will have degraded.

The T460 has knock-off battery replacements floating around, but that's not exactly reassuring.

Granted: it works for you (and me, actually - I'm one of those people who likes to use an old ThinkPad, an X201s in my case, though I mostly use a Dell Precision these days), but people will buy new laptops - that's a thing. The ergonomics of a Mac are pretty decent and the support side of it is excellent.

If you don’t need all that power: that’s what the MacBook Air is for, which is basically uncontested at its performance/battery life/weight.

If you need the grunt, most of what the M1 Pro and Max offer is GPU.

You’re going to think it’s Apple shills downvoting you: it’s not likely to be that. The argument against good low wattage cpus is just an inane and boring one.


Not everyone needs 15+ hours of battery life. I'd argue that most people don't. Laptops were extremely popular when the battery life was two hours. Now even older laptops get 5h+.


So what is the trade off you think you’re making?

This is weird. I feel like I'm talking to someone who has a fixed opinion against something. It's good for _everyone_ that these chips are as fast as the best chips on the market, have crazy low power consumption, and cost about the same as the competition when new.

Intel have been overcharging for more than a decade while innovation stagnated.

Honestly, I’m not so hot on Apple (FD I am sending this from an iPhone), I prefer to run Linux on my machines but I would not advocate everyone to do that. Just like I wouldn’t advise people to buy old shoes because it’s cheaper. These machines are compelling even for me, a person who relishes the flexibility of a more open platform — I can not imagine myself not recommending them to someone who just uses office suites or communication software. The M1 is basically the best thing you can buy right now for the consumer; and the cost is equivalent for other business machines such as HPs elitebooks or dells latitude or xps lineup.

And for power users: the only good argument you can make is that your tools don't work on it or you don't like macOS.

If you’re arguing a system to be worse: you’ve lost.


The tradeoff I'm making is money vs capability. My argument is that most people don't need the capabilities offered by brand new, top of the line models. A used laptop that is a couple of years old is, I think, the best choice for most people.


A new M1 laptop is likely to remain a well-specified machine for 4-5 years.

A second-hand laptop is much less likely to manage that.

I think this is a false economy.

“The poor man pays twice”

But regardless: the cost isn't outrageous when compared to the Dell XPS/Latitude or HP EliteBook lines (which are the only laptops I know of designed to last a 5-year support cycle).

If you’re buying a new laptop, I don’t think I could recommend anything other than an M1 unless you don’t like Apple or MacOS. Which is fair.


> I think this is a false economy.

> “The poor man pays twice”

I'm still using an X-series Thinkpad I bought used in 2011. I had another laptop in-between but it was one of these fancy modern machines with no replaceable parts and it turned out 4 GB RAM is not enough for basic tasks.


Also, the M1 runs near silent - or, in the case of the Air, actually silent. I would pay an extra 1000 just for that alone. It turned out the Air was barely more than that in total. Which other laptop does that?


The trade-in value for my 6-year-old MacBook Air is 150 dollars. Old computers depreciate so fast that you can afford to buy ten of them for the price of one new computer.


Looks like they’re selling for more than twice that on ebay.co.uk though. And considering MacBook Airs are ~$1,000 devices, that’s really high.

6 years is also beyond the service life of a(ny) machine.

If I look at 3-year-old MacBook Airs, they’re selling for £600 on eBay, which is, what, half of the full cost. Not great for an already old machine with only a few good years left.

I guess you might save a bit of money using extremely old hardware and keeping it for a while. But this is a really poor argument against an objectively good, evolutionarily improved CPU, in my opinion.


> 6 years is also beyond the service life of a(ny) machine.

That was the case for many decades. I think it’s no longer nearly the case. I’ve got a USB/DP KVM switch on my desk and regularly switch between my work laptop (2019 i9 MBPro) and my personal computer (2014 Dell i7-4790, added SSD and 32GB).

Same 4K screens, peripherals, everything else. I find the Dell every bit as usable and expect to be using it 3 years from now. I wouldn’t be surprised if I retire the MacBook before the Dell.

https://www.cpubenchmark.net/compare/Intel-i9-9980HK-vs-Inte... shows the Mac to have only a slight edge, and that’s a 2-year-old literal top-of-the-line Mac laptop vs a mid-range commodity office desktop from 6 years ago bought with < $200 in added parts. (Much of what users do is waiting on the network or the human; when waiting on the CPU, you’re often waiting on something single-threaded. The Mac is < 20% faster single-threaded.)


Yeah, desktops vs laptops.

20W parts vs 84W parts.

Honestly, I'm not sure what we're discussing anymore. If you don't need (or want) an all round better experience then that's on you.

But don't go saying that these things are too expensive or that the performance isn't there. Because it is.

If Apple had released something mediocre I'd understand this thread, but this is a legitimately large improvement in laptop performance, from GPU, to memory, to storage, to IO, to single threaded CPU performance.

Everyone kept bashing AMD for not beating Intel in single thread.

Everyone bashed Apple for using previous-gen low-TDP Intel chips.

Now Apple has beaten both AMD and Intel in a very good form factor, and people still have a bone to pick.

Please understand that your preference is yours, these are legitimately good machines, every complaint that anyone had about macbooks has been addressed. Some people will just never be happy.


I was commenting only on whether “6 years is beyond the service life of any machine”, which I tried to make clear with my quoting and reply.

I’ve got no bone to pick with Apple and am not making any broad anti-Apple or anti-M1 case. (I decided to [have work] buy the very first MBPro after they un-broke the keyboard and am happy with it.)

Of the five to eight topics you raise after your first two sentences, I said exactly zero of them.


Oh. Sorry, in a very broad sense the support contract on any machine is only 5 years. After which you're basically living on borrowed time.

That's why companies aren't giving out 5+year old laptops/desktops.

(well, I suppose some do, but big companies simply wouldn't)

I assumed, from the whole context of the thread, that you were defending the parent.


As the parent, I'd like to say... my entire argument isn't that top-of-the-line laptops are too expensive for what they do but rather that

(1) older macbooks are identical to mediocre new laptops in performance & price

(2) mediocre laptops are very cheap for what they do

(3) desktops are far more economical when you need power.

If you spec out a laptop to be powerful, lightweight, and as beautiful as an MBP, then you're going to pay a real premium. Paying for premium things is not the default.


> 6 years is also beyond the service life of a(ny) machine.

I'm still on a 2013 MBP which doesn't show any signs of deterioration (except battery life). It's got a Retina display, a fast SSD, and an ok-ish GPU (it can play Civ V just fine).

I'd gladly pay for a guarantee that the machine will not break for the next 10 years - I think it will still be a perfectly usable laptop 10 years from now.


No delamination issues with the screen?


> I guess you might save a bit of money using extremely old hardware and keeping it for a while.

If you get the best and keep it for a while, then even though it won't be bleeding edge anymore, it'll still be solidly in the middle of the pack.

When it comes to computers, mediocre is actually pretty usable. A $600 computer can do pretty much everything, including handling normal scale non-enterprise software development. I didn't really realize it until I went back to school for science, but many projects are bound by the capacity of your mind and not the speed of your CPU.

If I do need computing power, I use a desktop.


6 years? I'm afraid that's just not the case. I have a cheap 2013 Dell laptop that is all I need for Office 2017 and a few other things that just work better in Windows than Linux (Zoom/WebEx/Office/Teams). I paid about $150 for that thing and another $50 to double the RAM. It's fine for what I need, with very little lag. I'll admit I cheated a little bit and put in a 256GB SSD I had lying around.


Trade-in value on electronics is way lower than resale value. I’ve sold a couple 2014/2015 MacBook Pros this year for $700+ and probably could’ve gotten more had I held out.


Was that via ebay? I really want to sell my devices. Since work-from-home became a norm, I'm struggling to find actual reasons to use a laptop.


No, it was to coworkers. Going on eBay completed auctions, I could’ve gotten around $800-1k for similar spec / condition machines.


This is fine and people who have budget limits have options both new and used. It seems like this has been the case for quite a while although things like the pandemic probably impacted the used market (I haven't researched that.)

The thing you are denying is that people have both needs and wants. Wants are not objective, no matter how much you try to protest their existence. There is no rational consumer here.

There are inputs beyond budget which sometimes even override budget (and specific needs!) Apple has created desirable products that even include some slightly cheaper options. The result is that people will keep buying things that they don't really need, but they'll likely still get some satisfaction. I don't suggest that this is great for society, the environment, or many other factors - but, it's the reality we live in.


That's why most people buy the bottom of the line models. The base Macbook Air is the most common purchase, and the best choice for most people.

People buy it brand new because it's small, lightweight, attractive, reliable, long-lasting hardware with very low depreciation and great support, and it's part of Apple's ecosystem. Cost is not the same as value, and the value of your dollar is much greater with these.


> My argument is that most people don't need the capabilities offered by brand new, top of the line models. A used laptop that is a couple of years old is, I think, the best choice for most people.

I think you're correct. But also the majority of people will buy brand new ones either way. And a lot of them will spend much more than they should too.


> Not everyone needs mobile internet. I'd argue that most people don't. Mobile phones were extremely popular when they didn't have any internet connectivity.

I am not mocking you, but the point is that people do not yet know what true mobility for laptops is: being able to leave the power brick behind, not think about whether the battery will last, and use the laptop freely all day, everywhere. This has been impossible until now; there has always been the constraint of "do I really need to open my laptop, what if it dies, where is the power plug?" As soon as the masses realize this is no longer a worry, everyone will want and need 15+ hours of battery life.


Unless those use cases are already covered by other devices that they have. A couple of years ago I might have wanted to use my laptop all day so that I could check the internet, listen to music, etc. But today I can just use my phone for that.


Here's a different take to that: If I refuse to use Windows as my OS on my Dell Inspiron 7559 I have to live with like 2 hours of battery life and a hot lap because power management doesn't work properly. So much as watching a YouTube video under Linux makes it loud and hot.

Same laptop could do 9+ consistently for me in Windows and remained quiet unless I was actually putting load on it.

The reading I've done on the Framework laptops makes it sound like this situation has not improved, or at least not anywhere near enough to compete with Windows... this has effectively ruled them out of the running as a replacement laptop for me.

An M1 based Macbook sure is looking appealing these days. I can live with macOS.

Not everyone needs decent battery life, but some of us do.


> The reading I've done on the Framework laptops makes it sound like this situation has not improved.

And yet I hear otherwise... maybe you're referencing the original review units that didn't run on 5.14.


It's possible, and when actively in the market I'll always revisit such things.


> The argument against good low wattage cpus is just an inane and boring one.

> Not everyone needs 15+ hours of battery life.

Low wattage is not only about battery life. It is mostly about requiring less power for the same work. However you look at it, it is good for everyone. Now that Apple has shown that this can be done, everyone else will do the same.


For me it's the lack of heat and fan noise. I know that's not something that bothers some people, but I thoroughly enjoy the absence of it.


Define "extremely". You get maybe a factor 2 or so, not 10 or 100. Is that nicer? Yes, sure. Is it necessary? No, older stuff is perfectly sufficient for most people.

Also, it is "if you need that power and need it with laptop formfactor". Again, impressive, but desktops/servers work just as well for most people.


Saying a 10 year old laptop is 2x or 4x slower than a new laptop is just a tiny part of the big picture.

I would say that for a light laptop user, the main reasons to upgrade are:

- displays: make a big difference for watching youtube, reading, etc. You can't really compare a 120 Hz XDR retina display with a 10 year old display.

- webcam: makes a big difference when video conferencing with family, etc.

- battery life: makes a big difference if you are on the go a lot. My 10-year-old laptop had, when new, something like 4 hours of battery life. Most new laptops get more than 15h, some over 20h.

- fanless: silent laptops that don't overheat are nice, nicer to have on your lap, etc.

- accelerators: some zoom and teams backgrounds use AI a lot, and perform very poorly on old laptops without AI accelerators. Same for webcams.

If you talk about perf, that's obviously workload dependent, but having 4x more cores, that are 2-4x faster each, can make a big difference. I/O, encryption, etc. has improved quite a bit, which can make a difference if you deal with big files.

Still, you can get most of this new stuff for $1,000 in a MacBook Air with an M1. Seems like a no-brainer for light users that _need_ or _want_ to upgrade. If you don't want to upgrade, that's OK, but saying that you are only missing 2x better performance is misleading. You are missing a lot more.


But that's the thing. You can upgrade all that.

I’ve got a T470 with a brand new 400nits 100% sRGB and like 80% AdobeRGB screen. You can even get 4K screens with awesome quality for the T4xx laptops with 40-pin eDP.

With 17h battery life even on performance mode.

With a new, 1080p webcam.

With 32GB of DDR4-2400 RAM

With 2TB NVMe Storage.

With 95Wh replaceable batteries, of which I can still get brand new original parts and which I can replace while using the laptop.

for a total below $500.

If I upgraded the top-of-the-line T480 accordingly, I'd still be below $800, with performance that's not that far off anymore.


2011 desktop CPUs will perform about half as well as a modern laptop one.

I don’t even think Sandy Bridge (Intel 2011) CPUs support h264 decode - a pretty common requirement these days for Zoom, Slack, Teams, and video streaming sites such as YouTube.


>2011 desktop CPUs will perform about half as well as a modern laptop one.

Maybe, but the fat client-thin client pendulum has swung back in favor of thin clients to the point that CPU performance is generally irrelevant (it kind of has to be, since most people browse the Web with their phones). As for games, provided you throw enough GPU at the problem acceptable performance is still absolutely achievable, but that's not new either.

>a pretty common requirement these days for zoom, slack, teams and video streaming sites such as YouTube

It really isn't: from experience, the hard line for acceptable performance is "anything first-gen Core iX (Nehalem) or later" - Core 2 systems are legitimately too old for things like 4K video playback, however. The limiting factor on older machines like that is ultimately RAM (because Electron), but provided you've maxed out a system with 16GB+ and an SSD, there's no real performance difference between a first-gen Core system and an 11th-gen Core system for all thin-client (read: web) applications.

That said, it's also worth noting that the average laptop really took a dive in speed with Haswell and didn't start getting the 10% year-over-year improvements again until Skylake because ultra-low-voltage processors became the norm after that time and set the average laptop about 4 years back in speed compared to their standard-voltage counterparts: those laptops genuinely might not be fast enough now, but the standard-voltage ones absolutely are.


Yes… only half as well as a modern laptop. Back in the era of Moore’s law, a new machine would be 32x as fast as a ten-year-old model.

That was a real difference.

But in 2021, people still buy laptops half as fast as other models to do the same work. Heck, people go out of their way to buy iPad Pros, which are half as fast as comparable laptops.

Considering that, I think a ten year old machine is pretty competitive as an existing choice.


> iPad pros which are half as slow as comparable laptops.

Uh, what?

iPads use CPUs comparable to the M1 (some even use the M1) and have some of the fastest CPUs for rendering JavaScript on the planet.

I think you’re right that people buy slow laptops. But I think that often comes from a place of technical illiteracy and a limited willingness to spend.

Put simply: they can’t often comprehend the true value of a faster system and opt to be more financially conservative.

Which I fully understand.


iPad Pro 2021: 1118 on Geekbench [1], priced at $2,199.00 fully specced for the 12.9-inch model with keyboard.

MacBook Air 2020: 1733 on Geekbench [2], priced at about $1,849.00 fully specced for the 13-inch model.

That's what I mean: comparable tablets are more expensive than laptops. You have to pay a lot more because it has a dual form factor (like the Microsoft Surface Books).

[1]https://browser.geekbench.com/v5/cpu/10527696

[2]https://browser.geekbench.com/v5/cpu/10527696


Sandy Bridge chips actually do have hardware decoding, but this isn't really a CPU feature, since it's a separate accelerator more akin to an integrated GPU. FWIW, sites like YouTube seem to prefer the most bandwidth-efficient codec regardless of whether hardware decoding is available.


> about half

Exactly, no 100x, no 10x, just half. That is very noticeable but "extreme" sounds like much more.

> I don’t even think Sandybridge (Intel 2011) CPUs support h264 decode-

Correct, but unrelated to CPU speed; it's an additional hardware component. That is a fair argument, just like missing suitable HDMI standards, USB-C, etc. However, again, that is not about speed but features.


I'm still using my 12 (!) year old 17" MacBook Pro as daily work machine. Yes, it's not the fastest computer but for my usage it works. Granted, starting IntelliJ needs some time, but coding still works well (and compiling big codebases isn't done locally).

The one thing that really isn't usable anymore is Aperture/Lightroom. And missing Docker because my CPU is too old (though Docker still works in VMs ...) is a pain.


I'm still using my 2014 16" MacBook Pro too - the SSD has made such a dramatic difference to the performance of machines that I think in general they age much better than previously.

I'm not a heavy user, but that machine can handle Xcode ObjC/C++ programming quite handily.


The keyboard of the ThinkPad is, and always was, great, and the TrackPoint is a bonus on top of it for some of us. The other part of input/output is a good screen. Only after I/O does performance matter.

What I don't like about Apple's devices is the keyboard: they don't provide an equally good actuation point (resistance and feedback), and the keycaps aren't concave (so they don't guide the fingers). The quality problems and the questionable Touch Bar are a problem, too. Lenovo did that before Apple and immediately stopped; they accept mistakes far quicker. I still suspect both Apple and Lenovo tried to save money with a cheap touch bar instead of more expensive keys.

But what about the performance? First, Apple only claims to be fast. What matters are comparisons of portable applications, not synthetic benchmarks. Benchmarks never mattered. Secondly, Apple uses a lot of money (from its customers) to get the best chip quality in the industry from TSMC.

What we have is a choice between all-purpose computers from vendors like Lenovo, Dell, or System76, and computing appliances from Apple. I say computing appliance and not all-purpose computer because I'm not aware of official porting documentation for BSD, Linux, or Windows. More importantly, macOS hinders free application shipment - not as badly as iOS, but it is already a pain for developers; you need to use Homebrew for serious computing tasks.

Finally the money?

Lenovo wants 1,000-1,700 euros for a premium laptop like a ThinkPad X13/T14, with replacement parts for five years, public maintenance manuals, and official support for Linux and Windows.

Apple wants 2,400-3,400 for no maintenance manuals, no publicly available replacement parts, and you must use macOS. Okay, they claim it is faster. Likely it is.

You buy performance with an exorbitant amount of money, but with multiple drawbacks? Currently I'm still using an eight-year-old ThinkPad X220 with Linux: the operating system I want and need, an excellent keyboard, a comfortable TrackPoint, and a good IPS screen. I think the money was well spent - for me :)


> Most things that people want to do with computers can be done on a machine that is five to ten years old without any trouble at all.

Please, browsing the web has always been a pain on old hardware.


You cannot win this argument because for some people Lynx in a terminal is an acceptable way of browsing the web.

Also, people argue in bad faith all the time and a lot of people for whom it isn’t an acceptable way of browsing the web would pretend it is anyway to win an argument.


> Battery life, screen quality, and ergonomics are the only thing most consumers need to care about.

You, Sir, are a very utilitarian consumer. Most consumers care about being in the "In" crowd, i.e., owning the brand that others think is cool. Ideally, it comes in a shiny color. That's it. The exact details are just camouflage.


Yeah, I agree. Implicit in my statement is `for the people in the market Apple is targeting`. You aren't going to buy a $4,000 computer, and you'll be happy with a clunker. I, and many people in my situation, don't feel there are alternative options for $4,000 laptops right now.


Surprised by your experience with an older laptop. I have both a 4-year-old Dell (I forget its name right now; for business use) and a 2015 MBA (which makes it 6 years old now), and they are both getting SLOW. In a vacuum they are fine, they pull through, but they are notably slow machines. (I prefer Windows but can't stand the Dell and find myself going back to the MBA - although YouTube sucks on that machine with Chrome.)

edit: oh the mba battery is still a miracle compared to the dell's, btw


I bought the T460p last year. It replaced a 2009 Macbook pro with spinning rust. That machine was fine too, but it stopped getting updates years ago so I had to replace it. I hope that Linux will support the T460p for longer.


A lot of people don't have a desktop so a very powerful laptop that just works (tm) well for software development and can be lugged around effortlessly is ideal. I haven't owned a desktop myself in years and many engineers I know have been letting their rigs collect dust as the laptop just does it all.

For the price, there isn't something comparable in all respects.


> It might be a long time before it makes sense to buy a non-Apple laptop.

They already don't make sense, as the M1 isn't a "general purpose" CPU like Intel's or AMD's that supports multiple operating systems, and even the development of new ones. Instead, the M1 is a black box that only fully supports macOS - that's a crippling limitation for many of us.


> It might be a long time before it makes sense to buy a non-Apple laptop.

Some people need the ability to repair or upgrade, or the freedom to install any software they need. Not to mention that so far we have only seen comparisons of the M1 "professional line" to a gaming laptop; professionals deal with Quadro cards since the RTX cards are driver-limited for workstation duties, and speaking of gaming on OSX makes no sense.

For me it might be a long time since I can even consider buying another Apple product.


That's a lot of if's and but's that ignores a very important fact - most of Apple's top engineers from their CPU division have already quit Apple ...

Source: Apple CPU Gains Grind To A Halt And The Future Looks Dim As The Impact From The CPU Engineer Exodus To Nuvia And Rivos Starts To Bleed In - https://semianalysis.com/apple-cpu-gains-grind-to-a-halt-and...

If they can't innovate, all they can do is keep increasing the core count ... I don't think that'll help them compete with future AMD / Intel / ARM or RISC-V CPUs in the long term.


It seems Intel has recognized that and hedged their bets by securing the remaining 3nm capacity [1]

1: https://www.techradar.com/news/intel-locks-down-all-remainin...



Frankly, I don't believe this. Who do you give the majority of leading-node capacity to?

1. Biggest long term partner.

2. Someone who competes with you as a manufacturer.

Not a hard decision!


TSMC had long said they were happy to have Intel as a larger customer, including on leading edge nodes… if there was long-term commitment from Intel.


Of course, but that’s not the same as giving them the majority of the leading edge at the expense of Apple, which is what the article implies.


I have an AMD laptop and my wife has an M1. Mine has twice the RAM, twice the SSD, more ports, and is 10% lighter and 20% cheaper.


And your AMD laptop can run many OSes, while your wife will forever be stuck on macOS on the Apple M1 laptop whose hardware is designed to be hard to repair and upgrade ...


I tried to work with the Apple laptop; it's very good in many respects, but I needed a program that can't be installed without an Apple ID. I think there are many programs like that for the Mac, so I passed.


great post, thank you. curious though - you mention only fabrication advantages. what about the in-house design team? meaning - what about the talent?


>It might be a long time before it makes sense to buy a non-Apple laptop.

Can't tell if sarcastic but until you can run Linux on the M1 I don't see any reason to buy an Apple laptop.

It could be 10x faster than the competition but with OSX it would still feel like a net productivity loss, from having to deal with bugs and jank in the software.


It's not just that they're the only one willing to pay the premium: they specifically bought exclusivity on the newest node, and have for years.


> It might be a long time before it makes sense to buy a non-Apple laptop.

A laptop that can do some light gaming is not a niche requirement, and ultimately Apple decided to completely turn its back on that market with the ARM transition.


The architecture isn't to blame; it's the dropped 32-bit support that breaks a fair number of Steam games, and the lack of Vulkan/graphics engine support, both of which are... Apple being Apple.


The architecture is ultimately to blame for "you can't log into Windows to fix Apple being Apple" however.


I'll take frame.work over apple any day.


The fact that this beats AMDs top laptop CPU is actually a huge deal. And that's before considering battery life and thermals.

I'll never buy an Apple computer, but I can't help but be impressed with what they've achieved here.


Don't get me wrong: It's impressive and I have huge respect for it. I also bought one.

However, it would be surprising if Apple's new 5nm chip didn't beat AMD's older 7nm chip at this point. Apple specifically bought out all of TSMC's 5nm capacity for themselves while AMD was stuck on 7nm (for now).

It will be interesting to see how AMD's new 6000 series mobile chips perform. According to rumors they might be launched in the next few months.


This definitely is a factor. Another thing that people frequently overlook is how competitive Zen 2 is with M1: the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance.

Make no mistake, the M1 is a truly solid processor. It has seriously stiff competition though, and I get the feeling x86 won't be dead for another half decade or so. By then, Apple will be competing with RISC-V desktop processors with 10x the performance-per-watt, and once again they'll inevitably shift their success metrics to some other arbitrary number ("The 2031 Macbook Pro Max XS has the highest dollars-per-keycap ratio out of any of the competing Windows machines we could find!")


> This definitely is a factor. Another thing that people frequently overlook is how competitive Zen 2 is with M1: the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance.

It's a bit unfair to compare multicore performance of a chip with 8 cores firing full blast against another 8 core chip with half of them being efficiency cores.

The M1 Max (with 8 performance cores) multicore performance score posted on Geekbench is nearly double the top posted multicore performance scores of the 5800U and 4800U (let alone single core, which the original M1 already managed to dominate).

It'll be interesting to see how it goes in terms of performance per watt, which is what really matters in this product segment. The graphs Apple presented indicated that this new lineup will be less efficient than the original M1 at lower power levels, but that it will be able to hit it out of the park at higher power levels. We'll have to wait for the results from the likes of AnandTech to get the full story though.

Personally, I'd love to see a MacBook 14 SE with an O.G. M1, 32 GB memory, a crappy 720p webcam, and no notch. I'd buy as many of those as they'd sell me.

I'm curious to see how the M1 Pro compares to the M1 Max. They are both very similar processors with the main differences being the size of the integrated GPU and the memory bandwidth available.

https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...

https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...


The M1 Max isn't competing with the 4800U, considering that its starting price is ~10x that of the Lenovo IdeaPad most people will be benching their Ryzen chips with. I don't think it's unfair to compare it with the M1, since it's still more expensive than the aforementioned IdeaPad. Oh, and the 4800U came out 18 months before the M1 Air even hit shelves. Seeing as they're both entry-level consumer laptops, what might you have preferred to compare it with? Maybe a Ryzen 9 that would be more commonplace in $1k+ laptops?


It's hard to compare CPUs based on the price of the products they're packaged in. There are obviously a lot of other bits and bobs that go into them. However, it's worth noting that a MacBook Air is $300 cheaper than the list price for the Ideapad 4800U with an equivalent RAM and storage configuration. So by your logic, is it fair to compare the two? Perhaps, a 4700U based laptop would be a fairer comparison?

The gap between the Ideapad 4800U and the base model 14 inch MacBook Pro is a bit wider, but you'll also get a better display panel and LPDDR5-6400[1] memory.

We'll have to see how the lower specced M1 Pros perform, but it's hardly clear cut.

[1] https://www.anandtech.com/show/17019/apple-announced-m1-pro-...

Edit: I just looked up the price of the cheapest MacBook Pro with a M1 Max processor and it's about 70% more expensive than the Ideapad 4800U. However, with double the memory and much better quality memory and a better display and it seems roughly about 70% better multithreaded performance in Geekbench workloads. Furthermore, you may get very similar performance on CPU bound workloads with the 10 core M1 Pro, the cheapest of which is only 52% more expensive than the Ideapad 4800U.


AMD is already planning not only 20% YoY performance improvements for x86 but now has a 30x efficiency plan for 2025. I think x86 is in it for much longer than a decade.

Intel, OTOH, depends on whether they can gut all the MBAs infesting them.


30x efficiency is specifically for 16 bit FP DGEMM operations, and it is in the context of an HPC compute node including GPUs and any other accelerators or fixed function units.

For general purpose compute, no such luck unfortunately. Performance and efficiency follow process technology to a first order approximation.

https://www.tomshardware.com/news/amd-increase-efficiency-of...


also bear in mind that AMD's standards for these "challenges" have always involved some "funny math", like their previous 25x20 goal, where they considered a 5.02x average gain in performance (10x in CB R15 and 2.5x in 3D Mark 11) at iso-power (same TDP) to be a "32x efficiency gain" because they divided it by idle power or some shit like that.

But 5x average performance gain at the same TDP doesn't mean you do 32x as much computation for the same amount of power. Except in AMD marketing world. But it sounds good!

https://www.anandtech.com/show/15881/amd-succeeds-in-its-25x...

Like, even bearing in mind that that's coming from a Bulldozer derivative on GF 32nm (which is probably more like Intel 40nm), a 5x gain in actual computation efficiency is still a lot, and it's actually even more in CPU-based workloads, but AMD marketing can't help but stretch the truth with these "challenges".
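To make the objection concrete, here's a rough reconstruction of that arithmetic using only the figures quoted above (the ~6.4x "typical use" energy factor is simply back-solved from the headline number, since AMD's exact weighting isn't spelled out here):

    # Hedged reconstruction of the criticized 25x20 math: the headline number
    # appears to be the iso-power performance gain multiplied by a separate
    # idle/"typical use" energy reduction, rather than loaded perf/watt.
    perf_gain_iso_power = 5.02            # average benchmark gain at the same TDP
    implied_energy_drop = 32 / 5.02       # ~6.4x, back-solved from the ~32x claim

    headline_efficiency = perf_gain_iso_power * implied_energy_drop  # ~32x marketing figure
    loaded_perf_per_watt = perf_gain_iso_power                       # ~5x actual gain under load

    print(round(headline_efficiency), round(loaded_perf_per_watt, 2))  # 32 vs 5.02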


To be fair idle power is really important for a lot of use cases.

In a compute focused cloud environment you might be able to have most of your hardware pegged by compute most of the time, but outside of that CPUs spend most of their time either very far under 100% capacity, or totally idle.

In order to actually calculate real efficiency gains you'd probably have to measure power usage under various scenarios though, not just whatever weird math they did here.


That's not really being fair, because the metric is presented to look like traditional perf/watt. And idle is not so important in supercomputers and cloud compute nodes which get optimized to keep them busy at all costs. But even in cases where it is important, averaging between the two might be reasonable but multiplying the loaded efficiency with the idle efficiency increase is ludicrous. A meaningless unit.

I can't see any possible charitable explanation for this stupidity. MBAs and marketing department run amok.


Yep 100% agree with you - see my last sentence. Just trying to clarify that the issue here isn't that idle power consumption isn't important, it's the nonsense math.


Wow that's stupid, I didn't look that closely. So it's really a 5x perf/watt improvement. I assume it will be the same deal for this, around 5-6x perf/watt improvement. Which does make more sense, FP16 should already be pretty well optimized on GPUs today so 30x would be a huge stretch or else require specific fixed function units.


it's an odd coincidence (there's no reason this number would be related, there's no idle power factor here or anything) but 5x also happens to be about the expected gain from NVIDIA's tensor core implementation in real-world code afaik. Sure they advertise a much higher number but that's a microbenchmark looking at just that specific bit of the code and not the program as a whole.

it's possible that the implication here is similar, that AMD does a tensor accelerator or something and they hit "30x" but you end up with similar speedups to NVIDIA's tensor accelerator implementation.


I've seen tensor cores really shining in... tensor operations. If your workload can be expressed in convolutions, and matches the dimensions and batching needs of tensor cores, there's a world of wild performance out there...


Alder Lake is looking really good in leaked benchmarks. I definitely think Intel is down, but not out.

Have to see if they can not only catch up, but keep up.


Where can I find these benchmarks?



> the 4800u stands toe-to-toe with the M1 in a lot of benchmarks, and consistently beats it in multicore performance

I had a Lenovo ThinkPad with a 4750U (which is very close to the 4800U), and the M1 is definitely quite a bit faster in parallel builds. This is supported by Geekbench scores: the 4800U scores 1028/5894, while the M1 scores 1744/7600.

If AMD had access to 5nm, the CPUs would probably be more or less on par. Well, unless you look at things like matrix multiplication, where even a 3700X has trouble keeping up with the Apple AMX co-processor with all of the 3700X's 8 cores fully loaded.
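For what it's worth, that kind of matrix-multiplication comparison is easy to reproduce with a plain single-precision GEMM timing. A minimal sketch: numpy's matmul dispatches to whatever BLAS it is linked against, which on an M1 Mac is commonly Accelerate and, by most accounts, routes through the AMX units, while on a 3700X it runs across the CPU cores via OpenBLAS/MKL.

    # Time one large float32 matrix multiply and report effective GFLOP/s.
    import time
    import numpy as np

    n = 4096
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    start = time.perf_counter()
    c = a @ b                          # sgemm via the system BLAS
    elapsed = time.perf_counter() - start

    flops = 2 * n ** 3                 # multiply-adds in an n x n x n GEMM
    print(f"{elapsed:.3f} s, {flops / elapsed / 1e9:.1f} GFLOP/s")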


During my time, the Mac line has switched CPUs three times; why would Apple not switch to RISC-V if it is really so much better than ARM?


Well... the M1 Max single core score is also beating a 5950x.

But tbh, it doesn't seem the new hotness in chips is single-core CPU performance; it's about how you spend the die space on custom processors, in which case the M1 will always be tailored to Apple's (and presumably Mac users') specific use cases...


My 5950x compiles the Linux kernel 10 seconds faster than a 40-core Xeon box, in like 1/10th the power envelope.

The chances of actually getting only a single core working on something are slim with multitasking; I had to purpose-build stuff - hardware, kernel, etc. - for CPU mining half a decade ago to eliminate thermal throttling and pre-emption on single-threaded miners.

Single-thread performance has been stagnant forever because, with Firefox/Chrome and whatever the new "browser app" hotness is this month, you're going to be using more than 1 core virtually 100% of the time, so why target that? Smaller die features mean less TDP, which means less throttling, which means a faster overall user experience.

I'm glad someone is calling out the M1 performance finally.


You actually get better single-core performance out of a 5900x than a 5950x. The more cores the AMD CPU has, generally the more they constrain the speed it can run at on the top end. In this case, the 5950x is 3.5GHz and the 5900x is 3.7GHz. The 5800x is even slightly faster than that, and there are some Geekbench results that show single-core performance faster than the score listed here, but at that point the multi-core performance isn't winning anymore.

Also, I'm not sure what's up with Geekbench results on MacOS, but here's a 5950x in an iMac Pro that trounces all the results we've mentioned here somehow.[1]

1: https://browser.geekbench.com/v5/cpu/6034871


>that trounces all the results we've mentioned here somehow.[1]

macOS, being Unix-based, has a decent thread scheduler - unlike Windows 10/11, which is based on Windows NT and 32 bits, and never cared about anything other than single-core performance until very, very recently.


If that's true it puts a lot of comparisons into question. That windows multiprocessing isn't as good as MacOS doesn't matter to a lot of people that run neither. There's not a lot of point in using these benchmarks to say something about the hardware if the software above it but below the benchmark can cause it to be off by such a large amount.


Most comparisons have always been questionable. The main reason Apple gets away with charging so much more for similar hardware, and macOS still dominates the productivity market, is that it squeezes so much more performance (and stability) out of equivalent hardware. Just check the Geekbench top multithread scores: Windows starts around the 39th _page_ - and that's for Windows Enterprise.


That's not necessarily true, single core boost on a 5950x is higher than the 5900x (4.9GHz vs 4.8GHz).


That's obviously a Hackintosh or a VM result.


Geekbench score for a 5900x (12 core) against the M1 Max (10 core) score already linked:

https://browser.geekbench.com/v5/cpu/compare/10517471?baseli...

They look to be on par to me. Things will be less murky if and when Apple finally scales this thing up to 100+ watt desktop-class machines (Mac Pro) and AMD moves to the next-gen / latest TSMC process.

In my view, Intel and AMD have more incentive to regain the performance crown than Apple does to maintain it. At some point Apple will go back to focusing on other areas in their marketing.


No it doesn't, you must have misread the benchmark scores.


Geekbench.com says m1 max 1700, 5950x 2200


It wouldn't be a surprise that a CPU that can't run most of the software out there (because that software is x86), has ditched all compatibility, and started its design from scratch can beat last-gen CPUs from competitors. For specific apps and workloads, for which it has accelerators.

But here I am, with a pretty thin and very durable laptop that has a 6 core Xeon in it. It gets hot, it has huge fans, and it completely obliterates any M1 laptop. I don't mean it's twice as fast. I mean things run at 5x or faster vs an M1.

Now, this is a new version of the M1, but it's an incremental 1-year improvement. It'll be very slightly faster than the old gen. By ditching Intel, what Apple did was make sure their pro line - which is about power, not mobility - is no longer a competitor, and never will be. Because when you want a super fast chip, you don't design up from a freaking cell phone CPU. You design down from a server CPU. You know, to get actual work done, professionally. But yeah, I do see their pro battery is 11 hours while mine usually dies at 9. Interesting how I've got my computer plugged in most of the time though...


>Because when you want a super fast chip, you don't design up from a freaking cell phone CPU. You design down from a server CPU.

Is that really true? I don't have any intricate chip knowledge, but it rings false. Whether ARM is coming from a phone background or the Xeon from a server background, what matters in the end is the actual chip used. Maybe phone-derived chips even have an advantage because they are designed to conserve power whereas server chips are designed to harvest every little ounce of performance. IDK a lot about power states in server chips, but it would make sense if they aren't as adapted to rapidly step down power use as a phone chip.

Now, you might be happy with a hot leaf-blower and that's fine. But I would say the market is elsewhere: silent, long-running, light notebooks that can throw around performance if need be. You strike me as an outlier.

Pro laptops should have a beefy CPU, a great screen, a really fast SSD, long battery life, and lots of RAM - which (presumably) your notebook features, but the new M-somethings seemingly do as well. But in the end, people buy laptops so they can use them on their lap occasionally. And I know my HP gets uncomfortably hot; the same was said about the Intel laptops from Apple, I think.

Apple doesn't need to have the one fastest laptop out there, they need a credible claim to punching in the upper performance echelon - and I think with their M* family, they are there.


You actually have it correct. When you start with an instruction set designed to conserve power, you don't get "max power." The server chips were designed with zero power considerations in mind - the solution to "too much power" is simply "slap a house-sized heatsink on it."

>Apple doesn't need to have the one fastest laptop out there

Correct. My complaint, which I have reiterated about 50 times to shiny iPhone idiots on here who don't do any real number crunching for work, is that the industry calls "mid tier" what Apple calls "pro" - Apple is deceiving the consumer with marketing. The new laptops are competition for Dell's Latitude and XPS lines, not their pro lines. Those pro laptops weigh 7lb and have a huge, loud fan exhaust on the back so they can clock at 5GHz for an hour. They have 128GB of RAM - ECC RAM, because if you have that much RAM w/o ECC, you have a high chance of bit errors.

There are many things you can do to speed up your stuff if you waste electricity. The issue is not that Apple doesn't make a good laptop. It's that they're lying to the consumer. As always. Do you remember when they marketed their acrylic little cube mini-desktop? It was "a supercomputer." They do this as a permanent tactic - sell overpriced, underperforming things and lie with marketing. Like using industry-standard terms to describe things not up to that standard.


I’ll happily take my quiet, small, and cool MacBook and number crunch in a data center infinitely more powerful than your laptop. Guess that makes me a shiny iPhone idiot.

Relax, no one is forcing you to use Apple products.


Intel tried to ”design down” their uArch.

Also, there is a TON of pro Mac users. If we define ’pro’ as getting paid for work done on Macs..

Not to mention M1 emulates x86 pretty darn well..


> But here I am, with a pretty thin and very durable laptop that has a 6 core Xeon in it. It gets hot, it has huge fans, and it completely obliterates any M1 laptop. I don't mean it's twice as fast. I mean things run at 5x or faster vs an M1.

Probably not faster than an M1 Pro and definitely not faster than the M1 Max.

Your machine doesn't have a 512-bit wide memory interface running at over 400GB/s.
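For context, the 400GB/s figure follows from simple arithmetic on the memory configuration AnandTech reports for these chips (a 512-bit interface of LPDDR5-6400); a quick sanity check:

    # Peak bandwidth = bus width in bytes * transfer rate.
    bus_bytes = 512 // 8               # 512-bit unified memory interface -> 64 bytes per transfer
    transfers_per_sec = 6400e6         # LPDDR5-6400 runs at 6400 MT/s
    print(bus_bytes * transfers_per_sec / 1e9)   # ~409.6 GB/s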

Does the Xeon processor in your laptop have 192KB of instruction cache and 24MB of L2 cache?

Every ARM instruction is the same size, enabling many instructions to be in flight all at once, unlike the x86-64 architecture where instructions vary in size and you can't have nearly as many instructions in flight at once.

Apples-to-apples: at the same chip frequency, an M1 has higher throughput than a Xeon and most any other x86 chip. This is basic RISC vs. CISC stuff that's been true forever. It's especially true now that increases in clock speeds have dramatically slowed and the only way to get significantly more performance is by adding more cores.

On just raw performance, I'd take the 8 high-performance cores in an M1 Pro vs. the 6 cores in your Xeon any day of the week and twice on Sunday.

And of course, when it comes to performance per watt, there's no comparison and that's really the story here.

> Now, this is a new version of the M1, but it's an incremental 1-year improvement.

If you read AnandTech [1] on this, you'll see this is not the case—there have been huge jumps in several areas.

Incremental would have resulted in the same memory bandwidth with faster cores. And 6 high-performance cores vs. the 4 in the original M1.

Except Apple didn't do that—they doubled the number to 8 high-performance cores and doubled the memory width, etc. There were 8 GPU cores on the original M1 and now you can get up to 32!

Apple stated the Pro and the Max have 1.7x of the CPU performance of Intel's 8-core Core i7-11800H with 70% lower power consumption. There's nothing incremental about that.

> By ditching Intel, what Apple did was make sure their pro line - which is about power, not mobility - is no longer a competitor, and never will be.

Pro can mean different things to different people. For professional content creators, these new laptops are super professional. Someone could take off from NYC and fly all the way to LA while working on a 16-inch MacBook Pro with a 120 Hz mini-LED, 7.7-million-pixel screen that can display a billion colors in 4K or 8K video—battery only.

If you were on the same flight working on the same content, you'd be out of power long before you crossed the Mississippi while the Mac guy is still working. At half the weight of your laptop but a dramatically better display and performance when it comes to video editing and rendering multiple streams of HDR video.

The 16-inch model has 21 hours of video playback which probably comes in handy in many use cases.

Here's a video of a person using the first generation, 8 GB RAM M1 Mac to edit 8K video; the new machines are much more capable: https://youtu.be/HxH3RabNWfE.

[1]: https://www.anandtech.com/show/17019/apple-announced-m1-pro-...


Sorry but this entire post reads (skims) like you're playing top trumps with chip specs.


The main reason it does that is because Apple bought up all the 5nm capacity, though. AMD is still running at 7nm. So, impressive because they could afford to do that, I guess.


You need to consider the larger target group of professionals. It's really the GPU capabilities that blow everything away. If you don't plan to use your MacBook Pro for video/photo editing or 3D modeling, then an M1 Pro with the same 10-core CPU and 16-core Neural Engine has all you need and costs less. Unless I'm missing something, I don't think there's much added benefit from the added GPU cores in your scenario, unless you want to go with the maximum configurable memory.


> GPU capabilities that blow everything away

Compared to previous Macs and iGPUs - an Nvidia GPU will still run circles around this thing.


Not so sure about that "running circles around". While the M1 Max will not beat a mobile RTX 3080 (~same chip as desktop RTX 3070), Apple is in the same ballpark of its performance [1] (or is being extremely misleading in their performance claims [2]).

Nvidia very likely still has the top-end performance lead, but "running circles around this thing" is probably not a fair description. Apple certainly has a credible claim to destroy Ampere in terms of performance per watt - they're just limiting themselves in the power envelope. (It's worth noting that AMD's RDNA2 already edges out Ampere in performance per watt - that's not really Nvidia's strong suit in their current lineup.)

[1]: https://www.apple.com/v/macbook-pro-14-and-16/a/images/overv... - which in the footnote is shown to compare the M1 Max to this laptop with mobile RTX 3080: https://us-store.msi.com/index.php?route=product/product&pro...

[2]: There's a lot of things wrong with in how vague Apple tends to be about performance, but their unmarked graphs have been okay for general ballpark estimates at least.


Definitely impressive in terms of power efficiency, if Apple's benchmarks (vague as they are) come close to accurate. Comparing the few video benchmarks we are seeing from the M1 Max to leading Nvidia cards, I'm still seeing about 3-5x the performance across plenty of workloads (I'd consider anything >2x running circles).

https://browser.geekbench.com/v5/compute/3551790


> an nvidia gpu will still run circles arounnd this thing

Not for loading up models larger than 32GB it wouldn't. (They exist! That's what the "full-detail model of the starship Enterprise" thing in the keynote was about.)

Remember that on any computer without unified memory, you can only load a scene the size of the GPU's VRAM. No matter how much main memory you have to swap against, no matter how many GPUs you throw at the problem, no magic wand is going to let you render a single tile of a single frame if it has more texture-memory as inputs than one of your GPUs has VRAM.
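A minimal sketch of that constraint, using PyTorch purely as a convenient way to query a CUDA GPU's memory (the 48 GB scene size is a made-up example, not a figure from the keynote):

    import torch

    scene_bytes = 48 * 1024 ** 3                                    # hypothetical 48 GB of scene textures
    vram_bytes = torch.cuda.get_device_properties(0).total_memory   # this GPU's VRAM

    if scene_bytes > vram_bytes:
        # No amount of host RAM or extra GPUs helps for a single tile/frame.
        raise RuntimeError("Scene exceeds this GPU's VRAM")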

Right now, consumer GPUs top out at 32GB of VRAM. The M1 Max has, in a sense, 64GB (minus OS baseline overhead) of VRAM for its GPU to use.

Of course, there is "an nvidia gpu" that can bench more than the M1 Max: the Nvidia A100 Tensor Core GPU, with 80GB of VRAM... which costs $149,000.

(And even then, I should point out that the leaked Mac Pro M1 variant is apparently 4x larger again — i.e. it's probably available in a configuration with 256GB of unified memory. That's getting close to "doing the training for GPT-3 — a 350GB model before optimization — on a single computer" territory.)


Memory != Speed

You could throw a TB of memory in something and it won't get any faster or be of any use for 99.99% of use cases.

Large ML architectures don't need more memory, they need distributed processing. Ignoring memory requirements, GPT-3 would take hundreds of years to train on a single high-end GPU (say, a desktop 3090, which is >10x faster than the M1), which is why they aren't trained that way (and why Nvidia has its offerings set up the way they do).
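Rough arithmetic behind "hundreds of years", with every figure a public ballpark estimate rather than a measurement: GPT-3's training compute is commonly cited as ~3.14e23 FLOPs, a desktop RTX 3090 peaks around 71 TFLOPS of dense FP16 tensor math, and real training code rarely sustains more than ~30-40% of peak.

    total_flops = 3.14e23                    # commonly cited GPT-3 training compute
    sustained = 71e12 * 0.35                 # assume ~25 TFLOPS sustained on one 3090
    years = total_flops / sustained / (3600 * 24 * 365)
    print(round(years))                      # ~400 years on a single GPU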

>That's getting close to "doing the training for GPT-3 — a 350GB model before optimization — on a single computer" territory.

Not even close... not by a mile. That isn't how it works. The unified memory is cool, but its utility is massively bottlenecked by the single CPU/GPU it is attached to.


I don't disagree that there are many use cases for which more memory has diminishing returns. But I would disagree that those encompass 99.99% of use cases. Not all problems are embarrassingly-parallel. In fact, most problems aren't embarrassingly parallel.

It's just that we mostly use GPUs for embarrassingly-parallel problems, because that's mostly what they're good at, and humans aren't clever enough by half to come up with every possible way to map MIMD problems (e.g. graph search) into their SIMD equivalents (e.g. matrix multiplication, à la PageRank's eigenvector calculation).

The M1 Max isn't the absolute best GPU for doing the things GPUs already do well. But its GPU is a much better "connection machine" than e.g. the Xeon Phi ever was. It's a (weak) TPU in a laptop. (And likely the Mac Pro variant will be a true TPU.)

Having a cheap, fast-ish GPU with that much memory opens up use cases for which current GPUs aren't suited. In those use cases, this chip will "run circles around" current GPUs. (Mostly because current GPUs wouldn't be able to run those workloads at any speed.)

Just one fun example of a use case that has been obvious for years, yet has been mostly moot until now: there are database engines that run on GPUs. For parallelizable table-scan queries, they're ~100x faster than even in-memory databases like memSQL. But guess where all the data needs to be loaded for those GPU DB engines to do their work?

You'd never waste $150k on an A100 just to host an 80GB database. For that price, you could rent 100 regular servers and set them up as memSQL shards. But if you could get a GPU-parallel-scannable 64GB DB [without a memory-bandwidth bottleneck] for $4000? Now we're talking. For the cost of one A100, you get a cluster of ~37 64GB M1 Max MBPs — that's 2.3TB of addressable VRAM. That's enough to start doing real-time OLAP aggregations on some Big-Ish Data. (And that's with the ridiculous price overhead of paying for a whole laptop just to use its SoC. If integrators could buy these chips standalone, that'd probably knock the pricing down by another order of magnitude.)
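
As a toy sketch of the GPU-table-scan idea (not any particular engine's API - this just assumes CuPy as a stand-in, with the whole column resident in GPU-addressable memory):

    import numpy as np
    import cupy as cp  # assumption: CuPy standing in for a real GPU DB engine

    # One numeric column. The entire column has to fit in GPU-addressable
    # memory for the scan to run at GPU speed - which is exactly the point
    # about VRAM vs. unified memory above.
    prices = cp.asarray(np.random.rand(100_000_000))

    # Parallel filtered aggregate, i.e. SELECT AVG(price) WHERE price > 0.9
    mask = prices > 0.9
    print(float(prices[mask].mean()))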


Again, there is a huge memory bandwidth bottleneck. It's DDR versus GDDR and HBM. It's not even close. The M1 will be slower.


Cerebras is a thing.


And at least an order of magnitude more expensive than an A100, if not two.


I think you put an extra zero in the A100 price.


> I don't disagree that there are many use cases for which more memory has diminishing returns. But I would disagree that those encompass 99.99% of use cases. Not all problems are embarrassingly-parallel. In fact, most problems aren't embarrassingly parallel.

Mindlessly throwing more memory at the problem does hit diminishing returns in 99.99% of use cases, because the extra memory will inflict a very large number of TLB misses during page-fault processing or context switching, which will slow memory access down substantially, unless:

1) the TLB size in each of the L1/L2/… caches is increased; AND

2) the page size is increased, or the page size can be configured in the CPU.

Earlier MIPS CPUs had a software-controlled, very small TLB and were notorious for slow memory access. Starting with the A14, Apple has enlarged an already massive TLB, on top of having increased the page size from 4KB to 16KB:

«The L1 TLB has been doubled from 128 pages to 256 pages, and the L2 TLB goes up from 2048 pages to 3072 pages. On today’s iPhones this is an absolutely overkill change as the page size is 16KB, which means that the L2 TLB covers 48MB which is well beyond the cache capacity of even the A14» [0].
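
Spelling out the coverage arithmetic from that quote:

    # L2 TLB coverage = number of entries x page size (figures from the quote).
    l2_tlb_entries = 3072
    page_size_bytes = 16 * 1024
    print(l2_tlb_entries * page_size_bytes / 2**20)  # 48.0 MB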

It would be interesting to find out whether the TLB is even larger in the M1 Pro/Max CPUs.

[0] https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


I think we're losing perspective here: Apple is not in the business of selling chips, but rather in the business of selling laptops to professionals who would never even need what you describe.


"Right now, consumer GPUs top out at 32GB of VRAM. The M1 Max has, in a sense, 64GB (minus OS baseline overhead) of VRAM for its GPU to use."

We've had AMD APUs for years; you can shove 256GB of RAM in there. But no one cares, because a huge chunk of memory attached to a slow GPU is useless.


No way you could train, but if they could squeeze a bit more RAM onto that machine, you could actually do inference with the full 175B-parameter GPT-3 model (vs. one of its smaller versions, e.g. the 13B-parameter one [1] - if I could get my hands on the parameters for that one, I could run it on my MBP 14 in a couple of weeks!).
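
Rough weight-memory math (a sketch assuming fp16 weights only, ignoring activation memory and other overhead):

    # Memory just to hold the weights for inference, at 2 bytes per parameter.
    bytes_per_param = 2                         # fp16
    print(175e9 * bytes_per_param / 2**30)      # ~326 GiB, well past 64 GB
    print(13e9 * bytes_per_param / 2**30)       # ~24 GiB, fits on a 64 GB M1 Max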

The ML folks are finding ways to consume everything the HW folks can make and then some.

[1] https://arxiv.org/abs/2005.14165


Terrible, terrible take. GPU RAM is about 10x faster than DDR in practical usage, so your workloads can finish and a new batch can be transferred over PCIe before the Apple chip would finish its first pass.


Yes, Nvidia GPUs are a major reason I switched to PC about 3 years ago. That, and I can upgrade RAM and SSDs myself on desktops and laptops. The performance of professional apps like SolidWorks, Agisoft Metashape, and some Adobe products with an Nvidia card and drivers was like night and day compared to a Mac at the time I switched.

Does Apple have any ISV-certified offerings? I can't find one. I suspect Apple will never win over the engineering crowd with the M1 switch... so many variables go into these system builds, and Apple just doesn't have that business model.

Even with these crazy M1s, I still have doubts about Apple winning the movie/creative market. LED walls, Unreal Engine, and Unity are being used for SOOOO much more than just games now. The hegemony of US-centric content creation is also dwindling... budget rigs are a heck of a lot easier to source and pay for than M1s in most parts of the world.


Anything that can run rings around it is unlikely to be running on a battery in a laptop, at least for any reasonable length of time.


Not to mention it will stretch the idea of a "mobile device" beyond reason, with unwieldy weight and thickness.


> Compared to previous Macs and iGPUs - an Nvidia GPU will still run circles around this thing

True, but the point here is that M1 is able to achieve outstanding performance per watt numbers compared to Nvidia or Intel.


Are you really rendering in a cafe such that you need on-the-go GPU performance?


Editing photos, or reviewing them before getting back home so you know whether you need to re-shoot; reviewing 8K footage on the fly; applying color grading to get an idea of what the final might look like, so you know whether you need to re-shoot, re-light, or change something...

There are absolutely use-cases where this is going to enable new ways of looking at content and give more control and ability to review stuff in the field.


Adding to all the use cases listed by other commenters.

Higher performance per watt also implies less heat from the M1. This means that even if someone isn't doing CPU/GPU-heavy tasks, they are still getting better battery life, since power isn't being wasted on spinning up the fans for cooling.

For some perspective, my current 2019 16-inch i7 MBP gets warm even if I leave it idling for 20-30 minutes, and I can barely get ~4 hours of battery life. My wife's M1 MacBook Air stays cool despite being fanless, and lasts the whole day with similar usage.

The point is performance per watt matters a lot in a portable device, regardless of its capabilities.


Try disabling Turbo Boost - if you search around, there's a utility that keeps Turbo off, and even after hours of use the machine stays around 50 degrees.

I'm not associated with that developer. In fact, I bought a copy even for my Mac mini, and I got an M1 Mac mini to avoid all this hot air.

If you run Boot Camp, Windows has a registry setting to disable Turbo, and also a power setting to limit the CPU to 99% (though it still seems to run hot) - I use that for playing VR and FS2020 with an external eGPU.


For the content creator class that needs to shoot/edit/upload daily, while minimizing staff, I can see definite advantages to having a setup which is both performant and mobile.


Honestly, half an hour on the plug every 4 hours of work sounds mobile enough to me.

I worked in the content-creation business making videos, photos, and music, and frankly the need for 15 hours of battery is a (very cool indeed) glamorous IG fantasy.

In reality, even when we were really on the move (I used to follow surfers on several isolated beaches in southern Europe), the main problem was the phones' batteries - using them in hotspot mode seriously reduces their battery life - and we could always turn on the van's engine and use the generator to recharge electronic devices.

Post-processing was done plugged into the generator.

Because it's better to sit comfortably to watch hours of footage or hundreds of pictures.

I can't imagine many other activities that are equally challenging for a mobile setup.


I am often rendering and coding while traveling for work a few months out of the year. Even when I’m home, I prefer spending my time in the forest behind my house, so being able to use Blender or edit videos as well as code and produce music anywhere I want is pretty sweet.


GPUs are no longer special-purpose components; certainly in macOS, the computation capabilities of the GPU are used by all sorts of frameworks and APIs.

It’s not just about rendering any more.


Besides what the other person commented, also consider creatives that travel. Bringing their desktop with them isn't an option.


That's not the right way to look at it. We never did this before because you couldn't get enough performance.

Now you can get performance off a battery for your entire work day for less money than the competition (if reports are to be believed).

In this scenario, would you render things in a cafe? Why not?


> In this scenario, would you render things in a cafe? Why not?

Honestly, as a traveler and sometime digital nomad, the real question is "why would you?"

There is no real reason to work in a cafe, except that it looks cool to some social audience.

Cafes are usually very uncomfortable workplaces, especially if you have to sit for hours looking at the tiniest of details, as one often does when rendering.


Maybe you’re hungry but it’s crunch time.

It’s like when the iPad came out and had a camera. “Who is going to lug around an iPad to take pictures!?!?”

But that’s exactly what I started seeing people do. Pull out iPads and take snaps.


what's your point?


Kind of, yes... but sometimes, after years of working remotely, you may find it's nice to be hanging out where people are doing things, even while you work.


Why would you even buy a laptop if you don't need to be mobile?


Fewer cables is one reason.


That’s a strange reason to buy a laptop. On a desktop you set up the cables one time and you’re done.


There is this thing called an all-in-one desktop.


In a quiet laptop?


If you are trying to do hardcore video editing or modeling, then 'quiet laptop' likely comes second to speed.


I'm looking forward to playing BG3 on mine :)


Is it optimized for Mac?


There's a native ARM binary :) You can choose the x86 or ARM binary when you launch, and they're actually separate apps. That's how they get around the Steam "thou shalt only run x86 applications" mandate.


Well, the Max has double the memory bandwidth of the Pro, but I cannot see workloads other than the ones you mentioned where it would make a significant improvement.

Perhaps ML, but that's all locked up in proprietary CUDA, so it's unlikely.

Perhaps Apple could revive OpenCL from the ashes?


The Pro only supports 2 external displays, which is why I ordered the Max.


ha, man, some people have veeeery different workstation setups than me.

I was offered a large external monitor by my employer, but I turned it down because I didn't want to get used to it, and working in different locations is too critical to my workflow. But I'd love to see how people with more than 2 external displays are actually using them enough to justify the space and cost (not being facetious, I really would).


Three displays here.

First - dedicated to personal Chrome profile, Discord, etc.

Second - dedicated to screen share - VS Code, terminal, JIRA, etc.

Third - Work Chrome Profile for email/JIRA/web browsing, note-taking (shoutout to https://obsidian.md), Slack.

I could certainly get by with fewer monitors, and do so when I am mobile, but I really enjoy the screen real estate when at my desk at home.


6+ hours of Web Conferencing a day.

1. Web Conferencing Content

2. Web Conferencing participants video

3. Screen where I multitask in parallel

4. VDI session to a customer's environment for tests


"I'd love to see how people with more than 2 external displays are actually using them enough to justify the space and cost (not being facetious, I really would)."

My friend uses 5 monitors, and I would too if I wasn't mandated to use an iJoke computer at work.

Teams, a browser, and an IDE mandate a minimum of 3 displays.


You realize most machines won't support 5 displays, right? This isn't exclusive to "iJoke" machines.


Actually, pretty much all desktops do... you just plug in this thing called a high-end graphics card.



I disagree with this perspective. I think it's important to recognize that the M1 is a System on a Chip (SoC), not simply a CPU. Comparing the Apple M1 to "mid-range 8-core AMD desktop CPUs from 2020" is not comparing apples to apples: the M1 Max in the Geekbench score has 10 cores, whereas the AMD desktop CPUs you mention have 8 cores. Accounting for that difference would make it more of an apples-to-apples comparison.

Where the M1 architecture really shines is the collaboration between the CPU, GPU, memory, storage controller, and other components on the SoC. The components all work together within the same package, without ever having to go out over long electrical interconnects on a motherboard, thereby saving power, heat, etc.

What you lose in repairability/upgradability, you gain in performance on every front. That tradeoff is no different than what we chose in our mobile devices. If repairability and upgradability are more important to you, then definitely don't buy a device with an Apple M1; absolutely buy a Framework laptop (https://frame.work).


I really hope Qualcomm/NUVIA (or Nvidia/ARM) release a competitive ARM SoC that will eventually become part of a framework mainboard module.


It would be very interesting to see a consumer-oriented ARM SoC from one of the other major manufacturers. I doubt that will happen, however. Their entire business is based on being a component in a chain of components, not being the entire thing. Although, for example, Intel makes some motherboards, some GPUs, etc., their business isn't based on putting it all together in one fabric for their end clients. They'd have to control/influence more of the OS for that. Apple has it all: full hardware control, full software control, and it's all designed for the mass-market consumer.


M1 was 10 years in the making. So I wouldn't hold my breath for Qualcomm or anyone else.


> Apple really dragged their feet on updating the old Intel Macs before the transition

There was a Twitter post doing the rounds which I cannot locate now, as my Twitter-search-fu is not strong enough. :-(

To summarise the gist of it: the post was made by someone on the product development team for the newly released MacBook Pro models, and they referred to it as multiple years in the making.

So it may well be that Apple was dragging their feet for good reason. They knew what was coming, did not want to invest further in Intel-related R&D, and did not want to end up with warehouses full of Intel-based devices and associated service parts.


I think people are missing the fact that it’s performance + energy efficiency where M1 blows regular x86 out of the water.


>Apple really dragged their feet on updating the old Intel Macs before the transition. People in the Mac world (excluding hackintosh) were stuck on relatively outdated x86-64 CPUs.

Maybe my expectations are different, but my 16" MacBook Pro has a Core i9-9880H, which is a 19Q2-released part - it's not exactly ancient.


Just because the SKU is fairly recent doesn't mean the tech inside is. That 9880H uses Skylake cores, which first hit the market in 2015, and is fabricated on a refined version of the 14 nm process first used for Broadwell in 2014.


But that's Intel's foot-dragging, not Apple's, right? The 10th-generation i9s didn't come out until 20Q2.


And the fact that Intel hasn't updated their CPUs is Apple's fault how, again?


> 1. M1 is a super fast laptop chip. It provides mid-range desktop performance in a laptop form factor with mostly fanless operation. No matter how you look at it, that's impressive.

Get an x86-64 laptop with a recent top-end Ryzen and install Linux on it. You're going to see better performance than your M1 Mac for most practical things, practically fanless, for half the price.

Performance per watt remains Apple's competitive advantage, and therefore battery life is 1.5-2x better there.

I think the question remains, like it had before these new chips: do you want MacOS and the Apple ecosystem? If you do, they're obviously a good choice (even a great choice with these new chips). The less value you get from that ecosystem, the less you will get from these laptops. For nearly everything else, Linux will be the better choice.


If M1 chips have better performance per watt, then they are going to give off less heat. So however "practically fanless" the x86-64 chip you choose is, the M1 is going to be even more fanless.

And for a laptop chip those are pretty much the two things that matter (not melting your lap and not being crazy loud).


That's correct. It's about the ecosystem. If you want to run Linux, you don't care about the ecosystem. That person's definition of "most practical things" is going to be very different. For me, using Reminders, Notes, Photos, Messages, and FaceTime across my iPhone and MBP is very practical. I've witnessed amazing gains in the integration just since I got into Apple machines in 2017. My work bought me an iMac in 2011 and I hated it.

And yes, it's that portability through incredible battery life that is the other advantage. I've owned many Windows laptops as well as the Dell M3800 that came pre-loaded with Ubuntu, and none of them came close in this regard. All other laptops I have used needed to be plugged in for most use, which severely limits portability. This one does not. It also doesn't get hot. The fans never turn on. It's a game changer.


Unfortunately, prices for thin Ryzen laptops with 32GB+ of RAM are quite high at the moment, probably due to the shortage. And about the iGPU: AMD hasn't released a CPU with an RDNA2 iGPU yet. The first general-purpose Ryzen with an RDNA2 iGPU will probably be in the Steam Deck, but that won't be released until next year.

