Apple’s new M1 Pro and M1 Max processors (apple.com)
1052 points by emdashcomma on Oct 18, 2021 | 983 comments



All: let's keep this thread about the processors, and talk about the new MBPs in the other thread: https://news.ycombinator.com/item?id=28908383.

Edit: to read all the 600+ comments in this thread, click More at the bottom of the page, or like this:

https://news.ycombinator.com/item?id=28908031&p=2

https://news.ycombinator.com/item?id=28908031&p=3


This is about the processors, not the laptops, so I'm commenting on the chips instead. They look great, but they appear to be the M1 design, just more of it. Which is plenty for a laptop! But it'll be interesting to see what they'll do for their desktops.

Most of the additional chip area went into more GPU cores and special-purpose video codec hardware. It's "just" two more CPU cores than the vanilla M1, and some of the efficiency cores on the M1 became performance cores. So CPU-bound things like compiling code will be "only" 20-50% faster than on the M1 MacBook. The big wins are for GPU-heavy and codec-heavy workloads.

That makes sense since that's where most users will need their performance. I'm still a bit sad that the era of "general purpose computing" where CPU can do all workloads is coming to an end.

Nevertheless, impressive chips, I'm very curious where they'll take it for the Mac Pro, and (hopefully) the iMac Pro.


> "just" two more cores than the vanilla M1

Total cores, but it goes from 4 "high performance" and 4 "efficiency" to 8 "high performance" and 2 "efficiency". So it should be a more dramatic increase in performance than "20% more cores" would suggest.


Is there a tradeoff in terms of power consumption?


Yes. But the 14" and 16" has larger battery than 13" MacBook Pro or Air. And they were designed for performance, so two less EE core doesn't matter as much.

It is also important to note that, despite the M1 name, we don't know whether the CPU cores are the same as those used in the M1 / A14, or whether they used the A15 design, where the energy-efficient cores saw a significant improvement. The video decoder used in the M1 Pro and Max seems to be from the A15, and the LPDDR5 also implies a new memory controller.


For the A15, AnandTech claims the efficiency cores deliver about 1/3 the performance at 1/10 the power. So we should be looking at (effectively) double the CPU power consumption versus the M1, assuming clock speeds don't increase.

Going from 8 to 16 or 32 GPU cores is another massive power increase.
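
Rough back-of-the-envelope for the CPU-only claim, assuming AnandTech's ratio holds and treating one performance core as one unit of power (illustrative units, not measured figures):

    # P-core = 1.0 unit, E-core = 0.1 unit (assumed from the 1/10 power figure)
    m1     = 4 * 1.0 + 4 * 0.1   # 4P + 4E -> 4.4 units
    m1_pro = 8 * 1.0 + 2 * 0.1   # 8P + 2E -> 8.2 units
    print(m1_pro / m1)           # ~1.86x, i.e. roughly double the CPU power draw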


I wonder if Apple will give us a 'long-haul' mode where the system is locked to only the energy-efficient cores and settings. I think us developer types would love a computer that survives 24 hours on battery.


macOS Monterey, coming out on the 25th, has a new Low Power Mode feature that may do just that. That said, these Macs are incredibly efficient for light use; you may already get 24 hours of battery life with your workload, not counting screen-off time.


Yes, it depends on what you're doing, but if you can watch 21 hours of video, many people will be able to do more than 24 hours of development.


Video playback is accelerated by essentially custom ASIC processing built into the CPU, so it's one of the most efficient things you can do now. Most development workloads are far more compute intensive.


I get about 14-16 hours out of my M1 MacBook Air doing basically full-time development (browser, mail client, Slack, text editor & terminal open, and compiling code periodically).


I know everyone's use case is different, but most of my development workload is 65% typing code into a text editor and 35% running it. I'm not continually pegging the CPU, just intermittently, in which case the existence of low-power cores helps a lot. The supposed JavaScript acceleration in the M1 seems to have really sped up my workloads too.


It might be less computationally expensive, but video playback constantly refreshes the screen, which uses up battery.


Did Electron fix the 60Hz (or rather current screen refresh rate) cursor blinking? Otherwise I don't see many web devs getting a lot of runtime in.


That's actually a curious question: I wonder what the most energy-efficient dev tool is. Can't imagine it's VS Code. Maybe a plain terminal with Vim?


This is true, but it's not the worst case by far. Most video is 24 or 30 fps, so about half the typical 60Hz refresh rate. Still a nice optimization path for video. I'm not sure what effect typing in an editor has on screen refresh, but if the Electron issue is any indication, it's probably complicated.


Huge, apparently. I just spent a bit over $7,000 for a top-spec model and was surprised to read that it comes with a 140 watt power adapter.

Prior to my current M1 MBP, my daily driver was a maxed-out 16" MBP. It's a solid computer, but it functions just as well as a space heater.

And its power adapter is only 100 watts.


The power supply is for charging the battery faster. The new MagSafe 3 system can charge at a higher wattage than USB-C, as per the announcement. The USB-C max wattage is 100 watts, which was the previous limiting factor for battery charging.


USB Power Delivery 3.1 goes up to 240 W (or, I should say, “will go up” as I don’t think anybody is shipping it yet)


USB PD 3.1 delivers up to 240 watts.


That's with 2 connectors right? I have a Dell Precision 3760 and the one connector charging mode is limited to around 90W. With two connectors working in tandem (they snap together), it's 180W.

The connectors never get remotely warm .. in fact under max charge rate they're consistently cool to touch, so I've always thought that it could probably be increased a little bit with no negative consequences.


Single connector, the 3.1 spec goes up to 5A at 48V. You need new cables with support for the higher voltages, but your "multiple plugs for more power" laptop is exactly the sort of device it's designed for.

It was announced earlier this year, so not in wide use yet. PDF warning: https://usb.org/sites/default/files/2021-05/USB%20PG%20USB%2...


I’ve not seen any manufacturer even announce they were going to make a supported cable yet, let alone seen one that does. I might’ve missed it though. This will only make the hell of USB-C cabling worse imho.


The USB Implementers Forum announced a new set of cable markings for USB 4 certified cables that will combine information on the maximum supported data rate and maximum power delivery for a given cable.


Currently I think there aren't any, and the MBP will only get its full 140W capacity using the magsafe cable.


Color me skeptical.

I believe you (and Apple) that the battery can be charged faster, but I am currently rendering video on an M1 MBP. Its power draw: ~20 Watts.

That's a lot of charging overhead.


The 16" has a 100Wh battery, so it needs 100W of power to charge 50% in 30 minutes (their "fast charging"). Add in 20W to keep the laptop running at the same time, and some conversion losses, and a 140W charger sounds just about right.
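
A minimal sketch of that arithmetic (the 90% conversion efficiency is an assumption):

    battery_wh = 100
    charge_w   = (0.5 * battery_wh) / 0.5     # 50% in 0.5 h -> 100 W into the battery
    system_w   = 20                           # keep the laptop running meanwhile
    efficiency = 0.90                         # assumed charger/conversion efficiency
    print((charge_w + system_w) / efficiency) # ~133 W, so a 140 W brick fits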


That actually sounds on the money. I hope you're right!


The other end of the new MagSafe is usb-c, which gets plugged into the power adapter.


Sure, but it's an Apple cable plugging into an Apple socket. They don't have to be constrained by the USB-C specs and could implement a custom high power charging mode. In fact I believe some other laptop manufacturers already do this.


They support fast-charging the battery to 50% in 30 minutes. That's probably the reason for the beefy charger.


I'm surprised the fast charger isn't a separate purchase, a la iPhone.


I’m not particularly surprised. They have little to prove with the iPhone, but have every reason to make every measurable factor of these new Macs better than both the previous iteration and the competition. Throwing in a future-model-upsell is negligible compared to mixed reviews about Magsafe 3 cluttering up reviews they otherwise expect to be positive.


Yeah, I'd prefer that. I converted everything to quality 6ft braided USB-C cables and bought a separate multiport charger. I'll have to sell the original one...


Just in case people missed it - the magsafe cable connects to the power supply via usb-c. So (in theory) there's nothing special about the charger that you couldn't do with a 3rd party charger, or a multiport charger or something like that.


MagSafe was a gimmick for me - it disconnects far too easily, the cables fray in like 9 months, it's only on one side, and it's proprietary and overpriced. Use longer cables and they will never be yanked again. The MBP is heavy enough that even USB-C gets pulled out on a good yank.


I briefly had an M1 MacBook Air, and the thing I hated most about it was the lack of MagSafe. I returned it (needed more RAM) and was overjoyed they brought MagSafe back with these; I'm looking forward to having it on my new 16". You can also still charge through USB-C if you don't care for MagSafe.


Might be a power limitation. I have an XPS 17 which only runs at full performance and charges the battery with the supplied 130W charger. USB C is only specced to 100W. I can still do most things on the spare USB C charger I have.


With the latest USB-PD standard that was announced in May, up to 240W is supported


I have a top-spec 15" MBP from the last release just before the 16". It has a 100W supply, and it's easy for total draw to exceed that (so it pulls from the battery while plugged in) while running heavy things like 3D games. I've seen around 140W peak. So a 150W supply seems prudent.


In the power/performance curves provided by Apple, they imply that the Pro/Max provides the same level of performance at a slightly lower power consumption than the original M1.

But at the same time, Apple isn't providing any hard data or explaining their methodology. I dunno how much we should be reading into the graphs. /shrug


I think you misread the graph. https://www.apple.com/newsroom/images/product/mac/standard/A...

The graph there shows that the new chip is higher power usage at all performance levels.


Not all; it looks like the M1 running full-tilt is slightly less efficient for the same perf than the M1 Pro/Max. (I.e., the curves intersect.)


Yes, but only at the very extreme. It's normal that a high-core-count part at low clocks has higher efficiency (perf/power) at a given performance level than a low-core-count part at high clocks, since power grows super-linearly with clock speed (decreasing efficiency). But notably, they've tuned the clock/power regime of the M1 Pro/Max CPUs so that the crossover region here is very small.


I think this is pretty easy to math out: the M1 has 2x the efficiency cores of these new models. Those cores do a lot of work in measured workloads that will sometimes be scheduled on performance cores instead. The relative performance and efficiency line up pretty well if you assume that a given benchmark is utilizing all cores.


> M1 Pro delivers up to 1.7x more CPU performance at the same power level and achieves the PC chip’s peak performance using up to 70 percent less power

Uses less power


That's compared to the PC chips, not M1. M1 uses less power at same performance levels.

https://www.apple.com/newsroom/images/product/mac/standard/A...


> I'm still a bit sad that the era of "general purpose computing" where CPU can do all workloads is coming to an end.

You'd have to be extremely old to remember that era. Lots of stuff important to making computers work got split off into separate chips away from the CPU pretty early into mass computing, such as sound, graphics, and networking. We've also been sending a lot of compute from the CPU to the GPU of late, for both graphics and ML purposes.

Lately it seems like the trend has been taking these specialized peripheral chips and moving them back into SoC packages. Apple's approach here seems to be an evolutionary step on top of say, an Intel chip with integrated graphics, rather than a revolutionary step away from the era of general purpose computing.


Does an Intel 286 without coprocessor count? C64? I remember those, but I wouldn't say I'm extremely old (just regular old).


The IBM PC that debuted with the 286 was the PC/AT ("Advanced Technology", hah), best known for introducing the AT bus, later called the ISA bus, which led to the proliferation of video cards, sound cards, and other expansion cards that made the PC what it is today.

I'm actually not sure there ever was a "true CPU computer age" where all processing was CPU-bound/CPU-based. Even the deservedly beloved MOS 6502 processor that powered everything for a hot decade or so was considered merely a "micro-controller" rather than a "micro-processor", and nearly every use of the MOS 6502 involved a lot of machine-specific video and memory-management chips. The NES design lasted so long in part because, toward the end, cartridges would sometimes have entirely custom processing chips pulling work off the MOS 6502.

Even the mainframe-era term itself, "Central Processing Unit", has always sort of implied that it works in tandem with other "processing units"; it's just the most central. (In some mainframe designs I think this was even quite literal in the floor plan.) Of course, when your CPU is a massive tower full of boards that make up individual operations - the very opposite of an integrated circuit - it's quite tough to call it a "general purpose CPU" as we imagine them today.


The C64 had the famous SID chip (MOS 6581) for sound. Famous in the chiptune scene at any rate. https://en.wikipedia.org/wiki/MOS_Technology_6581


The C64 was discontinued in 1994 but is technically still available in 2021 as the C64 Mini. ;-)


The C64 Mini runs on an ARM processor, so that doesn't count in this context. Also, I just learned that the C64 had two coprocessors, for sound and graphics (?). So maybe that also doesn't count.


All this talk of "media engines" and GPUs reminds me of the old SID chip in the Commodore 64 and the Amiga with Agnes and Fat Agnes.


Discrete floating-point co-processors and Voodoo graphics cards come to mind.


I think the higher memory ceiling is also a huge win, with support for up to 64GB.


400GB/s available to the CPU cores in unified memory is going to really help certain workloads that are very memory-dominant on modern architectures. Both Intel and AMD are addressing this with ever-increasing L3 cache sizes, but just using attached memory in an SoC has vastly higher memory bandwidth potential, and probably better latency too, especially for work that doesn't fit in ~32MB of L3 cache.


The M1 still uses DDR memory at the end of the day, it's just physically closer to the core. This is in contrast to L3 which is actual SRAM on the core.

The DDR being closer to the core may or may not allow the memory to run at higher speeds due to better signal integrity, but you can purchase DDR4-5333 today whereas the M1 uses 4266.

The real advantage is the M1 Max uses 8 channels, which is impressive considering that's as many as an AMD EPYC, but operates at like twice the speed at the same time.


Just to underscore this, memory physically closer to the cores has improved tRAS times measured in nanoseconds. This has the secondary effect of boosting the performance of the last-level cache since it can fill lines on a cache miss much faster.

The step up from DDR4 to DDR5 will help fill cache misses that are predictable, but since everybody already uses a prefetcher, the net effect of DDR5 is mostly just better efficiency.

The change Apple is making, moving the memory closer to the cores, improves unpredicted cache misses. That's significant.


> Just to underscore this, memory physically closer to the cores has improved tRAS times measured in nanoseconds.

I doubt that tRAS timing is affected by how close or far a DRAM chip is from the core. It's just a RAS command, after all: transfer data from DRAM to the sense amplifiers.

If tRAS has improved, I'd be curious how it was done. It's one of those values that has been basically constant (on a nanosecond basis) for 20 years.

Most DDR3 / DDR4 improvements have been about breaking the chip up into more and more groups, so that Group #1 can be issued a RAS command, then Group #2 can be issued a separate RAS command. This doesn't lower latency; it just allows the memory subsystem to parallelize the requests (increasing bandwidth, but not improving the actual command latency).


The physically shorter wiring is doing basically nothing. That's not where any of the latency bottlenecks are for RAM. If it were physically on-die, like HBM, that might be different. But we're still talking regular LPDDR5 using off-the-shelf DRAM modules. The shorter wiring could potentially improve signal quality, but ground shields do that too. And Apple isn't exceeding any specs here (i.e., it's not overclocked), so above-average signal integrity isn't translating into any performance gains anyway.


> improved tRAS times

Has this been documented anywhere? What timings are Apple using?


Apple also uses massive cache sizes, compared to the industry.

They put a 32 megabyte system level cache in their latest phone chip.

>at 32MB, the new A15 dwarfs the competition’s implementations, such as the 3MB SLC on the Snapdragon 888 or the estimated 6-8MB SLC on the Exynos 2100

https://www.anandtech.com/show/16983/the-apple-a15-soc-perfo...

It will be interesting to see how big they go on these chips.


> Apple also uses massive cache sizes, compared to the industry.

AMD's upcoming Ryzen parts are supposed to have up to 192MB of L3, with "V-Cache" SRAM stacked above the chiplets. Current chiplets are 8-core. I'm not sure if that figure is for a single chiplet, but it's supposedly good for 2TB/s [1].

A slightly bigger chip than an iPhone chip, yes. :) But also, wow, a lot of cache. Having it stacked above rather than built into the core is another game-changing move, since a) your core has more space, and b) you can 3D-stack many layers of cache on top.

This has already been used on their GPUs, where the 6800 and 6900 have 128MB of L3 "Infinity Cache" providing 1.66TB/s. It's also largely how these cards get by with "only" 512GB/s worth of GDDR6 feeding them (256-bit / quad-channel... at 16GT/s). AMD's R9 Fury from mid-2015 had 512GB/s of first-generation HBM, for comparison, albeit via a slow-clocked 4096-bit-wide interface.

Anyhow, I'm also in awe of the speed wins Apple got here from bringing the RAM in close. Cache is a huge, huge help. Plus 400GB/s of main memory bandwidth is truly awesome, and it's neat that either the CPU or the GPU can make use of it.

[1] https://www.anandtech.com/show/16725/amd-demonstrates-stacke...


> The M1 still uses DDR memory at the end of the day, it's just physically closer to the core. This is in contrast to L3 which is actual SRAM on the core.

But they're probably using 8-channels of LPDDR5, if this 400GB/s number is to be believed. Which is far more memory channels / bandwidth than any normal chip released so far, EPYC and Skylake-server included.


It's more comparable to the sort of memory bus you'd typically see on a GPU... which is exactly what you'd hope for on a system with high-end integrated graphics. :)


You'd expect HBM or GDDR6 to be used. But this is seemingly LPDDR5 that's being used.

So it's still quite unusual. It's like Apple decided to take commodity phone RAM and just run many parallel channels of it, rather than using high-speed RAM to begin with.

HBM is specifically designed to be soldered near a CPU/GPU as well. For them to be soldering commodity LPDDR5 instead is kind of weird to me.

---------

We know it isn't HBM because HBM is 1024-bits at lower clock speeds. Apple is saying they have 512-bits across 8 channels (64-bits per channel), which is near LPDDR5 / DDR kind of numbers.

200GBps is within the realm of 1x HBM channel (1024-bit at low clock speeds), and 400GBps is 2x HBM channels (2048-bit bus at low clock speeds).


HBM isn't just "soldered near", it's connected through a silicon interposer rather than a PCB.

Also we know it's not HBM because the word "LPDDR5" was literally on the slides :)

> just make many parallel channels of it

isn't that just how LPDDR is in general? It has much narrower channels than DDR so you need much more of them?


> isn't that just how LPDDR is in general? It has much narrower channels than DDR so you need much more of them?

Well, yeah. But 400GB/s is equivalent to 16 DDR4 channels. It's an absurdly huge amount of bandwidth.


> The DDR being closer to the core may or may not allow the memory to run at higher speeds due to better signal integrity, but you can purchase DDR4-5333 today whereas the M1 uses 4266.

My understanding is that bringing the RAM closer increases the bandwidth (better latency and larger buses), not necessarily the speed of the RAM dies. Also, if I am not mistaken, the RAM in the new M1s is LPDDR5 (I read that, but it did not stay long on screen, so I could be mistaken). Not sure how that compares with DDR4 DIMMs.


The overall bandwidth isn't affected much by the distance alone. Latency, yes, in the sense that the signal literally has to travel further, but that difference is minuscule (like a tenth of a nanosecond) compared to overall DDR access latencies.

Better signal integrity could allow for larger busses, but I don't think this is actually a single 512 bit bus. I think it's multiple channels of smaller busses (32 or 64 bit). There's a big difference from an electrical design perspective (byte lane skew requirements are harder to meet when you have 64 of them). That said, I think multiple channels is better anyway.

The original M1 used LPDDR4 but I think the new ones use some form of DDR5.


Your comment got me thinking, and I checked the math. It turns out that light takes ~0.2 ns to travel 2 inches. But the speed of signal propagation in copper is ~0.6c, so that takes it up to ~0.3 ns. So, still pretty small compared to the overall latencies (~13-18 ns for DDR5), but it's not negligible.

I do wonder if there are nonlinearities that come into play with these bottlenecks. Yes, moving the RAM closer only reduces the latency by ~0.2 ns. But the signal is also taking a third of the time it used to, and maybe they can use that extra time to do 2 or 3 transactions instead. Latency and bandwidth are inversely related, after all!
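
For reference, the propagation numbers above as a quick sketch (the 2-inch trace length and 0.6c figure are the assumptions from this thread):

    c        = 3.0e8              # m/s, speed of light in vacuum
    distance = 2 * 0.0254         # 2 inches in metres
    t_vacuum = distance / c       # ~0.17 ns
    t_copper = t_vacuum / 0.6     # ~0.28 ns at ~0.6c propagation speed
    print(t_vacuum * 1e9, t_copper * 1e9)   # nanoseconds, one way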


Well, you can have high bandwidth and poor latency at the same time -- think of an ultra-wideband radio burst from Earth to Mars -- but yeah, on a CPU with all the crazy co-optimized cache hierarchies and latency hiding, it's difficult to see how changing one part of the system changes the whole. For instance, if you switched 16GB of DRAM for 4GB of SRAM, you could probably cut down the cache-miss latency a lot -- but do you care? If your cache hit rate is high enough, probably not. Then again, maybe chopping the worst case lets you move allocation away from L3 and L2 and into L1, which gets you a win again.

I suspect the only people who really know are the CPU manufacturer teams that run PIN/DynamoRIO traces against models -- and I also suspect that they are NDA'd through this life and the next, and the only way we will ever know about the tradeoffs is when we see them pop up in actual designs years down the road.


DRAM latencies are pretty heinous. It makes me wonder if the memory industry will go through a similar transition to the storage industry's HDD->SSD sometime in the not too distant future.

I wonder about the practicalities of going to SRAM for main memory. I doubt silicon real estate would be the limiting factor (1T1C to 6T, isn't it?) and Apple charges a king's ransom for RAM anyway. Power might be a problem though. Does anyone have figures for SRAM power consumption on modern processes?


>> I wonder about the practicalities of going to SRAM for main memory. I doubt silicon real estate would be the limiting factor (1T1C to 6T, isn't it?) and Apple charges a king's ransom for RAM anyway. Power might be a problem though. Does anyone have figures for SRAM power consumption on modern processes?

I've been wondering about this for years. Assuming the difference is similar to the old days, I'd take 2-4GB of SRAM over 32GB of DRAM any day. Last time this came up people claimed SRAM power consumption would be prohibitive, but I have a hard time seeing that given these 50B transistor chips running at several GHz. Most of the transistors in an SRAM are not switching, so they should be optimized for leakage and they'd still be way faster than DRAM.


> The overall bandwidth isn't affected much by the distance alone.

Testing showed that the M1's performance cores had a surprising amount of memory bandwidth.

>One aspect we’ve never really had the opportunity to test is exactly how good Apple’s cores are in terms of memory bandwidth. Inside of the M1, the results are ground-breaking: A single Firestorm achieves memory reads up to around 58GB/s, with memory writes coming in at 33-36GB/s. Most importantly, memory copies land in at 60 to 62GB/s depending if you’re using scalar or vector instructions. The fact that a single Firestorm core can almost saturate the memory controllers is astounding and something we’ve never seen in a design before.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...
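
A minimal single-core copy-bandwidth sketch in the same spirit (numpy's memcpy path stands in for the hand-written scalar/vector loops AnandTech used, so treat the result as a rough lower bound):

    import time
    import numpy as np

    src = np.ones(64 * 1024 * 1024, dtype=np.float64)   # 512 MB, far larger than any cache
    dst = np.empty_like(src)
    t0 = time.perf_counter()
    np.copyto(dst, src)
    dt = time.perf_counter() - t0
    # Count both the read and the write, as memory-copy benchmarks usually do
    print(2 * src.nbytes / dt / 1e9, "GB/s")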


That just says that the bandwidth between a performance core and the memory controller is great. It's not related to the distance between the memory controller and the DRAM.


L3 is almost never SRAM, it's usually eDRAM and clocked significantly lower than L1 or L2.

(SRAM is prohibitively expensive to do at scale due to die area required).

Edit: Nope, I'm wrong. It's pretty much only Power that has this.


As far as I'm aware, IBM is one of the few chip-designers who have eDRAM capabilities.

IBM has eDRAM on a number of chips in varying capacities, but it's difficult for me to think of Intel, AMD, Apple, ARM, or other chips that have eDRAM of any kind.

Intel had one: the eDRAM "Crystalwell" chip, but that was seemingly a one-off and never attempted again. Even then, it was a second die that was "glued" onto the main chip, not truly embedded eDRAM like IBM's (built into the same process).


You're right. My bad. It's much less common than I'd thought. (Intel had it on a number of chips that included the Iris Pro graphics across Haswell, Broadwell, Skylake, etc.)


But only the Iris Pro 5200 (codename: Crystalwell) had eDRAM. All other Iris Pro were just normal DDR4.

EDIT: Oh, apparently there were smaller 64MB eDRAM on later chips, as you mentioned. Well, today I learned something.


Ha, I still use an intel 5775c in my home server!


I think the chip you are talking about is Broadwell.


Broadwell was the CPU-core.

Crystalwell was the codename for the eDRAM that was grafted onto Broadwell. (EDIT: Apparently Haswell, but... yeah. Crystalwell + Haswell for eDRAM goodness)


L3 is SRAM on all AMD Ryzen chips that I'm aware of.

I think it's the same with Intel too except for that one 5th gen chip.


Good point. Especially since a lot of software these days is not all that cache friendly. Realistically this means we have 2 years or so till further abstractions eat up the performance gains.


> 400GB/s available to the CPU cores in a unified memory

It's not just throughput that counts, but latency. Any numbers to compare there?


We'll have to wait for the AnandTech review but memory latency should be similar to Intel and AMD.


I'm thinking that with that much bandwidth, maybe they will roll out SVE2 with vlen=512/1024 for future M-series chips.

AVX-512 suffers from bandwidth limits on desktop. But now the bandwidth is just huge, and SVE2 is naturally scalable. Sounds like a free lunch?


I thought the memory was one of the more interesting bits here.

My 2-year-old Intel MBP has 64 GB, and 8 GB of additional memory on the GPU. True, on the M1 Max you don't have to copy back and forth between CPU and GPU thanks to integrated memory, but the new MBP still has less total memory than my 2-year-old Intel MBP.

And it seems they only just managed to get to 64GB: the whole processor die is surrounded by memory chips. That's partly why I'm curious to see how they'll scale this. One idea would be to just have several M1 Max SoCs on a board, but that's going to be interesting to program. And getting to 1TB of memory seems infeasible too.


Just some genuine, honest curiosity here: how many workloads actually require 64GB of RAM? For instance, I'm an amateur in the music production scene, and I know that sampling-heavy workflows benefit from being able to load more audio clips fully into RAM rather than streaming them from disk. But 64GB seems a tad overkill even for that.

I guess for me I would prefer an emphasis on speed/bandwidth rather than size, but I'm also aware there are workloads that I'm completely ignorant of.


Can’t answer for music, but as a developer a sure way to waste a lot of RAM is to run a bunch of virtual machines, containers or device simulators.

I have 32GB, so unless I'm careless, everything usually fits in memory without swapping. If you go over, things get slow and you notice.


Same, I tend to get everything in 32GB but more and more often I'm going over that and having things slow down. I've also nuked an SSD in a 16GB MBP due to incredibly high swap activity. It would make no sense for me to buy another 32GB machine if I want it to last five years.


Don’t run Chrome and Slack at the same time :)


So run Slack inside Chrome? :)


How do you track the swap activity? What would you call “high” swap activity?


Open Activity Monitor, select Memory and there's "Swap used" down the bottom


My laptop has 128GB for running several VMs that build C++ code, and Slack.


Another anecdote from someone who is also in the music production scene - 32GB tended to be the "sweet spot" in my personal case for the longest time, but I'm finding myself hitting the limits more and more as I continue to add more orchestral tracks which span well over 100 tracks total in my workflows.

I'm finding I need to commit and print a lot of these. Logic's little meter in the upper right showing RAM, disk I/O, CPU, etc. also shows that it is getting close to memory limits on certain instruments with many layers.

So as someone who would be willing to dump $4k into a laptop where its main workload is only audio production, I would feel much safer going with 64GB knowing there's no real upgrade if I were to go with the 32GB model outside of buying a totally new machine.

Edit: And yes, this does show the typical "fear of committing" issue that plagues all of us making music. It's more of a "nice to have" than a necessity, but I would still consider it a wise investment, at least in my eyes. Everyone's workflow varies and others have different opinions on the matter.


I know the main reason why the Mac Pro has options for LRDIMMs for terabytes of RAM is specifically for audio production, where people are basically using their system memory as cache for their entire instrument library.

I have to wonder how Apple plans to replace the Mac Pro - the whole benefit of M1 is that gluing the memory to the chip (in a user-hostile way) provides significant performance benefits; but I don't see Apple actually engineering a 1TB+ RAM SKU or an Apple Silicon machine with socketed DRAM channels anytime soon.


I wonder about that too.

My bet is that they will get rid of the Mac Pro entirely. Too low ROI for them at this point.

My hope is to see an ARM workstation where all components are standard and serviceable.

I cannot believe we are in the era of glued batteries and soldered SSDs that are guaranteed to fail and take the whole machine with them.


I think we'd probably see Apple use the fast-and-slow-RAM method that old computers used back in the '90s:

16-32GB of RAM on the SoC, with DRAM sockets for usage past the built-in amount.

Though by the time we see an ARM Mac Pro, they might move to stacked DRAM on the SoC. But I'd really think a two-tier memory system would be Apple's method of choice.

I'd also expect a dual SOC setup.

So I don't expect to see that anytime soon.

I'd love to get my hands on a Mac Mini with the M1 Max.


I went for 64GB. I have one game where 32GB is on the ragged edge - so for the difference it just wasn't worth haggling over. Plus it doubled the memory bandwidth - nice bonus.

And unused RAM isn't wasted - the system will use it for caching. Frankly I see memory as one of the cheapest performance variables you can tweak in any system.


> how many workloads actually require 64gb of ram?

Don't worry, Chrome will eat that up in no time!

More seriously, I look forward to more RAM for some of the datasets I work with. At least so I don't have to close everything else while running those workloads.


I run 512GB in my home server, 256GB in my desktop, and 128GB in the small-form-factor desktop that I take with me to the summer cottage.

Some of my projects work with big in-memory databases. Add regular tasks and video processing on top, and there you go.


As a data scientist, I sometimes find myself going over 64 GB. Of course it all depends on how large data I'm working on. 128 GB RAM helps even with data of "just" 10-15 GB, since I can write quick exploratory transformation pipelines without having to think about keeping the number of copies down.

I could of course chop up the workload earlier, or use samples more often. Still, while not strictly necessary, I regularly find I get stuff done quicker and with less effort thanks to it.


Not many, but there are a few that need even more. My team is running SQL servers on their laptops (development and support) and when that is not enough, we go to Threadrippers with 128-256GB of RAM. Other people run Virtual Machines on their computers (I work most of the time in a VM) and you can run several VMs at the same time, eating up RAM really fast.


On a desktop Hackintosh, I started with 32GB, which would die with out-of-memory errors when I was processing 16-bit RAW images at full resolution. Because it was a Hackintosh, I was able to upgrade to 64GB so the processing could complete. That was the only thing running.


What image dimensions? What app? I find this extremely suspect, but it's plausible if you've way undersold what you're doing. A 24-megapixel, 16-bit RAW image would generally pose no problem on a 4GB machine if it's truly the only app running and the app isn't shit. ;)


I shoot timelapse using Canon 5D RAW images; I don't know the exact dimensions off the top of my head, but they're greater than 5000px wide. I then grade them using various programs, ultimately using After Effects to render out full-frame ProRes 4444. After Effects was running out of memory. It would crash and fail to render my file, and it would display an error message that told me specifically it was out of memory. I increased the memory available to the system and the error went away.

But I love the fact that you have this cute little theory to doubt my actual experience to infer that I would make this up.


> But I love the fact that you have this cute little theory to doubt my actual experience to infer that I would make this up.

The facts were suspect, and your follow-up is further proof that I had good reason to be suspicious. First off, the RAW images from a 5D aren't 16-bit. ;) More importantly, the out-of-memory error had nothing to do with the "16-bit RAW files"; it was rendering video from lots of high-res images that was the issue, which is a very different problem, and of course lots of RAM is needed there. Anyway, notice I said "but it's plausible if you've way undersold what you're doing", which is definitely the case here, so I'm not sure why it bothered you.


Yes, Canon RAW images are 14-bit. Once opened in After Effects, you are working in a 16-bit space. Are you just being argumentative for the fun of it?


>> die with out of memory errors when I was processing 16bit RAW images

> Canon RAW images are 14bit

You don’t see the issue?

> Are you just trying to be argumentative for the fun?

In the beginning, I very politely asked a clarifying question, making sure not to call you a liar, as I was sure there was more to the story. You're the one who's been defensive and combative since, and honestly misrepresenting facts the entire time. Were you wrong at any point? Only slightly, but you left out so many details that were actually important to the story for anyone to get any value out of your anecdata. Thanks to my persistence, anyone who wants to learn from your experience now can.


Not the person you're replying to.

>> I was processing 16bit RAW images at full resolution.

>> ...using After Effects to render out full frame ProRes 4444.

Those are two different applications to most of us. No one is accusing you of making things up, just that the first post wasn't fully descriptive of your use case.


Working with video will use up an extraordinary amount of memory.

Some of the genetics stuff I work on requires absolute gobs of RAM. I have a single process that requires around 400GB of RAM that I need to run quite regularly.


I can exhaust my 64GB just opening browser tabs for documentation.


In case your statement is only a slight sarcasm:

Isn’t that just the OS saying “unused memory is wasted memory”? Most of it is likely cache that can easily be evicted with higher memory pressure.


It’s a slight exaggeration, I also have an editor open and some dev process (test runner usually). It’s not just caching, I routinely hit >30 GB swap with fans revved to the max and fairly often this becomes unstable enough to require a reboot even after manually closing as much as I can.

I mean, some of this comes down to poor executive function on my part, failing to manage resources I’m no longer using. But that’s also a valid use case for me and I’m much more effective at whatever I’m doing if I can defer it with a larger memory capacity.


Which OS do you use? It’s definitely not a problem on your part, the OS should be managing it completely transparently.


It is the OS saying “unused memory is wasted memory”, and then every other application thinking they're OS and doing the same.


Since applications have virtual memory, it sort of doesn’t matter? The OS will map these to actual pages based on how many processes are available, etc. So if only one app runs and it wants lots of memory, it makes sense to give it lots of memory - that is the most “economical” decision from both a energy and performance POV.


> So if only one app runs

You answered yourself.


So, the M1 has been out for a while now, after plenty of HN doom and gloom about not being able to put enough memory into them. Real-world usage has demonstrated far less memory pressure than people expected (I don't know why; maybe someone paid attention and can say). The result is that 32GB is a LOT of memory for an M1-based laptop, and I'd expect 64GB to be needed only for very specific workloads.


Measuring memory usage is a complicated topic, and just adding the numbers up overestimates it pretty badly. The different priorities of memory are something like: 1. wired (must be in RAM), 2. dirty (can be swapped), 3. purgeable (can be deleted and recomputed), 4. file-backed dirty (can be written to disk), 5. file-backed clean (can be read back in).

Also note M1's unified memory model is actually worse for memory use not better. Details left as an exercise for the reader.


Unified memory is a performance/utilisation tradeoff. I think the thing is, it's more of an issue at lower memory specs. Not having 4GB (or even 2GB) of dedicated memory on a graphics card in a machine with 8GB of main memory is a much bigger deal than not having 8GB on the graphics card in a machine with 64GB of main RAM.


Or like games, even semi-casual ones. Civ6 would not load at all on my mac mini. Also had to fairly frequently close browser windows as I ran out of memory.


I couldn't load Civ6 until I verified game files in Steam, and now it works pretty perfectly. I'm on 8GB and always have Chrome, Apple Music and OmniFocus running alongside.


Huh, thank you I will try this.


I'm interested to see how the GPU on these performs, I pretty much disable the dGPU on my i9 MBP because it bogs my machine down. So for me it's essentially the same amount of memory.


> but the new MBP still has less total memory

From the perspective of your GPU, that 64GB of main memory attached to your CPU is almost as slow to fetch from as if it were memory on a separate NUMA node, or even pages swapped to an NVMe disk. It may as well not be considered "memory" at all. It's effectively a secondary storage tier.

Which means that you can't really do "GPU things" (e.g. working with hugely detailed models where it's the model itself, not the textures, that take up the space) as if you had 64GB of memory. You can maybe break apart the problem, but maybe not; it all depends on the workload. (For example, you can't really run a Tensorflow model on a GPU with less memory than the model size. Making it work would be like trying to distribute a graph-database routing query across nodes — constant back-and-forth that multiplies the runtime exponentially. Even though each step is parallelizable, on the whole it's the opposite of an embarrassingly-parallel problem.)


That's not how M1's unified memory works.

>The SoC has access to 16GB of unified memory. This uses 4266 MT/s LPDDR4X SDRAM (synchronous DRAM) and is mounted with the SoC using a system-in-package (SiP) design. A SoC is built from a single semiconductor die whereas a SiP connects two or more semiconductor dies. SDRAM operations are synchronised to the SoC processing clock speed. Apple describes the SDRAM as a single pool of high-bandwidth, low-latency memory, allowing apps to share data between the CPU, GPU, and Neural Engine efficiently. In other words, this memory is shared between the three different compute engines and their cores. The three don't have their own individual memory resources, which would need data moved into them. This would happen when, for example, an app executing in the CPU needs graphics processing – meaning the GPU swings into action, using data in its memory.

https://www.theregister.com/2020/11/19/apple_m1_high_bandwid...

These Macs are gonna be machine learning beasts.


I know; I was talking about the computer the person I was replying to already owns.

The GP said that they already essentially have 64GB+8GB of memory in their Intel MBP; but they don't, because it's not unified, and so the GPU can't access the 64GB. So they can only load 8GB-wide models.

Whereas with the M1 Pro/Max the GPU can access the 64GB, and so can load 64GB-wide models.


It seems I misunderstood.


So what's the implication of this?

That Apple's specific use case for the M1 series is basically "prosumer"?

(Sorry if I'm just repeating something obvious.)


Memory is very stackable if needed, since the power per unit area is very low.


How much of that 64 GB is in use at the same time though? Caching not recently used stuff from DRAM out to an SSD isn't actually that slow, especially with the high speed SSD that Apple uses.


Right. And to me, this is the interesting part. There's always been that size/speed tradeoff ... by putting huge amounts of memory bandwidth on "less" main RAM, it becomes almost half-ram-half-cache; and by making the SSD fast it becomes more like massive big half-hd-half-cache. It does wear them out, however.


Why 1TB? 640GB ought to be enough for anything...


Huh, I guess that was as bad an idea as the 640K one.


How much per 8K x 10 bit color, video frame?

Roughly 190GB per minute without sound.

Trying to do special effects on more than a few seconds of 8K video would overwhelm a 64GB system, I suspect.
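
Rough math behind that figure, assuming uncompressed frames with 3 x 10-bit channels at 24 fps (padding and alpha would push it higher):

    pixels      = 7680 * 4320               # 8K frame
    bytes_frame = pixels * 3 * 10 / 8       # ~124 MB per uncompressed frame
    per_minute  = bytes_frame * 24 * 60     # ~179 GB/min; ~190 GB with overhead
    print(bytes_frame / 1e6, per_minute / 1e9)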


You were (unintentionally) trolled. My first post up there was alluding to the legend that Bill Gates once said, speaking of the original IBM PC, "640K of memory should be enough for anybody." (N.B. He didn't[0])

[0] https://www.wired.com/1997/01/did-gates-really-say-640k-is-e...


Video and VFX generally don't need to keep whole sequences in RAM persistently these days because:

1. The high-end SSDs in all Macs can keep up with that data rate (3GB/sec).

2. Real-time video work is virtually always performed on compressed (even losslessly compressed) streams, so the data rate to stream is less than that.


And NVMe at 7.5GB/s is like, "we're almost not even noteworthy," haha. Impressive all around.


It's not that noteworthy, given that affordable Samsung 980 Pro SSDs have been doing those speeds for well over a year now.


The 980 Pro maxes out at 7GB/s.


But it's also been around for at least a year. And upcoming PCIe 5.0 SSDs will up that to 10-14GB/s.

I'm saying Apple might have wanted to emphasise their more standout achievements, such as on the CPU front, where they're likely to be well ahead for a year - the competition won't catch up until AMD starts shipping 5nm Zen 4 CPUs in Q3/Q4 2022.


Apple has well over a 5-year advantage compared to their competition.


That is very difficult to believe, short of sabotage.


Apple has a node advantage.


I'm guessing that's new for the 13" or for the M1, but my 16‑inch MacBook Pro purchased last year had 64GB of memory. (Looks like it's considered a 2019 model, despite being purchased in September 2020).


I don't think this is an apples to apples comparison because of how the new unified memory works


Well technically it is an Apple to Apple comparison in his case.


It all falls apart when the apple contains something non-apple.


Right, the Intel models supported 64GB, but the 16GB limitation on the M1 was literally the only thing holding me back from upgrading.


And the much higher memory bandwidth


Actually no "extra" chip area in comparison to x86 based solution.

They just throw away so much of cruft from the die like PCIE PHYs, and x86 legacy I/O with large area analog circuitry.

Redundant complex DMA, and memory controller IPs are also thrown away.

Clock, and power rails on the SoC are also probably taking less space because of more shared circuitry.

Same with self-test, debug, fusing blocks, and other small tidbits.


This is very interesting, and the first time I've heard of or thought about this. I wonder how much of the power efficiency comes from exactly these things?


PCIe is quite power hungry when it runs at full throttle.

The apparent power-efficiency gains as PCIe went 1.0, 2.0, 3.0... were due to dynamic power control and link sleep.

On top of that, they simply don't haul memory nonstop over PCIe anymore, since data going to/from the GPU simply isn't moving anywhere.


Really curious whether the memory bandwidth is entirely available to the CPU if the GPU is idle. An Nvidia RTX 3090 has nearly 1TB/s of bandwidth, so the GPU is clearly going to use as much of the 400GB/s as possible. Other unified architectures have multiple channels or synchronization to memory, such that no one part of the system can access the full bandwidth. But if the CPU can access all 400GB/s, that is an absolute game changer for anything memory bound. Like 10x faster than an i9, I think?


Not sure if it will all be available, but 400GB/s is way too much for 8 cores to take up. You would need some sort of AVX-512 to hog that much bandwidth.

Moreover, it's not clear how much bandwidth/width the M1 Max's CPU interconnect/bus provides.

--------

Edit: To add some common sense about HPC workloads:

There is a fundamental idea called the memory-access-to-computation ratio. We can't assume a 1:0 ratio, since that test was doing literally nothing except copying.

Typically your program needs serious fixing if it can't achieve 1:4. (This figure comes from a CUDA course, but I think it should be similar for SIMD.)

Edit: Also, a lot of that bandwidth is fed through the cache. Locality will eliminate some orders of magnitude of memory accesses, depending on the code.


A single big core in the M1 could pretty much saturate the memory bandwidth available.

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...


> Not sure if it will be available, but 400GB/s is way too much for 8 cores to take up. You would need some sort of avx512 to hog up that much bandwidth.

If we assume a frequency of 3.2GHz and an IPC of 3 with well-optimized code (which is conservative for the performance cores, since they are extremely wide) and count only the performance cores, we get about 5 bytes per instruction. The M1 supports 128-bit Arm NEON, so peak bandwidth usage per instruction (if I didn't miss anything) is 32 bytes.


Don't know the clock speed, but 8 cores at 3GHz working on 128-bit SIMD is 8 x 3 x 16 = 384GB/s, so we are in the right ballpark. Not that I personally have a use for that =) Oh wait, bloody Java GC might be a use for that. (LOL, FML, or both.)
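
Spelled out, assuming one 16-byte (128-bit) NEON load per core per cycle:

    cores, clock_hz, bytes_per_op = 8, 3.0e9, 16
    print(cores * clock_hz * bytes_per_op / 1e9, "GB/s")   # 384.0 GB/s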


But the classic SIMD problem is matrix-multiplication, which doesn't need full memory bandwidth (because a lot of the calculations are happening inside of cache).

The question is: what kind of problems do people have that need 400GB/s of bandwidth on a CPU? Well, probably none, frankly. The bandwidth is really for the iGPU.

The CPU just "might as well" have it, since it's a system-on-a-chip. CPUs usually don't care too much about main-memory bandwidth, because it's 50ns+ away in latency terms (or ~200 clock ticks). So to get a CPU going at any typical capacity, you basically want to operate out of L1 / L2 cache.

> Oh, wait, bloody Java GC might be a use for that. (LOL, FML or both).

For example, I know you meant the GC as a joke. But if you think about it, a GC is mostly following pointer->next kinds of operations, which means it's mostly latency bound, not bandwidth bound. It doesn't matter that you can read 400GB/s; your CPU is going to read an 8-byte pointer, wait 50 nanoseconds for the RAM to respond, get the new value, and then read a new 8-byte pointer.

Unless you can fix memory latency (and, hint, no one seems to be able to), you'll only be able to hit 160MB/s or so. No matter how high your theoretical bandwidth is, you're latency-locked at a much lower value.
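
The 160MB/s figure falls straight out of that, assuming one dependent 8-byte load per ~50ns round trip:

    pointer_bytes = 8
    dram_latency  = 50e-9                               # seconds per dependent load
    print(pointer_bytes / dram_latency / 1e6, "MB/s")   # 160.0 MB/s, whatever the peak bandwidth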


Yeah, the marking phase cannot be efficiently vectorized. But I wonder if it can help with the compacting/copying phase.

Also, to me the process sounds oddly similar to virtual-memory page-table walking. There is currently a RISC-V J extension drafting group; I wonder what they can come up with.


> The question is: what kind of problems are people needing that want 400GB/s bandwidth on a CPU? Well, probably none frankly.

It is needed for analytic databases, e.g. ClickHouse: https://presentations.clickhouse.com/meetup53/optimizations/


This is some seriously interesting stuff.

But they are demonstrating with 16 cores + 30 GB/s and 128 cores + 190 GB/s, and to my understanding they didn't really mention what type of computational load they performed. So this doesn't sound too ridiculous. The M1 Max is pairing 8 cores with 400GB/s.


Doesn't prefetching data into the cache more quickly assist in execution speed here?


How do you prefetch "node->next" where "node" is in a linked list?

Answer: you literally can't. And that's why this kind of coding style will forever be latency bound.

EDIT: Prefetching works when the address can be predicted ahead of time. For example, when your CPU-core is reading "array", then "array+8", then "array+16", you can be pretty damn sure the next thing it wants to read is "array+24", so you prefetch that. There's no need to wait for the CPU to actually issue the command for "array+24", you fetch it even before the code executes.

Now if you have "0x8009230", which points to "0x81105534", which points to "0x92FB220", good luck prefetching that sequence.

--------

Which is why servers use SMT / hyperthreading, so that the core can "switch" to another thread while waiting those 50-nanoseconds / 200-cycles or so.
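
To illustrate the two access patterns (a Python sketch; interpreter overhead swamps the hardware effect here, so read it as pseudocode for the shape of the problem rather than a benchmark):

    import random

    N = 1 << 20
    data = list(range(N))

    # Streaming: addresses are i, i+1, i+2, ... -- the prefetcher can run ahead of the code
    total = sum(data)

    # Pointer chasing: the next address is unknown until the current load returns,
    # so every step pays the full memory latency
    nxt = list(range(N))
    random.shuffle(nxt)
    i = 0
    for _ in range(N):
        i = nxt[i]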


I don't really know how the implementation of a tracing GC works but I was thinking they could do some smart memory ordering to land in the same cache-line as often as possible.

Thanks for the clarifications :)


Interestingly, early-ish Smalltalk VMs used to keep the object headers in a separate contiguous table.

Part of the problem, though, is that the object-graph walk pretty quickly becomes non-contiguous, regardless of how it's laid out in memory.


But that’s just the marking phase, isn’t it? And most of it can be done fully in parallel, so while not all CPU cores can be maxed out with that, more often than not the original problem itself can be hard to parallelize to that level, so “wasting” a single core may very well be worth it.


You prefetch by having node->next as close as possible to node. You do that by using an allocator that tries very hard to ensure this.

Doesn't work that well for GC but for specific workloads it can work very nicely.


Yeah, that's a fine point.

I always like pointing out Knuth's dancing links algorithm for Exact-covering problems. All "links" in that algorithm are of the form "1 -> 2 -> 3 -> 4 -> 5" at algorithm start.

Then, as the algorithm "guesses" particular coverings, it turns into "1->3->4->5", or "1->4", that is, always monotonically increasing.

As such, no dynamic memory is needed ever. The linked-list is "statically" allocated at the start of the program, and always traversed in memory order.

Indeed, Knuth designed the scheme as "imagine doing malloc/free" to remove each link, but then later "free/malloc" to undo the previous steps (because in exact-covering backtracking, you'll try something, realize it's a dead end, and need to backtrack). Instead of a malloc followed by a later free, you "just" drop the node out of the linked list and later reinsert it. So the malloc/free is completely redundant.

In particular: a given "guess" into an exact-covering problem can only "undo" its backtracking to the full problem scope. From there, each "guess" only removes possibilities. So you use the "maximum" amount of memory at program start, you "free" (but not really) nodes each time you try a guess, and then you "reinsert" those nodes to backtrack to the original scope of the problem.

Finally, when you realize that, you might as well put them all into order for not only simplicity, but also for speed on modern computers (prefetching and all that jazz).

It's a very specific situation, but... it does happen sometimes.
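
A minimal sketch of that unlink/relink trick on a doubly linked node (names are made up for illustration, not Knuth's exact formulation):

    class Node:
        def __init__(self):
            self.left = self.right = self   # circular, doubly linked

    def unlink(x):
        # Drop x from the list; x keeps its own left/right pointers intact
        x.left.right = x.right
        x.right.left = x.left

    def relink(x):
        # Undo the removal in O(1) -- the "dance" that replaces free/malloc on backtrack
        x.left.right = x
        x.right.left = x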


AMD showed with their Infinity Cache that you can get away with much less bandwidth if you have large caches. It has the side effect of radically reducing power consumption.

Apple put 32MB of cache in their latest iPhone. 128 or even 256MB of L3 cache wouldn't surprise me at all given the power benefits.


I suspect the GPU is never really idle.

Even a simple screen refresh - blending, say, 5 layers and outputting to a 4K screen - is about 190Gbit/s at 144Hz.
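
Where that figure comes from, assuming 4 bytes per pixel at 3840x2160:

    layer_bytes = 3840 * 2160 * 4            # ~33 MB per full-screen layer
    per_second  = 5 * layer_bytes * 144      # five layers read per frame at 144 Hz
    print(per_second * 8 / 1e9, "Gbit/s")    # ~191 Gbit/s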


Apple put ProMotion in the built in display, so while it can ramp up to 120Hz, it'll idle at more like 24 Hz when showing static content. (the iPad Pro goes all the way down to 10Hz, but some early sources seem to say 24Hz for these MacBook Pros.) There may also be panel self refresh involved, in which case a static image won't even need that much. I bet the display coprocessors will expose the adaptive refresh functionality over the external display connectors as well.


It only takes a tiny animation (eg. a spinner, pulsing glowing background, animated clock, advert somewhere, etc), and suddenly the whole screen is back to 120 Hz refresh.


That's a useful tip for saving some battery actually, thanks.


Don't know much about the graphics on an M1. Does it not render to a framebuffer? Is that framebuffer spread over all 4 memory banks? Can't wait to read all about it.


The updates from the Asahi Linux team are fantastic for getting insights into the M1 architecture. They've not really dug deep into the GPU yet, but that's coming soon.


When nothing is changing, you do not have to touch the GPU. Yes, without Panel Self Refresh there would be this many bits going to the panel at that rate, but the display engine would keep resubmitting the same buffer. No need to rerender when there's no damage. (And when there is, you don't have to rerender the whole screen, only the combined damage of the previous and current frames.)


Don't Apple iPhones use an adaptive refresh rate nowadays?


Indeed ProMotion is coming to these new MacBooks too.


More memory bandwidth = 10x faster than an i9? This makes no sense to me. Don't clock speed and core count determine the major part of a CPU's performance?


Yes and no. There are many variables to take into account. An example from the early days of the PPC architecture was its ability to pre-empt instructions, which gave performance boosts even in the absence of a higher clock speed. I can't speak specifically about the M1, but there are other things besides clock speed and cores that determine speed.


Surely the shared RAM between CPU and GPU is the killer feature - zero copy and up to 64GB of RAM available for the GPU!


Yes, but it's a double edged sword. It means you're using relatively slow ram for the GPU, and that the GPU takes memory bandwidth away from the CPU as well. Traditionally we've ended up with something that looks like Intel's kinda crappy integrated video.

The copying process was never that much of a big deal, but paying for 8GB of graphics ram really is.


> The copying process was never that much of a big deal

I don't know about that? Texture memory management in games can be quite painful. You have to consider different hardware setups and being able to keep the textures you need for a certain scene in memory (or not, in which case, texture thrashing).


The copying process was quite a barrier to using compute (general purpose GPU) to augment CPU processing and you had to ensure that the work farmed to the GPU was worth the cost of the to/from costs. Game consoles of late have generally had unified memory (UMA) and it's quite a nice advantage because moving data is a significant bottleneck.

Using Intel's integrated video as a way to assess the benefits of unified memory is off target. Intel had a multitude of design goals for their integrated GPU and UMA was only one aspect so it's not so easy to single that out for any shortcomings that you seem to be alluding to.


If you're looking at the SKU with the high GPU core count and 64 Gigs of LPDDR5, the total memory bandwidth (400 GBps) isn't that far off from the bandwidth a discrete GPU would have to its local pool of memory.

You also have an (estimated from die shots) 64 megabyte SRAM system level cache and large L2 and L1 CPU caches, but you are indeed sharing the memory bandwidth between the CPU and GPU.

I'm looking forward to these getting into the hands of testers.


> I'm still a bit sad that the era of "general purpose computing" where CPU can do all workloads is coming to an end.

They’ll still do all workloads, but are optimized for certain workloads. How is that any different than say, a Xeon or EPYC cpu designed for highly threaded (server/scientific computing) applications?


In this context the absence of the 27 inch iMac was interesting. If these SoC were not deemed to be 'right' for the bigger iMac then possibly a more CPU focused / developer focused SoC may be in the works for the iMac?


I doubt they are going to make different chips for prosumer devices. They are going to spread out the M1 pro/max upgrade to the rest of the lineup at some point during the next year, so they can claim "full transition" through their quoted 2 years.

The wildcard is the actual mac pro. I suspect we aren't going to hear about mac pro until next Sept/Oct events, and super unclear what direction they are going to go. Maybe allowing config of multiple M1 max SOCs somehow working together. Seems complicated.


On reflection I think they've decided that their pro users want 'more GPU not more CPU' - they could easily have added a couple more CPU cores but it obviously wasn't a priority.

Agreed that it's hard to see how designing a CPU just for the Mac Pro would make any kind of economic sense but equally struggling to see what else they can do!


There is only so much you can do improving the CPU.

On the other hand, we still haven't really reached the limit for GPUs.


I suppose it’s the software - most CPU software is primarily designed to run on a single / few cores. GPU code is massively parallel by design.


I think we will see an iMac Pro with incredible performance. Mac Pros, maybe in the years to come; it's a really high-end product to release new specs for. Plus, if they release it with M1 Max chips, what would be the difference? A nicer case and more upgrade slots? I don't see the advantage in power. I think Mac Pros will be upgraded maybe 2 years from now.


The real question for the Mac Pros is whether the GPU is changeable or not, and what that means for shared memory.


Nah, it'll be the same M1 Max. Just wait a few months like with the 24" iMac.


You're probably right. Maybe they have inventory to work through!


They might also have supply constraints on these new chips. I suspect they are going to sell a lot of these new MacBook Pros


They also have a limited headcount and resources so they wouldn't want to announce M1x/pro/max for all machines now and have employees be idle for the next 3 months.

Notebooks also have a higher profit margin, so they sell them to those who need to upgrade now. The lower-margin systems like Mini will come later. And the Mac Pro will either die or come with the next iteration of the chips.


The Mac Pro might be blocked, or still progressing, through design changes more so than chip changes.


Yeah, M1 Pro/Max on iMac 30” likely in 2022 H1. Mini also, I imagine.


Yup. Once I saw the 24" iMac I knew the 27" had had it's chips. 30" won't actually be much bigger than the 27" if the bezels shrink to almost nothing - which seems to be the trend.


I wish 34" iMac Pro M1 Max.


I'm going to sound dumb for this, but how difficult do any of you think it would be to make a computer with 2 M1 chips? Or even 4?


They're not meant to go together like that -- there's not really an interconnect for it, or any pins on the package to enable something like that. Apple would have to design a new Mx SoC with something like that as an explicit design goal.


I think the problem would be how one chip can access the memory of the other one. The big advantage in the M1xxxx is the unified memory. I don't think the chips have any hardware to support cache coherency and so on spanning more than one chip.


You would have to implement single system image abstraction, if you wanted more than a networked cluster of M1s in a box, in the OS using just software plus virtual memory. You'd use the PCIe as the interconnect. Similar has been done by other vendors for server systems, but it has tradeoffs that would probably not make sense to Apple now.

A more realistic question would be what good hw multisocket SMP support would look like in M1 Max or later chips, as that would be a more logical thing to build if Apple wanted this.


The rumor has long been that the future Mac Pro will use 2-4 of these “M1X” dies in a single package. It remains to be seen how the inter-die interconnect will work / where those IOs are on the M1 Pro/Max die.


The M1 is a system-on-a-chip so AIUI there's a bunch of stuff on there that you wouldn't want two of.


That's not a problem, you just turn off what's redundant. AMD did the same thing with their original Threadripper/EPYC models.


I think the issue is OS management of tasks to prevent cpu 1 from having to access memory of cpu 2.


Apple are already planning multi-die systems, judging by their drivers. So yes. Rumors are they have a SKU with two M1 Max dies.


The way I interpreted it is that it's like lego so they can add more fast cores or more efficiency cores depending on the platform needs. The successor generations will be new lego building blocks.


If we're going to keep freedom in computing, let's hope "general purpose computing" is the distinction that remains for exactly that purpose.

Accelerators/Memory are just being brought on die/package here, which has been happening for a while. Integrated memory controllers come to mind.


Edit: I was wrong! Thanks for pointing it out

Not exactly. M1 CPU, GPU, and RAM were all capped in the same package. The new ones appear to be more of a single board soldered onto the mainboard, with discrete CPU, GPU, and RAM packages each capped individually, if their "internals" promo video is to be believed (and it usually is an exact representation of the shipping product): https://twitter.com/cullend/status/1450203779148783616?s=20

Suspect this is a great way for them to manage demand and various yields by having 2 CPUs (or one, if the difference between Pro/Max is yield on memory bandwidth) and discrete RAM/GPU components.


The CPU and GPU are one die; you're looking at RAM chips on each side of the package. The M1 also had the RAM separate but on-package.

M1: https://d3nevzfk7ii3be.cloudfront.net/igi/ZRQGFteQwoIVFbNn


I know nothing about hardware, basically. Do Apple’s new GPU cores come close to the capabilities of discrete GPUs like what are used for gaming/scientific applications? Or are those cards a whole different thing?


1. If you're a gamer, this seems comparable to a 3070 Laptop, which is comparable to a 3060 Desktop.

2. If you're an ML researcher you use CUDA (which only works on NVIDIA cards); they have basically a complete software lock unless you want to spend some undefined number of hundreds of hours fixing and troubleshooting compatibility issues.


There has been an M1 fork of TensorFlow almost since the chip launched last year. I believe Apple did the legwork. It's a hoop to jump through, yes, and no one's training big image models or transformers with this, but I imagine students or someone sandboxing a problem offline would benefit from the increased performance over CPU only.

https://blog.tensorflow.org/2020/11/accelerating-tensorflow-...


Seems like the long-term game here, my dear all. Have AMD sponsor that kind of activity. Or Intel. …


Could we use M1 chips on non-apple boards? If yes, I wish Apple releases these for non mac os consumption. Eg. Running linux servers in the cloud.


Not a great fit. Something like Ampere altra is better as it gives you 80 cores and much more memory which better fits a server. A server benefits more from lots of weaker cores than a few strong cores. The M1 is an awesome desktop/laptop chip and possibly great for HPC, but not for servers.

What might be more interesting is to see powerful gaming rigs built around these chips. They could have built a kickass game console with these chips.


Why they didn't lean into that aspect of the Apple TV still mystifies me. A Wii-mote style pointing device seems such a natural fit for it, and has proven gaming utility. Maybe patents were a problem?


Why? There are plenty of server oriented ARM platforms available for use (See AWS Graviton). What benefit do you feel Apple’s platform gives over existing ones?


The Apple cores are full custom, Apple-only designs.

The AWS Graviton are Neoverse cores, which are pretty good, but clearly these Apple-only M1 cores are above-and-beyond.

---------

That being said: these M1 cores (and Neoverse cores) are missing SMT / Hyperthreading, and a few other features I'd expect in a server product. Servers are fine with the bandwidth/latency tradeoff: more (better) bandwidth but at worse (higher) latencies.


My understanding is that you don't really need hyperthreading on a RISC CPU because decoding instructions is easier and doesn't have to be parallelised as with hyperthreading.


The DEC Alpha had SMT on their processor roadmap, but it was never implemented as their own engineers told the Compaq overlords that they could never compete with Intel.

"The 21464's origins began in the mid-1990s when computer scientist Joel Emer was inspired by Dean Tullsen's research into simultaneous multithreading (SMT) at the University of Washington."

https://en.wikipedia.org/wiki/Alpha_21464


Okay, the whole RISC thing is stupid. But ignoring that aspect of the discussion... POWER9, one of those RISC CPUs, has 8-way SMT. Neoverse E1 also has SMT-2 (aka: 2-way hyperthreading).

SMT / Hyperthreading has nothing to do with RISC / CISC or whatever. It's just a feature some people like or don't like.

RISC CPUs (Neoverse E1 / POWER9) can perfectly do SMT if the designers wanted.


Don’t think that is entirely true. Lots of features which exist on both RISC and CISC CPUs have different natural fit. Using micro-ops e.g. on a CISC is more important than in RISC CPU even if both benefit. Likewise pipelining has a more natural fit on RISC than CISC, while micro-op cache is more important on CISC than RISC.


I don't even know what RISC or CISC means anymore. They're bad, non-descriptive terms. 30 years ago, RISC or CISC meant something, but not anymore.

Today's CPUs are pipelined, out-of-order, speculative, superscalar, (sometimes) SMT, SIMD, multi-core with MESI-based snooping for cohesive caches. These words actually have meaning (and in particular, describe a particular attribute of performance for modern cores).

RISC or CISC? useful for internet flamewars I guess but I've literally never been able to use either term in a technical discussion.

-------

I said what I said earlier: this M1 Pro / M1 Max, and the ARM Neoverse cores, are missing SMT, which seems to come standard on every other server-class CPU (POWER9, Intel Skylake-X, AMD EPYC).

Neoverse N1 makes up for it with absurdly high core counts, so maybe it's not a big deal. Apple M1 however has very small core counts; I doubt that the Apple M1 would be good in a server setting... at least not with this configuration. They'd have to change things dramatically to compete at the higher end.


Today RISC just means an ISA is fixed-length load-store. No uarch implications.


Here is one meaning:

Intel microcode updates added new machine opcodes to address spectre/meltdown exploits.

On a true RISC, that's not possible.


https://www.zdnet.com/article/meltdown-spectre-ibm-preps-fir...

Or are you going to argue that the venerable POWER-architecture is somehow not "true RISC" ??

https://www.ibm.com/support/pages/checking-aix-protection-ag...


If it's got microcode, it's not RISC.


POWER9, RISC-V, and ARM all have microcoded instructions. In particular, division, which is very complicated.

As all CPUs have decided that hardware-accelerated division is a good idea (and in particular: microcoded, single-instruction division makes more sense than spending a bunch of L1 cache on a series of instructions that everyone knows is "just division" and/or "modulo"), microcode just makes sense.

The "/" and "%" operators are just expected on any general purpose CPU these days.

30 years ago, RISC processors didn't implement divide or modulo. Today, all processors, even the "RISC" ones, implement it.


Hyperthreading has nothing to do with instruction decode. It's for hiding memory latency. The SPARC T line is 8-way threaded.


It's slightly more general than that: hiding inefficient use of functional units. A lot of the time that's memory latency causing the inability to keep the FUs fed, like you say, but I've seen other reasons, like a wide but diverse set of FUs that has trouble applying to every workload.


The classic reason quoted for SMT is to allow the functional units to be fully utilised when there are instruction-to-instruction dependencies - that is, the input of one instruction is the output of the previous instruction. SMT allows you to create one large pool of functional units and share them between multiple threads, hopefully increasing the chance that they will be fully used.


Well, tons; there isn't another ARM core that can match a single M1 Firestorm, core to core. Heck, only the highest-performance x86 cores can match a Firestorm core. And that's just raw performance, not even considering power efficiency. But of course, Apple's not sharing.


Linux on M1 Macs is under development, running, last I heard.

https://9to5mac.com/2021/10/07/linux-is-now-usable-as-a-basi...


Wasn't there a rumor that AMD was creating ARM chips? It will be great if we have ARM versions of EPYC chips.


I'd love to see someone do a serious desktop RISC-V processor.


They were, but have stopped talking about that for years. The project is probably canceled; I've heard Jim Keller talk about how that work was happening simultaneously with Zen 1.


AMD's equivalent of the Intel Management Engine (ME) is based on ARM, I think.

Intel used to use "ARC" cores, but recently converted to and i486.


This is the ARM core that lives inside AMD Zen:

https://en.m.wikipedia.org/wiki/AMD_Platform_Security_Proces...


Anyone know what the optimal performance-per-dollar setup is for this M1 Pro?


"Apple’s Commitment to the Environment"

> Today, Apple is carbon neutral for global corporate operations, and by 2030, plans to have net-zero climate impact across the entire business, which includes manufacturing supply chains and all product life cycles. This also means that every chip Apple creates, from design to manufacturing, will be 100 percent carbon neutral.

But what they won't do is put the chip in an expandable and repairable system so that you don't have to discard and replace it every few years. This renders the carbon-neutrality of the chips meaningless. It's not the chip, it's the packaging that is massively unfriendly to the environment, stupid.


Apple, the company that requires the entire panel to be replaced by design when a 6 dollar display cable malfunctions, is proud to announce its latest marketing slogan for a better environment.


Just because you're not getting that panel back doesn't mean it's destroyed and wasted. I figure that these policies simplify their front-line technician jobs, getting faster turnaround times and higher success rates. Then they have a different department that sorts through all the removed/broken parts, repairing and using parts from them. No idea if this is what they actually do, but it would be the smart way to handle it.


It seems like they would rather shred perfectly good parts than let third-party repair shops use them to help people at sane prices:

https://www.washingtonpost.com/technology/2020/10/07/apple-g...

https://www.vice.com/en/article/yp73jw/apple-recycling-iphon...


So because a company stole devices to sell them makes Apple the bad guy?


It's possible for both companies to be in the wrong.

The recycling center shouldn't have resold the devices (which is, as you point out, effectively theft). However, Apple should not be shredding hundreds of thousands of otherwise usable devices.


Apple does nothing to improve front line technician procedures. They aren't even an engineering factor. If you happen to be able to replace something on an Apple product, it's only because the cost-benefit ratio wasn't in favor of making that part hostile to work with.

Apple puts 56 screws in the Unibody MBP keyboards. They were practically the pioneer of gluing components in permanently. They don't care about technicians. Not even their own. They have been one of the leaders of the anti-right-to-repair movement from day one.


Oh hey they went back to screws on the keyboards? that's nice, they used to be single use plastic rivets, so at least you can redo that.

Also Apple's glue isn't usually that bad to work with. Doesn't leave much residue, so as long as you know where to apply the heat you can do a clean repair and glue the new component back in.


I think apple might have learned a painful lesson about keyboard repairability… with all those warranty repairs they are *still* stuck with.


> Apple does nothing to improve front line technician procedures.

I'm not a fan of planned obsolescence and waste, but this is clearly wrong. They've spent loads of engineering effort designing a machine for their stores that can replace, reseal, and test iPhone screen replacements out back.


Sounds more like using a sword where a knife is needed.


So what’s your proposal? How big would a “phone” be with all those features that an iphone pro has? I am by no means an apple fanboy, but the same way a modern car engine can’t just be tweaked the way it was 50 years ago due to all the miniaturizations that are in large part due to efficiency gains, the same is just as true of phones.

But at the same time, a single chip with everything included will also make these phones pretty sturdy, where it either fails completely, or remain working for long years.


An interesting comparison is Formula 1 cars: peak performance and parts that can be changed in seconds while still running. Even average modern cars have hundreds of parts that a lay person can reach with simple tools. Apple are obviously making a trade-off (close it down for reduced size and better weather/water sealing), but then they don't get to pretend to be an environmentally conscious company, as that is antithetical to their design goals.


That's kind of a bad argument, considering that an F1 engine will be absolutely _fucked_ and needs to be thrown away after 2000 km.

That said, the previous poster's argument is terrible and gluing a phone is not what allows """peak performance"""


Gluing is the least of the problem - as others mentioned, it can be easily resealed, and I very much prefer my phone surviving a bit of water.


It was more a comment about their ability to have parts replaced, but you're right that the analogy has many flaws.


The reasoning was to make the device as thin as possible according to Verge iirc. The cable degrades because it's too short and can't be replaced.

Says it all pretty much.


Yes, I want a device as thin as what my wallet is going to be after its repairs


Even in school we were taught the lessons to better and sustainable environmental habits: Reduce, Reuse, Recycle

In the very same order.

So, no sir. Apple isn't the environment friendly company that they claim to be. So much money, and so little accountability.


Obsession with emissions has really made people start to miss the forest for the trees.


Emissions affect everyone on the planet, no matter where they happen. But polluting the ground or water only happens in China, so a lot of Americans that care about emissions don't care about the other types of pollution, because it doesn't affect them.


You would be surprised just how much food you eat has been grown in China using polluted land and water.

It's not so much fresh vegetables, but ingredients in other types of food -- especially the frozen fruit, vegetables and farmed seafood that finds its way into grocery store and restaurant supply chains.


> But polluting the ground or water only happens in China,

Do you have any idea how many superfund sites are in Silicon Valley alone?


Yes, one of them is under my house. But that’s not what my comment was about.

I was pointing out the mindset of people who don’t care about ground pollution of their products because their products are made elsewhere.


> But polluting the ground or water only happens in China

I see you've never been to Houston.


Could you elaborate on what you mean by this?


Not OP. Personally, I've had Dell, HP and Sony laptops. But the macs have been the longest lasting of them all. My personal pro is from 2015.

It has also come to a point where none of the extensions makes sense for me. 512GB is plenty. RAM might be an issue - but I honestly don't have enough data on that. The last time I had more than 16GB RAM was in 2008 on my hand built desktop.

As long as the battery can be replaced/fixed - even if it's not user serviceable, I'm okay with that. I'd guess I'm not in the minority here. Most people buy a computer and then take it to the store even if there's a minor issue. And Apple actually shines here. I have gotten my other laptop serviced - but only in unauthorized locations with questionable spare parts. With Apple, every non-tech savvy person I know has been able to take to an Apple store at some point and thereby extend the life.

That's why I believe having easily accessible service locations does more to device longevity than being user-serviceable.

(In comparison, HTC wanted 4 weeks to fix my phone, plus 1 week either way in shipping time, with me paying shipping costs in addition to the cost of repair. Of course, I abandoned the phone entirely rather than pay to fix it.)

We could actually test this hypothesis - if we could ask an electronics recycler on the average age of the devices they get by brand, we should get a clear idea on what brands actually last longer.


I'd much rather have the ability to fix a device myself than be locked into a vendor controlled repair solution. I've been able to extend the life of many devices I've had (the earliest from 2010) through repairs like dust removal, RAM upgrades and thermal paste reapplication.

Also worth noting that some people might be taking laptops to repair shops precisely because they are not user serviceable. Companies like framework are trying to change this with well-labelled internals and easily available parts.


Apple doesn't offer cheaper laptops. No one doubts they last longer on average.


I'm guessing they mean that greenwashing statements about lower CO2 emissions gloss over more "traditional" pollution such as heavy metals, organic solvents, SO2, and NOx. Taming overconsumption is greener than finding ways to marginally reduce per-unit emissions on ever more industrial production.


Not to mention all the eWaste that comes with the AirPods.


Who's doing better to mitigate e-waste?

> "AirPods are designed with numerous materials and features to reduce their environmental impact, including the 100 percent recycled rare earth elements used in all magnets. The case also uses 100 percent recycled tin in the solder of the main logic board, and 100 percent recycled aluminum in the hinge. AirPods are also free of potentially harmful substances such as mercury, BFRs, PVC, and beryllium. For energy efficiency, AirPods meet US Department of Energy requirements for battery charger systems. Apple’s Zero Waste program helps suppliers eliminate waste sent to landfills, and all final assembly supplier sites are transitioning to 100 percent renewable energy for Apple production. In the packaging, 100 percent of the virgin wood fiber comes from responsibly managed forests."


Weird that they leave out the parts about the battery and casing waste, and that they're designed to last only about 18 months on average, forcing you to buy new ones.

https://www.vice.com/en/article/neaz3d/airpods-are-a-tragedy


absolutely, the worst part of the airpods is the degrading non-replaceable battery... it really left a bad impression.


Does anyone have a comparable product that doesn't have this issue?


No, because it's a limitation of the battery. And there is a reason why they're manufactured as a non-repairable product: they house the battery, speakers, microphone, Bluetooth, and the rest of the logic board. The space is so scarce they need to be machined very accurately. But hey, bashing on Apple is easier than thinking about why. The market has already spoken that it wants tiny things hanging on your ears. There is a limit to what we can expect from such things.


And what are the chances of that part failing enough to impact the environment?


As long as they plant a tree every time they replace a panel, it should be fine?


We'll be able to undo most of our damage to environment this way, as it's always replacements


>so that you don't have to discard and replace it every few years

Except you and I surely must know that's not true, that their machines have industry leading service lifetimes, and correspondingly high resale values as a result. Yes some pro users replace their machines regularly but those machines generally go on to have long productive lifetimes. Many of these models are also designed to be highly recyclable when the end comes. It's just not as simple as you're making out.


Right. I'm not speaking to iPhones or iPads here, but the non-serviceability creates a robustness pretty much unmatched by Windows laptops in terms of durability.

Was resting my 2010 MBP on the railing of a second story balcony during a film shoot and it dropped onto the marble floor below. Got pretty dented, but all that didn't work was the ethernet port. Got the 2015 one and it was my favorite machine ever - until it got stolen.

The 2017 one (typing on it now) is the worst thing I've ever owned and I'm looking forward to getting one of the new ones. The 2017 one:

- Fries any low-voltage USB device I plug in (according to some internal Facebook forums they returned 2-5k of this batch for that reason)

- When it fried an external drive plugged in on the right, it also blew out the right speaker

- Every time I try to charge it I get to guess which USB-C port is going to work for charging. If I pick wrong I have to power cycle the power brick (this is super fun when the laptop's dead and there's no power indicator, as there is on the revived MagSafe)

- A half-dime-shaped bit of glass popped out of the bottom of the screen when it was under load - this has happened to others in the same spot, but "user error"..

Pissed Apple wouldn't replace it given how many other users have had the same issues, but this thing has taken a beating as have my past laptops. I'll still give them money if the new one proves to be as good as it seems.


> their machines have industry leading service lifetimes

Please stop copying marketing content, it really doesn't help your argument.

Additionally, macbooks have high failure rates, especially with keyboards in the previous generations, but also overheating because of their dreadful airflow. Time will tell what happens to the M1, but Apple's hardware is just as (un)reliable as say, Dell's.

No, personal experience isn't data.


> No, personal experience isn't data.

Do you have data to support your statement?

> Apple's hardware is just as (un)reliable as say, Dell's.

When I had access to reports from IT on a previous job (5k+ employees, most on MacBooks) Apple was definitely much more reliable than the Dell Windows machines in use. More reliable than the ThinkPads as well but this is data from one company, unsure how it compares to other large orgs.

Not only more reliable but customer service was much faster and better with Apple computers than Dell's.


This only makes sense if you presume people throw away their laptops when they replace them after "a few years". Given the incredibly high second hand value of macbooks, I think most people sell them or hand them down.


You're talking about selling working devices but parent was also talking about repairing them.

Seems like a huge waste to throw away a $2000+ machine when it's out of warranty because some $5 part on it dies and Apple not only doesn't provide a spare but actively fights anyone trying to repair them, while the options they'll realistically give you out of warranty are having your motherboard replaced for some insane sum like $1299 or buying a new laptop.

Or what if you're a klutz and spill your grape juice glass over your keyboard? Congrats, now you're -$2000 lighter since there's no way to take it apart and clean the sticky mess inside.


> Or what if you're a klutz and spill your grape juice glass over your keyboard? Congrats, now you're -$2000 lighter since there's no way to take it apart and clean the sticky mess inside.

Thanks to the Right To Repair, you can take the laptop to pretty much any repair shop and they can replace anything you damaged with OEM or third-party parts. They even have schematics, so they can just desolder and resolder failed chips. In the past, this sort of thing would be a logic board swap for $1000 at the very least, but now it's just $30 + labor.

Oh, there is no right to repair. So I guess give Apple $2000 again and don't drink liquids at work.


Removable RAM wouldn't change anything in your story, presuming the entire board is fried. Anyway, "$30 + labor" is a deceitful way to put it. The labor in your story would be hundreds of dollars an hour and would probably fail to actually fix the issue most of the time.


Not gonna lie, you had me in the first half.


> and don't drink liquids at work.

Which is ironic given that Apple laptops are often depicted next to freshly brewed cafe lattes.


Perhaps this is the real reason behind the "crack design team" jokes? A wholesale internal switchover from liquid-based stimulants after one too many accidents?

/s


> actively fights anyone trying to repare them

What makes you say that? What did you expect would happen if you spill juice into your laptop?

What they are perhaps fighting is unauthorized repairs, in the sense that they want to be able to void the warranty if some random third party messes with the insides. That's not quite the same thing.

Apple has been very helpful when I brought in a 5 year old macbook pro with keyboard issues, replaced some keys for free on the spot. Also when the batteries of 8 and 9 year old MBAs started to go bad, they said they could replace them but advised me to order batteries from iFixit and do it myself, which I did.


> Seems like a huge waste to throw away a $2000+ machine

There are other options besides throwing it away.

You can (a) trade it in for a new Mac (I just received $430 for my 2014 MBP) or (b) sell it for parts on eBay.

> Or what if you're a klutz and spill your grape juice glass over your keyboard? Congrats, now you're -$2000 lighter since there's no way to take it apart and clean the sticky mess inside.

You can unscrew a Mac and clean it out. You can also take it into Apple for repair.


My F500 company has a 5 year tech refresh policy, and old laptops are trashed, not donated or resold.


Doesn't that mean that the problem is the policy of the F500 company and not whatever the supplier had in mind?


Puts on ewaste management company uniform

"Yeah <normal guy>'s out sick today, I'm his replacement."

*yeet*

In all seriousness I would absolutely love to do this sort of thing IRL, in situations where I'll just make incompetent management etc unimpressed (because I'm showing their inefficiency) and there wouldn't be any real/significant ramifications (eg machines that processed material a couple notches more interesting than what PCI-DSS covers).

But obviously I don't mean I'd literally use the above example to achieve this ;P

I've just learned a bit about (eh, you could say "been bitten by") poorly coordinated e-waste management/refurbishment/etc programs - these can be a horrendously inefficient money-grab if the top-level coordination isn't driven by empathy in the right places. So I would definitely get a kick out of doing something like that properly.


I cannot imagine that represents the majority of the market.


Exactly, corporate never risks them getting into wrong hands, no matter how theoretical risk that might be. Same for phones


We used to remove the hard drives which takes about 20 seconds on a desktop that has a "tool-less" case. Then donate the computer to anybody including the employees if they want it.

It takes a few minutes to do that on a laptop but it's not that long.


One of our interns got that job. On the left are 200 old laptops: take out the SSD and smash it with a hammer. On the right are 200 new laptops: don't touch.

Turns out someone got confused between left and right, gave him the wrong instructions, and he smashed 200 brand new SSDs. Ouch.


Seems like FDE would solve that and make it a hardware non-issue.


Works if you trust FDE. Some FDE is broken, like this: https://www.zdnet.com/article/flaws-in-self-encrypting-ssds-...


I suspect the disposal company your company contracts with parts them out and resells them. Although if you're literally throwing them in the dumpster, that's not even legal in many jurisdictions.


Damn, they could at least just trash the hard drives and just give them to local schools or something...


All of my Apple laptops (maybe even all their products) see about 5 to 8 years of service. Sometimes with me, sometimes as hand-me-downs. So they’ve been pretty excellent at not winding up in the trash.

Even software updates often stretch as far back as 5 year old models, so they’re pretty good with this.


Big Sur is officially supported by 2013 Mac models (8 years).

iOS 15 is supported by the 6s, which was 2015. So 6 years.

And I still know people using devices from these eras. Apple may not be repair friendly, but at the end of the day, their devices are the least likely to end up in the trash.


And here I am sitting at my 2011 Dell Latitude wondering what is so special about that. My sis had my 2013 Sony Duo, but that's now become unusable with its broken built-in battery. Yes, 5 to 8 years of service is nice, but not great or out of the norm for a $1000+ laptop.


I guess 2011 Latitudes were the pro models? They were built like a tank... not anymore, I guess.


Because they run Windows.

If you look at Android phones, you're looking at a few years only.

Because of the software.


Parent is talking about laptops, I am talking about laptops, so why are you talking about smartphones? Though I also had my Samsung S2 Plus in use from 2013 to 2019, and that was fairly cheap. I do not know any iPhone users that had theirs for longer.


My iPhone 2G from 2007 was still working well into 2013-2014.


> it's the packaging that is massively unfriendly to the environment, stupid.

Of all the garbage my family produces over the course of time, my Apple products probably take less than 0.1% of my family's share of the landfill. Do you find this to be different for you? Or am I speaking past the point you're trying to make here?


Is there an estimate of what the externality cost is for the packaging per unit? Would be useful to compare to other things that harm the environment like eating meat, taking car rides, to know how much I should think about this. E.g. if my iphone packaging is equivalent to one car ride I probably won't concern myself that much, but if it's equivalent to 1000 then yeah maybe I should. Right now I really couldn't tell you which of those two the true number is closer to. I don't expect we would be able to know a precise value but just knowing which order of magnitude it is estimated to be would help.


It absolutely doesn't render the carbon-neutrality of the chip useless. Concern about waste and concern about climate change are bound by a political movement and not a whole lot else. It's not wrong to care about waste more, but honestly it's emissions that I care about more.


> It's not wrong to care about waste more, but honestly its emissions that I care about more.

Waste creates more emissions. Instead of producing something once, you produce it twice. That's why waste is bad, it's not just about disposing of the wasted product.


Not if the company that produces them is carbon neutral, which is theoretically the argument here. In general you're obviously correct, but I'd expect most emissions aren't incurred from waste.


My family has 3 MBP's. 2 of them are 10 years old, 1 of them is 8. When your laptops last that long, they're good for the environment.


Won't that make the chip bigger and/or slower? I think it's the compactness, where the main components are so close together and finely tuned, that makes the difference. Making it composable probably means making it bigger (hence it won't fit in as small spaces) and probably slower than it is now. Just my two cents though; I am not a chip designer.


Just making the SSD (the only part that wears out) replaceable would greatly increase the lifespan of these systems, and while supporting M.2 would take up more space in the chassis, it would not meaningfully change performance or power.


Aren't most of the components in MacBooks recyclable? If I remember correctly, Apple has a recycling program for old Macs, so it's not like these machines go to landfill when they're past their time or broken.


I believe Apple tries to use mostly recyclable components. And they do have a fairly comprehensive set of recycling / trade-in programs around the globe: https://www.apple.com/recycling/nationalservices/

That being said, I haven’t read any third-party audits to know if this is more than Apple marketing. Would be curious if they live up to their own marketing.


> Would be curious if they live up to their own marketing.

Do people really think that companies like Apple et al (who have a huge number of people following them eager to rip into them at ever opportunity) could get away with a "marketing story" like that? Like, really, Apple just making all that up and _not one single person_ whistleblowing on it if it were a lie?


AFAIK their recycling is shredding everything, no matter if it still works and separating gold and some other metals.


When you trade-in your Mac you are asked if the enclosure is free of dents, turns-on, battery holds charge etc.

Strange that they would ask these questions if they were simply going to shred the device.


>> Strange that they would ask these questions if they were simply going to shred the device.

Why not? It does sound nicer that way, and some customers may actually think they resell them just because of this...


Yeah it’s all a big conspiracy.


As others have said, the option to easily configure the computer post-purchase would make a massive difference in terms of its footprint


>This renders the carbon-neutrality of the chips meaningless.

You have a point but if they were actually truly neutral it wouldn't matter if you make 100,000 of them and throw them away.


> what they won't do is put the chip in an expandable and repairable system

Because that degrades the performance overall. SoCs have proven themselves to simply be more performant than a fully hot-swappable architecture. Look at the GPU improvements they're mentioning - PCIe 5.0 (not yet released) maxes out at 128GB/s, whereas the SoC Apple announced today transfers data between the CPU/GPU at 400GB/s.

In the end, performance will always trump interchangeability for mobile devices.
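For reference, the 128GB/s figure for PCIe 5.0 is the x16 bidirectional aggregate; per direction it's roughly half that. A quick sketch of the arithmetic, assuming the standard 128b/130b line code:

    gt_per_s = 32                # PCIe 5.0 raw rate per lane
    encoding = 128 / 130         # 128b/130b line code overhead
    lanes = 16

    gb_per_s_one_way = gt_per_s * encoding / 8 * lanes
    print(round(gb_per_s_one_way))   # ~63 GB/s each way, ~126 GB/s both ways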


Comparing memory interface vs PCIe isn't valid. Comparing LPDDR5 vs DDR5 latency, throughput, and power consumption would be good.


In this case I'd argue it is, because to communicate between the CPU and GPU on a non-SoC computer, you need to send that data through the PCIe interface. On the M1 SoC, you don't. They operate differently, and that's the main point here. You have to add those extra comparison points.


Yep, that's definitely true.

But it would be nice if the SoC itself were a module that you could upgrade while keeping the case/display; that would probably cut down on environmental impact as well...


I just used Apple's trade-in service for my 2014 MacBook Pro and received $430.

So there is another, quite lucrative, option besides discarding it.


Good to know and thanks! Looking forward to doing the same as I have a 2014 MBP that works and is in good condition.


They would make less money if they made the chip repairable. This doesn't have to make them evil. Apple being more profitable also means they can lower the cost and push the technological envelope forward faster. Every year we will get that much faster chips. This is good for everyone.

This doesn't mean Apple's carbon footprint has to suffer. If Apple does a better job recycling old Macbooks than your average repair guy who takes an old CPU and puts in a new one in a repairable laptop then Apple's carbon footprint could be reduced. I remember the days when I would replace every component in my desktop once a year, I barely thought about recycling the old chips or even selling them to someone else. They were simply too low value to an average person to bother with recycling them properly or reselling them.


> They would make less money if they made the chip repairable

How would a 5 nanometer chip be "repairable"? Who would be able to repair such a chip and what would the tool cost be?


Chips aren't made out of vacuum tubes any more, you can't "fix" a transistor


>But what they won't do is put the chip in an expandable and repairable system so that you don't have to discard and replace it every few years. This renders the carbon-neutrality of the chips meaningless. It's not the chip, it's the packaging that is massively unfriendly to the environment, stupid.

Mac computers last way longer than their PC counterparts.


Is apple's halo effect affecting your perception of the mac vs PC market? iPhones last longer because they have much longer software updates, and are more powerful to start with. None of these factors apply to macs vs PCs.


Yeah but some Android stuff and windows stuff is so low-end that it only lasts for like 2 years and then it's functionally obsolete because of software. All the mac stuff from 10 years ago seems to still be able to work and has security updates.


> It's not the chip, it's the packaging that is massively unfriendly to the environment, stupid.

Who are you calling stupid? If you're going to call someone or something stupid, don't do it in a stupid way.


It's an allusion to Bill Clinton's 1992 presidential campaign slogan: "It's the economy, stupid." See: https://en.wikipedia.org/wiki/It%27s_the_economy,_stupid


You can always put your money where your mouth is and support someone that is doing all of the above:

https://frame.work/


Yes, I have one on order! :-)


This isn't relevant to the chips. Take this to the other thread.


Agree with your point, but one could also look at the performance/power savings and use that in an argument for environmental friendliness.


What the hell does this have to do with the chips?


Yeah, but one of them doesn't cost a ton to implement (what they're doing) and the other one would cost them a ton through lost sales (what you're asking for).

Always follow the money :-)


Erhh... I think OP gets it, he's just calling out the greenwashing.


I always thought it was strange that "integrated graphics" was, for years, synonymous with "cheap, underperforming" compared to the power of a discrete GPU.

I never could see any fundamental reason why "integrated" should mean "underpowered." Apple is turning things around, and is touting the benefits of high-performance integrated graphics.


Very simple: thermal budget. Chip performance is limited by thermal budget. You just can't spend more than roughly 100W in a single package, without going into very expensive and esoteric cooling mechanisms.


This is mostly wrong. The real issue has always been memory bandwidth. The highest-end consumer x86 CPU has about the same memory bandwidth as a dGPU from 10 years ago. The M1 is extremely competitive with modern dGPUs, only a bit behind a 6900 XT.


If Intel/AMD were serious about iGPUs, they would implement a solution for memory bandwidth (and they have: Intel's Iris Pro with eDRAM, AMD's GDDR-based game consoles, AMD's upcoming stacked SRAM). So I believe the core problem is that the market didn't seriously want a great iGPU; it was fine with a poorer iGPU, or a pricey Nvidia dGPU.


>The M1 is extremely competitive with modern dGPUs, only a bit behind a 6900 XT.

Do you have a source for this?


Apple compares the M1 Max as having performance similar to Nvidia's 3080 Laptop GPU, which scores around 16,500 on PassMark. For comparison, the AMD 6900 XT desktop GPU scores 27,000, while the Nvidia 3080 desktop GPU scores 24,500.

So the M1 Max is not as fast as a high-end desktop GPU. Still, it is incredible that you are getting a GPU that performs only slightly below a last-generation 2080 desktop GPU at just 50-60 watts.


Yes, Apple's marketing materials claim 400 GB/s, while the 6900 XT is 512 GB/s. This is very easily googled. While memory bandwidth isn't everything, it is the major bottleneck in most graphics pipelines. An x86 CPU with 3200 MHz memory has about 40 GB/s of bandwidth, which more or less makes high-end integrated graphics impossible.
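The ~40 GB/s figure falls out of the usual dual-channel arithmetic (this is the theoretical peak; sustained numbers are typically a bit lower):

    channels = 2                 # typical desktop/laptop x86
    bus_width_bits = 64          # per DDR4 channel
    transfers_per_s = 3200e6     # DDR4-3200

    peak_gb_per_s = channels * bus_width_bits / 8 * transfers_per_s / 1e9
    print(peak_gb_per_s)         # 51.2 GB/s theoretical, ~40 GB/s in practice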


Ah, I misunderstood your comment. When you said it was competitive with the 6900 XT I thought you were talking about GPU performance in general, not just in terms of memory bandwidth.


According to the numbers Apple is touting, the M1 Max is competitive with modern GPUs in general, being on par with—roughly—a 3070 (laptop version) or a 2080 (desktop version). They've still got a ways to go but this is shockingly close, particularly given their power envelope.


> this is mostly wrong. The real issue has always been memory bandwidth.

Not really wrong. Memory bandwidth is only a limitation for a very narrow subset of problems.

I've gone back and forth between server-grade AMD hardware with 4-channel and 8-channel DDR4 and consumer-grade hardware with 2-channel DDR4. For most of my work (compiling, mostly) the extra memory bandwidth didn't make any difference. The consumer parts are actually faster for compilation because they have a higher turbo speed, despite having only a fraction of the memory bandwidth.

Memory bandwidth does limit certain classes of problems, but we mostly run those on GPUs anyway. Remember, the M1 Max memory bandwidth isn't just for the CPU. It's combined bandwidth for the GPU and CPU.

It will be interesting to see how much of that memory can be allocated to the GPU on an M1 Max. It might be the most accessible way to get a lot of high-bandwidth RAM attached to a GPU for a while.


GP is talking specifically about GPUs. iGPUs are 100% bottlenecked by memory bandwidth; specifically, it is the biggest bottleneck for every single purchasable iGPU on the market (excluding the M1 Pro/Max).

Your compute anecdotes have no bearing on (i)GPU bottlenecks.


They're talking specifically about GPUs.


As the charts Apple shared in the event showed, you hit diminishing returns in performance/watt pretty quickly.

Sure. It'd be tough to be the top performing chip in the market, but you can get pretty close.


I dunno. Setting power limits on 3090 at >50% has nearly linear effect on performance.


> I never could see any fundamental reason why "integrated" should mean "underpowered."

There was always one reason: limited memory bandwidth. You simply couldn't cram enough pins and traces for all the processor io plus a memory bus wide enough to feed a powerful GPU. (at least not in a reasonable price)


We solved that almost a decade ago now with HBM. Sure, the latencies aren't amazing, but the power consumption numbers are and large caches can hide the higher access latencies pretty well in almost all cases.


PS4 / PS5 / XBox One / XBox Series X are all iGPU but with good memory bandwidths.


The only time I can remember HBM being used with some kind of integrated graphics was that strange Intel NUC with a Vega GPU, and IIRC they were on the same die.


That product had an Intel CPU and AMD GPU connected via PCIe on the same package, not the same die. It was a neat experiment, but it was really just a packaging trick.


Still confused how 32 core M1 Max competes with Nvidia's thousands-of-cores GPUs. Certainly there are some things that are nearly linear with core count, or otherwise they wouldn't keep adding cores, right?

Edit: Found answer here. GPU core is not the same thing as a CUDA core. https://www.reddit.com/r/hardware/comments/73i3ne/why_do_app...


The desktop RTX 3070 has 46 SMs, which are the most comparable thing to Apple's cores.

NVIDIA defines any SIMD lane to be a core. They've recently gotten more creative with the definition: they were able to double FP32 execution per unit (versus the previous gen) and hence, in marketing materials, doubled the number of "CUDA cores".


Funny, all these years I've been wondering how they possibly packed so many "cores" into those things.


The apples to apples comparison would be CUDA cores to execution units. Basically how many units which can perform a math operation. Apple's architecture has 128 units per core, so a 32 core M1 Max has the same theoretical compute power as 4096 CUDA cores. This of course doesn't take into consideration clock speed or architectural differences.
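As a rough sanity check on that math (the clock below is an assumption based on commonly reported M1 Max GPU figures, not an Apple-published spec):

    apple_gpu_cores = 32
    alus_per_core = 128          # FP32 lanes per Apple GPU core
    flops_per_alu = 2            # a fused multiply-add counts as 2 FLOPs
    clock_ghz = 1.3              # assumption: roughly the reported M1 Max GPU clock

    tflops = apple_gpu_cores * alus_per_core * flops_per_alu * clock_ghz / 1000
    print(round(tflops, 1))      # ~10.6 TFLOPS, in line with Apple's ~10.4 TFLOPS claim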


Apple GPU core == CUDA SM


Perhaps with Vista? "Integrated" graphics meant something like Intel 915 which couldn't run "Aero". Even if you had the Intel 945, if you had low bandwidth RAM graphics performance still stuttered. Good article: https://arstechnica.com/gadgets/2008/03/the-vista-capable-de...


Video game consoles have been using integrated graphics for at least 15 years now, since Playstation 3 and Xbox 360.


You are mistaken. On both the PS3 and Xbox 360 the CPU and GPU are on different chips made by different vendors (CPU by IBM and GPU by Nvidia in the case of the PS3; CPU by IBM and GPU by ATI for the Xbox 360). Nonetheless, in the PS4/Xbox One generation they both use a single die with unified memory for everything, and their GPUs could be called integrated.


For the 360, from 2010 production (when they introduced the 45nm shrink), the CPU and GPU was merged into a single chip.


When they did that they had to deliberately hamstring the SOC in order to ensure it didn’t outperform the earlier models. From a consistency of experience perspective I understand why, but it makes me somewhat sad that the system never truly got the performance uplift that would have come from such a move. That said there were significant efficiency gains from that if I recall.


Yup. Prior they were absolutely different dies just on the same package


If you mean including PS3 and X360, these two consoles had discrete GPUs. The move to AMD APUs was on the Xbox One and PS4 generation


Longer, since integrated graphics used to mean integrated onto the north bridge and its main memory controller. nForce integrated chipsets with GPUs in fact started from the machinations of the original Xbox switching to Intel from AMD at the last second.


In that case, it's more like discrete graphics with integrated CPU :)


Yeah, and vendors like Bungie are forced to cap their framerates at 30fps (Destiny 2).


They capped PC as well.


If they did, it definitely wasn't at 30. I was getting 90+ on my budget rig.

But no, I don't think they did.


Destiny 2 is capped on PC? The cutscenes are but the actual game is not


It used to have a bug that randomly capped the fps at 30. Only toggling vsync on and off again would fix it. I have no idea whether that has been fixed.


The software side hasn't been there on x86 GP platforms, even though AMD tried. It's worked out better on consoles.


What software is missing? I figured the AMD G-series CPUs used the same graphics drivers and same codepaths in those drivers for the same (Vega) architecture.

My impression was that it was still the hardware holding things back: Everything but the latest desktop CPUs still using the older Vega architecture. And even those latest desktop CPUs are essentially PS5 chips that got binned out.


Deep OS support for unified memory architectures for one. Things they tried to do with HSA etc. Also NVidia winning so much gpu programming mindshare with Cuda, and OpenCL failing to take off on mobile, dooming followon opencl development plans, didn't help.

In the wider picture, gpu compute in general on PC also failed to become mainstream enough to sway consumer choices. Development experience for GPUs is still crap vs the cpu, the languages are mostly bad, there's massive sw platform fragmentation among os vendors and gpu vendors, driver bugs causing OS crashes left and right, etc.

Re your impression, yes, AMD shifted focus more toward cpu from gpu in their SoCs after a while when their initiatives failed to take off outside consoles. But it's been an ok place to be, just keeping the gpu somewhat ahead of Intel competition and getting some good successes in the cpu side.


iGPU Vega is actually really, really good esp when it comes to perf/watt. It is bottlenecked by the slow memory bandwidth. DDR5 will more or less double iGPU performance.


How do they get 200/400GB per second of RAM bandwidth? Isn't that like 4/8-channel DDR5, i.e. 4/8 times as fast as current Intel/AMD CPUs/APUs? (E.g. https://www.intel.com/content/www/us/en/products/sku/201837/... with 45.8GB/s)

Laptop/desktop have 2 channels. High-end desktop can have 4 channels. Servers have 8 channels.

How does Apple do that? I was always assuming that having that many channels is prohibitive in terms of either power consumption and/or chip size. But I guess I was wrong.

It can't be GDDR because chips with the required density don't exist, right?


It's LPDDR5, which maxes out at 6.4Gbit/s/pin, on a 256bit/512bit interface.

It's much easier to make a wider bus with LPDDR5 and chips soldered on the board than with DIMMs.
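Which is exactly where the quoted 200/400 GB/s numbers come from:

    gbit_per_pin = 6.4                 # LPDDR5 data rate per pin

    for bus_bits in (256, 512):        # M1 Pro / M1 Max interface widths
        gb_per_s = gbit_per_pin * bus_bits / 8
        print(bus_bits, "->", gb_per_s, "GB/s")   # 204.8 and 409.6 GB/s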


I hope we will see this on more devices. This is a huge boon to performance.

It might even forebode soldering RAM onto packages from here on out, forever.

The Steam Deck will probably have a crazy ~100 GB/s of RAM bandwidth - twice that of current laptops and desktops.


Steamdeck is 88 GB/s using quad channel.


You aren't wrong, Apple is able to do this because implementing LPDDR is much more efficient from both a transistor and power consumption point of view, and is actually faster too. The tradeoff is you can't put 8 or 16 dram packages on the same channel like you can with regular DDR, which means that the M1 Max genuinely has a 64 GB limit, while a DDR system with the same bandwidth would be 1 TB. Fortunately for Apple there isn't really a market for a laptop with a TB of RAM.


They are using LPDDR5.

Not the usual DDR5 used in Desktop / Laptop.


DDR5 isn't common yet.

DDR4 is the common desktop/laptop chip. LPDDR5 is a cell-phone chip, so it's kinda funny to see low-power RAM being used on such a wide bus like this.


Don't cell phones sell more than desktops, laptops, and servers? Smartphones aren't a toy: they are the highest volume computing device.

They are also innovating with things like on-chip ECC for LPDDR4+, while desktop DDR4 still doesn't have ECC thanks to Intel intentionally gimping it for market segmentation.


Well sure. But that doesn't change the fact that DDR5 doesn't exist in any real numbers.

LPDDR5 is a completely different protocol from DDR5 by the way, just like GDDR5 is completely different from DDR3 it was based on. LPDDR3 was maybe the last time the low-power series was something like DDR3 (the mainline).

Today, LPDDR5 is based on LPDDR4, which diverged significantly from DDR4.

> They are also innovating with things like on-chip ECC for LPDDR4+

DDR5 will have on-chip ECC standard, even unbuffered / unregistered.


That sounds like HBM2, maybe HBM3, but that would be the first consumer product to include it AFAIK.

Basically the bus is really wide, and the memory dies must be really close to the main processing die. That memory was notably used on the RX Vega from AMD, and before that on the R9 Fury.

https://en.m.wikipedia.org/wiki/High_Bandwidth_Memory


If that were the case you could probably see an interposer. And I think the B/W would be even higher.


And the price would be even higher.


If it was HBM it would have considerably higher bandwidth. A single HBM2E stack is 16GB at 460GBps, at 64GB that's 1.8TBps of bandwidth.
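The arithmetic behind that, taking the per-stack figures above at face value:

    # 16 GB and ~460 GB/s per HBM2E stack, per the comment above.
    stacks_needed = 64 // 16            # 4 stacks to reach 64 GB
    aggregate_bw_gb_s = stacks_needed * 460
    print(aggregate_bw_gb_s)            # 1840 GB/s, i.e. roughly 1.8 TB/s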


Disingenuous for Apple to compare these against 2017 Intel chips and call them 2x and 3.7x faster.

I would love to see how they fare against 2021 Intel and AMD chips.


They did that to compare against the last comparable Intel chips in a Mac, which seems rather useful for people looking to upgrade from that line of Mac.


Reminds me of AMD comparing their insane IPC increase when Ryzen first came out.


How is it disingenuous - defined in my dictionary as not candid - when we know precisely which chips they are comparing against?

They are giving Mac laptop users information to try to persuade them to upgrade from their 2017 MacBook Pros and this is probably the most relevant comparison.


I'm pretty sure they are comparing them with 2019/2020 MacBook Pros, which apparently have chips originally launched 2017.


Looks to me like they are comparing against 2020 MBPs (at least for the 13-inch), which use 10nm Ice Lake, so nothing to do with 2017 at all!


Intel's 2021 laptop chips (Alder Lake) are rumoured to be released later this month (usually actual availability is a few months after "release"). I expect them to be pretty compelling compared to the previous generation Intel parts, and maybe even vs AMD's latest. But the new "Intel 7" node (formerly 10++ or something) is almost certainly going to be behind TSMC N5 in power and performance, so Apple will most likely still have the upper hand.


I'd still bet on Apple for the all-round package for a laptop, but Intel should be coming out of the gates flying when Alder Lake launches.

This is their first architecture that actually reacts to Zen, from what I've heard.


Various leaked benchmarks show it outperforming the comparable Ryzens (and it's pretty obvious these rumours are sanctioned by Intel, given their conspicuous omission of wattage numbers).


Are those the ones with big.LITTLE designs already?


The slide where they say it's faster than an 8-core PC laptop CPU is comparing it against the 11th gen i7-11800H [1]. So it's not as fast as the fastest laptop chip, and it's certainly not as fast as the monster laptops that people put desktop CPUs in. But it uses 40% of the power of a not-awful 11th gen 8-core i7 laptop. The M1 is nowhere near as fast as a full-blown 16-core desktop CPU.

I am sure we will see reviews against high-end Intel and AMD laptops very soon, and I won't be surprised if real-world performance blows people away, as the M1 Air did.

[1] https://live.arstechnica.com/apple-october-18-unleashed-even...


... and neither is the M1 (in any configuration) a "full blown 16 core desktop CPU".

Those will be called M2 and come later next year, according to the rumor mill anyway.


Sorry, that is what I meant. I'll edit.


When M1 first released they pulled some marketing voodoo and you always saw the actively cooled performance numbers listed with the passively cooled TDP :D Nearly every tech article/review was reporting those two numbers together.


I suspect that's because they:

1. want to convince people still on Intel Macs to update
2. lengthen the news cycle when the first units are shipped to the tech press and _they_ run these benchmarks


I thought they compared it with an i9-9980HK which is the top-end 2019 chip in the outgoing 16" MBP?


For me, the big thing is that memory bandwidth. No other CPU comes even close. A Ryzen 5950X can only transfer about 43GB/s. This thing promises 400GB/s on the highest-end model.


As always, though, the integrated graphics thing is a mixed blessing. 0-copy and shared memory and all of that, but now the GPU cores are fighting for the same memory. If you are really using the many displays that they featured, just servicing and reading the framebuffers must be...notable.

A high end graphics card from nvidia these days has 1000GB/s all to itself, not in competition with the CPUs. If these GPUs are really as high of performance as claimed, there may be situations where one subsystem or the other is starved.
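To put a rough number on the scanout cost (a sketch with assumed values: a 6K panel at 6016x3384, 4 bytes per pixel, 60 Hz):

    # Rough scanout bandwidth for one display; resolution and pixel format are assumptions.
    width, height = 6016, 3384      # 6K-class panel
    bytes_per_pixel = 4             # e.g. 8-bit RGBA; wide-gamut formats can be larger
    refresh_hz = 60

    gb_per_s = width * height * bytes_per_pixel * refresh_hz / 1e9
    print(f"{gb_per_s:.1f} GB/s per display just to scan out frames")  # ~4.9 GB/s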


No consumer CPU comes close. Just saw an article about the next-gen Xeons with HBM, though, which blows even this away (1.8TB/s theoretically), but what else would one expect from enterprise systems. Getting pretty damn excited about all the CPU manufacturers finally getting their asses into gear innovation-wise after what feels like a ridiculously long period of piss-warm "innovation".


Thanks to Apple in this case for taking a holistic approach to making a better computer.


The AMD chips in the PS5 and new Xbox reach 448GB/s and 326GB/s of bandwidth respectively with their unified memory.


Not entirely true. Xbox Series X has two different memory bandwidths. The first 10GB has 560GB/s and the last 6GB has only 336GB/s.


Yeah it's an interesting setup. I believe the 560 is prioritized for graphics processing and 336 for more general tasks.


And only 10 cores, so a 5950x completely wrecks an M1.


A 5950x uses ~4 times as much power.


What application makes full use of all of a 5950x's cores?


Video encoding, ray tracing. Prime95 if you're trying to stress test your CPU and memory.


To add to the other comments: Spark, Stockfish, some games


make -j32


Compiling stuff.


The benchmark to power consumption comparisons were very interesting. It seemed very un-Apple to be making such direct comparisons to competitors, especially when the Razer Blade Advanced had slightly better performance with far higher power consumption. I feel like typically Apple just says "Fastest we've ever made, it's so thin, so many nits, you'll love it" and leaves it at that.

I'll be very curious to see those comparisons picked apart when people get their hands on these, and I think it's time for me to give Macbooks another chance after switching exclusively to linux for the past couple years.


I think that for the first time, Apple has a real performance differentiator in its laptops. They want to highlight that.

If Apple is buying Intel CPUs, there's no reason to make direct performance comparisons to competitors. They're all building out of the same parts bin. They would want to talk about the form factor and the display - areas where they could often out-do competitors. Now there's actually something to talk about with the CPU/GPU/hardware performance.

I think Apple is also making the comparison to push something else: performance + lifestyle. For me, the implication is that I can buy an Intel laptop that's nicely portable, but a lot slower; I could also buy an Intel laptop that's just as fast, but requires two power adapters to satisfy its massive power drain and really doesn't work as a laptop at all. Or I can buy a MacBook Pro which has the power of the heavy, non-portable Intel laptops while sipping less power than the nicely portable ones. I don't have to make a trade-off between performance and portability.

I think people picked apart the comparisons on the M1 and were pretty satisfied. 6-8 M1 performance cores will offer a nice performance boost over 4 M1 performance cores and we basically know how those cores benchmark already.

I'd also note that there are efforts to get Linux on Apple Silicon.


I was casually aware of Asahi before this announcement. Now I'm paying close attention to its development.


They are selling these to people who know what the competition is, and care.


Apple used to do these performance comparisons a lot when they were on the PowerPC architecture. Essentially they tried to show that PowerPC-based Macs were faster (or as fast as) Intel-based PCs for the stuff that users wanted to do, like web browsing, Photoshop, movie editing, etc.

This kind of fell by the wayside after switching to Intel, for obvious reasons: the chips weren’t differentiators anymore.



I think that Apple took a subtle (or not so subtle) stand: power consumption has to do with environmental impact.


Apple almost single-handedly made computing devices non-repairable or upgradable; across their own product line and the industry in general due to their outsized influence.


Just today I got one 6s and one iPhone 7 screen repaired (the 6s got the glass replaced, the 7 got the full assembly replaced) and the battery of the 6s replaced at a shop that is not authorized by Apple. It cost me $110 in total.

Previously I got a 2017 MacBook Air SSD upgraded using an SSD and an adapter that I ordered from Amazon.

What's this narrative that Apple devices are not upgradable or repairable?

It's simply not true. If anything, Apple devices are the easiest to get serviced, since there are not many models and pretty much all repair shops can deal with all devices that are still usable. Because of this, even broken Apple devices are sold and bought all the time.


>Just today I got one 6s and one iPhone 7 screen repaired

Nice, except doing a screen replacement on a modern iPhone like the 13 series will disable your Face ID, making your iPhone pretty much worthless.

>Previously I got 2017 Macbook Air SSD upgraded using an SSD and an adapter that I ordered from Amazon

Nice, but on modern MacBooks the SSD is soldered and not replaceable. There is no way to upgrade them or replace them if they break, so you just have to throw away the whole laptop.

So yeah, the parent was right: Apple devices are the worst for repairability, period. The ones you're talking about are not manufactured anymore and therefore don't represent the current state of affairs, and the ones that are manufactured today are built not to be repaired.


Hardware people are crafty; they find ways to transfer and combine working parts. The glass replacement (keeping the original LCD) I got for the 6S is not a procedure provided by Apple. Guess who doesn't care? The repair shop that bought a machine from China for separating and reassembling the glass and LCD.

Screen replacement is $50, glass replacement is $30.

The iPhone 13 is very new; give it a few years and the hardware people will leverage the desire to not spend $1000 on a new phone when the current one works fine except for that one broken part.


Only if Apple wants to let them, as far as I have seen. The software won't even let you swap screens between iPhone 13s. Maybe people will find a workaround, but it seems like Apple is trying its hardest to prevent it.


And yet they authorize shops to perform these repairs. They’re not trying to prevent repairs, they’re trying to ensure repairs use Apple-supplied parts. Which, sure, you may object to that… but it’s very different from saying they’re preventing repairs full stop. And there’s very little chance such an effort would do anything other than destroy good will.


And how will the crafty HW people replace the SSD storage on my 2020 MacBook if it bites the dust?


By changing chips. There are already procedures for fun stuff like upgrading the RAM on the non-Retina MacBook Airs to 16GB. Apple never offered a 16GB version of that laptop, but you can have it [0].

If there's demand, there will be a response.

[0] https://www.youtube.com/watch?v=RgEfMzMxX5E


You clearly don't have a clue how modern Apple HW is built and why the stuff you're talking about on old Apple HW just won't work anymore on the machines built today.

I'm talking about 2020 devices where you can't just "change the chips" and hope it works like in the 2015 model from the video you posted.

Modern Apple devices aren't repairable anymore.


I would love to be enlightened about the new physics that Apple is using which is out of reach of other engineers.

/s

Anyway, people are crafty and engineering is not an Apple-exclusive trade. Believe it or not, Apple can't do anything about the laws of physics.


> I would love to be enlightened about the new physics that Apple is using which is out of reach to the other engineers.

That's known as public-private key crypto with keys burnt into eFuses on-die on the SoC.

You can’t get around that (except for that one dude in Shenzhen who just drills into the SoC and solders wires by hand which happen to hit the right spots). But generally, no regular third party repair shop will find a way around this.
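For anyone curious what that pairing looks like conceptually, here is a minimal sketch. It is not Apple's actual scheme: real implementations use asymmetric signatures with the verifying key fused into the SoC, while this toy uses a shared-secret HMAC just to show the challenge-response idea.

    import hmac, hashlib, os

    # Toy stand-in for a key provisioned at the factory (assumption: shared secret;
    # real schemes would burn an asymmetric verification key into eFuses).
    FUSED_KEY = os.urandom(32)

    def part_respond(part_key: bytes, challenge: bytes) -> bytes:
        # A genuine part can answer because it was provisioned with the matching key.
        return hmac.new(part_key, challenge, hashlib.sha256).digest()

    def soc_accepts_part(response: bytes, challenge: bytes) -> bool:
        expected = hmac.new(FUSED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

    challenge = os.urandom(16)
    genuine = part_respond(FUSED_KEY, challenge)        # original, paired part
    swapped = part_respond(os.urandom(32), challenge)   # third-party or donor part

    print(soc_accepts_part(genuine, challenge))   # True  -> feature stays enabled
    print(soc_accepts_part(swapped, challenge))   # False -> e.g. Face ID gets disabled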


I know about it. It simply means that someone will build a device that automates what the dude in Shenzhen does, or they will mix and match devices that have different kinds of damage. I.e. a phone with a destroyed (irreparable) screen will donate its parts to phones with a broken Face ID lens.

You know, these encryption authentications work between ICs, not between lenses and motors. Keep the coded IC, change the coil. Things also have different breaking modes; for example, a screen might break down due to glass failure (which cannot be coded), and the repair shop can replace the broken assembly part while keeping the IC that handles the communication with the mainboard. Too complicated for a street shop? Someone will build a service that does it B2B: shops will ship it to them, they will ship it back, leaving only the installation to the street shop.

Possibilities are endless. Some easier, some harder, but we are talking about talent that makes all kinds of replicas of all kinds of devices. With billions of iPhones out there, it's actually a very lucrative market to be able to salvage a $1000 device; their margins could be even better than the margins of Apple when it charges $100 to change the glass of the LCD assembly.


Compared to a thinkpad where I can replace the parts with a screwdriver myself, this is still an incredibly wasteful effort.


>I would love to be enlightened about the new physics that Apple is using but is out of reach for the other engineers.

Watch Louis Rossmann on YouTube.


I know Louis; he made a career of complaining that it's impossible to repair Apple devices while repairing Apple devices.

Instead of watching videos and getting angry about Apple devices being impossible to repair, I get my Apple devices repaired when something breaks. A significantly more productive approach; you should try it.


>I get my Apple devices repaired when something breaks

Your old Apple devices, which are known to be very easy to repair. You wouldn't be so confident with the latest gear.

But why spoil it for you? Let's talk in a few years when you find it out the hard way on your own skin.


Louis has been making "Apple is impossible to repair" videos forever. It's not an iPhone 13 thing; give it a few years and you can claim that the iPhone 17 is impossible to repair, unlike the prehistoric iPhone 13.

Here is a video from 2013 of him complaining that Apple doesn't let people repair their products: https://www.youtube.com/watch?v=UdlZ1HgFvxI

He recently moved to a new, larger shop in an attempt to grow his Apple repair operations, then had to move back to a smaller shop because, as it turns out, it wasn't Apple that was ruining his repair business.


Apple is using SoCs now where the CPU and RAM are in one chip package. How are you going to upgrade the RAM here, even with the mother of all reflow stations?


You don't. It's technological progress, similar to how we lost the ability to repair individual transistors with the introduction of integrated chips. If this doesn't work for you, you should stick with the old tech; I think the Russians did something like that with their Soviet-era plane electronics. There are also audiophiles who never even switched to transistors and still use vacuum tubes, and the Amish, who stick to horses and candles and choose to preserve their way of doing things and avoid the problems of electricity and powered machinery.

You will need to make a choice sometimes. Often you can't have small, efficient, and repairable all at once.


> Nice, except doing a screen replacement on a modern iPhone like the 13 series will disable your Face ID, making your iPhone pretty much worthless.

Only if you go to someone who isn't an authorised Apple repairer.


> Nice, but on the modern Macbooks, the SSD is soldered and not replaceable. There is no way to upgrade them or replace them if they break, so you just have to throw away the whole laptop.

I mean, you can replace the logic board. Wasteful, sure, but there's no need to throw out the whole thing.


People also replace ICs all the time. Heat it, remove the broken SSD chip, put in the new one, reheat.


I know, but I can understand people preferring socketed parts.


In modern Apple laptops (2018 and later), the storage is soldered, as the memory has been since 2015. Contrast this with a Dell XPS 15 you can buy today, in which you can upgrade/replace both the memory and the storage. This is the case with most Windows laptops. The exception is usually the super-thin ones that solder in RAM Apple-style, but there are some others that do as well.

There's also the fact that Apple does things like integrate the display connector into the panel part. So if it fails - like when Apple made it too short with the 2016 and 2017 MacBook Pros, causing the "flexgate" controversy - it requires replacing a $600 part instead of a $6 one.


True, but you are talking about devices that are 4-6 years old. Storage is now soldered. RAM has been soldered for a while now, and with Apple Silicon it's part of the SoC.


For context, Apple started soldering RAM in 2015 and soldering storage in 2018.


Perhaps leading to fewer failures and longer device lifespans.

As far as I understand, the fewer components and the less heat, the longer the electronics keep working.


That isn't "less components", that's same components but soldered so customers can't replace it.


It removes connectors and may remove buffers. A memory SIMM can't work itself loose over time from laptop use if everything is soldered in.


Not that I've heard of anyone's DIMM popping out over time, but I'd rather pop it back in than have to ship the machine to a repair shop with a BGA workstation if a DRAM chip develops a fault over time.


Newer MacBooks have both the SSD and RAM soldered on board; they're no longer user-upgradable unless you have a BGA rework station and know how to operate it.


Single-handedly?

>According to iFixit, the Surface Laptop isn’t repairable at all. In fact, it got a 0 out of 10 for repairability and was labeled a “glue-filled monstrosity.”

The lowest scores previously were a 1 out of 10 for all previous iterations of the Surface Pro

https://www.extremetech.com/computing/251046-ifixit-labels-s...


One might argue that Surface laptops were Microsoft's answer to MacBooks.


If repairability was important to consumers, it would be a selling point for competitors. But it's not.


If Apple actually cared about sustainability, they would make their devices repairable.


They are repairable, but not by consumers in most cases.


They are mostly not repairable even by authorized repair providers.

Basically, they can only change a few components (keyboard, display (with assembly), motherboard, and probably the aluminium case), but that's it.


You literally cannot replace the battery in that Surface Laptop without destroying the whole thing.

It's made to be thrown away, instead of repaired.


Conversation is about Apple.


The conversation is about repairability, and Apple has yet to make a line of products that consistently earns a repairability score of 1 or less.


And they get away with it because Apple normalized it.


Weirdly, these machines have a "6.1 repairability rating" when you go into their store. I wonder what iFixit will think of them.


I'm still daily driving a 2015 MBP. Got the battery replaced, free under warranty, a few years ago. Running the latest macOS without any issues.

The phones in my family are an iPhone 6S, iPhone 8 and an iPhone XS. All running the latest iOS. The 6S got a battery swap for 50€, others still going strong.

Similar with tablets, we have three and the latest one is a 2017 iPad Pro. All running the latest iPadOS.

Stuff doesn't need to be repairable and upgradable if it can outlast the competition by a factor of two while still staying on the latest official OS update.

Can't do that with any Android device. A 6 year old PC laptop might still be relevant though.


Apparently, you didn't compare Apple devices with what the bulk of the market consists of.

Also, implying that repairability is required for environmental sustainability is questionable at best. People, in their vast majority, tend to get rid of 5-year-old phones and laptops.


It’s almost like it’s just about marketing and not much else…


FWIW, they are in general quite accurate with their ballpark performance figures. I expect the actual power/performance curves to be similar to what they showed. Which is interesting, because IIRC the plots from Nuvia before they were bought showed their cores with a similar profile. It would be exciting if Qualcomm could have something good for a change.


If we can get an actual Windows on ARM ecosystem started things will get really exciting really quickly.


There still will be a question of porting the bulk of Windows software to Arm.


If Apple can implement Rosetta 2, then surely Microsoft can do it as well (they've actually done it, just with terrible performance).


> I'll be very curious to see those comparisons picked apart when people get their hands on these, and I think it's time for me to give Macbooks another chance after switching exclusively to linux for the past couple years.

I really enjoy linux as a development environment, but this is going to be VERY difficult to compete with..


Asahi Linux is making great strides in supporting M1 Macs, and they're upstreaming everything so your preferred distro could even support them.

https://asahilinux.org/


You can always run Linux in a VM too


That's just not the same.


Yeah, this is the first time they actually compared against something that possibly has better performance.


For the first time ever, they have something to brag about in their laptop specs. They are no longer just pulling parts off the shelf.


They're just marketing to the audience (actual pros, not the whole 'uni student with a MacBook Pro for taking notes' crowd).


I'm not going to wait for the comparisons this time. Maxing this baby out right now.


Honest question, what do you do where a $6,099 laptop is justifiable?


I skip getting a Starbucks latte, and avoid adding extra guac at Chipotle.

I'm kidding, that stuff has no effect on anything.

Justifiable, as in "does this make practical sense", is not the word, because it doesn't. Justifiable, as in, "does it fit within my budget?" yes that's accurate. I don't have a short answer to why my personal budget is that flexible, but I do remember there was a point in my life where I would ask the same thing as you about other people. The reality is that you either have it or you don't. That being said, nothing I had been doing for money is really going to max this kind of machine out or improve my craft. But things that used to be computationally expensive won't be anymore. Large catalogues of 24 megapixel RAWs used to be computationally expensive. Now I won't even notice, even with larger files and larger videos, and can expand what I do there along with video processing, which is all just entertainment. But I can also do that while running a bunch of docker containers and VMs... within VMs, and not think about it.

This machine, for me, is the catalyst for greater consumptive spending though. I've held off on new cameras, new NASs, new local area networking, because my current laptop and devices would chug under larger files.

Hope there was something to glean from that context. But all I can really offer is "make, or simply have, more money", not really profound.


Thank you for a very honest and thorough answer.


There's also future-proofing to some degree. I'll probably get a somewhat more loaded laptop than I "need" (though nowhere near $6K), because I'll end up kicking myself if 4 years from now I'm running up against some limit I underspecced.


Yeah, I forgot to mention that; it's a given for me.

Like there's the potential tax deductibility, along with it being a store of value (it will probably be worth $2300 in a few years, but that's okay), making it easier to rationalize future laptops by trading this one in. But I'm not betting on any of that.

I’ve just been waiting for this specific feature set, I’m upgrading from a maxed out dual GPU 2015 MBP that I purchased in 2017.

I skipped the whole divergence and folly.

No butterfly keyboards, no tolerating USB-C while the rest of the world caught up, no USB-C charging, no Touch Bar; I held out. And now I get Apple Silicon, which already had rave reviews and blew everything else out of the water in the laptop space, and now I get the version with the RAM I want.

Surprisingly little fanfare, on my end. Which is kind of funny, because I remember fondly configuring expensive maxed-out Apple computers on their website that I could never afford. It's definitely more monumental if you save money for one specific thing and achieve it. But now I just knew I was already going to do it if Apple released a specific kind of M1 upgrade in a specific chassis, which they did, and more. So it fit within my available credit, which I'll likely pay off by the end of the week, and I'm also satisfied that I get the points and a spending promotion my credit card had told me about.

But I was going to buy this regardless.


A few thousand dollars per year (presumably it will last more than one year) is really not much for the most important piece of equipment a knowledge worker will be using.


It's still a waste if you don't need it though. This money could be spent on much more useful things.


If it improves compilation speeds by 1% then it's not a waste.

My time is worth so much more to me than money.


Then why are you using a laptop?


Why even bother with such an inane answer?

It's because I need to use my computer while not being physically attached to the same spot, i.e. between work and home, or while traveling.

You know the same reason as almost everyone else.


If faster compilation speeds matters as much as you said earlier then I'm sure it would be worth investing in machines for both work and home.


I'm not sure I understand how this would help if a user wants to stay mobile, or what this has to do with 'better investments'.

What does a separate machine for work have to do with "compilation speeds" in the first place?


Which Mac desktop has similar performance?


Like what?


Guac at Chipotle


I mean, the Audi R8 has an MSRP > $140k and I've never been able to figure out how that is justifiable. So I guess dropping $6k on a laptop could be "justified" by not spending an extra $100k on a traveling machine?

To be clear, I'm not getting one of these, but there's clearly people that will drop extra thousands into a "performance machine" just because they like performance machines and they can do it. It doesn't really need to be justified.

Truthfully, I'm struggling to imagine the scenario where a "performance laptop" is justifiable to produce, in the sense you mean it. Surely, in most cases, a clunky desktop is sufficient and reasonably shipped when traveling, and can provide the required performance in 99% of actual high-performance-needed scenarios.

If I had money to burn, though, I'd definitely be buying a luxury performance laptop before I'd be buying an update to my jalopy. I use my car as little as I possibly can. I use my computer almost all the time.


Dude this is Hacker News. I'm surprised when I meet an engineer who doesn't have a maxed out laptop.


And yet, when I commented on Apple submissions about a 16GB RAM maximum not being enough in 2021, especially at that price point, people answered that I was bragging and that their M1 Air with 8GB of RAM was more than enough to do everything, including running a production Kubernetes cluster serving thousands of customers.

When commenting on Mac hardware it is always difficult for me to separate wishful thinking, cultism and actual facts.


I assume "vmception" requires a lot of power...


If you don't max out the storage space, but max out all the options which affect performance, it's only about half that.


IIRC bigger SSDs in previous generations had higher performance.


That's fundamental to how NAND flash memory works. For high-end PCIe Gen4 SSD product lines, the 1TB models are usually not quite as fast as the 2TB models, and 512GB models can barely use the extra bandwidth over PCIe Gen3. But 2TB is usually enough to saturate the SSD controller or host interface when using PCIe Gen4 and TLC NAND.
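A simplified model of why that happens (all numbers here are hypothetical, just to illustrate die-level parallelism hitting a controller/interface ceiling):

    # Hypothetical figures purely for illustration, not any specific SSD.
    die_capacity_gb = 128          # capacity of one TLC NAND die
    die_throughput_mb_s = 800      # sustained throughput of one die
    controller_limit_mb_s = 7000   # roughly what a PCIe Gen4 x4 link can carry

    for drive_tb in (0.5, 1, 2, 4):
        dies = int(drive_tb * 1024 // die_capacity_gb)
        raw = dies * die_throughput_mb_s
        usable = min(raw, controller_limit_mb_s)
        print(f"{drive_tb} TB: {dies} dies, ~{usable} MB/s usable (raw {raw})")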


All depends on your priorities and such.

My personal desktop was about $4k for what's inside the case. Add in my $2k monitor, and I'm right up there.

Some people call it excessive, I do too. But man, my desktop is blazing fast and my gaming experience is top notch.

The $1000 5950x was the easiest decision. Cut my compile times by 80%.

If I was serious about a portable development and such machine that many people with MacBooks are, I could see dropping $6k.

I'm not, hence I have a $2k M1 MBA and remote into my gaming desktop for anything where speed matters.


Not OP, but I ordered a maxed-out 16" with a 1TB SSD (can't justify $2k more for disk space; I'll just buy an external and curb my torrenting).

My workflow is intensive yet critical:

I have at all times the following open:

ELECTRON APPS: Slack, Telegram, Teams, Discord, Git Kraken, VSCode (multiple workspaces hosting different repos all running webpack webservers with hot module reloading), Trading View.

NATIVE APPS: Firefox (10 - 32 tabs, many with live web socket connections such as stock trading sites, various web email providers, and at least one background YouTube video or twitch stream), Chrome (~6 tabs with alternate accounts using similar web socketed connections), iTerm, Torrent client (with multiple active transfers).

All of this is being displayed on two external 4k screens + the laptop.

So yeah, I can justify maxed-out specs, as my demands are far higher than those of an average user, and that's with me actively closing things I don't need. Also, my work will happily pay for it, so why not?


Not your father's currency. If you think of them as pesos, the price is easier to comprehend.

Not to mention, if it makes a $200k-salary worker 5% more productive, it's a win. (Give or take for taxes.)
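The arithmetic, for what it's worth, using the figures in this comment and the ~$6k maxed-out price mentioned upthread:

    salary = 200_000
    productivity_gain = 0.05
    laptop_price = 6_099                 # maxed-out configuration mentioned upthread

    value_per_year = salary * productivity_gain   # $10,000 of extra output per year
    print(value_per_year > laptop_price)          # True, before taxes and depreciation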


It's a win for a worker who's compensated based on their work output, which is pretty much the opposite of what a salaried worker is.


Productivity is productivity, doesn't matter how one is paid.


…then why mention a salary at all?


Perspective. It was a noise word really. Imagine instead a contractor working $100 an hour and pulling enough hours to make $200k a year. Does that change the discussion any? I don't believe so.


Based on the numbers it looks like the M1 Max is in the RTX 3070-3080 performance territory. Sounds like mobile AAA gaming has potential to reach new heights :D


> Based on the numbers it looks like the M1 Max is in the RTX 3070-3080 performance territory.

The slides are comparing to a laptop with a 3080 Mobile, which is not the same as a normal RTX 3080. A desktop 3080 is a power hungry beast and will not work in a laptop form factor.

The 3080 Mobile is still very fast as a benchmark, but the full-size 3080 Desktop is in another league of performance: https://www.notebookcheck.net/The-mobile-GeForce-RTX-3080-is...

Still very impressive GPU from Apple!


It's not a function of capability. I spent $4,000 on a 2019 MBP, including $750 for a Vega 20. It plays Elder Scrolls Online WORSE than a friend's 2020 with INTEGRATED graphics. (I guess Bethesda gave some love to the integrated chipset, and didn't optimize for the Vega. It hitches badly every couple of seconds like it's texture thrashing.)

Whatever AAA games might have gotten some love on the Mac (and there are some), it's going to be even harder to get game companies to commit to proper support for the M1 models. Bethesda has said they won't even compile ESO for M1. So I will continue to run it on a 12-year-old computer running an ATHLON 64 and an Nvidia 9xx-series video card. (This works surprisingly well, making the fact that my Mac effectively can't play it all the more galling.)

I'm never going to try tricking out a Mac for gaming again. I should have learned my lesson with eGPU's, but no. I thought, if I want a proper GPU, it's going to be built in. Well, that doesn't work either. I've wasted a lot of money in this arena.


Well, Apple is selling M1 Macs like hotcakes, so it won't be too long until it'll be stupid not to support them. Also, texture thrashing isn't really an issue when you've got a shared CPU/GPU memory with 64 GB of space. Just cache like half the game in it lol


There is a 0% chance that game devs support Mac; it's a dead platform for gaming.

The downvote police are here; am I missing something? Are there any modern games on Mac?

https://applesilicongames.com/


EVE Online now supports M1. But regardless, now that MacBooks are capable gaming machines (definitely not the case in the past), and a core demographic of Mac users overlap with the gaming demographics (20-40 yo), I really think it’s just a matter of time now.


I would argue that Macs with PC-shared Intel CPU's and AMD GPU's should have been much EASIER to support than the new, completely-different architecture, and that hasn't really happened.


Sure, it's candy for tech people, but the average person is going to scoff at a $2000 laptop. They can buy a functional laptop and a better gaming PC for that price. It's not going to change the gaming market.


Apple had a shot at making Mac gaming a reality around 2019. They decided to axe 32-bit library support though, which instantly disqualifies the lion's share of PC games. You could still theoretically update these titles to run on MacOS, but I have little hope that any of these developers would care enough to completely rewrite their codebase to be compatible with 15% of the desktop market share.


Yeah, and they also deprecated OpenGL, which would have wiped out most of those games even if the 32-bit support didn't. I'm not expecting to see much backwards compatibility, I'm expecting forwards compatibility, and we're starting to see new titles come out with native Apple Silicon support, slowly, but surely.


I wouldn't hold your breath. Metal is still a second-class citizen in the GPU world, and there's not much Apple can do to change that. Having powerful hardware alone isn't enough to justify porting software, otherwise I'd be playing Star Citizen on a decommissioned POWER9 mainframe right now.


The major factor Apple has working in their favor with regards to the future of gaming on Macs is iOS. Any game or game engine that wants to support iPhone or iPad devices is going to be most of the way to supporting ARM Macs for "free".

My older Intel Macs I'm sure are more or less SOL but they were never intended to be gaming machines.


That won't get the top 10 Steam games running on macOS. There's just too great a disparity in the tooling; a 'convergence' like you're describing would take the better part of a decade, conservatively speaking. And even if they did converge, that's only guaranteeing you a portion of the mobile market, and just the new games at that. Triple-A titles will still be targeting x86 first for at least the next 5 years, and everything after that is still a toss-up. There's just too much uncertainty in the Mac ecosystem for most game developers to care, which is why it's a shame that Proton doesn't run on Macs anymore. Apple's greatest shot at a gaming Mac was when macOS had 32-bit support.

Your older Intel Macs are probably just fine for gaming, too. I play lots of games on my 2016 Thinkpad's integrated graphics, Minecraft, Noita, Bloons Tower Defense 6, all of these titles work perfectly fine, even running in translation with Proton. If you've got a machine with decent Linux support, it's worth a try.


> That won't get the top 10 Steam games running on MacOS.

Top 10 Steam games according to https://store.steampowered.com/stats/

* New World - Amazon Lumberyard

* Counter-Strike: Global Offensive - Source

* Dota 2 - Source 2

* Team Fortress 2 - Source

* Apex Legends - Source

* PUBG: BATTLEGROUNDS - Unreal

* Destiny 2 - Custom

* Rust - Unity

* Dead by Daylight - Unreal

* MIR4 - Unreal

Unreal, Unity, Lumberyard, and Source 2 all support iOS and thus Metal on ARM already. A game developer using one of those engines should generally be able to just click a few buttons to target an additional platform unless they've gone around the engine's framework in ways that tie their title to their existing platform(s). Obviously in all but the most trivial cases there will still be work to be done, but those game developers using a major commercial engine are doing so because someone else has already done most of the hardest work in platform support for them.

That means six of the top 10 could add native MacOS support with relative ease (as in significantly less work than doing it from scratch) if they wanted to. The three Source titles are likely stuck on DX/OGL platforms forever because it doesn't really make sense to rework such an old engine, but at least the two Valve in-house titles have had persistent rumors of a Source 2 update for years.


I mean…the “tooling” these days is usually just Unity3D. And Unity supports Apple silicon as a compile target. Tell me if I’m wrong, but it seems like the ability to support multiple platforms and architectures in gaming has never been easier.


AAA games are developed using in house engines, not Unity.


iPhone games are an entire different beast however, and likely not what people “want”.

At least we still have Minecraft.


> iPhone games are an entire different beast however, and likely not what people “want”.

Most of those mobile games you're thinking of are made with Unity, Unreal, or one of a few other general purpose game engines. Those same engines are used for a significant chunk of PC games as well. The AAA developers who have in-house engines like to reuse them as well. It doesn't matter if a given game does or does not support mobile if it uses an engine that does.


Metal is 4th class:

1. Vulkan
2. DirectX
3. OpenGL
4. Metal

IIRC, there are some efforts to translate Vulkan to Metal similar to how the WINE project translates DirectX into OpenGL/Vulkan, but that's still an imperfect workaround.


Yeah, I was living with gaming on a Mac when that happened, and watched 2/3rds of my Steam library get greyed out.


That's why the 2015 Macbook Pro I'm writing this on is still running Mojave.


After committing to Metal and then killing 32-bit support and then switching to ARM, Apple has made it clear that video games are dead on MacOS. I don't know what people are going to do with these new GPUs but it's not going to be gaming.


Dedicated GPUs have always been just a selling point. The real limit is the thermal throughput.

There has never been a reason to put a separate GPU into a laptop, because a laptop can only handle so much juice before frying itself.


I love that game. Wonder why it gets so much hate online.


And in the case of the M1 Pro, Apple is showing it to be faster than the Lenovo 82JW0012US, which has an RTX 3050 Ti. So the performance could be between an RTX 3050 Ti and an RTX 3060. All of this with insanely low power draw.


But still not fanless, right? Maybe they'll update the Macbook Air with some better graphics as well, so that one could do some decent gaming without a fan.


The Air is already at its thermal limit with the 8-core GPU; we'll probably have to wait until the next real iteration of the chip technology (M2 or whatever), which increases efficiency instead of just being larger.


Mobile 3070-80, so around a desktop 1080 for those who are interested. I'm curious what their benchmark criteria were, though.


The elephant in the room is that an A15/M1 with a beefed-up GPU is exactly the right chip for a Nintendo Switch/Steam Deck form-factor device.


Or an AR/VR headset.


If Proton is made to work on Mac with Metal, there's some real future here for proper gaming on Mac. Either that or Parallels successfully running x64 games via Windows on ARM virtualization.


I’ve been looking into trying CrossOver Mac on my M1 Max for exactly this reason. After seeing the kinds of GPUs Apple is comparing themselves to, I’m very hopeful.


At this point I'd rather use parallels to run steam on Linux.


The 3080M has something around 20TFLOPS of theoretical f32 performance. That's double that of Apple's biggest offerings.

In theoretical compute, it's closer to the 3060M which is nothing to sneeze at.


Who actually supports MacOS though? Didn't valve just give up?


Apple won't support Vulkan, and MoltenVK isn't good enough for DXVK. macOS also doesn't support eventfd system calls like Linux does.

Pretty much the only options are CrossOver/Wineskin engines, or Parallels.


But do you need to play those games through rosetta? Does that make a difference?


Not to disagree, but what gave you that impression?


The slides with performance numbers list the comparison machines


Based on the relative differences between the M1 and the M1 Pro/Max, and also the comparisons shown by Apple to other laptops from MSI and Razer (Blade), both featuring the RTX 3080.


I'm so ridiculously happy with my first generation M1 I have zero desire to upgrade.

Kind of wild to consider given how long it has taken to get here with the graveyard of Apple laptops in my closet.


I was thinking the exact same thing! The fanless M1 Air is a dev monster in a mouse package. Couldn’t be happier with that combo.


If the M1 supported 3 displays I would’ve bought an air last year.

Feels like I’ll have to pay a lot for that 3rd monitor!


This. I currently run triple 24-inch monitors in portrait off my work 2019 16" laptop rather smoothly. Unfortunately the M1 couldn't do that. The new 14" and 16" with the Max can do triple 4K.

Up to two external displays with up to 6K resolution at 60Hz at over a billion colors (M1 Pro) or Up to three external displays with up to 6K resolution and one external display with up to 4K resolution at 60Hz at over a billion colors (M1 Max)


When they make those statements I'm always curious if they are expecting the user to use two external with the lid closed, or open (which would be 3 displays).

I've always found MacBooks don't play well when the lid is closed, but maybe that has changed?


You could always get a big 4k display, install a window docking utility, and pretend that it's four 1080p monitors with no bezel.


MORE PANES!


For sure. Mine has been the perfect dev machine. My Docker build times are the envy of the office.


See, this is why you have kids. Now my kid gets my M1 Air, and I get a M1 Max!


Your child is the justification for your relentless consumerism?


My child is the relentless consumerism! Have to keep the ponzi scheme fed!


Same. I am impressed with M1 Pro and M1 Max performance numbers. I ordered the new MBP to replace my 2020 M1 MBP, but I bought it with the base M1 Pro and I'm personally way more excited about 32gb, 1000 nits brightness, function row keys, etc.




Any indication on the gaming performance of these vs. a typical nvidia or AMD card? My husband is thinking of purchasing a mac but I've cautioned him that he won't be able to use his e-gpu like usual until someone hacks support to work for it again, and even then he'd be stuck with pretty old gen AMD cards at best.


The fine print:

>Testing conducted by Apple in September 2021 using preproduction 16-inch MacBook Pro systems with Apple M1 Max, 10-core CPU, 32-core GPU, 64GB of RAM, and 8TB SSD, as well as production Intel Core i9-based PC systems with NVIDIA Quadro RTX 6000 graphics with 24GB GDDR6, production Intel Core i9-based PC systems with NVIDIA GeForce RTX 3080 graphics with 16GB GDDR6, and the latest version of Windows 10 available at the time of testing.

https://www.businesswire.com/news/home/20211018005775/en/Gam...


In terms of hardware, the M1 Max is great. On paper you won't find anything matching its performance under load, as even gaming/content-creation laptops throttle after a short while.

The problem is that gaming isn't exactly a Mac thing, from game selection to general support on the platform. So really, performance should be the least of your concerns if you are buying a Mac for games.


I'm not aware of a single even semi-major game (including popular indie titles) that runs natively on M1 yet. Everything is running under Rosetta, and game companies so far seem completely uninterested in native support.


Disco Elysium, EVE Online, Minecraft*, Timberborn, Total War: Rome Remastered, World of Warcraft.

Minecraft is kinda cheating, because Java, and even considering that it takes a bit of hacking. Alternatively you can sideload the iOS version.


GraalVM makes Minecraft kinda native, heh.


World of Warcraft & Eve Online are both native.


Neither of which is exactly a recent AAA game; WoW is from 2004, right? The other ones I can think of are the old GTA titles, which run "natively" since they have an iOS port.

While at the same time you have (running on Rosetta):

* Deus Ex: Mankind Divided

* Dying Light

* Rise of the Tomb Raider/Shadow of the Tomb Raider

* Dirt Rally

* Metro: 2033, Last Light, Exodus

* Mafia III

* Middle-earth: Shadow of Mordor

* Mad Max

* Sleeping Dogs

* Batman Arkham City

* Bioshock 1 & 2

* Borderlands 2


If you're looking for some to test try EVE, WOW, or Baldur's Gate 3. That's about it as far as major games AFAIK.


He typically just dual-boots windows anyway so selection isn't much of an issue, though I also don't know if that is working yet on the M1 platform


>though I also don't know if that is working yet on the M1 platform

Dual booting isn't working and likely won't be any time soon, as Microsoft does not intend to support Apple M1 [1]. And I doubt Apple has any intention of porting their Metal GPU drivers to Windows (compared to using AMD drivers on Windows in the old days).

He will likely need to use some sort of VM solution like Parallels. [2]

[1] https://appleinsider.com/articles/21/09/13/microsoft-says-wi...

[2] https://www.engadget.com/parallels-desktop-17-m-1-mac-perfor...


Wow, I had no idea M1 doesn't support eGPUs. I was planning on buying an external enclosure for Apple Silicon when I upgraded; thanks for pointing that out.


Not only that, but you're stuck with cards from 5 years ago with current MacBooks. It's a shame too, because the plug-and-play support is better than the very best eGPU plug-and-play on Linux or Windows.

I don't see Apple personally adding support any time soon, either. Clearly their play now is to make all hardware in house. The last thing they want is people connecting 3090s so they can have an M1 Max gaming rig. They only serve creators, and this has always been true. Damn waste if you ask me.


You could use the new AMD cards at least though right? I don't think Nvidia support will ever happen again though (I got burned by that, bought a 1060 right before they dropped Nvidia).

I'm on a RX 5700XT with my Hackintosh, and it works well.

Edit: Thinking about this more.. I bet third party GPUs are a dead end for users and Apple is planning to phase them out.


I think I'd wait to see what the Mac Pro supports before coming to that conclusion. Could be something they're still working on for that product, and then when it's working on the Apple Silicon build of macOS will be made available on laptops as well.


I guarantee you this is them finally ripping off the Band-Aid of having to support _any_ outside hardware.


Oh that's cool, last I checked the best AMD card you can use is a Vega 64.


Radeon 6900XT works with eGPU. But yes, intel macs only. And of course, you're not getting all 16 lanes!

https://support.apple.com/en-us/HT208544


I wouldn't get one of those for games, better get a Windows PC and a M1 MacBook Air, cost should be about the same for both. Game support just won't be there if you care about gaming.


The video moved a bit too fast for me to catch the exact laptops they were comparing. They did state that the M1 Pro is 2.5x the Radeon Pro 5600M, and the M1 Max is 4x the same GPU.

The performance to power charts were comparing against roughly RTX 3070 level laptop cards.


Did they really just claim 30-series tier performance on the max GPU? If that's true that would be insane!


The comparison was only against laptops, but it's still impressive if they did that with 70% (?) less power.


Well, no - they immediately followed the discrete laptop graphic comparison with a desktop graphic comparison, highlighting how much more power they draw for "the same" performance.


> Well, no - they immediately followed the discrete laptop graphic comparison with a desktop graphic comparison

pretty sure the comparison was with "the most powerful PC laptop we could find", which makes sense because they then talked about how much it was throttled when running only on battery while the new M1 is not.


That wasn't for desktop graphics, it was for a top-end laptop graphics SKU (I think RTX 3080 Mobile on a ~135W-150W TDP configuration?). Otherwise the graph would extend all the way to 360W for a RTX 3090.


And I think, based on these numbers, that a desktop 3090 would have well over double the performance of the M1 Max. It's apples to oranges, but let's not go crazy just yet.

Now, I am extremely excited to see what they will come up with for the Mac Pro with a desktop thermal budget. That might just blow everything else by any manufacturer completely out of the water.


Yup - I misheard while watching.

The chart only shows a line up to about 105W, so it's not clear what they're trying to represent there. (Not that there's any question this seems to be way more efficient!)


GPU workloads are very parallel. By throwing more transistors at the problem while lowering clock rates you can get pretty good performance even in a constrained power budget.
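A rough sketch of why that works, assuming the usual approximation that dynamic power scales with frequency times voltage squared, and that voltage scales roughly with frequency (so power goes roughly as frequency cubed):

    # Toy model: perf ~ cores * freq, power ~ cores * freq**3 (voltage ~ freq).
    def perf(cores, freq):
        return cores * freq

    def power(cores, freq):
        return cores * freq ** 3

    base = (8, 1.0)      # baseline: 8 cores at full clock
    wide = (16, 0.8)     # twice the cores at 80% clock

    print(perf(*wide) / perf(*base))     # 1.6x the throughput
    print(power(*wide) / power(*base))   # ~1.02x the power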


> claim 30-series tier performance on the max GPU? If that's true that would be insane!

TDP, TDP, TDP!

With big enough heatsink, the performance can be proportionally high (perf = sqrt(TDP))


...but they are claiming 50-70% lower power usage, and therefore 50-70% lower TDP as well.


Equivalent to "the notebook with the fastest GPU we could find at half the power" is how I remember it...

I'm just not entirely certain what GPU performance does for me...? I don't work with video, there aren't any games, and I'm not playing them, anyway. Does iTerm2 scrolling get better?

I used to be quite happy with the GeForce 7xx(?) and tensorflow, and this seems like it would have quite a bit of power for ML. Unfortunately, the software just isn't there (yet?).



I hate the naming.

By name alone and without looking at specs, can you tell me which is the faster chip-- the M1 Max or the M1 Pro?


The M1 Pro is faster. The M1 MAX is an M1, but with new control software which makes it keep crashing.


M1 Pro is on the cheapest model, and M1 Max is on the most expensive model. So I think you are just flat out wrong here.


You missed the joke they were making about the Boeing 737 MAX.


wrong


I assume it was a reference to the 737 Max..


Not to mention in casual conversation when talking about the M1 Max and people think you’re talking about the more general “M1 Macs”.


In isolation, maybe. But it follows the naming convention of their iPhones models:

base model < pro model < pro max model


but it's not M1 Pro and M1 Pro Max (which would be arguably better than what they did)

It's M1 Pro and M1 Max.


It would be odd for me if "Max" were not the "maximum".


That's still a bad naming convention. It won't be the maximum forever.


But it will be the max M1 forever. When M2 comes around, that's a different class.


It appears that M1 Max itself comes in 24-core and 32-core GPU variants. So I guess some M1 Max chips are more maximum than other M1 Max chips.


Well I'm not necessarily a fan of the naming but assuming Max stands for maximum, it's pretty clearly the best one. The one you get if you want to max it out. But they should've called it Pro Max for consistency with the iPhones...


It makes me sad that no one will ever be able to build anything with these chips.

I imagine there could be many, many innovative products built with these chips if Apple sold them and supported Linux (or even Windows).


...or just released the full documentation for them. Apple being Apple and wanting full control over its users, I don't see that happening. I really don't care how fast or efficient these are, if they're not publicly documented all I think is "oh no, more proprietary crap". Apple might even make more $$$ if it wanted to open up, but it doesn't.


What do you mean? All the APIs (Xcode SDK, Metal, ML, etc) required to build on their devices are very well documented.


I mean the documentation of the SoC itself. The thousands (or perhaps tens of thousands) of pages of information about every register and peripheral on it.


I'm not talking about apps on their devices, I'm talking about new types of devices based on their processors.

If these were available for others to buy, I think we would be very surprised by the innovative new devices people would invent.


They wouldn't have existed in this form then. The open chip market has very different needs, and SoCs are designed accordingly.


Imagine if Apple were forced (via antitrust laws) to spin off their CPU division...


Now, if only Unreal Engine builds were available M1 native, I could get rid of my huge and heavy desktop entirely!

Interestingly, some improvements to Rosetta were mentioned, extremely briefly.


Same wish here. The last time I tried, a few months ago, I was unable to compile UE4 either. These machines would be great for UE4 dev if only the compatibility were there. I wonder if the politics between Epic and Apple have made efforts in this area a lower priority.


Tim Sweeney seems to have confirmed that efforts are under way still. Fingers crossed!

https://twitter.com/TimSweeneyEpic/status/145016360777861120...


Apple is very much against games; they've even broken OpenGL just because. Don't expect any kind of gaming ecosystem around Apple anytime soon.


> Apple very much against games

They are not against games, they just don't care about supporting anything that's not coming through their frameworks and the App Store. This can easily be verified by the way-too-long segments of game developer demos at the annual WWDC.


That's not how the industry works; that's not how any of this works. The iPhone ecosystem is big enough to move itself forward, but the desktop market plays by different rules. If you don't follow what the majority of the market does, it's much cheaper to just ignore that tiny customer segment which requires a totally alien set of technologies.


Which is precisely what I said. They don't care that the larger gaming market ignores their platform. Apple Arcade and other gaming related endeavours all aim at the casual mobile gamer market.


>If you don't follow what the majority of the market does, it's much cheaper to just ignore that tiny customer segment which requires a totally alien set of technologies

iOS and Macs both use Metal.

You can't refuse to support Metal without missing out on a very big slice of total gaming revenue.


On mobile devices - definitely

On desktop - missing what, 2% or less? Checked Steam stats - yep, about 2%


That Steam stat is probably a chicken-and-egg situation. I know I don’t run Steam on my MacBook because there’s nothing I want to play — but I would if there were.

Still, the Mac market share is not that high (~15%?) but it might start looking attractive to developers looking to “get in first” when hardware that can actually run games becomes available (cough).


I mean, it’s similar to Linux, right? Linux has about 2% on Steam, and that’s with compatibility layers like wine allowing many windows games to run cross platform. This [0] puts Mac at 9.5% of operating systems overall and Linux at 2.4%.

But games with native Linux support are not very common compared to Windows, even though it’s mainly a matter of supporting Vulkan, which many modern games already do. My point is that even though Linux should be relatively easy to support natively (compared to mac not supporting cross-platform graphics APIs out of the box), devs aren’t putting the effort in.

I really hope this changes, and hopefully mac “gaming-level” hardware could help push cross-platform work along.

- 0: https://netmarketshare.com/operating-system-market-share.asp...


Metal is Metal. Once you support Metal for iOS games they also work under MacOS when running on Apple's chips.

You can support traditional MacOS application chrome with little additional effort.


Desktop games and mobile games are not the same. On mobile, pretty much every heavy game uses either UE or Unity. High-end PC games use custom engines that are heavily tuned for x86 and use different APIs. A Metal port would be expensive and not worth it.


>High end PC games use custom engines

High end PC games tend to license somebody else's engine.

The most popular of those gaming engines already support Metal.


Most games use licensed engines, but most AAA games use their own. Only 3 out of the top 10 games on Steam use an engine that supports Metal, and more than half of the top 100 use engines that don't. This month we have a few prominent PC releases: Guardians of the Galaxy, Far Cry 6, Age of Empires 4, FIFA 22, The Dark Pictures Anthology: House of Ashes, and Back 4 Blood. Of these 6 titles only the last 2 use UE4; the others use their own custom engines. And I could go on and on.


So against games that they just became a Patron supporter of Blender [1]...

[1] https://www.blender.org/press/apple-joins-blender-developmen...


Blender has a strong use case in the animation and movie ecosystems. RenderMan and Pixar have strong connections to Jobs and in turn to Blender; games may not really be on their radar for the Blender sponsorship.

Besides, supporting creator workflows (Final Cut Pro, best-in-class laptop graphics, Blender, etc.) doesn't mean they want to directly support gamers as buyers, just that they believe creators who produce games (or other media) are a strong market for them to go after.

The marketing is aimed squarely at the post-pandemic WFH designer market. They either had to ship their expensive Mac desktops home or come in and work at the office last year. This laptop's graphics performance pitch is for that market to buy/upgrade now.


The M1 Pro & Max SoCs are approaching dedicated laptop GPU levels, so Minis would be reasonable low/mid-end game machines, comparable to the best PC gaming laptops.

The M* Mac Pros will likely start with integrated graphics at the M1 Max level and go 4 or perhaps 8x higher, still without a discrete card.

The fact that Apple now supports $5K dedicated graphics cards on the pro series suggests that the pro M* Macs will be able to use any graphics card (that supports Metal) that you can buy.

The fact that Apple's store was crashing continually after the new laptops were announced suggests that the market for Mac games is going to grow a lot faster than anyone thinks...


It is a good machine, yes, although I wouldn't put too much stock in store crashes as an indicator of popularity; that could just be poorly managed web services at Apple.

Delivery dates are even now within 1 week after shipping starts (Nov 3-10), so demand seems within their initial estimates.

Hardware is not the only reason gaming is not strong on Mac. Microsoft had a decent hardware and software offering for their Nokia phones in the later years; that didn't help them.

It will take an ecosystem of developers investing years of effort in building a deep catalog. Game publishers are not going to risk spending that kind of effort unless there is already enough of a market for large titles to recover the money. While this can grow organically, to compete with MS, who have Xbox as well as dominance in PC gaming, Apple will need to be active in the attempt.

That means a lot of effort in dev community engagement, incentivizing publishers to release on their platform, getting exclusive deals, etc. After all this, it still may fail. Apple has not shown any interest in even trying so far.


I just wanted to point out that delivery times have now bumped to Nov 26 -> Dec 6 for the same configuration I ordered on Monday afternoon, which was Nov 10 -> Nov 17 at the time. In less than 2 days, the shipment has been bumped by 2 weeks or more.


That's interesting; it could be smaller initial supplies, perhaps from the chip shortage or supply chain issues - the average wait time for ships is quite bad.

I would expect demand to taper off during the pre-order phase. People who would pre-order are likely to do it sooner rather than later, to avoid having to wait a month like now.

The rest are going to wait for reviews, check it out at a store, and buy one over the next months when they need it or can afford it.


Or, you know, they could be insanely popular. That’s an option too…


They literally had a Unity developer in the showcase during the keynote.


Where in the showcase?



This is roughly in line with what I expected, given the characteristics of the M1. It's still very power efficient and cool, has more CPU cores, a lot more GPU cores, wider memory controller, and presumably it has unchanged single core performance.

Apple clearly doesn't mean these to be a high performance desktop offering though, because they didn't even offer a Mac Mini SKU with the new M1s.

But what I'm really curious about is how Apple is going to push this architecture for their pro desktop machines. Is there a version of the M1 which can take advantage of a permanent power supply and decent air flow?


I don't think they are going to make desktop versions; they'll probably put the Pro and Max versions in a new iMac body during Q2 2022 and might add config options to the Mac mini. It might be for supply chain reasons, focusing 100% on MacBook Pro production to meet the absolutely massive incoming demand.


Indeed, ETA on a 16" was already late December for an order placed when the presentation ended, I'm sure it's into 2022 now.


I think it likely that Apple will just ramp up what they've been doing so far - make an "M1 Ultra" that has 32 or 64 or even 128 cores of CPU, at least that many GPU and scale the memory and I/O in the same way. Put the one with fewer cores in the iMac Pro and the one with the most cores in the Mac Pro.

Every couple of years when they upgrade again every product they make will go up to M2, then M3, etc. etc.


If Apple ever gets around to putting an M1 Max in a Mac Mini, that'd probably push me over the edge toward buying one.


Yeah, especially if it could run Linux. This would be a powerful little server.

I decked out my workstation with a 16 core Ryzen & 96GB RAM and it didn't cost anywhere near the price of this new 64GB M1 Max combo. (But it may very well be less powerful, which is astonishing. It would be great to at least have the choice.)


Ubuntu already has an ARM build. Is there a reason you wouldn't be able to run linux?


Last I heard, there were graphics driver issues left to iron out.

Linux 5.13 was the first with M1 support. Dunno if that's enough for M1 Pro/Max.


> Last I heard, there were graphics driver issues left to iron out.

Who cares on a server?


I assume because so far the only devices released with M1 processors have been consumer devices with attached screens, the focus has been on bringing over the full graphical, desktop Linux experience to it, not just headless server linux.


I think you’re forgetting the Mac mini


You're right, I forgot they updated the mini with the M1 last year too.


Many will, as that kind of processing power - and even more so that memory interface speed - will seriously blur a lot of lines.

More so, it has really brought big leaps in processing back to life, and we haven't seen much of that for well over a decade now.


Always wondered who the Mac minis were marketed towards. Sure they're cool but wouldn't you want something portable should you want to go to a coworking location or extend a vacation with a remote work portion of the trip?

Surely in the world of covid remote work is common among developers... Would that mean you'd need a second device like an Air to bring on trips?


In a world of COVID remote work I got tired of my i9 always running loud and hot, and replaced it with a Mac mini, which runs at room temperature and is just as snappy or snappier; I was almost always running the laptop in clamshell mode hooked up to the two displays anyway. And unplugging it and plugging it back in to the TB hub caused weird states that required reboots maybe 30% of the time.

So now I've got a Mac mini hooked up permanently to those monitors and it just works.

Now I am very tempted to trade in both my Mac mini and i9 for a 16" M1 pro so I can once again return to one machine that isn't always out of sync.

But I'm going to wait to see how well it runs in clamshell mode hooked up to 2 4k displays.


For me, it'd be something I'd stick next to my router and access remotely (over a VPN, if I'm outside of my LAN).


Can't you buy a MBP 16" and connect it to whatever display you were going to connect your Mac Mini to?


But there’s a huge difference in the price if you don’t need all the other stuff that comes with the MacBook Pro.


Agreed, and the mini has different I/O that you might prefer (USB-A, 10 gig eth). Also, it’s smaller (surprise, “mini”). Plus, clamshell mode on a MacBook just isn’t the same as a desktop for “always-on” use cases.


Anyone can comment on what Intel and AMD are going to do now?

Will they be able to catch up or will Qualcomm become the alternative for ARM laptop chips? (and maybe desktop chips too)


This exact question was asked a year ago when the M1 was announced.

In the year since, Apple's laptop market share increased about 2 points, from 14% to 16% [0].

The reasons for this are:

1. When deciding on a computer, you often have to decide based on use case, software/games used, and what operating system will work best for those use cases. For Windows users, it doesn't matter if you can get similar performance from a Macbook Pro, because you're already shopping Windows PCs.

2. Performance for most use cases has been enough for practically a decade (depending on the use case.) For some things, no amount of performance is "enough" but your workload may still be very OS-dependent. So you probably start with OS X or Windows in mind before you begin.

3. The efficiency that M1/Pro/Max are especially good at are not the only consideration for purchase decisions for hardware. And they are only available in a Macbook / Macbook Pro / Mini. If you want anything else - desktop, dedicated gaming laptop, or any other configuration that isn't covered here, you're still looking at a PC instead of a Mac. If you want to run Linux, you're probably still better off with a PC. If you want OS X, then there is only M1, and Intel/AMD are wholly irrelevant.

4. Many buyers simply do not want to be a part of Apple's closed system.

So for Intel/AMD to suddenly be "behind" still means that years will have to go by while consumers (and especially corporate buyers) shift their purchase decisions and Apple market share grows beyond the 16% they're at now. But performance is not the only thing to consider, and Intel/AMD are not sitting still either. They release improved silicon over time. If you'd asked me a year ago, I'd say "do not buy anything Intel" but their 2021 releases are perfectly fine, even if not class-leading. AMD's product line has improved drastically over the past 4 years, and are easy to recommend for many use cases. Their Zen 4 announcement may also be on the 5nm TSMC node, and could be within the ballpark of M1 Pro/Max for performance/efficiency, but available to the larger PC marketplace.

[0] https://www.statista.com/statistics/576473/united-states-qua...


All good points but:

1) In the pro market (audio, video, 3d, etc) performance is very relevant.

2) Battery time is important to all types of laptop users.

3) Apple is certainly working on more desktop alternatives.

4) You don't need move all your devices into the closed ecosystem just because you use a Mac. Also, some people just don't want to use macOS on principle, but I'm guessing this is a minority.

> AMD's product line has improved drastically over the past 4 years

My desktop Windows PC has a 3700X which was very impressive at the time, but it is roughly similar in perf to the "low end" M1 aimed at casual users.

> Their Zen 4 announcement may also be on the 5nm TSMC node, and could be within the ballpark of M1 Pro/Max for performance/efficiency, but available to the larger PC marketplace.

That would be great.


In the pro market especially, you have people who are stuck using enterprise software that is only developed for PC, like a few Autodesk programs. If you are into gaming, many first-party developers don't even bother making a Mac port. The new Call of Duty and Battlefield games are on every platform but the Switch and macOS, and that's increasingly par for the course for this industry, since Mac laptops have been junk to game on for so long.


My counterpoint is that for the pro market, portability and power efficiency are not always that important. Yes, for plenty of people. But many pros are sitting at a desk all day and don’t need to move their computer around.

For these users, you're not comparing the M1 Max to laptop CPUs/GPUs, but to the flagship AMD/Intel CPUs. Based on early results from the M1 Max on Geekbench, the 11900K and 5950X are still better. And the best pro GPUs are absolutely still significantly more powerful than the M1 Max.

This makes sense, because you can dedicate a lot of power to a desktop PC, which you just can't do in a laptop. But I think the pro question is still often "what gives me the most performance regardless of power usage," and the answer is still a custom-built computer with the latest high-end parts from Intel/AMD/Nvidia, not Apple.

Obviously Apple is basically offering the best performance hands down in a portable form factor. But Apple also isn’t releasing parts like the 3090 which draws like 400W, so it’s not yet competing for super high end performance.

Point being, I think Intel and AMD aren’t really left in the dust yet.


True, but there are plenty of pros that need mobility (eg: photographers, musicians, designers, etc).

Also there are many pros (most?) that do not have super high performance requirements and would rather use a laptop they can also use for casual use.


Agreed.

I think the big thing to remember is that "performance crown" at any moment in time does not have a massive instantaneous effect on the purchasing habits across the market.

I have no doubt that Apple will continue to grow their market share here. But the people that continue to buy PC will not expect ARM-based chips unless someone (whether Intel, AMD, Qualcomm or someone else) builds those chips and they are competitive with x86. And x86 chips are not suddenly "so bad" (read: obsolete) that no one will consider buying them.


> My desktop Windows PC has a 3700X which was very impressive at the time, but it is roughly similar in perf to the "low end" M1 aimed at casual users.

Still, your desktop can play AAA games at 120Hz with an appropriate GPU attached. No M1 device can do that. So once more, performance doesn't mean anything if you can't do what you want with it.


Exactly, but I use the Ryzen machine mostly for music production.


These are unit market share numbers, so will include large numbers of PCs - both consumer and corporate - at price points where Apple isn't interested in competing because the margins are probably too low.

I suspect by value their share is far higher and their % of profits is even bigger.

The strategy is very clear and it's the same as the iPhone. Dominate the high end and capture all the profits. Of course gaming is an exception to this.

The bad news for Intel is that they make their margins on high end CPUs too.

For Intel and AMD there are two different questions: will Intel fix their process technology, and will AMD get access to TSMC's leading nodes in the volumes needed to make a difference to their market share?


Laptops are only a small component of the business for CPU manufacturers like AMD/Intel. AMD is traditionally weak in laptops and has never had decent market share there. This doesn't impact their business that much (Intel's numbers are not down that much after losing Apple's deal, after all).

AMD and especially Intel have a high-margin server CPU business. Apple's entire value prop is the low-power segment; their chips are not designed to compete in the high-power category, and they will never be sold as bare chips outside Apple's own product offerings. AMD also does custom chip work, like the PlayStation 5, and none of that is threatened by Apple.

Server chips with ECC support, enterprise features, and other typically high-end capabilities have very high profit per unit - a lot more than even Apple can make per chip (maybe a higher % for Apple, but not absolute $/chip). Apple is a minor player in the general CPU business.

There will of course be pressure from OEMs, who stand to lose sales to Apple, to step up their game, but AMD/Intel are not losing sleep over this in terms of revenue/margin yet.


Sure Intel have a server business but that’s smaller than client computing and revenues there are falling.

I don’t know what the precise % is but if Apple have 8% market share by volume their % of Intel’s client business by value is much higher. Losing a growing customer that represents that much of your business is not a trivial loss.

Of course this is all part of a bigger picture where falling behind TSMC enables a range of competitors both on servers and clients. If they don’t fix their process issues - and they may well under PG - then this will only get worse.


The Client Computing Group is larger, yes; however, a few things to keep in mind:

1. They don't split out revenue for the laptop market alone, so it's hard to say what impact laptops (especially Apple) have on their revenue or margins.

2. CCG has also grown much more slowly than the Data Center Group (DCG) business for Intel over the last 4-5 years, and that's expected to continue.

3. The Apple deal was likely their lowest-margin large deal (perhaps even losing money). Apple is not known for being generous to suppliers, and Intel was not in any position of strength to ask for great margins in the years leading up to Apple Silicon; the delayed processors, poor performance, and the threat of Apple Silicon had to have had an impact on pricing in the deal and therefore their margins.

Not saying that Intel doesn't have a lot to fix, but it's also not that they are suddenly in a much worse position than, say, last year.


Sorry, disagree on all of these points. Intel has new competition in all its highest margin businesses. It’s not going bust and may well turn things around but if you look at the PE ratio it tells you the market is pretty pessimistic vs its competitors.


I get (and respect) that you disagree with my argument. However, only point 3 is inference/opinion that we can argue about.

1 and 2 are just facts from their annual report. Maybe there is scope to argue that they are not relevant here or don't show the full picture, etc., or are you saying the facts are wrong?


On 1. and 2. I’m (hopefully respectfully too) disagreeing with the thrust of the point.

1. We don’t know the precise split but we do know laptops are a very major part of their CCG business (based on laptops having majority of PC market share).

2. DCG revenue was down last quarter and the business is facing major competition from both AMD and Arm so I don’t think we can base expectations on performance over the last five years.


>Anyone can comment on what Intel and AMD are going to do now?

In the short term, nothing. But it isn't like Apple will magically make all PC users switch to Mac.

Right now Intel needs to catch up on the foundry side first. AMD needs to keep working its way into partnerships built around its GPU IP, which is its unique advantage. Both efforts are currently well under way and are sensible paths forward. Both CEOs know what they are doing. I rarely praise CEOs, but Pat and Dr. Lisa are good.


Intel is releasing heterogeneous chips in less than a month. Both Intel and AMD have bet heavily on 3D stacking, and it should hold Apple off the desktop. A lot of Apple's advantage is a node advantage, and I expect AMD will start getting the latest nodes much sooner than it does now; 5nm Ryzen should be able to compete with the M2, and Intel's own nodes should get better relative to the latest TSMC nodes in a few years (they are not that much behind). But x86 will surely lose market share in the coming decade.


Is there any reason to believe these will have improved single-core performance? Or are these just more M1 cores in the same package?


FWIW initial Geekbench scores that surfaced today do not show a significant improvement in the single-core performance for M1 Max compared to M1. https://www.macrumors.com/2021/10/18/first-m1-max-geekbench-...


Sigh… back to eating ramen (joking… I’m Italian, I’d never cut on my food)


These things came at a great time for me. My late-2014 MBP just received its last major OS upgrade (Big Sur), so I'm officially in unsupported waters now. I was getting concerned in that era from 2015-2019 with all the bad decisions (butterfly keyboard, no I/O, touchbar, graphics issues, etc.) but this new generation of MacBooks seems to have resolved all my points of concern.

On the other hand, my late-2014 model is still performing... fine? It gets a bit bogged down running something moderately intensive like a JetBrains IDE (which is my editor of choice), or when I recently used it to play a Jack Box Party Pack with friends, but for most things it's pretty serviceable. I got it before starting university, it carried me all the way to getting my bachelor's degree last year, and it's still trucking along just fine. Definitely one of my better purchases I've made.


You could easily upgrade macOS using https://dortania.github.io/OpenCore-Legacy-Patcher/MODELS.ht....

On the hardware side, you could open it up, clean the fan, re-apply thermal paste (after so many years, this will make a big difference) and maybe even upgrade the SSD if you feel like it.

That way, this laptop can easily survive another 1-3 years depending on your use-cases.


Interesting, thanks for the link

I actually ended up pulling the trigger on the base model 14 inch that was announced today, but I'll probably still keep this laptop around as a tinkering device in the future. If not only because it's still fairly capable, but I've got some good nostalgia for it!


Not that it matters but I should correct that it's a late-2013 model I have. I get confused because I bought it in 2014

Also I ended up pulling the trigger and preordering the base-model 14-inch. Let's hope it's as good as they say!


> JetBrains IDE

Runaway feature creep IDE? I use it too.


Do you think it suffers feature creep?

I've used it since 2017 and it hasn't changed much since then. I guess they recently added some online code pairing feature I don't plan on using, but that's all that comes to mind.


There appears to be a sea change in RAM (on a MacBook) and its effect on the price. I remember I bought a MacBook Pro back in 2009, and while the upgrade to 4GB was $200, the upgrade to 8GB was $1000 IIRC! Whereas the upgrade from 32GB to 64GB is only $400 here.

Back then, more memory required higher-density chips, and these were just vastly more expensive. It looks like the M1 Max simply adds more memory controllers, so the 64GB doesn't need rarer, higher-priced, higher-density chips; it just has twice as many of the normal ones.

This is something that very high-end laptops do: have four slots for memory rather than two. It's great that Apple is doing this too. And while they aren't user-replaceable (no 128GB upgrade for me), they are not just more memory on the same channel either: the Max has 400GB/s compared to the Pro's 200GB/s.
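Back-of-the-envelope, assuming LPDDR5-6400 and the widely reported 256-bit (Pro) / 512-bit (Max) interfaces - that's my assumption from third-party coverage, not something Apple stated on stage:

    \text{M1 Pro: } 256\,\text{bit} \times 6400\,\text{MT/s} \div 8 \approx 204.8\,\text{GB/s}
    \text{M1 Max: } 512\,\text{bit} \times 6400\,\text{MT/s} \div 8 \approx 409.6\,\text{GB/s}

which lines up with the 200GB/s and 400GB/s marketing numbers.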


Those power comparisons aren't really fair IMO. They're testing power consumption...

They're using an "MSI Prestige 14 Evo (Intel CPU)" vs an optimized laptop using an M1.

Further, where's AMD? They have a better performance-per-watt ratio.

I'm not sure whether it's as good or not, but that's a lot of cherry-picking.


Can you be more specific in how the M1 is optimized while MSI's isn't? Also why was MSI a bad comparison?

It seems reasonable to me but I don't follow PC much these days.


Intel chips eat much more energy than AMD ones. Compared to AMD, the gap isn't that big (and it's mostly due to the better node).


$400 to upgrade RAM from 16GB to 32GB -- Ouch!!


Apple has always been so stupid about RAM pricing. I miss the days where you could triple your base RAM by spending maybe $75 on sticks from crucial and thirty seconds of effort.


Also $400 to go from 32GB to 64GB if you start with the 16-inch MBP with the M1 Max. So $400 in that case buys 32GB extra instead of just 16GB extra.

Interesting.


Except you already paid more to get the M1 Max, too; the upgrade to 64GB is not available on any other models.


> M1 Pro and M1 Max also feature enhanced media engines with dedicated ProRes accelerators specifically for pro video processing

Do we know if this includes hardware encoding/decoding of AV1? I've found it to be quite lackluster on my M1, and would love to jump formats.


Compared to the M1, they kept the number of memory channels the same but increased the width of each channel. What are the performance implications of this from a pure CPU workload standpoint?


Let's see how fast we'll see support for those in Asahi Linux


I feel happy for the Apple community with these processors.

But I can't stop thinking about my Intel machine. It feels like I am left in the dust, and nothing that remotely looks like the M1 seems to be coming.


Intel is years behind Apple, with no real strategy for catching up. Apple most likely already has the M2 ready, and the M3 under heavy development.


The good news is that AMD is working their butt off on this, and seems to be much closer to Apple in terms of Watts/unit of performance. Intel needs to get in gear, now.


Intel was stuck on the same node for 6 years. Alder Lake looks very promising, and the Alchemist GPUs do too. They will have the laptop CPU performance crown in less than 6 months, and power usage will be much better than now. Their 3D stacking strategy is very promising; it will allow for many very interesting SKUs. I wouldn't count them out.


I just bought a new HP laptop with an i3-1125G4.

I just found out that this is a 10nm chip, roughly equivalent to TSMC's 7nm.

Maybe things are looking up for Intel?


M2 was sent for manufacture earlier in the year, if you believe the rumor mill (and to be fair, they were spot on this time around)


Kind of surprising to me they’re not making more effort towards game support - maybe someone can explain what the barriers are towards mac support - is it lack of shared libraries, x64 only, sheer number of compute cores?

When I see the spec sheet and “16x graphics improvement” I go okay what could it handle in terms of game rendering? Is it really only for video production and GPU compute tasks?


I've answered this a few times, so I'll just give a bog-standard answer here.

Apple had really good game support 3 years ago. Mojave was really among the best when it had Valve Proton natively supported, and it was starting to seem like there might be some level of gaming parity across MacOS, Linux and Windows. Apple broke compatibility with 32-bit libraries in Catalina though, which completely nixed a good 95% of the games that "just worked" on Mojave. Adding insult to injury, they burned their bridges with OpenGL and Vulkan very early on, which made it impossible to implement the most important parts of Proton, like DXVK and CL-piping.

So, is it possible to get these games running on MacOS? Maybe. The amount of work that needs to be done behind-the-scenes is monumental though, and it would take one hell of a software solution to make it happen. Apple's only option at this point is HLE of x86, which is... pretty goddamn terrible, even on a brand-spanking new M1 Max. The performance cores just don't have the overhead in clock speed to dynamically recompile a complete modern operating system and game alongside it.


The gaming capability is there, but Apple only officially supports their own Metal on macOS as far as graphics APIs go, meaning the only devs with experience with Metal are the World of Warcraft team at Blizzard and mobile gaming studios. MoltenVK exists to help port Vulkan-based games, but generally it's rough going at the moment. I'm personally hoping that the volume of M1 Macs Apple has been selling will cause devs to support Macs.


As a former Starcraft 2 player I’d say: don’t count on it. It wasn’t even worth it for Blizzard to make SC2 that performant on Vulkan. They had a graphics option for it, but it was buggy compared to the OpenGL option. When a company that size doesn’t want to commit the dev resources, that leaves little hope for the smaller companies.


They never before had good graphics in mainstream products. And there's no official support for any of the industry-standard APIs (to be fair, there is MoltenVK, but not much traction yet). Yes, there is support for Metal in UE4/5 and Unity, but AAA games use custom engines, and the cost/benefit analysis didn't make much sense; maybe now that will change.


Price. You can't win the gaming market with a $2000+ product.


Have you seen GPU prices lately? Desktop 3070-level performance in a portable laptop for $3500 is not that bad of a deal. If they make a Mac Mini around $2500 it would be pretty competitive in the PC space.


The gaming market spends the most of any home electronics segment except ultra-rich mansion hi-fi outfitters. Entry gaming laptops are $1300; entry GPUs are $400 if you can find them.

Apple isn't competing against consoles, but against big-rig gaming PCs, products from Asus and Razer, and companies like Nvidia/AMD in compute.


Does anyone have a handle on how the new M1X is expected to perform on Deep Learning training runs vs a NVIDIA 1080Ti / 2080Ti. I think the 400 Gbps bandwidth and 64 GB unified memory will help - but can anyone extrapolate based on the M1 ?


Nit: the M1 Max has 400GB/s (3.2Tbps) bandwidth.


I would like to upgrade to Apple silicon, but I have no need for a laptop. I hope they put the M1 Max in a Mac Mini soon.


Seems reasonable. They still sell the Intel Mac mini, despite having an M1-powered Mac mini already. The Intel one uses the old "black aluminum means pro" design language of the iMac Pro. Feels like they are keeping it as a placeholder in their lineup and will end up with two Apple silicon-powered Mac minis, one consumer and one pro.


I doubt that we'll see a really powerful Mac Mini anytime soon. Why? Because it would cannibalize the MacBook Pro when combined with an iPad (sidecar).

Most professionals needing pro laptops use the portability to move between studio environments or sets (e.g. home and work). The Mini is still portable enough to be carried in a backpack, and the iPad can do enough on its own to be viable for lighter coffee-shop work.

Not many would do high-end production work outside a studio or set without additional peripherals, meaning that the high-end performance of the new MBP isn't needed for very mobile situations.

A powerful Mini and an iPad would therefore be the much better logical choice vs. a high-end MacBook Pro. Where you need the power, there's most likely a power outlet and room for a Mini.


Compared to a battery-powered laptop you can flip open and start working on from anywhere? Not a chance.


For myself, my iPad Pro actually covers all use cases that I would need a laptop for, so my current Macbook Pro is just in clamshell mode on my desk 100% of the time. That's why I would love to replace it with an M1 Max Mac Mini.


An iPad + keyboard has a battery and can be enough for most tasks you'd do outside an office, set, or studio environment, away from your other pro peripherals. You wouldn't cut movies on the trackpad in a coffee shop if you're used to doing it with a mouse and other tools inside a studio. That's what I mean.


There were rumors that it was supposed to be today. Given that it wasn't, I now expect it to be quite a while before they do. I was really looking forward to it.


Yep, If it didn't launch today it will launch alongside the M2 or Mac Pro update probably around WWDC next year.


Do these new processors mean anything for those of us who need to run x86 VMs or is Apple Silicon still a no-go?


You can always run an emulator such as QEMU if you really need x86 once in a while.

Working with it would be a pain, however; if you absolutely need x86, better get an Intel Mac (possibly used) or a PC.


Will the 64GB RAM max chip be practical for training deep learning models? Any benchmarks vs GTX 3090?


Their comparison charts showed the performance of mobile GPUs, not the desktop ones. So, I wouldn’t call this “practical”. Most likely depends on what kind of models you are building and what software you use and how optimized it is for M1.


It will definitely be handy for fine-tuning large models due to the huge RAM, but for training from scratch a 3090 is certainly better. They are seemingly cooking up a 128-core GPU; if they release that kraken it will beat the 3090 in pretty much everything.


I wish we could compare Intel/AMD on a 5nm process to these chips, to see how much of the speedup is the architecture vs the process node.

Also, all of the benchmarks based on compiling code for the native platform are misleading, as x86 targets often take longer to compile for (as they have more optimization passes implemented).


So within the 57B transistors of the M1 Max you could fit the AMD 5800H (10B) and the RTX 3080 Ti (28B) and have 19B transistors left.

The performance should be top notch, but cooling and power requirements will be quite high.

So a battery life of 21 hours is quite the achievement.

Still, I prefer the open architecture of the PC any day.


I think memory is part of that, whereas it would be excluded for the other chips you mentioned.

But OTOH, 57B transistors for 64GB of memory means there would be less than one transistor per byte of memory - so I'm not sure how this works, but I'm not too knowledgeable in chip design.
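A quick sanity check, assuming the usual one-transistor-one-capacitor DRAM cell (so at least one transistor per bit):

    64\,\text{GB} \times 8\,\tfrac{\text{bits}}{\text{byte}} \approx 5 \times 10^{11}\ \text{bits} \gg 5.7 \times 10^{10}\ \text{transistors}

so the DRAM can't be inside the 57B figure; that count must be the SoC die only, with the memory sitting in separate packages.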


https://en.wikipedia.org/wiki/Transistor_count

It seems to be way beyond any CPU for end users and even some servers like AWS Graviton 2


I think the only company with more transistors on a chip is Cerebras.


I wish we had real open hardware with everything documented. Sadly that is very rare.


Why is no one talking about the base 14" with the 8-core CPU and 14-core GPU? There wasn't a single mention of it in the presentation.

How does the new M1 Pro 8-core compare to the M1 8-core?


I don't think the 'old' M1 13" Pro is long for this world - for £100 more (16GB+1TB spec) you get a much better machine in the 14" model. But independent comparisons will follow.

I'd love to see a Pro/Max Mac Mini, but that's not likely to happen.


The M1 has 4 small cores and 4 big ones, while this Pro is 2/6. I didn't really see whether they claimed any difference in the performance of the cores themselves.


> 140W USB-C Power Adapter

Wait, huh? My current 16" Intel Core i9's charger is only 95 watts. Does this mean all my existing USB-C power infrastructure won't work?


My guess is it will work like using a 65W adapter on a Mac that prefers 95W. It will charge if you're not doing much but it will drain the battery if you're going full tilt.


Just like USB-C cables can differ in capacity I’m finding a need to scrutinize my wall chargers more now. A bunch of white boxes but some can keep my MacBook Pro charged even when on all day calls and some can’t. With PD the size isn’t a great indicator anymore either.


Maybe it's a typo and they mean the MagSafe 3 connector? I thought the USB-C standard was limited to 100 watts.


They recently introduced a new standard (USB PD 3.1) that allows up to 240W.
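If I have the spec right, the Extended Power Range part of it adds 28V/36V/48V fixed rails at up to 5A, which is where these numbers come from:

    28\,\text{V} \times 5\,\text{A} = 140\,\text{W} \qquad 48\,\text{V} \times 5\,\text{A} = 240\,\text{W}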


They may have a USB-C -> MagSafe 3 cable. It lets you just replace the cable when it inevitably breaks instead of the whole brick.


Instead of having to replace the MagSafe power brick for 85€ you can now just replace the cable for 55€.

However, in my personal experience, I've never had the cable fail, but I've had 2 MagSafe power supplies fail (they started getting very hot while charging and at some point stopped working altogether).


That's exactly what it is.


Dell USB-C power adapters have exceeded 100 watts in the past on e.g. the XPS 17


No, it would just be slower to charge.


It will work, but it will charge slower and probably won't charge at all when you're maxing out the SoC.


I think the 140W power adapter is there to support fast charging; they mentioned charging to 50% capacity in 30 minutes. I'd imagine the sustained power draw should be much less than 140W.
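Rough math, assuming the 16-inch keeps a roughly 100 Wh battery (my assumption, not something from the keynote):

    0.5 \times 100\,\text{Wh} \div 0.5\,\text{h} = 100\,\text{W}

of average power into the pack, so once you add conversion losses and whatever the machine draws while charging, a 140W brick is about the minimum that makes the 50%-in-30-minutes claim work.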


Yeah, I don't know the peak power draw when everything's at 100% load, but I think (also considering the efficiency of the M1) 140W would keep it charging even under the highest load.


It likely charges slower, though there is a minimum required power - you cannot charge it with a phone charger.


Can these drive more than one external monitor?


Indeed. The M1 Max can drive three 6K monitors and a 4K TV all at once. Why? Professional film editors and color graders. You can be working on the three 6K monitors and then output your render to the 4K TV simultaneously.


2x6K on the Pro, 3x6K + 1x4K on the Max. The 4K seems to be because that's the limit of the HDMI port.

No mention of MST though.


I’m assuming that running multiple 5-6k monitors will still require multiple cables/ports though. One thunderbolt port per monitor.

I’m still waiting for the day we can hook 2 5k monitors + peripherals up to a thunderbolt dock and then hook that up to a MacBook using a single cable.


MST is not supported and won't be supported, that's been Apple policy for some time now. Only Thunderbolt tunneling is supported for multiple screens on one output (which logically provides separate DP outputs, no MST)


The Pro Display XDR was the only external monitor mentioned, and was shown connected a few times. Screenshots and further details here: https://forums.macrumors.com/threads/pro-display-xdr-owners-...

Also specific statements:

- The M1 Pro SoC supports two Pro Display XDRs

- The M1 Max SoC was highlighted (Connectivity: Display Support -> 33:50):

    - Supports three Pro Display XDRs and a 4K television simultaneously

    - "75 Million pixels of real estate."

    - Highlighted still having free ports with this setup.


It was shown driving 4 external monitors


The real question is can you plug in a display and not get kernel panics.


I see myself in this one and I don’t like it.


Yes, the live-stream showed a MacBook Pro connected to many monitors


up to 4


Yes. Up to three on Max.


yay more-nitors!


Yes. The Pro and Max can drive 2 XDR displays.


> XDR

What does that mean?


It's their marketing term for their HDR tech. The XDR displays are their flagship media productivity monitors. The new screens in the 14/16 inch MacBooks have "XDR" tech as well.


The 6K pro monitor Apple sells.


Just so I don't misunderstand, does that mean I need XDR or will it work with any monitor? I was very surprised to see that the original M1 only supported one external monitor so just want to confirm before I press buy.


No, it will work with any monitor. The XDR is just a very demanding monitor, so if it can run multiple of those it will have no problem with random 1080p or 4K monitors.


It will of course work with other monitors, but there aren't that many high res monitors out there. The Apple display is the only 6k monitor I know of. There are a few 5k monitors and lots of 4k monitors.

There's one 8k monitor from Dell, but I don't think it's supported by macOS yet.


It’s interesting that the M1 Max is similar in GPU performance to RTX 3080. A sub $1000 Mac Mini would end up being the best gaming PC you could buy, at less than half the price of an equivalent windows machine.


Similar to an RTX 3080 Mobile, which is IIRC equivalent to somewhere around a RTX 3060 Desktop.


Even that is doubtful until we see 3rd party benchmarks.


The M1 Max starts at $2700 for the 16" 10-core CPU, 24-core GPU.

The comparison to higher end graphics uses the M1 Max 32-core GPU which starts at $3500.

I'm not seeing a way for a Mac Mini to have the M1 Max, and still be priced below $1000.


While not that big of difference, the 14" is a little cheaper:

* The 14" MBP with M1 Max 10-core CPU, 24-core GPU, 32gb memory is $2,900

* The 14" MBP with M1 Max 10-core CPU, 32-core GPU, 32gb memory is $3,100.

The Mac mini with the Max variant chip will certainly be more than $1,000. But I expect it will be more reasonable than the MBPs, maybe $2,100 for the 32-core GPU version with 32gb of memory. That's how much the currently available Intel 6-core i7 Mac mini with 32gb of memory and 1TB of storage costs.


The point being that the cheapest 32-core GPU is $3,100, for competing with an RTX 3080 mobile that's constrained to about 105W (before it starts to pull ahead of the 32-core GPU in the M1 Max).

Overall it's just a silly premise that a sub-$1000 Mac Mini "would end up being the best gaming PC you could buy." That comment speaks to either not knowing the pricing structure here, or misunderstanding the performance comparisons.

A mid-to-low end desktop GPU pulls closer to 150-200W, and is not part of the comparisons here. And as Apple increases the performance of their chips, they also increase the price. So unless they start having 3 chips with the cheapest one being less than $1000 and massively ahead of desktop GPUs while pulling less than 50W, it's not going to happen. I don't see it happening in the next 5 years, and meanwhile Nvidia and AMD will continue their roadmap of releasing more powerful GPUs.


But without games to play on it. Instead, you could just get an Xbox Series X for half that price.


PS5! No true Apple owner buys a Microsoft product!


Based on aesthetics and hardware design, the Series X is way more in line with Apple compared to the PS5.


If only the Software compatibility was there. I'd love to be able to skip buying expensive desktop machines exclusively for gaming.


400 GB/s is insane memory bandwidth. I think a m5.24xlarge for instance has something around 250 GB/s (hard to find exact number). Curious if anyone knows more details about how this compares.


It's still a bit unclear how much of the bandwidth the CPUs can use (as opposed to the GPUs).


I think Anandtech showed an M1 P-core could max out 50+ GB/s on its own, so 8 P-cores alone could likely use 400GB/s. With both the CPU and GPU active, they'd be sharing the BW. A 3060/6700 XT has ~360-384 GB/s, so game benchmarks at high res should be interesting to see.

Additionally, do the 24-core GPU and 32/64 GB RAM variants of the M1 Max all have 4 128-bit memory controllers enabled? The slide seems to just say 400GB/s (Not "up to"), so probably all Max variants will have this BW available.

The value is in the power efficiency here (N5P?+), and if you can afford a $3k+ laptop.
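If anyone wants to sanity-check the per-core number once these arrive, here's a rough single-core copy-bandwidth sketch in plain C (compile with something like "cc -O2 bw.c"; this is nowhere near a rigorous STREAM run, and the buffer size and pass count are arbitrary choices of mine, so treat the output as a ballpark only):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void) {
        size_t n = (size_t)1 << 30;              /* 1 GiB per buffer, far bigger than any cache */
        char *src = malloc(n), *dst = malloc(n);
        if (!src || !dst) return 1;
        memset(src, 1, n);                       /* touch the pages so they're actually mapped */
        memset(dst, 0, n);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 8; i++)              /* a few passes to smooth out noise */
            memcpy(dst, src, n);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs   = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double gbytes = 8.0 * 2.0 * n / 1e9;     /* each pass reads n and writes n bytes */
        printf("~%.1f GB/s single-core copy bandwidth (check: %d)\n", gbytes / secs, dst[n - 1]);

        free(src);
        free(dst);
        return 0;
    }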


Can the M1 MacBooks be used with Linux?


It's a work in progress. See https://asahilinux.org/blog/ for the latest updates.


This is interesting:

> However, Apple is unique in putting emphasis in keeping hardware interfaces compatible across SoC generations – the UART hardware in the M1 dates back to the original iPhone! This means we are in a unique position to be able to try writing drivers that will not only work for the M1, but may work –unchanged– on future chips as well. This is a very exciting opportunity in the ARM64 world. We won’t know until Apple releases the M1X/M2, but if we succeed in making enough drivers forwards-compatible to boot Linux on newer chips, that will make things like booting older distro installers possible on newer hardware. That is something people take for granted on x86, but it’s usually impossible in the embedded world – and we hope we can change that on these machines.


And marcan42 has specifically promised to start bringup work on the M1 Pro next week:

https://twitter.com/marcan42/status/1450163929993269249


Server chips: if you removed the GPU and added ECC, these would be dang nice server chips.


There is technically no reason Apple could not introduce a cloud computing service based on their silicon at some point. But would it generate the kind of profit margins they need? An interesting space to watch.


More important than profit margins are the return on capital vs. the alternative uses for that capital.


2 years when their contracts expire


I was having a vision last night of a server rack filled with iPad Pros /w Ethernet through the USB-C. I still wonder what the performance per mm^3 would be in comparison to a traditional server.


AFAIK LPDDR5 already uses ECC internally so you are halfway there.


Lots of junk comments, but I guess that happens with Apple announcements. The laptops seem impressive to me; I want to see real-world usage metrics. They're pushing hard on the performance-per-watt metric, and no doubt these have a lot of power while using less of it. Seems like they listened to the outcry over the Touch Bar and the lack of ports. Seems like this should sell well.


Seems they may have finally noticed the hit from a decent number of the pros using their products migrating to different platforms, and realized they needed to take a few steps back from the more radical innovations to put out a solid working machine. Hell, I haven't wanted an Apple machine since the early days of the unibody, when other manufacturers started releasing the same form factor. This has me considering one for my next development machine, depending on the price premium over the competition.


Yup, waiting for the performance review embargo to lift.


Does anyone want to guess at their packaging architecture? Is it CoWoS? How are they connecting memory with 200 GB/s of bandwidth? An interposer?


I really wonder if the CPU cores are able to access memory at the specified high bandwidth or if it's just for the GPU cores.


I think the M1 Max has more transistors than any commercially available CPU or GPU? In a $3499 laptop!


57 billion transistors? A 5950X has under 20 billion; a 3060 Ti has under 20 billion.

Are they counting the RAM?


Does either have a 10 core CPU or large cache memory?


Yes, the 5950X has 16 big CPU cores and 64 MB of L3 cache.


My mistake. But it doesn't have a 32 core GPU!

Agreed these counts look high - but the fact that they can slot this into a $3,499 laptop is remarkable and must say something about the cost effectiveness of TSMC 5nm process.


The CPU may not have a 32 core GPU inside it but that doesn't stop you from adding the 2 numbers together and seeing it's still significantly less than 57 billion.

Would be very curious to see what took all of the die space. Neural engine? Those additional video encoding engines? I doubt we'll get to know unfortunately.



Nope, RAM is off die but on package.


I wish that these systems somehow could use/access the CUDA and DLSS pipelines from NVidia.


Are these M1 Pro/Max based on the A14 or A15?

Does “M1” == “A14” or does it mean “M1” == “5nm TSMC node”?


They share the core design (Icestorm for the efficiency cores, Firestorm for the performance cores), but these are recombined into entirely different systems-on-chip. To say the M1 Max is the same as the A14 is like saying a Xeon is the same as an i3 because they both have Skylake-derived cores.


The differences between the A14 and A15 are so small it doesn't matter. I suspect the CPU/GPU cores come from the A14 but the ProRes accelerator comes from the A15.


>The differences between the A14 and A15 are so small it doesn't matter.

The testing shows increases in performance and power efficiency.

>Apple A15 performance cores are extremely impressive here – usually increases in performance always come with some sort of deficit in efficiency, or at least flat efficiency. Apple here instead has managed to reduce power whilst increasing performance, meaning energy efficiency is improved by 17% on the peak performance states versus the A14. If we had been able to measure both SoCs at the same performance level, this efficiency advantage of the A15 would grow even larger. In our initial coverage of Apple’s announcement, we theorised that the company might possibly invested into energy efficiency rather than performance increases this year, and I’m glad to see that seemingly this is exactly what has happened, explaining some of the more conservative (at least for Apple) performance improvements.

On an adjacent note, with a score of 7.28 in the integer suite, Apple’s A15 P-core is on equal footing with AMD’s Zen3-based Ryzen 5950X with a score of 7.29, and ahead of M1 with a score of 6.66.

https://www.anandtech.com/show/16983/the-apple-a15-soc-perfo...

Having your phone chip be on par with single core Zen 3 performance is pretty impressive.


The M1 is already very different from the A14/A15. Why do you think they are "based" on the mobile SoCs?


Because CPU teardowns show that “Apple's M1 system-on-chip is an evolution of the A14 Bionic” [0]

[0] https://www.tomshardware.com/amp/news/apple-m1-vs-apple-m14-...


So if I want a new Macbook purely for software development and building mobile apps, what should I pick between the $2499 14" and the $3499 16"? Doesn't look like there's any difference in Xcode build times from their website


14" + M1 Max (24 GPU cores) with 32 Gb Ram is the sweet spot imho. It costs a bit more but you get twice the memory bandwidth and double the ram, which will always prove handy.

I develop iOS apps and I think this is the sweet spot. I am not sure what impact the extra bandwidth of the M1 Max will have though. We will have to wait to see. For video editing is clear. For Xcode not so sure.

14 or 16 inches is up to personal preference. I just value more the smaller package and the reduced weight. Performance is about the same.


I’d get the 16”, 14” is pretty small for something you’d be using every day


It’s only a matter of your preference, whether you like more real estate on the screen or better portability.


Probably depends on your eyesight. I like small laptops with very high resolution, but I have good eyes.


Is it disclosed anywhere what bandwidth the CPU complex has to memory? There's the overall bandwidth to memory, which was probably made so high to feed the GPU, but can the CPUs together actually drive 200 or 400 GB/s to memory?

If they can, that's an absolutely insane amount of bandwidth. You can only get ~200 GB/s on an AMD EPYC or Threadripper Pro CPU with 8 channels of DDR4-3200, so here we have a freakin' LAPTOP with as much or even double the bandwidth of the fastest and hungriest workstation CPUs on the market.

Excited to see what a future Apple Silicon Mac Pro looks like and makes me quite envious as someone who is stuck in the x86 world.


I’m waiting on Apple’s final decision on the CSAM scanner before I buy any more hardware from them. These processors look cool, but I don’t think they’re worth the Apple premium if they’re also spying for law enforcement.


valid point


...and the Apple store is down lol :-)


What is it about the M1 architecture that makes it so speedy compared to x86 chips? Is it the RISC instruction set? The newer process node? Something else?


RISC is a big part of it, and enabled the biggest part, which is Apple got to design their own SoC. And they have their own OS, so they can cause a complete paradigm shift without having to wait for the OS company/collective to come around.


RISC means nothing except "looks like MIPS" and has little impact on performance. ARMv8 doesn't look that much like MIPS anyway.


Oh, them's fighting words =) I wrote ARM assembly before I wrote MIPS, so for me, MIPS looks like ARM. ARM was influenced by Berkeley RISC. MIPS came out of Stanford. ARM and MIPS CPUs were both introduced in 1985. And MIPS is dead, whereas ARM is doing slightly better, so, statistically, if you show a RISC assembly programmer MIPS code, they'll probably say "that looks like ARM".

Now there are two approaches to the performance argument.

First, I will argue that RISC processors provide better performance than CISC processors.

Second, the counterargument to that is that, actually, no, modern RISC processors are just as complex as CISC processors, and the M1 is faster simply because Apple. My second argument is that Apple chose ARM because of RISC. So even if it were true, now, that one could build a CISC that is just as performant as a RISC, the fact that right now the most performant chip is a RISC is ultimately because it is a RISC.

Do RISC processors provide better performance than CISC? The #1 supercomputer in the world uses ARM. AWS's Graviton offers better price and performance than AWS Intel. M1 is faster power/performance than any x86. ARM holds all the records. But it's just a coincidence?

I think PP's position is that CISC or RISC doesn't matter. One argument I've heard is that its the 5nm production node that matters, and that CISC or RISC, it's all the same nowadays.

So how is Apple on 5nm? Why are the CISC manufacturers stuck on 7nm (or failing to get even there)? When the Acorn Computer team were looking for a new processor, they were inspired by UC Berkeley and their RISC architecture. In particular, they were inspired by the fact that students were designing processors. They decided that they, too, could build a new CPU if they used RISC, and that was the Acorn RISC Machine. The ARM. I do not believe that RISC and CISC are "basically the same" when it comes to processor design. The fact that Intel is still stuck on 10nm(?) must in part be due to the thing being CISC. One might argue that it's because they made a particularly complicated CISC, but that would only make my point for me. I don't think that "only the instruction decoder is simpler so it doesn't make much difference" holds any water. I would love to hear from actual CPU designers who have worked on M1 or Graviton, but if they said "RISC is easier" then they would be dismissed as biased.

But let's suppose that no, actually, the geniuses at Apple could equally create a CISC CPU that would be just as performant, I'd still argue that the success is because of RISC. M1 is the most recent of a long line of ARM CPUs. They are ARM CPUs because the first in that long line needed battery life - the Newton. M1 is ARM because ARM was RISC when RISC mattered. You may argue that RISC doesn't provide a benefit over CISC now, but it certainly did then.

And how does Apple have such geniuses? Again, this is largely because the iPhone was perhaps the largest technological step-change in history. They have the market cap they do because of iPhone, and they have iPhone because RISC. So even if the argument is "Well, the M1 is fast because Apple has lots of money" well that is because of RISC.

But I still think that its easier to build a faster CPU with RISC, and I expect the first RISC Mac Pro will prove me right. At which point, RISC will own performance/watt, performance/price, #1 supercomputer, and, at last, fastest desktop.


The M1 Max is drool worthy but Mac gaming still sucks. Can’t really justify it given that I don’t do video editing or machine learning work.


The M1 Max is only $200 more. I'm tempted, but do we think it will be more power-hungry than the Pro under the same workload?


The full-blown Max they talked about is an $800 upgrade. https://www.apple.com/shop/buy-mac/macbook-pro/16-inch It comes combined with double the RAM (32GB) and double the GPU (32 cores).

The $200 upgrade is called 'Max', but it still has 16GB RAM and 'only' a 24-core GPU.


Nah, it's $200-400 more, and memory size is an independent option.


On Apple's website, see the notes below the battery consumption claims.

https://www.apple.com/macbook-pro-14-and-16/#footnote-23

They are using the M1 Pro to get their battery claim numbers.

I ordered an M1 Pro based on the slightly lower price and my assumption that it will be less power hungry. If it is only 200 dollars cheaper, why else would they even offer the M1 Pro? The extra performance of the Max seems like overkill for my needs, so if it has worse power consumption I don't want it. I could probably get away with an M1, but I need the 16" screen.

We will find out in a few weeks when there are benchmarks posted by 3rd party reviewers, but by that time who knows how long it will take to order one.


> This also means that every chip Apple creates, from design to manufacturing, will be 100 percent carbon neutral.

How is that even possible?


Likely they're not, but it's possible to say that because they're buying carbon offsets or similar products to make it 100% neutral: https://en.wikipedia.org/wiki/Carbon_offset

(Obviously Apple can afford that.)


I guess it must be net, right? So maybe carbon offsets, or providing more green energy to the grid than they consume?


Collecting juicy tax credits for installing solar power. Carbon credits.


With offsets.


There are two kinds of offsets: 1. existing offsets where you buy them but there is no net offset creation, 2. newly created offsets where your purchase makes a net difference. An example of (1) could be buying an existing forest that wasn’t going to be felled. An example of (2) could be converting a coal electricity plant to capture CO2 instead of shutting the plant down.

A quick skim of the Apple marketing blurb at least implies they are trying to create new offsets e.g. “Over 80 percent of the renewable energy that Apple sources comes from projects that Apple created”, and “Apple is supporting the development of the first-ever direct carbon-free aluminium smelting process through investments and collaboration with two of its aluminium suppliers.” — https://www.apple.com/nz/newsroom/2020/07/apple-commits-to-b...


Has anyone tried extremely graphically intense gaming on these yet? I actually would love to consolidate all of my computer usage onto a single machine, but it would need to handle everything I need it to do. $2000 for a laptop that can replace my desktop is not a bad deal. That said, I'm in no rush here.


Gaming has been a non-starter on macOS since 2018. You can get certain, specific titles working if you pour your heart and soul into it, but it's nothing like the current experience on Linux/Windows, unfortunately.


Darn.

Hard pass then. I already have a pretty fast Windows laptop. The only real issue is that it sounds like a helicopter under load. (It also thermal throttles hard.)


Depends on the games you play. Some publishers are good about this, some aren't. Worth checking.


I'm actually using Xbox Game Pass for a lot of stuff.

I doubt this will work well, unless M1s can emulate Windows at native speed.


I'm switching now, after waiting for the 16" M1 for more than a year.

However, my current laptop is a 2015 MacBook, and I've never had any issues with it when it comes to coding. If anyone here is switching and you don't do anything like 3D/video editing, I'm curious: what's your reason?


To finally use my laptop as it's intended: without being tethered to an outlet. The battery life on these is phenomenal.


Hardware people: We worked our asses off for this breakthrough in performance.

Software people: Thanks buddy, so we can move everything to Electron now?

On a serious note, it does sadden me how a portion of hardware advancement is lost to inevitably sloppier software in the name of iteration speed.


Prediction for Mac Pros and iMac Pros: several SoCs on the mainboard, interconnected with a new bus, 16 CPU cores per SoC, 4 SoCs max. The on-SoC RAM will act as an L4 cache, and they will share normal, user-replaceable DDR5 RAM for "unified" access.


Looks like Naples. NUMA seems hard to handle well, especially on a personal computer. So I wondered whether Apple would use a chiplet approach for the M1X, but apparently not.


It's a bit concerning that the new chips have special-purpose video codec hardware. I hope this trend doesn't continue, with laptops from different manufacturers ending up able to play only certain video formats, or only playing some of them with degraded quality.


Video encode and decode have been GPU and integrated GFX features for quite a long time now.


Well, they've been features of special ASICs that just happen to be on the GPU. Video decoding is not really suitable for CPUs (especially since H.264 or so) but it's even less suitable for GPGPU.


Naive question - is this chip purely home-made at Apple, or does it use Arm-licensed IP?


Apple has an architecture license from ARM, so they're allowed to create their own ARM-compatible cores. They do not license any core designs from ARM (like the Cortex A57), they design those in house instead.


Apple uses the Arm AArch64 instruction set in Apple silicon, which is Arm IP. I don't believe that any Apple silicon uses any other Arm IP, but who really knows, since Apple would likely have written its contracts to prevent disclosure.


Apple had a choice: many CPU cores or a bigger GPU. They went with a much bigger GPU. Makes sense: most software is not designed to run on a large number of CPU cores, but GPU software is massively parallel by design.


Does the M1 Pro/Max support running/building x86 Docker images using x86 hardware emulation?

As as developer, this is the feature I've missed the most after having used my M1 MacBook Air for about a year.


docker buildx should be able to handle it, no?


What makes you think that?


Because it's used to build multi-architecture Docker images.


Ah, sorry. I didn't mean building, just running with x86 hardware emulation.
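
(For what it's worth, my understanding is that Docker Desktop on Apple Silicon can already run linux/amd64 images under QEMU emulation via the --platform flag, just slowly, and buildx covers the build side. A rough sketch, with the image tag "example/myapp" purely illustrative:

    # run an x86-64 image under emulation on the arm64 host (works, but slowly)
    docker run --rm --platform linux/amd64 ubuntu uname -m    # prints x86_64

    # build for both architectures; --push is needed because a multi-arch
    # manifest can't be loaded into the local image store
    docker buildx build --platform linux/amd64,linux/arm64 -t example/myapp:latest --push .

So the question for the new chips is really just how fast that emulation path is.)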


Doing stuff in two different windows just became a bit clumsier every time, e.g. for code reviews. I can imagine manually resizing windows when in a tiling mode.


Does this leap sound big enough to eat into the traditional Windows pro laptop market?

It's going to have a tough time justifying purchases.


Doubtful, because those purchases are often driven by software reasons, and there Apple loses heavily (whether it's corporate manageability of the laptops, or access to special software which rarely has Mac versions)


I wonder whether we will see an Apple server one day, and whether the 27-inch will go to a separate box (a Mac mini-like desktop, similar to Microsoft's).


I wonder if 64GB in unified memory will jumpstart ML developments for Macs. I cannot wait to run a Mac Mini farm.


Any guesses how long it will take for Apple to update the pre-existing M1 Macs? (Price drop, performance boost)


maybe around next fall


My dream setup would be dual 27" 5K screens and M1 Max 16" with 64GB. I guess it's like 8k EUR.


These chips are impressive, but TBH I have always been annoyed by these vague cartoon-line graphs. Like, is this measured data? No? Just some marketing doodle? Please don't make graphs meaningless marketing gags. I mean, please don't make graphs even more meaningless marketing gags than they already are.


Is there any VR for the Mac? Seems like the machine is more-than-ready for VR!


Apple had SteamVR support for a while, and even Valve Proton support for a while (meaning that Windows-only games were quite playable on Mac hardware). Unfortunately, Apple pulled 32-bit library support without offering a suitable alternative, so Valve was forced to scrap their plans for Mac gaming entirely.


I maintain that this was to avoid the lack of support for 32-bit games being blamed on Apple Silicon.


There was only a 15-year transition period away from a seriously technically inferior architecture (for both performance and security.)


Works on my machine.


Does anyone know how the single-core performance compares to the M1 chip?


Isn't almost every new Apple chip the most powerful chip Apple has ever built?


They could have been optimizing for lower power consumption rather than more compute power. For example, the next iPhone chip will likely not be the most powerful when it comes to compute, even if it beats the other iPhone chips.


And that's exactly why I put the word "almost" in the sentence!


Maybe currently, but they are only on their second generation of laptop chips.

I guess going forward the current A-series chip will be lower power/performance than any reasonably recent M-series chip (given the power envelope difference).


I suppose the future of personal computing may be ARM then? For now, at least.


If Nvidia buys ARM, they'll flee like rats from a sinking ship. I'd bet on RISC-V.


Or we'll see Windows laptops with ARM CPUs and an RTX 4090.


Probably. It will be an expensive niche product though. Will probably sell about as well as the shield.


Ah yes, the naming. Instead of M2 we got M1 Pro & M1 Max. I'm waiting for M1 Ultra+ Turbo 5G Mimetic-Resolution-Cartridge-View-Motherboard-Easy-To-Install-Upgrade for Infernatron/InterLace TP Systems for Home, Office or Mobile [sic]


It's a scaled-up M1. No new cores, no new modules. Just more of everything. I would've been disappointed if it was called M2.


I am not interested in Apple's ecosystem. While I stay with x86, I wonder if and when AMD and Intel will catch up, or if another ARM chip maker will release a chip this good without tying it to a proprietary system.


This is a chicken and egg problem.

No one will bother to make an ARM chip for "PCs" if there is no OS to run on it. MS won't fully port Windows to ARM with a Rosetta-like layer unless there is an ARM computer to run it.

Yes, Linux can run on anything, but it won't sell enough chips to make creating a whole new ARM computer line profitable.


MS has a full Windows for arm64 with a Rosetta-like layer already. It's been available for at least two years, although it only supported x86 at first (presumably because of the patents on x86-64). x86-64 was added recently (https://blogs.windows.com/windows-insider/2020/12/10/introdu...)


Is it possible to dual-boot Windows with the new M1 chips?


You can most definitely use the latest Parallels, IIRC. Why dual boot?


Mostly for gaming. I use Boot Camp now to play MTG Arena and StarCraft 2 on Windows, which seems to have much better perf.

I imagine gaming doesn't work well in Parallels?


I don't think I'd make that assumption. https://www.applegamingwiki.com/wiki/M1_compatible_games_mas... , for instance, has a number of games that do pretty well in Parallels.


You can play SC2 via Rosetta. Unfortunately it's not optimised for M1.


No.


How is the memory bandwidth so much better than Intel's?


Seems to be a much wider memory bus combined with a new generation of RAM, LPDDR5.
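
Rough napkin math, assuming the widely reported bus widths are right (peak bandwidth ≈ bus width in bytes × transfer rate):

    M1 Max:  512-bit LPDDR5-6400 bus  ->  64 B x 6400 MT/s ≈ 409.6 GB/s  (Apple quotes 400 GB/s)
    M1 Pro:  256-bit LPDDR5-6400 bus  ->  32 B x 6400 MT/s ≈ 204.8 GB/s  (Apple quotes 200 GB/s)
    Typical Intel laptop: 128-bit dual-channel DDR4-3200  ->  16 B x 3200 MT/s ≈ 51.2 GB/s

So it's mostly a much wider bus plus faster DRAM, with the memory packaged right next to the SoC.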


I'm not super familiar with how hardware works, so this might be a stupid question, but how different are the tiers of processors for each upgrade, and what's a reasonable use case for choosing each of them?


Can somebody check up on intel? Are they okay?


I wonder if they are any good for machine learning (large scale)?


Wow! Amazing CPUs. Any word on pricing yet? I'd love to build my new PC with these.

/s


Great design, but absolute nope: human rights are more important.

Don't feed the monsters


I'll wait for the Linus benchmarks/comparisons.


That's funny, I stopped watching LTT videos immediately after his awful "dumpster fire" M1 first impressions video last year.


You mean Anandtech


I know Apple has a translating feature called Rosetta. But what about virtual machines? Is it possible to run Windows 10 (not the ARM edition but the regular, full Windows 10) as a virtual machine on top of an Apple M1 chip? It looks like UTM (https://mac.getutm.app/) enables this, although at a performance hit, but I don’t know how well it works in practice. What about Parallels - their website suggests you can run Windows 10 Arm edition but doesn’t make it clear whether you can run x86 versions of operating systems on top of an M1 Mac (see old blog post at https://www.parallels.com/blogs/parallels-desktop-m1/). I would expect that they can run any architecture on top of an ARM processor but with some performance penalty.

I’m trying to figure out if these new MacBook Pros would be an appropriate gift for a CS student entering the workforce. I am worried that common developer tools might not work well or that differences in processors relative to other coworkers may cause issues.


Neither Apple, Microsoft, nor Parallels is planning to support x86-64 Windows on Apple silicon. You can run emulation software like QEMU and it works but it is very slow. UTM uses QEMU.
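
If you want to see what "very slow" means in practice, the bare QEMU route looks roughly like this (QEMU installed via Homebrew; file names and sizes are just placeholders):

    # create a virtual disk, then boot the x86-64 Windows installer under pure
    # software emulation (TCG) -- expect it to be far slower than native
    qemu-img create -f qcow2 win10.qcow2 64G
    qemu-system-x86_64 -accel tcg -m 8192 -smp 4 \
        -drive file=win10.qcow2,format=qcow2 \
        -cdrom Win10_x64.iso

UTM is essentially a friendlier front end over this kind of configuration. For a student, I'd expect Windows-on-ARM under Parallels (which does its own x86-64 app translation) to be the more practical option than emulating a whole x86-64 OS.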


If they're entering the workforce, gift it to them for their personal use. The office should provide them with a work laptop rather than relying on a personal one.


How do this new GPU and "neural engine" perform compared to Nvidia GPUs, and do they support TensorFlow or something similar to the CUDA SDK?


The M1 GPU was reportedly comparable to a 1080. https://blog.tensorflow.org/2020/11/accelerating-tensorflow-... I believe they are working on PyTorch support.
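
There's no CUDA, but with Apple's Metal plugin for TensorFlow (pip install tensorflow-macos tensorflow-metal, at least as the packages are currently named), ordinary TF code should see the GPU as a regular device. A minimal sanity check, as a sketch:

    import tensorflow as tf

    # the Metal PluggableDevice plugin registers the Apple GPU as a normal TF device
    print(tf.config.list_physical_devices("GPU"))

    # standard Keras code then runs on the GPU without changes
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")

The Neural Engine itself is only exposed through Core ML, not through TensorFlow.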


Bad day for Intel®


I wonder what the benchmark results would be.


Apple headline: New chip is faster than old chip.

HN: 54 points

Slow news day.


MacBook announcements are the opposite of slow news days. Consumer tech outlets are literally scrambling to cover every aspect of these things because there is so much interest.


Not suggesting this is what's happening, but you can pay for this kind of attention.

On HN you probably don't have to, though. Lots of fans of Apple things here.


As is typical for Apple, the phrasing is somewhat intentionally misleading (like my favorite Apple announcement - "introducing the best iPhone yet" - as if other companies are going backwards?). The wording is of course carefully chosen to be technically true, but to the average consumer, this might imply that these are more powerful than any CPU Apple has ever offered (which of course is not true).


>this might imply that these are more powerful than any CPU apple has ever offered (which of course is not true).

Excuse my ignorance, what is?


This time, they showed which laptop was used for the performance comparison in the bottom-left corner during the presentation.


Yeah, that is good, of course.


Which are more powerful?


Napkin math based on Ethereum mining, which on the original 8-GPU-core M1 was about 2 MH/s, puts M1 Max GPU performance (8 MH/s with 32 cores, assuming linear scaling) at only about 1/4 of a mobile 3060, which does over 34 MH/s.

So I am extremely skeptical about Apple's claims of "comparable" GPU performance to the RTX 30xx mobile series. And again, that RTX is still on Samsung's 8nm node, not a leading-edge process.


Great update, I think Apple did the right thing by ignoring developers this time. 70% of their customers are either creatives who rely on proprietary apps, or people who just want a bigger iPhone. Those people will be really happy with this upgrade, but I have to wonder what the other 30% is thinking. It'll be interesting to see how Apple continues to slowly shut out portions of their prosumer market in the interest of making A Better Laptop.


I agree this is very targeted towards the creative market (e.g. the SD card slot), but I'm curious as a developer what you would have liked to see included that isn't in this release.

I guess personally having better ML training support would be nice, since I suspect these M1 Max chips could be absolute monsters for some model training/fine-tuning workloads. But I can't think of anything design-wise really.


The big ticket items: Hardware-accelerated VSCode cursor animation. Dedicated button for exiting vim. Additional twelve function keys, bringing the total to 24; further, vectorised function key operations, so you can execute up to 12 "nmap <f22> :set background=dark" statements in parallel. Dedicated encode and decode engines that support YAML, TOML and JWT <-> JSON. A neural engine that can approximate polynomial time register allocation almost instantly. The CPU recognises when you are running Autoconf `./configure` checks and fast-forwards to the end.

I would also like a decent LSP implementation for Siri, but the question was about hardware.


You could just say "I want Linux" and get the same point across.


Linux has offered the Virtual Function Key System since at least 1998, but there isn't a driver that uses the native 8-lane AVF instruction set on my 230% mechanical keyboard yet.


Solid gold


Nope, just Silver and Space Grey.


As a developer and a power user, I'd love for them to stop iOS'ifying Mac OS.

There's such a huge disconnect between what they do with hardware and what they do to MacOS.


Honestly a MacOS "Pro Mode" would be great. Let me run the apps I want, not use my mouse, and be able to break things, but keep the non-power user experience smooth and easy.


Seconding this. If both iOS and MacOS had a "Pro Mode" that didn't treat me like a toddler, I'd be jumping into the Apple ecosystem head-first.


Thirded.


I certainly don't feel ignored as a developer by this update. The memory capacity and speed bump, and the brighter, higher-resolution screen are very significant for me. My M1 Air is already a killer portable development machine because of how cool and quietly (silently) it runs. The 14" looks like a perfect upgrade for me.

What is missing for you as a developer?


They still have not solved the tiny inverted-T arrow key arrangement on the keyboard. They need to improve on IBM's former 6-key cluster below the right Shift key, or arrange full-sized arrow keys in a cross/plus pattern breaking out of the rectangle at the lower-right corner.


I am in the market for a new laptop and a bit skeptical of the M1 chips. Could anyone please tell me how this is not a "premium high-performance Chromebook"?

Why should I buy this and not a Dell XPS machine if I will be using it for web development/Android development/C#/DevOps? I might soon mess with machine learning too.


Better battery life while performing faster than any Intel/AMD chip at equivalent power usage. Portability is the reason for the existence of laptops.


For web dev it works very well; Android development, no clue; C# is primarily a Windows-oriented development environment, so probably not so great. For DevOps, well... macOS is a Linux-esque environment, so it'll be fine?

I have the m1 air and the big advantages are the absurd battery life and the extremely consistent very high performance.

It's always cold, it's always charged, it's always fast.

I believe you can get both faster CPU and GPU performance on a laptop, but it costs a lot in battery life and heat, which has a bigger impact on usability than I believed before getting this one.

Might want to add, this is my first ever apple laptop. I've always used Lenovo laptops, ran mostly windows/linux dual boots on my laptops and desktops over the years.


> c# is primarily a windows-oriented development environment so probably not so great.

I'm not confident that is still true today. .NET Core is multi-platform all the way, and it is the future.


That's a debate between OSX and the alternatives; the M1 has little to do with that. Except maybe Rosetta for unsupported x86 software, but I doubt that'll cause you any issues.

Edit: C# support is there on OSX, Rider has Apple Silicon builds, and .NET Core is cross-platform now.

ML is probably a letdown and you'll be stuck with CPU-sized workloads; that being said, the M1 does pretty well compared to other x86 CPUs.


Why is ML a let down?


I am a very satisfied Apple customer, but will gladly tell you that a Windows machine would make more sense for the use cases you’ve described.


C# is the only odd item there. Entire companies are doing everything else in that list using exclusively MacBook Pros.


Isn't C# quite available on mac?


Yes, and .NET 6 (release scheduled for 11/9, but available now as RC2) works natively on the M1... REALLY quickly, I might add :)...

I go between VS Code and JetBrains Rider (Rider now has an EAP that runs natively on the M1)...

I am going to upgrade just because I didn't get enough ram the first time around :)
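
If anyone wants to check native arm64 support on their own machine, a quick smoke test with the .NET 6 RC installed is just:

    dotnet --info                 # should report an arm64 architecture
    dotnet new console -o hello
    cd hello && dotnet run        # prints "Hello, World!"

No Rosetta involved, as long as you grab the arm64 installer rather than the x64 one.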


Other than the ML work maybe. The Apple chips claim to be very performant with ML workloads.


Since these won't ship in non-Apple products, I don't really see the point. They're only slightly ahead of AMD products when it comes to performance/watt, slightly behind on performance/dollar (in an Apples-to-apples comparison of similarly configured laptops), and that's only because Apple is ahead of AMD at TSMC for new nodes, not because Apple has any inherent advantage.

I have huge respect for the PA Semi team, but they're basically wasting that talent if Apple only intends to silo their products into an increasingly smaller market. The government really needs to look into splitting Apple up to benefit shareholders and the general public.


> I have huge respect for the PA Semi team, but they're basically wasting that talent if Apple only intends to silo their products into an increasingly smaller market.

They design the SoCs in all iPhones and soon all Macs. They have the backing of a huge company with an unhealthy amount of money, and are free from a lot of the constraints that come with having to sell general-purpose CPUs to OEMs. They can work directly with the OS developers so that whatever fancy thing they put in their chips is used and has a real impact on release or shortly thereafter, and will be used by millions of users. Sounds much more exciting than working on the n-th Core generation at Intel. Look at how long it is taking for mainstream software to take advantage of vector extensions. I can't see how that is wasting talent.

> The government really needs to look into splitting Apple up to benefit shareholders and the general public.

So that their chip guys become just another boring SoC designer? Talk about killing the golden goose. Also, fuck the shareholders. The people who should matter are the users, and they seem quite happy with the products. Apple certainly has some unfair practices, but it’s difficult to argue that their CPUs are problematic.


"They're only slightly ahead..." and "The government really needs to look into splitting Apple up to benefit shareholders and the general public." doesn't really seem to jive for me.

If they're only slightly ahead, what's the point of splitting them up when everyone else, in your analysis, is nearly on par or will soon be on par with them?


This is a poorly considered take, no offense to you. I think you're failing to consider that Apple traditionally drives innovation in the computing market, and this will push a lot of other manufacturers to compete with them. AMD is already on the warpath, and Intel just got a massive kick in the pants.

There are other arguments against Apple being as big as it is, but this isn't a good one. Tesla being huge and powerful has driven amazing EV innovation, for example, and Apple is in the same position in the computing market.


A lot of Apple fans keep saying Apple drives innovation, but I'd love to see where this has actually been true. Every example I've ever been given has a counter-example where someone else did it first and Apple did not do it in a way that conferred a market advantage; the only thing Apple has proven they're consistently better at is having a PR team that is also a cult.


Sure. Here's the basic pipeline: Somebody makes a cool piece of tech, but they don't have the UI/UX chops to make it work. Apple comes along and works some serious magic on the tech to make it attractive for everyday use. Other companies get their roadmap from Apple's release, and go from there.

Some examples:

Nice fonts on desktops

Smartphones

Tablets

Smartwatches (more debatable, but Apple did play a big part here)

In-house SOCs.

I suspect that their future AR offering is going to work the same way. The market is currently nascent, but Apple will make a market.


ARM going mainstream in powerful personal computers was exciting enough as it was, with the release of the Apple Silicon M1. With time hopefully these will be good to use with Linux.


Our local office only uses macOS and Windows for desktop computers, GNU/Linux nowadays only for servers in some Amazon and Azure cloud instances.

Apple will have plenty of customers.


OSX is less than 1/7th of the desktop OS market, and iOS is slightly over 1/4th of the phone market; the major cloud companies are the largest consumers of desktop and server scale CPUs, and buy mostly AMD with some Intel only when cluster compatibility requirements apply; when it is non-x86, it is a mix of things that do not include any Apple ARM offerings but do include larger scale higher performance ARM CPUs and some POWER as well.

The most used architectures of any kind (including embedded, industrial, and automotive) are MIPS, then ARM, then POWER/PowerPC, then x86. Apple is a tiny player in the overall ARM market, and by hyper-focusing on desktop and phone alone, they are giving up important opportunities to diversify their business.

At no point does "Apple will have plenty of customers" make sense in a context where Apple is a $2.45T company: either they have a large majority of possible customers in multiple industries at multiple levels, or reality is going to come crashing in and drive them back down to sub-$1T levels. You cannot convince me that a company that only has a net income of $22B a year is worth that much, no matter how much "goodwill" and "brand recognition" and other nonsense intangibles they have. Steve Jobs died exactly ten years ago on Oct 5th, and the RDF died with him.


You are forgetting that the GNU/Linux desktop will never happen; it is always going to be Windows or macOS for 98% of the world.

Even if we take ChromeOS and Android into account, ChromeOS is largely irrelevant outside the North American school system, and Android will always be a phone OS.

In both cases, the Linux kernel is an implementation detail.

So that leaves Apple with its 10% market share for all creatives, which someone has to develop software for, and iOS devices, which also require developers to create said apps.

Everyone else will stay on Windows as always.


Apple's net income is not $22B a year -- that's way off.


Sorry, I meant per quarter.


Also not really sensible as a basis for arguments about the valuation of the company, as their business is seasonal.



